Authors: Chloe Qinyu Zhu, Rickard Stureborg, Bhuwan Dhingra
Published on: February 01, 2024
Impact Score: 8.35
arXiv ID: 2402.01783
Summary
- What is new: Uses large language models (LLMs) in a zero-shot setting to detect vaccine concerns in online discussions, without requiring large, expensive training datasets.
- Why this is important: Vaccine concerns and misinformation shift rapidly, making them difficult for public health efforts to address effectively.
- What the research proposes: Employing LLMs to identify vaccine concerns in online discourse through cost-effective, zero-shot prompting strategies.
- Results: GPT-4 surpasses crowdworker accuracy in identifying vaccine concerns, achieving an F1 score of 78.7%.
Technical Details
Technological frameworks used: Not specified
Models used: Large language models
Data used: VaxConcerns dataset
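To illustrate the zero-shot setup described above, here is a minimal sketch of how an LLM might be prompted to label vaccine concerns in a post. The concern labels, prompt wording, and `build_prompt`/`parse_reply` helpers are illustrative assumptions, not the paper's actual taxonomy or prompts; the model reply is mocked so no API call is made.

```python
# Hypothetical zero-shot vaccine-concern labeling sketch.
# Labels and prompt text are illustrative, not from the paper.

CONCERN_LABELS = [
    "safety/side effects",
    "efficacy",
    "distrust of institutions",
    "religious/moral",
]

def build_prompt(post: str) -> str:
    """Construct a zero-shot classification prompt for an LLM."""
    label_list = "\n".join(f"- {label}" for label in CONCERN_LABELS)
    return (
        "You are classifying vaccine concerns raised in online posts.\n"
        f"Possible concerns:\n{label_list}\n\n"
        f"Post: {post}\n"
        "Answer with a comma-separated list of applicable concerns, "
        "or 'none' if no concern is present."
    )

def parse_reply(reply: str) -> list[str]:
    """Map the model's free-text reply back onto known labels."""
    chosen = {part.strip().lower() for part in reply.split(",")}
    return [label for label in CONCERN_LABELS if label.lower() in chosen]

# Demo with a mocked model reply (in practice, send build_prompt(...)
# to an LLM such as GPT-4 and parse its response).
prompt = build_prompt(
    "I'm worried the shot was rushed and could have long-term side effects."
)
mock_reply = "safety/side effects"
print(parse_reply(mock_reply))  # ['safety/side effects']
```

In a real pipeline the prompt would be sent to the model's API and the reply parsed the same way; the zero-shot approach avoids collecting labeled training data, which is the paper's central cost argument.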
Potential Impact
Healthcare, public health organizations, social media platforms