Decoding News Narratives: A Critical Analysis of Large Language Models in Framing Bias Detection
CoRR (2024)
Abstract
This work contributes to the expanding research on the applicability of LLMs
in social sciences by examining the performance of GPT-3.5 Turbo, GPT-4, and
FLAN-T5 models in detecting framing bias in news headlines through zero-shot,
few-shot, and explainable prompting methods. A key insight from our evaluation
is the notable efficacy of explainable prompting in enhancing the reliability
of these models, highlighting the importance of explainable settings for social
science research on framing bias. GPT-4, in particular, demonstrated enhanced
performance in few-shot scenarios when presented with a range of relevant,
in-domain examples. FLAN-T5's poor performance indicates that smaller models
may require additional task-specific fine-tuning for identifying framing bias
detection. Our study also found that models, particularly GPT-4, often
misinterpret emotional language as an indicator of framing bias, underscoring
the challenge of distinguishing between genuine emotional expression in
reporting and the intentional use of framing bias in news headlines. We further evaluated the
models on two subsets of headlines where the presence or absence of framing
bias was either clear-cut or more contested, with the results suggesting that
these models can be useful in flagging potential annotation inaccuracies
within existing or new datasets. Finally, the study evaluates the models in
real-world conditions ("in the wild"), moving beyond the initial dataset
focused on U.S. Gun Violence, assessing the models' performance on framed
headlines covering a broad range of topics.
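The three prompting settings the abstract compares could be sketched as plain prompt templates. Everything below (function names, instruction wording, labels, and example structure) is an illustrative assumption, not the paper's actual prompts:

```python
# Sketch of the three prompting settings compared in the abstract:
# zero-shot, few-shot, and explainable prompting. All wording here is
# hypothetical; the paper's exact prompts are not reproduced.

TASK = "Does the following news headline contain framing bias? Answer Yes or No."

def zero_shot_prompt(headline: str) -> str:
    # No examples: the model judges the headline from the instruction alone.
    return f"{TASK}\nHeadline: {headline}\nAnswer:"

def few_shot_prompt(headline: str, examples: list[tuple[str, str]]) -> str:
    # Relevant in-domain labeled examples precede the target headline,
    # the setting in which the abstract reports GPT-4 improved.
    demos = "\n".join(f"Headline: {h}\nAnswer: {a}" for h, a in examples)
    return f"{TASK}\n{demos}\nHeadline: {headline}\nAnswer:"

def explainable_prompt(headline: str) -> str:
    # The model must justify its label, the setting the abstract found
    # enhanced reliability.
    return (f"{TASK} Then explain which words or phrases drive your answer.\n"
            f"Headline: {headline}\nAnswer:")
```

Any of these strings could then be sent to a chat-completion endpoint or a text-to-text model; the template only fixes how task instruction, demonstrations, and the explanation request are combined.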