Large Language Models Reveal Information Operation Goals, Tactics, and Narrative Frames
arXiv (2024)

Abstract
Adversarial information operations can destabilize societies by undermining
fair elections, manipulating public opinions on policies, and promoting scams.
Despite their widespread occurrence and potential impacts, our understanding of
influence campaigns is limited by manual analysis of messages and subjective
interpretation of their observable behavior. In this paper, we explore whether
these limitations can be mitigated with large language models (LLMs), using
GPT-3.5 as a case-study for coordinated campaign annotation. We first use
GPT-3.5 to scrutinize 126 identified information operations spanning over a
decade. We utilize a number of metrics to quantify the close (if imperfect)
agreement between LLM and ground truth descriptions. We next extract
coordinated campaigns from two large multilingual datasets from X (formerly
Twitter) that respectively discuss the 2022 French election and the 2023
Balikatan Philippine-U.S. military exercise. For each coordinated campaign, we
use GPT-3.5 to analyze posts related to a specific concern and extract goals,
tactics, and narrative frames, both before and after critical events (such as
the date of an election). While GPT-3.5 sometimes disagrees with subjective
interpretation, its ability to summarize and interpret demonstrates LLMs'
potential to extract higher-order indicators from text, providing a more
complete picture of information campaigns than previous methods.