Identifying Hate Speech Peddlers in Online Platforms: A Bayesian Social Learning Approach for Large Language Model Driven Decision-Makers
CoRR (2024)
Abstract
This paper studies the problem of autonomous agents performing Bayesian
social learning for sequential detection when the observations of the state
belong to a high-dimensional space and are expensive to analyze. Specifically,
when the observations are textual, the Bayesian agent can use a large language
model (LLM) as a map to get a low-dimensional private observation. The agent
performs Bayesian learning and takes an action that minimizes the expected cost
and is visible to subsequent agents. We prove that a sequence of such Bayesian
agents herds in finite time to the public belief, after which every agent takes
the same action regardless of its private observation. We propose a stopping
time formulation for quickest time herding in social learning that optimally
balances privacy and herding, and we establish structural results showing that
the optimal policy for the stopping time problem has a threshold structure. We
illustrate the application of our
framework when autonomous Bayesian detectors aim to sequentially identify if a
user is a hate speech peddler on an online platform by parsing text
observations using an LLM. We numerically validate our results on real-world
hate speech datasets. We show that autonomous Bayesian agents designed to flag
hate speech peddlers on online platforms herd and misclassify users when
the public prior is strong. We also numerically demonstrate how a threshold
policy delays herding.
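
The herding mechanism in the abstract can be illustrated with a minimal sketch. Note this is an assumption-laden toy model, not the paper's framework: it uses a binary state, binary private observations of uniform accuracy `p`, 0/1 cost, and a uniform prior, and it omits the LLM observation map and the privacy-herding stopping time entirely. Each agent fuses the public belief with its private observation, acts, and later agents update the public belief from the action alone; once both possible observations imply the same action, the action carries no information and the belief freezes, so all subsequent agents herd.

```python
import random

def bayes_post(belief, obs, p):
    """Posterior P(state = 1) after a binary observation with accuracy p."""
    l1 = p if obs == 1 else 1 - p          # P(obs | state = 1)
    l0 = (1 - p) if obs == 1 else p        # P(obs | state = 0)
    return l1 * belief / (l1 * belief + l0 * (1 - belief))

def social_learning(true_state, n_agents, p=0.7, prior=0.5, seed=0):
    """Sequential Bayesian agents under 0/1 loss: each sees the public
    belief, draws a private observation (correct with probability p), and
    takes the cost-minimizing action.  Only actions are public."""
    rng = random.Random(seed)
    public = prior                          # public belief P(state = 1)
    actions, herded_at = [], None
    for t in range(n_agents):
        obs = true_state if rng.random() < p else 1 - true_state
        action = 1 if bayes_post(public, obs, p) >= 0.5 else 0
        actions.append(action)
        # What would each possible observation have chosen?
        a_if = {o: int(bayes_post(public, o, p) >= 0.5) for o in (0, 1)}
        if a_if[0] == a_if[1]:
            # Action is independent of the observation: it reveals nothing,
            # the public belief freezes, and agents herd from here on.
            if herded_at is None:
                herded_at = t
        else:
            # Action reveals the observation exactly, so later agents can
            # fold it into the public belief.
            public = bayes_post(public, obs, p)
    return actions, herded_at, public
```

Running the sketch shows the finite-time herding the abstract proves in its general setting: after a few actions the public belief leaves the region where a single private observation can flip the decision, and every later agent repeats the same action, even when that action misclassifies the true state.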