Leveraging Weakly Annotated Data for Hate Speech Detection in Code-Mixed Hinglish: A Feasibility-Driven Transfer Learning Approach with Large Language Models
CoRR (2024)

Abstract
The advent of Large Language Models (LLMs) has raised the benchmark in various Natural Language Processing (NLP) tasks. However, training LLMs requires large amounts of labelled data, and both data annotation and training are computationally expensive and time-consuming. Zero- and few-shot learning have recently emerged as viable options for labelling data with large pre-trained models. Hate speech detection in code-mixed, low-resource languages is an active problem area where the use of LLMs has proven beneficial. In this study, we compiled a dataset of 100 YouTube comments and weakly labelled them for coarse- and fine-grained misogyny classification in code-mixed Hinglish. Weak annotation was used because full manual annotation is labour-intensive. Zero-shot, one-shot, and few-shot learning and prompting approaches were then applied to assign labels to the comments, and these labels were compared with the human-assigned ones. Of all the approaches, zero-shot classification using the Bidirectional Auto-Regressive Transformers (BART) large model and few-shot prompting using Generative Pre-trained Transformer-3 (ChatGPT-3) achieve the best results.
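For concreteness, below is a minimal sketch of the two best-performing setups the abstract names. The abstract does not give checkpoints, label sets, prompts, or example comments, so everything model- and data-specific here is an illustrative assumption rather than the authors' configuration: the facebook/bart-large-mnli checkpoint, the "misogynistic"/"not misogynistic" label set, the gpt-3.5-turbo model name standing in for "ChatGPT-3", and the sample Hinglish comments are all hypothetical.

    from transformers import pipeline

    # Zero-shot classification with BART-large. Assumption: the standard
    # facebook/bart-large-mnli NLI checkpoint; the abstract names no checkpoint.
    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    # Hypothetical code-mixed Hinglish comment and coarse label set.
    comment = "Aurat ko gaadi chalana nahi aata, stay in the kitchen"
    labels = ["misogynistic", "not misogynistic"]

    # The pipeline scores each candidate label as an NLI hypothesis against
    # the comment; no task-specific fine-tuning or labelled data is needed.
    result = classifier(comment, candidate_labels=labels)
    print(result["labels"][0], round(result["scores"][0], 3))

A few-shot prompting setup, by contrast, supplies a handful of labelled exemplars in-context. A sketch using the OpenAI Python SDK, with a hypothetical prompt and exemplars:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical few-shot prompt; the paper's actual exemplars and
    # prompt wording are not reproduced here.
    prompt = (
        "Classify each code-mixed Hinglish comment as 'misogynistic' or "
        "'not misogynistic'.\n\n"
        "Comment: Ladkiyon ko bahas mein bolne ka haq nahi hai\n"
        "Label: misogynistic\n\n"
        "Comment: Yeh gaana bahut accha hai yaar\n"
        "Label: not misogynistic\n\n"
        "Comment: Aurat ko gaadi chalana nahi aata, stay in the kitchen\n"
        "Label:"
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption: stands in for 'ChatGPT-3'
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content.strip())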