A Transformer Based Approach For Identification Of Tweet Acts

2020 International Joint Conference on Neural Networks (IJCNN), 2020

Cited by 10 | Viewed 66
Abstract
Speech acts help in uncovering and understanding the communicative intention and behavior behind a speaker's utterance. This is pertinent for communication on any platform, including social media platforms like Twitter. This paper presents a supervised speech act (tweet act, in our case) classifier for assessing the content and intent of tweets, thereby exploring the valuable communication amongst tweeters. Building on the recent success of Bi-directional Encoder Representations from Transformers (BERT), a language representation model that provides pre-trained deep bi-directional representations learned from vast unlabeled data, we introduce BERT-extended, a model built on top of BERT. Our model calculates attention weights over the representations of the tokens in a sequence to identify tweet acts. The proposed model attained a benchmark accuracy of 75.97% and outperformed several strong baselines and state-of-the-art approaches on an open-source, tweet-act-annotated Twitter dataset.
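The abstract describes classifying a tweet by computing attention weights over its token representations and pooling them. The following is a minimal NumPy sketch of that idea, not the paper's actual architecture: `attention_pool_classify`, the random stand-in token representations, and the weight matrices are all hypothetical, with random vectors standing in for BERT's final hidden states.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_pool_classify(token_reprs, w_score, w_cls):
    """Attention-weighted pooling over token representations,
    followed by a linear tweet-act classifier (illustrative sketch).

    token_reprs: (seq_len, hidden) token representations for one tweet,
                 e.g. an encoder's final hidden states.
    """
    scores = token_reprs @ w_score   # (seq_len,) per-token relevance score
    weights = softmax(scores)        # attention weights, sum to 1
    pooled = weights @ token_reprs   # (hidden,) attention-weighted summary
    logits = pooled @ w_cls          # (num_classes,) tweet-act logits
    return logits, weights

# Toy dimensions; in practice hidden would match the encoder (e.g. 768 for BERT-base).
seq_len, hidden, num_acts = 8, 32, 7
reprs = rng.standard_normal((seq_len, hidden))
w_score = rng.standard_normal(hidden)
w_cls = rng.standard_normal((hidden, num_acts))
logits, weights = attention_pool_classify(reprs, w_score, w_cls)
print(logits.shape, weights.shape)
```

The attention weights double as a light interpretability signal: tokens with larger weights contribute more to the pooled representation used for the tweet-act decision.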
Keywords
Twitter, Tweet Acts, BERT, Attention