A Multitask Multimodal Ensemble Model for Sentiment- and Emotion-Aided Tweet Act Classification

IEEE Transactions on Computational Social Systems (2022)

Abstract
Speech act classification, i.e., determining the communicative intent of an utterance, has been widely studied over the years as an independent task. This holds true for discussion in any fora, including social media platforms such as Twitter. However, the tweeter's emotional state has a strong impact on the tweet's pragmatic content, because communication is fundamentally characterized and mediated by emotion. Sentiment, as a human behavior, is often closely related to emotion, and each helps to understand the other better. We hypothesize that the association between emotion and sentiment provides a clearer picture of the tweeter's state of mind, aiding the identification of tweet acts (speech acts on Twitter, TAs). As a first step, we create EmoTA, a new multimodal, emotion-aware TA dataset collected from an open-source Twitter dataset. To incorporate these multiple aspects, we propose a multitask ensemble adversarial learning framework for multimodal TA classification (TAC). In addition, we incorporate a joint embedding network with bidirectional constraints to capture and efficiently integrate the shared semantic relationships across modalities and to learn generalized features across multiple tasks. Experimental results indicate that the proposed framework boosts the performance of the primary task, TAC, over its unimodal and single-task variants by benefiting from the two secondary tasks, sentiment and emotion analysis.
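To make the multitask setup concrete, here is a minimal NumPy sketch of the general pattern the abstract describes: one shared representation feeding a primary tweet-act head plus auxiliary sentiment and emotion heads. All layer sizes, class counts, and the simple linear encoder are illustrative assumptions, not the paper's actual architecture (which is a multitask ensemble adversarial framework with a bidirectionally constrained joint embedding across modalities).

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MultitaskSketch:
    """Toy multitask classifier: one shared encoder, three task heads.

    Hypothetical stand-in for the paper's framework; it only shows the
    shared-encoder / per-task-head idea, not the ensemble adversarial
    learning or the multimodal joint embedding.
    """
    def __init__(self, in_dim, hidden, n_ta, n_sent, n_emo, seed=0):
        rng = np.random.default_rng(seed)
        self.W_shared = rng.normal(0, 0.1, (in_dim, hidden))    # shared encoder
        self.heads = {
            "tweet_act": rng.normal(0, 0.1, (hidden, n_ta)),    # primary task
            "sentiment": rng.normal(0, 0.1, (hidden, n_sent)),  # auxiliary task
            "emotion":   rng.normal(0, 0.1, (hidden, n_emo)),   # auxiliary task
        }

    def forward(self, x):
        h = np.tanh(x @ self.W_shared)  # shared representation for all tasks
        return {name: softmax(h @ W) for name, W in self.heads.items()}

# Dummy batch of 2 "tweet" feature vectors; class counts are arbitrary here.
model = MultitaskSketch(in_dim=16, hidden=8, n_ta=7, n_sent=3, n_emo=6)
probs = model.forward(np.ones((2, 16)))
```

During training, losses from all three heads would be combined so that the auxiliary sentiment and emotion signals shape the shared representation used by the primary TAC head.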
Keywords
Social networking (online), Feature extraction, Blogs, Task analysis, Erbium, Encoding, Semantics, Emotion, Multitask, Sentiment, Speech acts, Twitter