Urdu Sentiment Analysis via Multimodal Data Mining Based on Deep Learning Algorithms

IEEE Access (2021)

Citations: 16
Abstract
Every day, a massive amount of text, audio, and video data is published on websites around the world. This valuable data can be used to gauge global trends and public perceptions, and companies already target advertisements to consumers based on their online behavior. Carefully analyzing such raw data to uncover useful patterns is a challenging task, even more so for a low-resource language such as Urdu. As a first step toward addressing this challenge, this paper presents a unique Urdu-language multimodal dataset containing 1372 expressions. Secondly, we present a novel framework for multimodal sentiment analysis (MSA) that incorporates acoustic, visual, and textual cues to detect context-aware sentiments. Furthermore, we use both decision-level and feature-level fusion methods to improve sentiment polarity prediction. The experimental results demonstrate that integrating multimodal features improves the polarity detection accuracy of the proposed algorithm from 84.32% (with unimodal features) to 95.35% (with multimodal features).
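To make the feature-level fusion idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of an early-fusion classifier: utterance-level textual, acoustic, and visual feature vectors are projected into a shared space, concatenated, and fed to a small classifier that predicts sentiment polarity. All dimensions and layer sizes are illustrative placeholders, not the settings reported in the paper; decision-level fusion would instead train one model per modality and combine their predictions (e.g. by voting or averaging).

```python
# Illustrative sketch of feature-level (early) fusion for multimodal sentiment
# analysis. Feature dimensions and layer sizes are hypothetical placeholders.
import torch
import torch.nn as nn

class FeatureLevelFusion(nn.Module):
    def __init__(self, text_dim=300, audio_dim=74, video_dim=35,
                 hidden=128, num_classes=2):
        super().__init__()
        # Project each modality's utterance-level features into a shared space.
        self.text_proj = nn.Linear(text_dim, hidden)
        self.audio_proj = nn.Linear(audio_dim, hidden)
        self.video_proj = nn.Linear(video_dim, hidden)
        # Classify the concatenated (fused) representation.
        self.classifier = nn.Sequential(
            nn.Linear(3 * hidden, hidden),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, text_feat, audio_feat, video_feat):
        # Feature-level fusion: concatenate modality representations
        # before classification.
        fused = torch.cat([
            torch.relu(self.text_proj(text_feat)),
            torch.relu(self.audio_proj(audio_feat)),
            torch.relu(self.video_proj(video_feat)),
        ], dim=-1)
        return self.classifier(fused)  # sentiment polarity logits

# Example: a batch of 4 utterances, one feature vector per modality.
model = FeatureLevelFusion()
logits = model(torch.randn(4, 300), torch.randn(4, 74), torch.randn(4, 35))
print(logits.shape)  # torch.Size([4, 2])
```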
Keywords
Feature extraction, Sentiment analysis, Social networking (online), Visualization, Analytical models, Convolutional neural networks, Codes, Multimodal sentiment analysis (MSA), Urdu sentiment analysis (URSA), convolutional neural network (CNN), long short-term memory (LSTM)