
Multimodal Sentiment Analysis Based on Composite Hierarchical Fusion

Yu Lei, Keshuai Qu, Yifan Zhao, Qing Han, Xuguang Wang

COMPUTER JOURNAL (2024)

Abstract
In multimodal sentiment analysis, fully extracting modality features and fusing them efficiently is an important research task. To address the insufficient semantic information and poor cross-modal fusion of traditional sentiment classification models, this paper proposes a composite hierarchical feature fusion method combined with prior knowledge. First, an ALBERT (A Lite BERT) model and an improved ResNet model are constructed to extract features from text and images, respectively, yielding high-dimensional feature vectors. Second, to address the limited expression of semantic information in cross-scene settings, a prior knowledge enhancement model is proposed to enrich the data characteristics of each modality. Finally, to address the poor cross-modal fusion, a composite hierarchical fusion model is proposed that combines a temporal convolutional network with an attention mechanism to fuse the sequence features of each modality and realize information interaction between modalities. Experiments on the MVSA-Single and MVSA-Multi datasets show that the proposed model outperforms a series of comparison models and adapts well to new scenarios.
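To make the fusion stage concrete, below is a minimal PyTorch sketch of a temporal-convolution-plus-cross-modal-attention fusion head of the kind the abstract describes. It is not the authors' implementation: the class names (TemporalConvBlock, HierarchicalFusion), the feature dimensions, and the use of random tensors in place of real ALBERT and ResNet features are all illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): sequence features from a text
# encoder (e.g. ALBERT) and an image encoder (e.g. ResNet) pass through a
# temporal convolutional block and are fused with cross-modal attention.
import torch
import torch.nn as nn

class TemporalConvBlock(nn.Module):
    """Dilated 1D convolution over a feature sequence of shape (B, T, D)."""
    def __init__(self, dim, kernel_size=3, dilation=1):
        super().__init__()
        padding = (kernel_size - 1) * dilation // 2
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              padding=padding, dilation=dilation)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                       # x: (B, T, D)
        y = self.conv(x.transpose(1, 2)).transpose(1, 2)
        return self.norm(torch.relu(y) + x)     # residual connection

class HierarchicalFusion(nn.Module):
    """Each modality attends to the other; pooled features are classified."""
    def __init__(self, dim, heads=4, num_classes=3):
        super().__init__()
        self.text_tcn = TemporalConvBlock(dim)
        self.img_tcn = TemporalConvBlock(dim)
        self.t2i = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.i2t = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, text_feats, img_feats):
        t = self.text_tcn(text_feats)
        v = self.img_tcn(img_feats)
        t_att, _ = self.t2i(t, v, v)            # text queries image
        v_att, _ = self.i2t(v, t, t)            # image queries text
        fused = torch.cat([t_att.mean(1), v_att.mean(1)], dim=-1)
        return self.classifier(fused)

# Stand-in features; real inputs would come from ALBERT / ResNet encoders.
text_feats = torch.randn(2, 32, 256)   # (batch, tokens, dim)
img_feats = torch.randn(2, 49, 256)    # (batch, image regions, dim)
logits = HierarchicalFusion(256)(text_feats, img_feats)
print(logits.shape)                    # torch.Size([2, 3])
```

The bidirectional attention (text-to-image and image-to-text) is one common way to realize the "information interaction between modalities" the abstract mentions; the actual model may organize its hierarchy differently.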
Keywords
pre-trained model, attention mechanism, multimodal sentiment analysis, prior knowledge