Fine-Grained Cross-Modal Graph Convolution for Multimodal Aspect-Oriented Sentiment Analysis

2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2023

Abstract
Aspect-oriented multimodal sentiment analysis aims to identify the sentiment associated with a given aspect using text and image inputs. Existing methods have focused on the interaction between aspects, text, and images, achieving significant progress through cross-modal transformers. However, they still suffer from three problems: (1) Ignoring the dependency relationships between objects within the image modality; (2) Failing to consider the role of syntactic dependency relationships within the text modality in capturing aspect-related opinion words; (3) Neglecting the inherent dependency relationships between modalities. To address these issues, we propose a fine-grained cross-modal graph convolutional network model (FCGCN). Specifically, we construct intra-modality dependency relationships using syntactic and spatial relationships and fuse the two modalities through semantic similarity calculation. We then design a GCN-Attention layer to capture richer multimodal fusion information. Additionally, an aspect-oriented transformer module is introduced to capture aspect features interactively. Experimental results on the Twitter datasets show that our FCGCN model consistently outperforms state-of-the-art methods.
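The fusion described in the abstract (intra-modal dependency graphs joined by cross-modal edges from semantic similarity, then graph convolution) can be sketched roughly as follows. This is a minimal illustration, not the paper's actual formulation: the adjacency construction, the cosine-similarity threshold, and all variable names are assumptions for the sketch.

```python
import numpy as np

def cosine_sim(a, b):
    """Pairwise cosine similarity between rows of a and rows of b."""
    a_n = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_n = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a_n @ b_n.T

def gcn_layer(adj, feats, weight):
    """One GCN propagation: ReLU(D^{-1/2} (A+I) D^{-1/2} X W)."""
    a_hat = adj + np.eye(adj.shape[0])          # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ feats @ weight, 0.0)

rng = np.random.default_rng(0)
n_text, n_img, d = 5, 3, 8                      # token / image-region counts (toy sizes)
text_feats = rng.normal(size=(n_text, d))
img_feats = rng.normal(size=(n_img, d))

# Intra-modal edges: in the paper these would come from syntactic
# dependencies (text) and spatial relations (image); random here.
text_adj = rng.integers(0, 2, size=(n_text, n_text)).astype(float)
text_adj = np.maximum(text_adj, text_adj.T)     # symmetrize
img_adj = np.ones((n_img, n_img))

# Cross-modal edges from thresholded cosine similarity (threshold assumed).
cross = (cosine_sim(text_feats, img_feats) > 0.2).astype(float)

# Assemble the joint text-image fusion graph as one block adjacency.
n = n_text + n_img
adj = np.zeros((n, n))
adj[:n_text, :n_text] = text_adj
adj[n_text:, n_text:] = img_adj
adj[:n_text, n_text:] = cross
adj[n_text:, :n_text] = cross.T

feats = np.vstack([text_feats, img_feats])
weight = rng.normal(size=(d, d))
out = gcn_layer(adj, feats, weight)             # fused node representations
print(out.shape)  # (8, 8)
```

An attention layer over `out` (the GCN-Attention step) and the aspect-oriented transformer would then operate on these fused node representations; they are omitted here for brevity.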
Keywords
Multimodal Aspect-oriented Sentiment Analysis, Text-Image Fusion Graph, Graph Convolutional Network