MIECF: Multi-faceted information extraction and cross-mixture fusion for multimodal aspect-based sentiment analysis.
Weng, Yu; Chen, Lin; Wang, Sen; Ye, Xuming; Liu, Xuan; Liu, Zheng; Chaomurilige.
Affiliations
  • Weng Y; Key Laboratory of Ethnic Language Intelligent Analysis and Security Governance of MOE, Minzu University of China, Beijing, 100081, China.
  • Chen L; School of Information Engineering, Minzu University of China, Beijing, 100081, China.
  • Wang S; Key Laboratory of Ethnic Language Intelligent Analysis and Security Governance of MOE, Minzu University of China, Beijing, 100081, China.
  • Ye X; School of Information Engineering, Minzu University of China, Beijing, 100081, China.
  • Liu X; Key Laboratory of Ethnic Language Intelligent Analysis and Security Governance of MOE, Minzu University of China, Beijing, 100081, China.
  • Liu Z; School of Information Engineering, Minzu University of China, Beijing, 100081, China.
  • Chaomurilige; Key Laboratory of Ethnic Language Intelligent Analysis and Security Governance of MOE, Minzu University of China, Beijing, 100081, China.
Heliyon ; 10(12): e32967, 2024 Jun 30.
Article in En | MEDLINE | ID: mdl-39005903
ABSTRACT
Aspect-level sentiment analysis in multimodal contexts, which aims to precisely identify and interpret the sentiment expressed toward a target aspect across different data modalities, remains an active research area driving progress in artificial intelligence. However, most existing methods extract visual features from only one facet, such as facial expressions, and ignore information from other key facets, such as text embedded in the image modality, leading to information loss. To overcome this limitation, we propose a novel approach, Multi-faceted Information Extraction and Cross-mixture Fusion (MIECF), for multimodal aspect-based sentiment analysis. Our approach captures more comprehensive visual information from the image and integrates local and global key features drawn from multiple facets. Local features, such as facial expressions and embedded text, provide direct and rich emotional cues, whereas the global feature reflects the overall emotional atmosphere and context. To enhance the visual representation, we design a Cross-mixture Fusion method that integrates this local and global multimodal information. In particular, the method establishes semantic relationships between local and global features, eliminating the ambiguity introduced by single-facet information and achieving a more accurate contextual understanding, which provides a richer and more precise basis for sentiment analysis. Experimental results show that our approach achieves state-of-the-art performance, with an accuracy of 79.65% on the Twitter-2015 dataset and Macro-F1 scores of 75.90% and 73.11% on the Twitter-2015 and Twitter-2017 datasets, respectively.
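The record does not include the authors' implementation, and the abstract does not specify the fusion mechanism. As an illustrative aid only, the sketch below shows one plausible cross-attention-style mixing of local facet embeddings (e.g., face and embedded-text features) with a global scene embedding, assuming PyTorch; the class name CrossMixtureFusionSketch, the gating scheme, and all dimensions are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of cross-mixture-style fusion: local facet features
# attend to the global scene feature and vice versa, then the two views are
# blended by a learned gate. This is an assumption-laden illustration, not
# the authors' MIECF implementation.
import torch
import torch.nn as nn

class CrossMixtureFusionSketch(nn.Module):
    def __init__(self, d_model: int = 768, n_heads: int = 8):
        super().__init__()
        # Bidirectional cross-attention establishes semantic links between
        # local and global features, as the abstract describes conceptually.
        self.local_to_global = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.global_to_local = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.Sigmoid())

    def forward(self, local_feats: torch.Tensor, global_feat: torch.Tensor) -> torch.Tensor:
        # local_feats: (batch, n_local, d_model) - e.g., face + OCR-text embeddings
        # global_feat: (batch, 1, d_model)       - e.g., whole-image embedding
        local_ctx, _ = self.local_to_global(local_feats, global_feat, global_feat)
        global_ctx, _ = self.global_to_local(global_feat, local_feats, local_feats)
        # Pool the globally informed local tokens and gate the two views.
        pooled_local = local_ctx.mean(dim=1, keepdim=True)
        g = self.gate(torch.cat([pooled_local, global_ctx], dim=-1))
        return g * pooled_local + (1 - g) * global_ctx  # fused visual representation

fusion = CrossMixtureFusionSketch()
local = torch.randn(2, 4, 768)    # four local facet embeddings per image
glob = torch.randn(2, 1, 768)     # one global scene embedding per image
print(fusion(local, glob).shape)  # torch.Size([2, 1, 768])
```

The gated residual mix is one common way to let the model weigh direct local cues (facial expressions, embedded text) against the overall scene context per example; the paper may use a different combination scheme.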
Keywords

Full text: 1 Collections: 01-international Database: MEDLINE Language: En Journal: Heliyon Publication year: 2024 Document type: Article Country of affiliation: China
