A multi-label text sentiment analysis model based on sentiment correlation modeling.

Yingying Ni, Wei Ni
Author Information
  1. Yingying Ni: School of Media & Communication, Shanghai Jiao Tong University, Shanghai, China.
  2. Wei Ni: Department of Critical Care Medicine, Sir Run Run Shaw Hospital, Hangzhou, Zhejiang, China.

Abstract

Objective: This study proposes ECO-SAM, an emotion correlation-enhanced sentiment analysis model that performs multi-label sentiment analysis based on sentiment correlation modeling.
Methods: ECO-SAM uses a pre-trained BERT encoder to obtain semantic embeddings of input texts, applies a self-attention mechanism to model the semantic correlations among emotions, and then uses a text-emotion matching neural network to perform sentiment analysis on the input texts (a minimal code sketch follows the abstract).
Results: Experimental results on public datasets show that, compared to baseline models, ECO-SAM improves the precision score by up to 13.33%, the recall score by up to 3.69%, and the F1 score by up to 8.44% (a sketch of how such multi-label metrics are computed also follows the abstract). Meanwhile, the modeled sentiment semantics are interpretable.
Limitations: The data modeled by ECO-SAM are limited to the text modality, excluding multi-modal data that could further improve classification performance. In addition, the training data are not large-scale, and high-quality, large-scale training data for fine-tuning sentiment analysis models are lacking.
Conclusion: ECO-SAM effectively models sentiment semantics and achieves excellent classification performance on many public sentiment analysis datasets.
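
The pipeline described in the Methods section can be sketched as follows. This is a minimal illustration only, assuming PyTorch and Hugging Face Transformers; the class name ECOSAMSketch, the use of the [CLS] vector as the text embedding, the learnable per-emotion embeddings, and the bilinear matching head are assumptions made for illustration, not details confirmed by the abstract.

# Minimal sketch of the ECO-SAM pipeline described above. Assumptions
# (not stated in the abstract): PyTorch + Hugging Face Transformers,
# a learnable embedding per emotion label, and a bilinear matching head.
import torch
import torch.nn as nn
from transformers import AutoModel

class ECOSAMSketch(nn.Module):
    def __init__(self, num_emotions, bert_name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(bert_name)      # pre-trained BERT encoder
        hidden = self.encoder.config.hidden_size
        self.emotion_embed = nn.Embedding(num_emotions, hidden)  # one vector per emotion
        # Self-attention over the emotion vectors models inter-emotion correlation.
        self.emotion_attn = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        # Text-emotion matching network: scores each (text, emotion) pair.
        self.match = nn.Bilinear(hidden, hidden, 1)

    def forward(self, input_ids, attention_mask):
        # Semantic embedding of the input text (here: the [CLS] vector).
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        text_vec = out.last_hidden_state[:, 0]                   # (batch, hidden)
        emo = self.emotion_embed.weight.unsqueeze(0).expand(text_vec.size(0), -1, -1)
        emo, _ = self.emotion_attn(emo, emo, emo)                # correlation-aware emotion vectors
        text_rep = text_vec.unsqueeze(1).expand_as(emo).contiguous()
        logits = self.match(text_rep, emo.contiguous()).squeeze(-1)
        return torch.sigmoid(logits)                             # per-emotion probabilities

Training such a model would typically use a per-label binary cross-entropy loss, since multi-label sentiment analysis treats each emotion as an independent yes/no decision.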
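
For context on the reported scores, multi-label precision, recall, and F1 can be computed as below. This is a minimal sketch assuming scikit-learn; the micro-averaging scheme is an arbitrary choice, since the abstract does not state which averaging the authors used.

# Minimal sketch: multi-label precision/recall/F1 with scikit-learn.
# The averaging scheme ("micro") is an assumption; the abstract does not state it.
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = np.array([[1, 0, 1],   # gold emotion labels, one column per emotion
                   [0, 1, 0]])
y_pred = np.array([[1, 0, 0],   # thresholded model outputs
                   [0, 1, 1]])

print("precision:", precision_score(y_true, y_pred, average="micro"))
print("recall:   ", recall_score(y_true, y_pred, average="micro"))
print("F1:       ", f1_score(y_true, y_pred, average="micro"))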

Keywords

Sentiment analysis; attention theory; natural language processing
