MultiSentiNet: A Deep Semantic Network for Multimodal Sentiment Analysis
Xu, Nan (1,2); Mao, Wenji (1,2)
2017-11
Conference Date: November 6-10, 2017
Conference Venue: Singapore
Abstract

With the prevalence of increasingly diverse and multiform user-generated content on social networking sites, multimodal sentiment analysis has become an important research topic in recent years. Previous work on multimodal sentiment analysis directly extracts a feature representation for each modality and fuses these features for classification. As a consequence, detailed semantic information useful for sentiment analysis, as well as the correlation between image and text, is ignored. In this paper, we propose a deep semantic network, MultiSentiNet, for multimodal sentiment analysis. We first use object and scene detectors as salient extractors of deep semantic features from images. We then propose a visual-feature-guided attention LSTM model that extracts the words most important for understanding the sentiment of the whole tweet, and aggregates the representation of these informative words with the visual semantic features (object and scene). Experiments on two publicly available sentiment datasets verify the effectiveness of our MultiSentiNet model and show that the extracted semantic features correlate highly with human sentiments.
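The following is a minimal sketch, in PyTorch, of the visual-feature-guided attention idea the abstract describes: deep visual semantic features guide an attention distribution over the LSTM hidden states of the tweet, and the attended text representation is fused with the visual features for classification. All names, layer sizes, and the specific way the object and scene vectors are combined into a single attention guide are illustrative assumptions, not the paper's exact architecture.

# Hypothetical sketch of a visual-feature-guided attention LSTM (PyTorch).
# Dimensions and the fusion scheme are assumptions for illustration only.
import torch
import torch.nn as nn

class VisualGuidedAttentionLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=200, hidden_dim=256,
                 visual_dim=512, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Project word hidden states and the visual guide into a shared space.
        self.w_h = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.w_v = nn.Linear(visual_dim, hidden_dim, bias=False)
        self.score = nn.Linear(hidden_dim, 1, bias=False)
        # Fuse attended text with object and scene features for classification.
        self.classifier = nn.Linear(hidden_dim + 2 * visual_dim, n_classes)

    def forward(self, tokens, obj_feat, scene_feat):
        # tokens: (B, T) word ids; obj_feat, scene_feat: (B, visual_dim)
        h, _ = self.lstm(self.embed(tokens))           # (B, T, hidden_dim)
        guide = obj_feat + scene_feat                  # assumed combination of guides
        # Additive attention: the visual guide scores each word's hidden state.
        e = self.score(torch.tanh(self.w_h(h) + self.w_v(guide).unsqueeze(1)))
        alpha = torch.softmax(e, dim=1)                # (B, T, 1) word weights
        text = (alpha * h).sum(dim=1)                  # attended text vector
        fused = torch.cat([text, obj_feat, scene_feat], dim=-1)
        return self.classifier(fused)                  # sentiment logits

A forward pass takes tokenized tweet ids plus precomputed object and scene feature vectors (e.g., from pretrained CNNs) and returns sentiment logits.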

Content Type: Conference Paper
Source URL: http://ir.ia.ac.cn/handle/173211/39143
Collection: Institute of Automation_State Key Laboratory of Management and Control for Complex Systems_Research Center for Internet Big Data and Security Informatics
Corresponding Author: Xu, Nan
Author Affiliations: 1. Institute of Automation, Chinese Academy of Sciences
2. University of Chinese Academy of Sciences
Recommended Citation
GB/T 7714
Xu, Nan, Mao, Wenji. MultiSentiNet: A Deep Semantic Network for Multimodal Sentiment Analysis[C]. In: . Singapore, November 6-10, 2017.