Conversational Emotion Analysis via Attention Mechanisms
Zheng Lian 1,2; Jianhua Tao 1,2,3; Bin Liu 1; Jian Huang 1,2
2019
Conference date: 15-19 September, 2019
Conference location: Graz, Austria
Abstract

Different from emotion recognition in individual utterances, we propose a multimodal learning framework that exploits the relations and dependencies among utterances for conversational emotion analysis. An attention mechanism is applied to fuse the acoustic and lexical features. These fused representations are then fed into a self-attention based bi-directional gated recurrent unit (GRU) layer to capture long-term contextual information. To imitate the real interaction patterns of different speakers, speaker embeddings are also utilized as additional inputs to distinguish speaker identities during conversational dialogs. To verify the effectiveness of the proposed method, we conduct experiments on the IEMOCAP database. Experimental results demonstrate that our method yields an absolute performance improvement of 2.42% over state-of-the-art strategies.
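The abstract describes three components: attention-based fusion of acoustic and lexical features, a self-attention bi-directional GRU over the utterance sequence, and speaker embeddings as additional inputs. The following PyTorch sketch illustrates one plausible arrangement of these components; the feature dimensions, the fusion scoring function, the number of speakers and emotion classes, and the way speaker embeddings are combined are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the described pipeline (assumed dimensions and fusion form).
import torch
import torch.nn as nn


class ConversationalEmotionModel(nn.Module):
    def __init__(self, acoustic_dim=100, lexical_dim=300, hidden_dim=128,
                 num_speakers=10, speaker_dim=32, num_classes=4):
        super().__init__()
        # Project both modalities into a shared space for attention fusion.
        self.acoustic_proj = nn.Linear(acoustic_dim, hidden_dim)
        self.lexical_proj = nn.Linear(lexical_dim, hidden_dim)
        # One scalar score per modality; softmax over the two scores gives
        # the fusion attention weights (assumed form of the fusion).
        self.fusion_score = nn.Linear(hidden_dim, 1)
        # Speaker identity embedding appended to the fused representation.
        self.speaker_emb = nn.Embedding(num_speakers, speaker_dim)
        # Bi-directional GRU over the utterance sequence of a dialog.
        self.gru = nn.GRU(hidden_dim + speaker_dim, hidden_dim,
                          batch_first=True, bidirectional=True)
        # Self-attention over the GRU outputs for long-term context.
        self.self_attn = nn.MultiheadAttention(2 * hidden_dim, num_heads=4,
                                               batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, acoustic, lexical, speaker_ids):
        # acoustic: (batch, seq, acoustic_dim), lexical: (batch, seq, lexical_dim)
        # speaker_ids: (batch, seq) integer speaker index per utterance
        a = torch.tanh(self.acoustic_proj(acoustic))
        l = torch.tanh(self.lexical_proj(lexical))
        # Stack the two modalities and attend over them: (batch, seq, 2, hidden)
        modalities = torch.stack([a, l], dim=2)
        weights = torch.softmax(self.fusion_score(modalities), dim=2)
        fused = (weights * modalities).sum(dim=2)       # (batch, seq, hidden)
        # Append speaker embeddings to distinguish speaker identities.
        spk = self.speaker_emb(speaker_ids)             # (batch, seq, speaker_dim)
        x = torch.cat([fused, spk], dim=-1)
        ctx, _ = self.gru(x)                            # (batch, seq, 2*hidden)
        attended, _ = self.self_attn(ctx, ctx, ctx)
        return self.classifier(attended)                # per-utterance emotion logits


if __name__ == "__main__":
    model = ConversationalEmotionModel()
    acoustic = torch.randn(2, 20, 100)   # 2 dialogs, 20 utterances each
    lexical = torch.randn(2, 20, 300)
    speakers = torch.randint(0, 10, (2, 20))
    print(model(acoustic, lexical, speakers).shape)  # torch.Size([2, 20, 4])
```

The sketch classifies every utterance in a dialog jointly, so the bi-directional GRU and self-attention layer can draw on both past and future utterances of the conversation, which is the contextual modeling the abstract emphasizes.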

Language: English
Content type: Conference paper
Source URL: [http://ir.ia.ac.cn/handle/173211/44724]
Collection: National Laboratory of Pattern Recognition_Intelligent Interaction
Author affiliations:
1.National Laboratory of Pattern Recognition, CASIA, Beijing, China
2.CAS Center for Excellence in Brain Science and Intelligence Technology, Beijing, China
3.School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
Recommended citation
GB/T 7714
Zheng Lian, Jianhua Tao, Bin Liu, et al. Conversational Emotion Analysis via Attention Mechanisms[C]. Graz, Austria, 15-19 September, 2019.