Track Your Emotional Perception of 3-D Virtual Talking Head in Human-computer Interaction
Hu Ni; Jianying Wang; Lan Wang; Nan Yan
2018
Conference date: 2018
Conference location: Shenzhen
Abstract: This paper investigates how emotions are identified from an avatar character in a natural scene during human-computer interaction. We developed a novel 3-D virtual talking head system with dynamic emotional facial expressions for application to human-robot communication. An eye-tracking experiment and a subjective evaluation experiment were then used to explore emotional perception of the 3-D virtual talking head. The results showed no significant difference in observation mode between audio-visual animations of 3-D virtual talking head videos (AV3D) and audio-visual human face videos (AVHF). Recognition accuracy for HF was higher than for 3D, and accuracy improved for almost all emotions when audio was added to the videos. Finally, happiness was identified best whether participants watched 3-D virtual talking head videos (3D) or human face videos (HF). These results suggest that the 3-D talking head is potentially a suitable, natural communication form for human-computer interaction.
Content type: Conference paper
Source URL: http://ir.siat.ac.cn:8080/handle/172644/13720
Collection: 深圳先进技术研究院_集成所
Recommended citation (GB/T 7714):
Hu Ni, Jianying Wang, Lan Wang, et al. Track Your Emotional Perception of 3-D Virtual Talking Head in Human-computer Interaction[C]. Shenzhen, 2018.