A multimodal approach of generating 3D human-like talking agent
Yang, Minghao; Tao, Jianhua; Mu, Kaihui; Li, Ya; Che, Jianfeng
Journal | JOURNAL ON MULTIMODAL USER INTERFACES
Publication Date | 2012-03-01
Volume | 5 | Issue | 1-2 | Pages | 61-68
Keywords | Multimodal ; 3D Talking Agent ; Lip Movement ; Head Motion ; MFCC ; Facial Expression ; Gesture Animation
Document Subtype | Article
Abstract | This paper introduces a multimodal framework for generating a 3D human-like talking agent that can communicate with users through speech, lip movement, head motion, facial expression and body animation. In this framework, lip movements are obtained by searching and matching acoustic features, represented by Mel-frequency cepstral coefficients (MFCC), in an audio-visual bimodal database. Head motion is synthesized by visual prosody, which maps textual prosodic features into rotational and translational parameters. Facial expression and body animation are generated by transferring motion data to a skeleton. A simplified high-level Multimodal Marker Language (MML), in which only a few fields are used to coordinate the agent's channels, is introduced to drive the agent. Experiments validate the effectiveness of the proposed multimodal framework.
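The abstract's lip-movement step (searching an audio-visual database for the closest MFCC match) can be sketched as a nearest-neighbor lookup. The database contents, the 13-coefficient feature size, and the names `match_lip_shape`, `db_mfcc`, and `db_viseme` below are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical audio-visual bimodal database: each entry pairs an
# MFCC feature vector (13 coefficients) with a lip-shape (viseme) index.
rng = np.random.default_rng(0)
db_mfcc = rng.standard_normal((100, 13))   # stored acoustic features
db_viseme = np.arange(100) % 10            # stored lip-shape labels

def match_lip_shape(query_mfcc, db_mfcc, db_viseme):
    """Return the viseme of the database entry whose MFCC vector is
    closest (Euclidean distance) to the query audio frame."""
    dists = np.linalg.norm(db_mfcc - query_mfcc, axis=1)
    return db_viseme[np.argmin(dists)]

# A query frame identical to entry 42 recovers that entry's viseme.
print(match_lip_shape(db_mfcc[42], db_mfcc, db_viseme))  # prints 2
```

In practice the search would run per audio frame (with some temporal smoothing across frames), and the retrieved visemes would drive the agent's lip animation.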
WOS Keywords | CHARACTER ANIMATION ; FACE
WOS Research Area | Computer Science
Language | English
WOS Record No. | WOS:000309997800008
Content Type | Journal Article
Source URL | [http://ir.ia.ac.cn/handle/173211/40914]
Collection | National Laboratory of Pattern Recognition, Intelligent Interaction
Recommended Citation (GB/T 7714) | Yang, Minghao, Tao, Jianhua, Mu, Kaihui, et al. A multimodal approach of generating 3D human-like talking agent[J]. JOURNAL ON MULTIMODAL USER INTERFACES, 2012, 5(1-2): 61-68.
APA | Yang, Minghao, Tao, Jianhua, Mu, Kaihui, Li, Ya, & Che, Jianfeng. (2012). A multimodal approach of generating 3D human-like talking agent. JOURNAL ON MULTIMODAL USER INTERFACES, 5(1-2), 61-68.
MLA | Yang, Minghao, et al. "A multimodal approach of generating 3D human-like talking agent". JOURNAL ON MULTIMODAL USER INTERFACES 5.1-2 (2012): 61-68.