Region-based Facial Expression Synthesis on a Three-dimensional Avatar (基于局部表情参数化的三维表情脸像合成)
Shen Zhang (张申); Zhiyong Wu (吴志勇); Lianhong Cai (蔡莲红)
2010-07-15
Conference: Proceedings of the 2nd Joint Conference on Harmonious Human-Machine Environment (HHME2006) / the 2nd Chinese Conference on Human-Computer Interaction (CHCI'06); Hangzhou, Zhejiang, China; CNKI; hosted by the Department of Computer Science and Technology, Tsinghua University, and the College of Computer Science and Technology, Zhejiang University
Keywords: partial expression state (局部表情状态); facial animation parameter; face region; MPEG-4. CLC classification: TP391.41
Abstract (translated from the Chinese): Facial expression synthesis has long been a research focus in human-computer interaction. Adopting the MPEG-4 facial animation framework, this paper proposes a region-based, parameterized method for synthesizing expressions on a three-dimensional face. On the basis of the Facial Animation Parameters (FAP), Partial Expression Parameters (PEP) are defined, and association rules between PEPs and FAPs are established. A three-dimensional facial expression editor is implemented on this foundation; with PEPs, facial expressions can be edited and synthesized quickly, meeting the real-time expression requirements of visual speech and dialogue systems.

Abstract (English): Facial expression is one of the most important features in human-computer interaction. The MPEG-4 facial animation framework provides Facial Animation Parameters (FAP) for parameterized expression synthesis, but how to apply them remains an open problem: the low-level FAPs only define basic facial actions tied to individual feature points, while the high-level FAPs describe expressions and visemes only semantically, so generating expressions directly from FAPs is complicated. To bridge this semantic gap, this paper presents a region-based method for quickly generating facial expressions on a three-dimensional avatar, which can serve as the talking head in a text-to-visual-speech system. First, the face is divided into 11 regions according to the MPEG-4 Facial Definition Parameters (FDP), and the 4 key regions that play the most important roles in human expression are selected. Within each key region, Partial Expression Parameters (PEP) are defined to describe that region's movement patterns, such as eyebrow-raise, eye-blink, and mouth-open. Built on top of the FAPs, each PEP captures the correlation among the FAPs involved in an expression, and the relationship between a PEP and its corresponding FAPs is established by a linear function. Considering the symmetry of the human face and the interdependence of facial organs, "left-right" and "primary-secondary" relationships between PEPs are further defined, which make the synthesized expressions more engaging. Each PEP consists of two parts, describing the type and the amplitude of the facial movement respectively. The 12 PEPs so defined form a facial expression primitive set from which expressions can be synthesized directly, instead of manipulating the 40 underlying FAP parameters. A facial expression editor is implemented on top of this region-based technique: it takes an original 3D avatar (in VRML form) as input, lets the user define FDPs and FAPs on the model manually, and then generates specific expressions automatically from PEP values. Each synthesized expression corresponds to a FAP vector, so the result can be integrated into many human-computer interaction scenarios, such as dialogue systems and interactive services.

Funding: National Natural Science Foundation of China (60418012, 60433030)
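The abstract outlines three mechanisms: a linear function mapping each high-level PEP onto a group of low-level FAPs, a "left-right" rule exploiting facial symmetry, and a "primary-secondary" rule propagating motion between dependent regions. The sketch below (Python) illustrates how such a mapping could be wired up. It is an assumption-laden illustration, not the paper's implementation: the PEP names, FAP indices, weights, and dependency factors are all invented here, since the paper does not publish its tables.

# Illustrative sketch of the PEP -> FAP mapping described in the abstract.
# All PEP names, FAP indices, weights, and the symmetry/propagation factors
# below are hypothetical; the paper does not publish its actual tables.

from dataclasses import dataclass, field

@dataclass
class PepRule:
    """A PEP drives a group of FAPs through a linear function."""
    fap_weights: dict[int, float]                               # FAP index -> weight
    mirror: str | None = None                                   # "left-right" partner PEP
    secondary: dict[str, float] = field(default_factory=dict)   # "primary-secondary" links

# Hypothetical primitive set (the paper defines 12 PEPs over 4 key regions).
PEP_TABLE = {
    "left_eyebrow_raise":  PepRule({31: 1.0, 33: 0.6}, mirror="right_eyebrow_raise"),
    "right_eyebrow_raise": PepRule({32: 1.0, 34: 0.6}, mirror="left_eyebrow_raise"),
    "mouth_open":          PepRule({3: 1.0, 4: 0.5, 5: 0.5}, secondary={"jaw_drop": 0.4}),
    "jaw_drop":            PepRule({14: 1.0}),
}

def peps_to_fap(peps: dict[str, float], symmetric: bool = True) -> dict[int, float]:
    """Expand PEP (type, amplitude) settings into a FAP vector.

    Each PEP contributes amplitude * weight to every FAP it drives; mirror
    partners and secondary PEPs are filled in when not set explicitly.
    """
    expanded = dict(peps)
    for name, amp in peps.items():
        rule = PEP_TABLE[name]
        if symmetric and rule.mirror and rule.mirror not in expanded:
            expanded[rule.mirror] = amp               # "left-right" symmetry
        for dep, factor in rule.secondary.items():
            if dep not in expanded:
                expanded[dep] = amp * factor          # "primary-secondary" propagation
    fap: dict[int, float] = {}
    for name, amp in expanded.items():
        for idx, w in PEP_TABLE[name].fap_weights.items():
            fap[idx] = fap.get(idx, 0.0) + amp * w    # linear PEP -> FAP function
    return fap

# Example: one high-level edit yields a full FAP vector for the animation engine.
print(peps_to_fap({"left_eyebrow_raise": 0.8, "mouth_open": 0.5}))

In the editor the abstract describes, the resulting FAP vector would then drive the VRML avatar's feature points; here it is simply printed.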
Language: Chinese
Content Type: Conference Paper
Source URL: http://hdl.handle.net/123456789/69887
Collection: Tsinghua University
Recommended Citation (GB/T 7714):
Zhang Shen, Wu Zhiyong, Cai Lianhong, et al. Region-based Facial Expression Synthesis on a Three-dimensional Avatar (基于局部表情参数化的三维表情脸像合成)[C] // Proceedings of the 2nd Joint Conference on Harmonious Human-Machine Environment (HHME2006) / the 2nd Chinese Conference on Human-Computer Interaction (CHCI'06). Hangzhou, Zhejiang, China: CNKI.