Title | Key-point-guided adaptive convolution and instance normalization for continuous transitive face reenactment of any person
Authors | Xu, Shibiao (3); Hua, Miao (2); Zhang, Jiguang (1); Zhang, Zhaohui (1); Zhang, Xiaopeng (1)
Journal | COMPUTER ANIMATION AND VIRTUAL WORLDS
Publication Date | 2024-05-01
Volume | 35; Issue | 3; Pages | 15
Keywords | face reenactment; human-centered computing; visualization; visualization application domains
ISSN | 1546-4261
DOI | 10.1002/cav.2256 |
Corresponding Author | Zhang, Jiguang (jiguang.zhang@ia.ac.cn)
Abstract | Face reenactment technology is widely applied in various applications. However, the reconstruction results of existing methods are often not realistic enough. Thus, this paper proposes a progressive face reenactment method. First, to make full use of the key-point information, we propose adaptive convolution and instance normalization to encode the key points into all learnable parameters of the network, including the weights of the convolution kernels and the means and variances in the normalization layers. Second, we present continuous transitive facial expression generation: because all the weights of the network are generated from the key points, the images produced by the network change continuously with them. Third, in contrast to classical convolution, we apply a combination of depth-wise and point-wise convolutions, which greatly reduces the number of weights and improves training efficiency. Finally, we extend the proposed face reenactment method to a face editing application. Comprehensive experiments demonstrate the effectiveness of the proposed method, which generates clearer and more realistic faces for any person and is more generic and applicable than other methods. This work presents a continuous transitive face reenactment algorithm that uses facial key-point information to gradually reenact faces with a two-stage GAN containing a key-point transformation module and a facial expression generation module. The process involves transforming key points from the source face and generating the corresponding facial expressions on the target face.
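The two techniques named in the abstract can be sketched briefly (a minimal NumPy illustration under assumed shapes, not the authors' implementation): adaptive instance normalization whose per-channel scale and shift would be predicted from face key points, and the parameter saving of replacing a classical convolution with a depth-wise plus point-wise pair.

```python
import numpy as np

def keypoint_adain(features, scale, shift, eps=1e-5):
    """Adaptive instance normalization: normalize each channel, then apply
    scale/shift parameters (here assumed to be predicted from key points).
    features: (C, H, W); scale, shift: (C,)"""
    mean = features.mean(axis=(1, 2), keepdims=True)
    std = features.std(axis=(1, 2), keepdims=True)
    normalized = (features - mean) / (std + eps)
    return scale[:, None, None] * normalized + shift[:, None, None]

def standard_conv_params(c_in, c_out, k):
    # Classical convolution: one k x k kernel per (input, output) channel pair.
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    # Depth-wise (one k x k kernel per input channel) followed by
    # point-wise (1 x 1) convolution that mixes channels.
    return c_in * k * k + c_in * c_out

print(standard_conv_params(64, 128, 3))   # 73728
print(separable_conv_params(64, 128, 3))  # 8768
```

For a 64-to-128-channel 3 x 3 layer, the depth-/point-wise pair needs roughly one eighth of the weights of the classical convolution, which is the reduction the abstract refers to.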
Funding Projects | Beijing Natural Science Foundation ; National Natural Science Foundation of China[62271074] ; National Natural Science Foundation of China[62171321] ; National Natural Science Foundation of China[62162044] ; National Natural Science Foundation of China[52175493] ; National Natural Science Foundation of China[32271983] ; Open Project Program of State Key Laboratory of Virtual Reality Technology and Systems, Beihang University[VRLAB2023B01] ; Wenzhou Business School 2024 Talent launch program[RC202401] ; [JQ23014]
WOS Keywords | RECONSTRUCTION
WOS Research Area | Computer Science
Language | English
Publisher | WILEY
WOS Record No. | WOS:001230174100001
Funding Organizations | Beijing Natural Science Foundation ; National Natural Science Foundation of China ; Open Project Program of State Key Laboratory of Virtual Reality Technology and Systems, Beihang University ; Wenzhou Business School 2024 Talent launch program
Content Type | Journal Article
Source URL | [http://ir.ia.ac.cn/handle/173211/58440]
Collection | State Key Laboratory of Pattern Recognition_3D Visual Computing
Author Affiliations | 1. Chinese Acad Sci, Inst Automat, Beijing, Peoples R China; 2. Beijing Bytedance Technol Co Ltd, Beijing, Peoples R China; 3. Beijing Univ Posts & Telecommun, Sch Artificial Intelligence, Beijing, Peoples R China
Recommended Citation (GB/T 7714) | Xu, Shibiao, Hua, Miao, Zhang, Jiguang, et al. Key-point-guided adaptive convolution and instance normalization for continuous transitive face reenactment of any person[J]. COMPUTER ANIMATION AND VIRTUAL WORLDS, 2024, 35(3): 15.
APA | Xu, Shibiao, Hua, Miao, Zhang, Jiguang, Zhang, Zhaohui, & Zhang, Xiaopeng. (2024). Key-point-guided adaptive convolution and instance normalization for continuous transitive face reenactment of any person. COMPUTER ANIMATION AND VIRTUAL WORLDS, 35(3), 15.
MLA | Xu, Shibiao, et al. "Key-point-guided adaptive convolution and instance normalization for continuous transitive face reenactment of any person". COMPUTER ANIMATION AND VIRTUAL WORLDS 35.3 (2024): 15.