Learning Invariant Deep Representation for NIR-VIS Face Recognition
Ran He 1,2,3,4; Xiang Wu 1,2; Zhenan Sun 1,2,3,4; Tieniu Tan 1,2,3,4
2017
Conference date | 4–9 February 2017 |
Conference location | San Francisco, California, USA |
Abstract |
Visual versus near-infrared (VIS-NIR) face recognition remains a challenging heterogeneous task because of the large appearance difference between the VIS and NIR modalities. This paper presents a deep convolutional network approach that uses a single network to map both NIR and VIS images into a compact Euclidean space. The low-level layers of this network are trained only on large-scale VIS data, and each convolutional layer is implemented with the simplest case of the maxout operator. The high-level layer is divided into two orthogonal subspaces that contain modality-invariant identity information and modality-variant spectrum information, respectively. Our joint formulation leads to an alternating minimization approach for learning the deep representation at training time and to efficient computation on heterogeneous data at testing time. Experimental evaluations show that our method achieves a 94% verification rate at FAR = 0.1% on the challenging CASIA NIR-VIS 2.0 face recognition dataset. Compared with state-of-the-art methods, it reduces the error rate by 58% using only a compact 64-D representation. |
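The abstract mentions two concrete mechanisms: the simplest case of the maxout operator (an element-wise max over paired feature channels) and a high-level layer split into two orthogonal subspaces for identity and spectrum information. A minimal NumPy sketch of both ideas follows; it is illustrative only, not the authors' implementation, and all names (`maxout_pair`, `W_id`, `W_spec`) and dimensions are assumptions.

```python
import numpy as np

def maxout_pair(x):
    """Simplest maxout: element-wise max over two halves of the channel
    axis, halving the number of channels (assumed interpretation)."""
    c = x.shape[-1]
    assert c % 2 == 0, "channel count must be even"
    return np.maximum(x[..., :c // 2], x[..., c // 2:])

rng = np.random.default_rng(0)

# Hypothetical shared high-level feature for one face image.
feat = rng.standard_normal(128)

# Build an orthonormal basis and split it into two orthogonal subspaces.
Q, _ = np.linalg.qr(rng.standard_normal((128, 128)))
W_id, W_spec = Q[:, :64], Q[:, 64:]

identity_code = W_id.T @ feat    # 64-D modality-invariant representation
spectrum_code = W_spec.T @ feat  # modality-variant spectrum component

# The two projections are orthogonal, so they carry disjoint information.
print(np.allclose(W_id.T @ W_spec, 0.0))  # → True (up to float tolerance)
```

At test time only the 64-D identity code would be compared across NIR and VIS images, which matches the compact 64-D representation reported in the abstract.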
Content type | Conference paper |
Source URL | [http://ir.ia.ac.cn/handle/173211/19726] |
Collection | Institute of Automation, Center for Research on Intelligent Perception and Computing |
Author affiliations | 1. National Laboratory of Pattern Recognition, CASIA; 2. Center for Research on Intelligent Perception and Computing, CASIA; 3. Center for Excellence in Brain Science and Intelligence Technology, CAS; 4. University of Chinese Academy of Sciences |
Recommended citation (GB/T 7714) | Ran He, Xiang Wu, Zhenan Sun, et al. Learning Invariant Deep Representation for NIR-VIS Face Recognition[C]. San Francisco, California, USA, 4–9 February 2017. |