FM2u-Net: Face Morphological Multi-Branch Network for Makeup-Invariant Face Verification

CVPR, pp. 5729-5739, 2020.

DOI: https://doi.org/10.1109/CVPR42600.2020.00577

Abstract:

It is challenging to learn a makeup-invariant face verification model, due to (1) insufficient makeup/non-makeup face training pairs, (2) the lack of diverse makeup faces, and (3) the significant appearance changes caused by cosmetics. To address these challenges, we propose a unified Face Morphological Multi-branch Network (FM2u-Net) ...

Introduction
  • This paper studies the task of face verification, which judges whether a pair of face images are the same person or not.
  • The authors evaluate several popular face recognition models on face recognition datasets; the results in Fig. 1 (b) demonstrate a dramatic performance drop once makeup is involved.
  • This motivates learning a robust model for the makeup-invariant face verification task
Highlights
  • This paper studies the task of face verification, which judges whether a pair of face images are the same person or not
  • To tackle the data problem and achieve makeup-invariant face recognition, we propose a unified face morphological multi-branch network (FM2u-Net), which includes two modules: a face morphology network (FM-Net) and an attention-based multi-branch network (AttM-Net)
  • The FM2u-Net model is clearly better than the widely used general face recognition networks, indicating that AttM-Net is a more effective way of learning makeup-invariant face features, adaptively fused from its four branches
  • Compared with BLAN, which uses Generative Adversarial Networks to remove the cosmetics from makeup faces, we enrich the makeup training samples by swapping local regions, and let the network learn discriminative features from the parts that often carry heavy cosmetics
  • Ablation variants: 'Baseline': use the training data to fine-tune a LightCNN-29v2 model pre-trained on CASIA-WebFace; 'Without (w.o.) face morphology network': train AttM-Net on the original training data without synthetic images; 'Without (w.o.) AttM-Net': use LightCNN-29v2 to learn the features from FM-Net outputs. We compare these variants of our model on a makeup dataset (M-501) and a general face verification dataset (LFW+)
  • Full model vs. 'w.o. FM-Net' and 'w.o. AttM-Net' in Tab. 3: 98.12% vs. 95.22% and 98.12% vs. 95.98% on M-501, showing the effectiveness of the face morphology network and AttM-Net
  • This paper proposes FM2u-Net to learn makeup-invariant face representation
Methods
  • In the M-501 dataset, there are about 200 paired faces for testing each round, including positive and negative pairs.
  • (2) General face verification task: the authors use ten-split evaluations with standard protocols as in [5, 26], directly extracting features from the models for the test sets and using the cosine similarity score.
  • For LFW+, the authors test the algorithm on 12,000 face pairs and report the mean accuracy.
  • In the IJB-A+ dataset, performance for 1:1 face verification is reported as the true accept rate (TAR) at fixed false accept rates (FAR), and performance for 1:N face identification is reported using Rank-N as the metric
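The evaluation protocol above (cosine-similarity pair scoring, mean accuracy on LFW+, and TAR at a fixed FAR on IJB-A+) can be sketched as follows. This is a minimal illustration, not the exact benchmark implementation; in particular, `tar_at_far`'s quantile-based thresholding is an assumed simplification of the official protocols:

```python
import numpy as np

def cosine_score(f1, f2):
    """Cosine similarity between two face feature vectors."""
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2)))

def verification_accuracy(scores, labels, threshold):
    """1:1 verification accuracy: a pair is declared 'same person'
    if its similarity score exceeds the threshold."""
    preds = scores >= threshold
    return float(np.mean(preds == labels))

def tar_at_far(scores, labels, target_far):
    """TAR at a given FAR: choose the threshold at which the impostor
    accept rate equals target_far, then report the accept rate on
    genuine (same-identity) pairs."""
    genuine = scores[labels == 1]
    impostor = scores[labels == 0]
    # Threshold = (1 - target_far) quantile of impostor scores.
    thr = np.quantile(impostor, 1.0 - target_far)
    return float(np.mean(genuine >= thr))
```

In the ten-split LFW+ protocol, the threshold for each split would be selected on the other nine splits and the mean accuracy over the ten held-out splits reported.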
Results
  • Results on Makeup Face Recognition Datasets

    The authors evaluate several competitors, including general and makeup-based face recognition methods, on four makeup datasets.
  • The authors compare FM2u-Net with some mainstream models built for general face recognition on the LFW+ and IJB-A+ datasets.
  • The superior performance results from (1) the realistic, large, and diverse makeup data generated by FM-Net and (2) the strong feature-learning capacity of AttM-Net.
  • Quality of synthesized images: the synthetic data cluster around the original images with the same identities,
  • which means the generation method effectively preserves identity information, essential for training a face recognition model.
  • Full model vs. 'w.o. FM-Net' and 'w.o. AttM-Net' in Tab. 3: 98.12% vs. 95.22% and 98.12% vs. 95.98% on M-501, showing the effectiveness of FM-Net and AttM-Net
Conclusion
  • This paper proposes FM2u-Net to learn makeup-invariant face representation.
  • FM2u-Net contains FM-Net and AttM-Net: FM-Net can effectively synthesize many diverse makeup faces, and AttM-Net can capture complementary global and local information.
  • AttM-Net applies AttM-FM to adaptively fuse the features from the different branches.
  • Extensive experiments show the method achieves competitive performance on makeup and general face recognition benchmarks.
  • The authors conduct ablation studies to verify the efficacy of each component in the model
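The page does not detail AttM-FM's internals, so the following is only a minimal sketch of attention-based branch fusion: per-branch weights are predicted from the concatenated branch features and used to combine one global and three local descriptors. The `attention_fuse` helper, its parametrization (`w`, `b`), and the shapes are illustrative assumptions, not the paper's exact module:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_fuse(branch_feats, w, b):
    """Adaptively fuse K branch features (e.g., one global + three local
    descriptors) using attention weights predicted from their concatenation.
    branch_feats: list of K vectors of equal dimension d.
    w: (K*d, K) scoring weights, b: (K,) bias -- an assumed parametrization.
    """
    concat = np.concatenate(branch_feats)      # (K*d,)
    attn = softmax(concat @ w + b)             # (K,) one weight per branch
    fused = sum(a * f for a, f in zip(attn, branch_feats))
    return fused / np.linalg.norm(fused)       # unit-length face descriptor
```

The attention weights let the network down-weight branches whose regions are heavily altered by cosmetics while still exploiting their complementary cues.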
Summary
  • Objectives:

    The authors aim to generate realistic facial images while preserving identity information through FM-Net. Given a testing face pair (Ii, Ij) ∈ Dm, where one face is with cosmetics and the other is not, the goal is to verify whether zi = zj.
Tables
  • Table1: Results on four Makeup datasets. Ext represents the extended makeup dataset. FR#: general face recognition (FR) models. MFR∗: makeup face recognition (MFR) models
  • Table2: Results on general face recognition datasets
  • Table3: Results of ablation study on two datasets. 'w.o.' means removing this module from the whole FM2u-Net framework
  • Table4: Results of the variants of FM-Net in FM2u-Net and results compared with other generative models
  • Table5: Results of the variants of AttM-Net in FM2u-Net
Related work
  • Face Recognition. Various deep learning methods have been proposed for general face recognition in the wild, such as FaceNet [40], LightCNN [54], VGGFace2 [5], and neural tensor networks [19, 21]. Apart from that, some works focus on specific challenges of face recognition, such as pose [33, 47, 10], illumination [59, 18], and occlusion [53]. Unlike the extensive exploration of those challenges, less attention has been paid to one important problem, cosmetics, as shown in Fig. 1. This motivates our work to explore effective solutions for made-up faces.
  • Makeup Face Verification. Cosmetics pose enormous challenges for the face verification task due to significant facial appearance changes. Recent works on analyzing makeup faces focus on makeup transfer [46, 14, 39, 32, 6] and makeup recommendation [31, 9, 3, 2]. Few efforts have been made on learning a makeup-invariant face verification model, and deep models for general face recognition may not be robust to heavy makeup (e.g., Fig. 1 (b)). To achieve a cosmetics-robust face recognition system, Sun et al. [45] proposed a model pre-trained on free videos and fine-tuned on small makeup datasets. To alleviate the negative effects of makeup, Li et al. [29] generated non-makeup images from makeup ones using GANs, and then used the synthesized non-makeup images for recognition. Unlike them, we introduce a unified FM2u-Net that effectively improves makeup face verification: it can synthesize many high-quality images with abundant makeup styles and extract more cosmetics-robust facial features.
  • Face Morphology. Recently, Sheehan et al. [42] suggested that the increased diversity and complexity of human facial morphology is the primary medium of individual identification and recognition. Face morphology has since been used to build photo-realistic virtual human faces [4], face detection [16], 3D face analysis [50], etc. In this work, we aim to generate realistic facial images while preserving identity information through FM-Net.
  • Patch-based Face Recognition. While global face representation approaches prevail, more and more researchers attempt to explore local features [43, 24, 58], which are believed to be more robust to variations in facial expression, illumination, and occlusion. For example, at a masquerade party we can identify an acquaintance by the eyes, the only facial components visible through a mask. Motivated by this, we design AttM-Net to aggregate global and local features via a fusion module.
  • Data Augmentation. Deep models are normally data hungry; therefore, data augmentation is widely used to increase the amount of training data [27, 36], including flipping, rotating, and resizing. Beyond these general methods, in face recognition, 3D models [61, 34, 49] and GANs [12, 30, 48, 61] are widely used to synthesize faces with rich intra-personal variations such as poses and expressions. The idea of synthesizing new images to help recognition has been explored and verified in many tasks, e.g., person re-id [37, 62] and one-shot learning [51, 8]. In this work, we propose a specialized data augmentation for cosmetics-robust face verification. Specifically, we propose FM-Net, which synthesizes new faces by swapping the facial components that are usually covered by heavy cosmetics. Unlike [20], which randomly selects swapping targets in an offline fashion, we choose swapping targets from similar faces in an end-to-end (online) way via the proposed generative model.
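The component-swapping idea can be illustrated with a much-simplified offline stand-in. Everything here is hypothetical: the `COMPONENTS` boxes and the `swap_components` helper are assumptions for illustration, whereas the actual FM-Net selects swapping targets from similar faces end-to-end via a learned generative model:

```python
import numpy as np

# Hypothetical bounding boxes (top, left, height, width) for components
# that heavy cosmetics usually cover, on a 128x128 aligned face crop.
COMPONENTS = {
    "left_eye":  (40, 25, 20, 30),
    "right_eye": (40, 73, 20, 30),
    "mouth":     (88, 44, 22, 40),
}

def swap_components(face, donor, names=("left_eye", "right_eye", "mouth")):
    """Return a synthetic training face: copy the listed component patches
    from a similar donor face into the target face, leaving identity-bearing
    regions (overall shape, forehead, cheeks) untouched."""
    out = face.copy()
    for name in names:
        t, l, h, w = COMPONENTS[name]
        out[t:t + h, l:l + w] = donor[t:t + h, l:l + w]
    return out
```

Such naive patch pasting would leave visible seams; the learned generator is what makes the swapped faces realistic while preserving identity.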
Funding
  • This work was supported in part by NSFC Projects (U1611461, 61702108), Science and Technology Commission of Shanghai Municipality Projects (19511120700, 19ZR1471800), Shanghai Municipal Science and Technology Major Project (2018SHZDZX01), and Shanghai Research and Innovation Functional Program (17DZ2260900)
Reference
  • https://github.com/AlfredXiangWu/LightCNN.
  • Taleb Alashkar, Songyao Jiang, and Yun Fu. Rule-based facial makeup recommendation system. In Automatic Face & Gesture Recognition (FG 2017), 2017 12th IEEE International Conference on, pages 325–330. IEEE, 2017.
  • Taleb Alashkar, Songyao Jiang, Shuyang Wang, and Yun Fu. Examples-rules guided deep neural network for makeup recommendation. In AAAI, pages 941–947, 2017.
  • AF Ayoub, Y Xiao, B Khambay, JP Siebert, and D Hadley. Towards building a photo-realistic virtual human face for craniomaxillofacial diagnosis and treatment planning. International journal of oral and maxillofacial surgery, 36(5):423–428, 2007.
  • Qiong Cao, Li Shen, Weidi Xie, Omkar M Parkhi, and Andrew Zisserman. Vggface2: A dataset for recognising faces across pose and age. In 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), pages 67–74. IEEE, 2018.
  • Huiwen Chang, Jingwan Lu, Fisher Yu, and Adam Finkelstein. Pairedcyclegan: Asymmetric style transfer for applying and removing makeup. In 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • Hung-Jen Chen, Ka-Ming Hui, Szu-Yu Wang, Li-Wu Tsao, Hong-Han Shuai, and Wen-Huang Cheng. Beautyglow: Ondemand makeup transfer framework with reversible generative network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10042– 10050, 2019.
  • Zitian Chen, Yanwei Fu, Yu-Xiong Wang, Lin Ma, Wei Liu, and Martial Hebert. Image deformation meta-networks for one-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8680– 8689, 2019.
  • Kyung-Yong Chung. Effect of facial makeup style recommendation on visual sensibility. Multimedia Tools and Applications, 71(2):843–853, 2014.
  • Jiankang Deng, Shiyang Cheng, Niannan Xue, Yuxiang Zhou, and Stefanos Zafeiriou. Uv-gan: Adversarial facial uv map completion for pose-invariant face recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
  • Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4690– 4699, 2019.
  • Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
  • Qiao Gu, Guanzhi Wang, Mang Tik Chiu, Yu-Wing Tai, and Chi-Keung Tang. Ladn: Local adversarial disentangling network for facial makeup and de-makeup. In Proceedings of the IEEE International Conference on Computer Vision, pages 10481–10490, 2019.
  • Dong Guo and Terence Sim. Digital face makeup by example. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 73–79. IEEE, 2009.
  • Guodong Guo, Lingyun Wen, and Shuicheng Yan. Face authentication with makeup changes. IEEE Transactions on Circuits and Systems for Video Technology, 24(5):814–825, 2014.
  • Chin-Chuan Han, Hong-Yuan Mark Liao, Gwo-Jong Yu, and Liang-Hua Chen. Fast face detection via morphologybased pre-processing. Pattern Recognition, 33(10):1701– 1712, 2000.
  • Zhenliang He, Wangmeng Zuo, Meina Kan, Shiguang Shan, and Xilin Chen. Attgan: Facial attribute editing by only changing what you want. IEEE Transactions on Image Processing, 2019.
  • Guosheng Hu, Chi Ho Chan, Fei Yan, William Christmas, and Josef Kittler. Robust face recognition by an albedo based 3d morphable model. In IEEE International Joint Conference on Biometrics, pages 1–8. IEEE, 2014.
  • Guosheng Hu, Yang Hua, Yang Yuan, Zhihong Zhang, Zheng Lu, Sankha S Mukherjee, Timothy M Hospedales, Neil M Robertson, and Yongxin Yang. Attribute-enhanced face recognition with neural tensor fusion networks. In ICCV, pages 3744–3753, 2017.
  • Guosheng Hu, Xiaojiang Peng, Yongxin Yang, Timothy M Hospedales, and Jakob Verbeek. Frankenstein: Learning deep face representations using small data. IEEE Transactions on Image Processing, 27(1):293–303, 2018.
  • Guosheng Hu, Yongxin Yang, Dong Yi, Josef Kittler, William Christmas, Stan Z Li, and Timothy Hospedales. When face recognition meets with deep learning: an evaluation of convolutional neural networks for face recognition. In Proceedings of the IEEE international conference on computer vision workshops, pages 142–150, 2015.
  • Junlin Hu, Yongxin Ge, Jiwen Lu, and Xin Feng. Makeuprobust face verification. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 2342–2346. IEEE, 2013.
  • Gary B Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical report, 2007.
  • Tai-Xiang Jiang, Ting-Zhu Huang, Xi-Le Zhao, and TianHui Ma. Patch-based principal component analysis for face recognition. Computational intelligence and neuroscience, 2017, 2017.
  • Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European conference on computer vision, pages 694–711.
  • Brendan F Klare, Ben Klein, Emma Taborsky, Austin Blanton, Jordan Cheney, Kristen Allen, Patrick Grother, Alan Mah, and Anil K Jain. Pushing the frontiers of unconstrained face detection and recognition: Iarpa janus benchmark a. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1931–1939, 2015.
  • Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
  • Tingting Li, Ruihe Qian, Chao Dong, Si Liu, Qiong Yan, Wenwu Zhu, and Liang Lin. Beautygan: Instance-level facial makeup transfer with deep generative adversarial network. In 2018 ACM Multimedia Conference on Multimedia Conference, pages 645–653. ACM, 2018.
  • Yi Li, Lingxiao Song, Xiang Wu, Ran He, and Tieniu Tan. Anti-makeup: Learning a bi-level adversarial network for makeup-invariant face verification. Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
  • Xiaodan Liang, Zhiting Hu, Hao Zhang, Chuang Gan, and Eric P Xing. Recurrent topic-transition gan for visual paragraph generation. In Proceedings of the IEEE International Conference on Computer Vision, pages 3362–3371, 2017.
  • Luoqi Liu, Junliang Xing, Si Liu, Hui Xu, Xi Zhou, and Shuicheng Yan. Wow! you are so beautiful today! ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 11(1s):20, 2014.
  • Si Liu, Xinyu Ou, Ruihe Qian, Wei Wang, and Xiaochun Cao. Makeup like a superstar: Deep localized makeup transfer network. arXiv preprint arXiv:1604.07102, 2016.
  • Iacopo Masi, Stephen Rawls, Gerard Medioni, and Prem Natarajan. Pose-aware face recognition in the wild. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
  • Iacopo Masi, Anh Tuấn Trần, Tal Hassner, Jatuporn Toy Leksut, and Gerard Medioni. Do we really need to collect millions of faces for effective face recognition? In European Conference on Computer Vision, pages 579–596.
  • Hieu V Nguyen and Li Bai. Cosine similarity metric learning for face verification. In Asian conference on computer vision, pages 709–720.
  • Xuelin Qian, Yanwei Fu, Yu-Gang Jiang, Xiangyang Xue, and Tao Xiang. Multi-scale deep learning architectures for person re-identification. ICCV, 2017.
  • Xuelin Qian, Yanwei Fu, Wenxuan Wang, Tao Xiang, Yang Wu, Yu-Gang Jiang, and Xiangyang Xue. Pose-normalized image generation for person re-identification. ECCV, 2018.
  • Gillian Rhodes, Alex Sumich, and Graham Byatt. Are average facial configurations attractive only because of their symmetry? Psychological Science, 10(1):52–58, 1999.
  • Kristina Scherbaum, Tobias Ritschel, Matthias Hullin, Thorsten Thormahlen, Volker Blanz, and Hans-Peter Seidel. Computer-suggested facial makeup. In Computer Graphics Forum, volume 30, pages 485–492. Wiley Online Library, 2011.
  • Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In CVPR, 2015.
  • Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pages 618–626, 2017.
  • Michael J Sheehan and Michael W Nachman. Morphological and population genomic evidence that human faces have evolved to signal individual identity. Nature communications, 5:4800, 2014.
  • Yu Su, Shiguang Shan, Xilin Chen, and Wen Gao. Hierarchical ensemble of global and local classifiers for face recognition. IEEE Transactions on image processing, 18(8):1885– 1896, 2009.
  • Yi Sun, Yuheng Chen, Xiaogang Wang, and Xiaoou Tang. Deep learning face representation by joint identificationverification. In NIPS, pages 1988–1996, 2014.
  • Yao Sun, Lejian Ren, Zhen Wei, Bin Liu, Yanlong Zhai, and Si Liu. A weakly supervised method for makeup-invariant face verification. Pattern Recognition, 66:153–159, 2017.
  • Wai-Shun Tong, Chi-Keung Tang, Michael S Brown, and Ying-Qing Xu. Example-based cosmetic transfer. In Computer Graphics and Applications, 2007. PG’07. 15th Pacific Conference on, pages 211–218. IEEE, 2007.
  • Luan Tran, Xi Yin, and Xiaoming Liu. Disentangled representation learning gan for pose-invariant face recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
  • Luan Tran, Xi Yin, and Xiaoming Liu. Disentangled representation learning gan for pose-invariant face recognition. In CVPR, volume 3, page 7, 2017.
  • Anh Tuấn Trần, Tal Hassner, Iacopo Masi, Eran Paz, Yuval Nirkin, and Gerard Medioni. Extreme 3d face reconstruction: Seeing through occlusions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3935–3944, 2018.
  • Enrico Vezzetti and Federica Marcolin. Geometry-based 3d face morphology analysis: soft-tissue landmark formalization. Multimedia tools and applications, 68(3):895–929, 2014.
  • Yuxiong Wang, Ross Girshick, Martial Hebert, and Bharath Hariharan. Low-shot learning from imaginary data. CVPR, 2018.
  • Zhanxiong Wang, Keke He, Yanwei Fu, Rui Feng, Yu-Gang Jiang, and Xiangyang Xue. Multi-task deep neural network for joint face recognition and facial attribute prediction. In ICMR. ACM, 2017.
  • John Wright, Allen Y Yang, Arvind Ganesh, S Shankar Sastry, and Yi Ma. Robust face recognition via sparse representation. IEEE transactions on pattern analysis and machine intelligence, 31(2):210–227, 2009.
  • Xiang Wu, Ran He, Zhenan Sun, and Tieniu Tan. A light cnn for deep face representation with noisy labels. IEEE Transactions on Information Forensics and Security, pages 2884–2896, 2018.
  • Fen Xiao, Wenzheng Deng, Liangchan Peng, Chunhong Cao, Kai Hu, and Xieping Gao. Msdnn: Multi-scale deep neural network for salient object detection. arXiv preprint arXiv:1801.04187, 2018.
  • Dong Yi, Zhen Lei, Shengcai Liao, and Stan Z Li. Learning face representation from scratch. arXiv preprint arXiv:1411.7923, 2014.
  • Kaipeng Zhang, Zhanpeng Zhang, Zhifeng Li, and Yu Qiao. Joint face detection and alignment using multi-task cascaded convolutional networks. IEEE Signal Processing Letters, 2016.
  • Lingfeng Zhang, Pengfei Dou, and Ioannis A Kakadiaris. Patch-based face recognition using a hierarchical multi-label matcher. Image and Vision Computing, 73:28–39, 2018.
  • Wuming Zhang, Xi Zhao, Jean-Marie Morvan, and Liming Chen. Improving shadow suppression for illumination robust face recognition. IEEE transactions on pattern analysis and machine intelligence, 41(3):611–624, 2019.
  • Xiao Zhang, Rui Zhao, Yu Qiao, Xiaogang Wang, and Hongsheng Li. Adacos: Adaptively scaling cosine logits for effectively learning deep face representations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10823–10832, 2019.
  • Jian Zhao, Lin Xiong, Panasonic Karlekar Jayashree, Jianshu Li, Fang Zhao, Zhecan Wang, Panasonic Sugiri Pranata, Panasonic Shengmei Shen, Shuicheng Yan, and Jiashi Feng. Dual-agent gans for photorealistic and identity preserving profile face synthesis. In Advances in Neural Information Processing Systems, pages 66–76, 2017.
  • Zhedong Zheng, Liang Zheng, and Yi Yang. Unlabeled samples generated by gan improve the person re-identification baseline in vitro. In Proceedings of the IEEE International Conference on Computer Vision, pages 3754–3762, 2017.
  • Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycleconsistent adversarial networks. In Proceedings of the IEEE international conference on computer vision, pages 2223– 2232, 2017.