
High-Resolution Face Fusion for Gender Conversion

IEEE Transactions on Systems, Man, and Cybernetics, Part A, no. 2 (2011): 226-237


Abstract

This paper presents an integrated face image fusion framework, which combines a hierarchical compositional paradigm with seamless image-editing techniques, for gender conversion. In our framework a high-resolution face is represented by a probabilistic graphical model that decomposes a human face into several parts (facial components) con…
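The "seamless image-editing techniques" the abstract refers to are in the spirit of Poisson image editing (Perez et al., cited in the references): the gradients of a source region are pasted into the target while the boundary values come from the target image. A minimal 1-D sketch of this idea, purely illustrative (the paper works on 2-D face images):

```python
import numpy as np

def seamless_blend_1d(target, source, a, b):
    """Paste the gradients of source[a:b] into target[a:b], keeping
    target's values just outside the region (discrete Poisson editing).
    Requires 1 <= a < b <= len(target) - 1."""
    t = target.astype(float).copy()
    n = b - a  # number of interior unknowns
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    for k in range(n):
        i = a + k
        A[k, k] = 2.0
        # Right-hand side: discrete Laplacian of the source signal.
        rhs[k] = 2 * source[i] - source[i - 1] - source[i + 1]
        if k > 0:
            A[k, k - 1] = -1.0
        else:
            rhs[k] += t[a - 1]  # left boundary value taken from the target
        if k < n - 1:
            A[k, k + 1] = -1.0
        else:
            rhs[k] += t[b]      # right boundary value taken from the target
    t[a:b] = np.linalg.solve(A, rhs)
    return t
```

When the source has the same gradients as the target, the blend reproduces the target exactly; otherwise the source's gradients are kept while any intensity offset is smoothly absorbed at the region boundary.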

Introduction
  • Face image fusion is attracting increasing attention from both computer vision and graphics due to its many interesting applications, such as psychological experiments, forensics, digital makeup, face image editing, etc. [29].
  • The central objective of face image fusion is to integrate information from multiple face images to achieve task-oriented visual results.
  • The authors propose an automatic gender conversion approach that can visibly convert any given face to the opposite gender while subjectively preserving its identity.
  • Manuscript received November 14, 2007; revised January 25, 2009 and July 17, 2009; accepted December 19, 2009.
  • Date of publication August 30, 2010; date of current version January 19, 2011.
Highlights
  • We propose an automatic gender conversion approach that can visibly convert any given face to the opposite gender while subjectively preserving its identity
  • We first model the texture of each part with active appearance models (AAMs), learn the distribution of AAM parameters in the two gender groups, and transform image parameters toward the distribution of the opposite gender
  • We introduce multiple regression analysis (MRA), a statistical analysis approach widely used in psychological experiments, to study the relative contribution of each facial part to gender perception
  • We have proposed a fusion strategy for gender conversion
  • Due to the nonexistence of ground truth in gender conversion, three task-oriented criteria have been proposed for result evaluation, based on which both subjective and objective experiments have been conducted to validate the proposed strategy
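The highlighted AAM step (learn per-gender parameter distributions, then push a face's parameters toward the opposite gender) can be sketched as follows. This assumes independent Gaussian parameter distributions per gender and a standardize-and-remap transform; the paper's exact transform may differ, so treat this as an illustrative prototype-shift only:

```python
import numpy as np

def convert_gender_params(p, mu_src, sigma_src, mu_dst, sigma_dst, strength=1.0):
    """Shift an AAM parameter vector p from the source-gender distribution
    toward the target-gender distribution (per-dimension Gaussians assumed).
    strength in [0, 1] interpolates between the original and fully
    converted parameters."""
    z = (p - mu_src) / sigma_src      # standardize under the source gender
    p_dst = mu_dst + z * sigma_dst    # re-express under the target gender
    return (1.0 - strength) * p + strength * p_dst
```

With `strength=1.0`, a parameter vector sitting at the source-gender mean lands exactly on the target-gender mean; intermediate strengths give partially converted faces, which is useful for generating graded stimuli.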
Results
  • The authors collect 8000 high-resolution Asian face images, among which 4000 are males and 4000 are females.
  • For each image in this database, 90 landmarks are labeled manually.
  • Based on these labels, the authors build the graphical face model and learn the transition probabilities between two gender groups.
  • The authors display the results enhanced with external features in Fig. 13.
  • In the following experiments, only experiment five is conducted on the enhanced faces, while the results of the other experiments are from images without external features.
Conclusion
  • In this paper, the authors have proposed a fusion strategy for gender conversion.
  • The visually photorealistic and statistically reasonable results would potentially benefit some real-world applications: 1) providing reference templates to help look for the lost opposite-sex siblings of given subjects; 2) generating transsexual makeup results that can be applied in entertainment such as filmmaking and computer games; 3) producing stimuli for gender-related psychological experiments; and 4) extending the method to fusion between other groups, enabling further interesting applications, e.g., fusion between two age groups or between film stars and ordinary people.
Tables
  • Table1: SUMMARY OF PREVIOUS WORK ON GENDER CLASSIFICATION
  • Table2: RELATIVE CONTRIBUTION OF GENDER CLASSIFICATION
  • Table3: PERCENTAGE OF SYNTHETIC FACES WITH NOTICEABLE ARTIFACTS
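The relative-contribution figures summarized in Table 2 come from multiple regression analysis (MRA): perceived gender is regressed on per-part predictors, and the standardized coefficients rank each facial part's contribution. A minimal sketch with synthetic data (not the paper's data or exact protocol):

```python
import numpy as np

def relative_contribution(X, y):
    """Ordinary least squares of a gender score y on per-part predictors X
    (one column per facial part).  Returns the standardized regression
    coefficients, whose magnitudes rank each part's contribution."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize predictors
    ys = (y - y.mean()) / y.std()               # standardize the response
    design = np.column_stack([np.ones(len(ys)), Xs])
    beta, *_ = np.linalg.lstsq(design, ys, rcond=None)
    return beta[1:]                             # drop the intercept
```

On synthetic data where the first "part" drives the gender score four times as strongly as the second, the first standardized coefficient dominates, mirroring how Table 2 ranks facial parts.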
Funding
  • This work was supported by the National Natural Science Foundation of China under Grants 60970156 and 60728203, by the National High-Technology Research and Development Program of China (863 Program) under Grant 2007AA01Z340, and by the National Program on Key Basic Research Projects (973 Program) under Grant 2009CB320902
References
  • S. Baluja and H. A. Rowley, “Boosting sex identification performance,” Int. J. Comput. Vis., vol. 71, no. 1, pp. 111–119, Jan. 2007.
  • S. Belongie, J. Malik, and J. Puzicha, “Shape matching and object recognition using shape contexts,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 4, pp. 509–522, Apr. 2002.
  • W. Cao, B. Li, and Y. Zhang, “A remote sensing image fusion method based on PCA transform and wavelet packet transform,” in Proc. Int. Conf. Neural Netw. Signal Process., 2003, pp. 978–981.
  • H. Chen, “A multiresolution image fusion based on principal component analysis,” in Proc. Int. Conf. Image Graph., 2007, pp. 737–741.
  • H. Chen, Z. Xu, Z. Liu, and S. C. Zhu, “Composite templates for cloth modeling and sketching,” in Proc. Int. Conf. Comput. Vis. Pattern Recog., 2005, pp. 943–950.
  • L. J. Chipman, T. M. Orr, and L. N. Graham, “Wavelets and image fusion,” in Proc. Int. Conf. Image Process., 1995, pp. 248–251.
  • T. F. Cootes, G. J. Edwards, and C. J. Taylor, “Active appearance models,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 6, pp. 681–685, Jun. 2001.
  • N. P. Costen, M. Brown, and S. Akamatsu, “Sparse models for gender classification,” in Proc. Int. Conf. Autom. Face Gesture Recog., 2004, pp. 201–206.
  • J. M. Fellous, “Gender discrimination and prediction on the basis of facial metric information,” Vis. Res., vol. 37, no. 14, pp. 1961–1973, Jul. 1997.
  • W. Gao, B. Cao, S. Shan, X. Chen, D. Zhou, X. Zhang, and D. Zhao, “The CAS-PEAL large-scale Chinese face database and baseline evaluations,” IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 38, no. 1, pp. 149–161, Jan. 2008.
  • A. B. Graf and F. A. Wichmann, “Gender classification of human faces,” in Proc. Int. Workshop Biologically Motivated Comput. Vis., 2002, pp. 491–500.
  • C. Guo, S. C. Zhu, and Y. Wu, “Primal sketch: Integrating texture and structure,” Comput. Vis. Image Understanding, vol. 106, no. 1, pp. 5–19, Apr. 2007.
  • S. Gutta, J. Huang, P. Jonathon, and H. Wechsler, “Mixture of experts for classification of gender, ethnic origin, and pose of human faces,” IEEE Trans. Neural Netw., vol. 11, no. 4, pp. 948–960, Jul. 2000.
  • S. Gutta, H. Wechsler, and P. J. Phillips, “Gender and ethnic classification of face images,” in Proc. Int. Conf. Autom. Face Gesture Recog., 1998, pp. 194–199.
  • P. Hill, N. Canagarajah, and D. Bull, “Image fusion using complex wavelets,” in Proc. 7th Int. Conf. Inf. Fusion, 2002, pp. 487–496.
  • Z. Ji, X. Lian, and B. Lu, “Gender classification by information fusion of hair and face,” in State of the Art in Face Recognition. Vienna, Austria: IN-TECH, 2009.
  • H. Kim, D. Kim, Z. Ghahramani, and S. Bang, “Appearance-based gender classification with Gaussian processes,” Pattern Recognit. Lett., vol. 27, no. 6, pp. 618–626, Apr. 2006.
  • X. Leng and Y. Wang, “Improving generalization for gender classification,” in Proc. Int. Conf. Image Process., 2008, pp. 1656–1659.
  • J. Lewis, R. J. Callaghan, S. G. Nikolov, D. R. Bull, and C. N. Canagarajah, “Region-based image fusion using complex wavelets,” in Proc. Int. Conf. Image Fusion, 2004, pp. 555–562.
  • H. Li, B. S. Manjunath, and S. K. Mitra, “Multi-sensor image fusion using the wavelet transform,” in Proc. Int. Conf. Image Process., 1994, pp. 51–55.
  • H. Lian, B. Lu, E. Takikawa, and S. Hosoi, “Gender recognition using a min-max modular support vector machine,” in Proc. 1st Int. Conf. Natural Comput., 2005, pp. 438–441.
  • L. Lin, S. Peng, J. Porway, S. C. Zhu, and Y. Wang, “An empirical study of object category recognition: Sequential testing with generalized samples,” in Proc. Int. Conf. Comput. Vis., 2007, pp. 1–8.
  • L. Lin, S. C. Zhu, and Y. Wang, “Layered graph match with graph editing,” in Proc. Int. Conf. Comput. Vis. Pattern Recog., 2007, pp. 1–8.
  • H. Lu, Y. Huang, Y. Chen, and D. Yang, “Automatic gender recognition based on pixel-pattern-based texture feature,” J. Real-Time Image Process., vol. 3, no. 1/2, pp. 109–116, Mar. 2008.
  • E. Makinen and R. Raisamo, “Evaluation of gender classification methods with automatically detected and aligned faces,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 3, pp. 541–547, Mar. 2008.
  • E. Makinen and R. Raisamo, “An experimental comparison of gender classification methods,” Pattern Recognit. Lett., vol. 29, no. 10, pp. 1544–1556, Jul. 2008.
  • N. Mitianoudis and P. T. Stathaki, “Pixel-based and region-based image fusion schemes using ICA bases,” in Proc. Int. Conf. Inf. Fusion, 2007, pp. 131–142.
  • B. Moghaddam and M. Yang, “Learning gender with support faces,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 5, pp. 707–711, May 2002.
  • U. Mohammed, S. J. D. Prince, and J. Kautz, “Visio-lization: Generating novel facial images,” ACM Trans. Graphics, vol. 28, no. 3, article 57, Aug. 2009.
  • A. F. Norcio and J. Stanley, “Adaptive human–computer interfaces: A literature survey and perspective,” IEEE Trans. Syst., Man, Cybern., vol. 19, no. 2, pp. 399–408, Mar./Apr. 1989.
  • J. Nunez, X. Otazu, O. Fors, A. Prades, V. Pala, and R. Arbiol, “Multiresolution-based image fusion with additive wavelet decomposition,” IEEE Trans. Geosci. Remote Sens., vol. 37, no. 3, pp. 1204–1211, May 1999.
  • P. Perez, M. Gangnet, and A. Blake, “Poisson image editing,” ACM Trans. Graph., vol. 22, no. 3, pp. 313–318, Jul. 2003.
  • V. S. Petrovic and C. S. Xydeas, “Gradient-based multiresolution image fusion,” IEEE Trans. Image Process., vol. 13, no. 2, pp. 228–237, Feb. 2004.
  • G. Piella, “A general framework for multiresolution image fusion,” Inf. Fusion, vol. 4, no. 4, pp. 258–280, Dec. 2003.
  • C. Ramesh and T. Ranjith, “Fusion performance measures and a lifting wavelet transform based algorithm for image fusion,” in Proc. Int. Conf. Inf. Fusion, 2002, pp. 317–320.
  • D. A. Rowland and D. I. Perrett, “Manipulating facial appearance through shape and color,” IEEE Comput. Graph. Appl., vol. 15, no. 5, pp. 70–76, Sep. 1995.
  • F. Sadjadi, “Comparative image fusion analysis,” in Proc. 2nd Int. Workshop Object Tracking Classification, 2005, pp. 8–15.
  • A. Samal, V. Subramani, and D. Marx, “Analysis of sexual dimorphism in human face,” J. Vis. Commun. Image Represent., vol. 18, no. 6, pp. 453–463, Dec. 2007.
  • Y. Su, S. Shan, X. Chen, and W. Gao, “Hierarchical ensemble of global and local classifiers for face recognition,” in Proc. IEEE Int. Conf. Comput. Vis., 2007, pp. 1–8.
  • Z. Sun, G. Bebis, X. Yuan, and S. J. Louis, “Genetic feature subset selection for gender classification: A comparison study,” in Proc. 6th IEEE Workshop Appl. Comput. Vis., 2002, pp. 165–170.
  • J. Suo, F. Min, S. C. Zhu, S. Shan, and X. Chen, “A multi-resolution dynamic model for face aging simulation,” in Proc. Int. Conf. Comput. Vis. Pattern Recog., 2007, pp. 1–8.
  • V. Thomas, N. V. Chawla, K. W. Bowyer, and P. J. Flynn, “Learning to predict gender from iris images,” in Proc. Int. Conf. Biometrics: Theory, Appl. Syst., 2007, pp. 1–5.
  • B. Tiddeman, M. Burt, and D. Perrett, “Prototyping and transforming facial textures for perception research,” IEEE Comput. Graph. Appl., vol. 21, no. 5, pp. 42–50, Sep./Oct. 2001.
  • B. P. Tiddeman, M. Stirrat, and D. Perrett, “Towards realism in facial transformation: Results of a wavelet MRF method,” Comput. Graph. Forum, vol. 24, no. 3, pp. 449–456, Sep. 2005.
  • K. Ueki, H. Komatsu, S. Imaizumi, K. Kaneko, N. Sekine, J. Katto, and T. Kobayashi, “A method of gender classification by integrating facial, hairstyle, and clothing images,” in Proc. Int. Conf. Pattern Recog., 2004, pp. 446–449.
  • M. E. Ulug, “A quantitative metric for comparison of night vision fusion algorithms,” Proc. SPIE, vol. 4051, pp. 80–88, 2000.
  • J. Wen, Y. Li, and H. Gong, “Remote sensing image fusion on gradient field,” in Proc. Int. Conf. Pattern Recog., 2006, pp. 643–646.
  • L. Wiskott and J. M. Fellous, “Face recognition and gender determination,” in Proc. Int. Conf. Autom. Face Gesture Recog., 1995, pp. 92–97.
  • D. B. Wright and B. Sladden, “An own gender bias and the importance of hair in face recognition,” Acta Psychologica, vol. 114, no. 1, pp. 101–114, Sep. 2003.
  • Z. Xu, H. Chen, S. C. Zhu, and J. Luo, “A hierarchical compositional model for face representation and sketching,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 6, pp. 955–969, Jun. 2008.
  • B. Yao, X. Yang, and S. C. Zhu, “Introduction to a large-scale general purpose ground truth database: Methodology, annotation tool and benchmarks,” in Proc. 6th Int. Workshop Energy Minimization Methods Comput. Vis. Pattern Recog., 2007, pp. 169–183.
  • S. C. Zhu and D. Mumford, “A stochastic grammar of images,” Found. Trends Comput. Graph. Vis., vol. 2, no. 4, pp. 259–362, Jul. 2006.
  • S. C. Zhu and A. L. Yuille, “A flexible object recognition and modeling system,” Int. J. Comput. Vis., vol. 20, no. 3, pp. 187–212, Dec. 1996.
Authors
  • Jinli Suo received the B.S. degree from Shandong University, Jinan, China, in 2004. She is currently working toward the Ph.D. degree at the Graduate University of the Chinese Academy of Sciences, Beijing, China.
  • Liang Lin was born in 1981. He received the B.S. and Ph.D. degrees from the Beijing Institute of Technology, Beijing, China, in 1999 and 2008, respectively. He was a joint Ph.D. student in the Department of Statistics, University of California, Los Angeles (UCLA), during 2006–2007. He was a Postdoctoral Research Fellow in the Center for Image and Vision Science, UCLA, and a Senior Research Scientist with the Lotus Hill Research Institute, Wuhan, China, during 2007–2009. He is currently an Associate Professor with the School of Software, Sun Yat-Sen University, Guangzhou, China. His research interests include, but are not limited to, computer vision, statistical modeling and computing, and pattern recognition.
  • Shiguang Shan (M’04) received the M.S. degree in computer science from the Harbin Institute of Technology, Harbin, China, in 1999 and the Ph.D. degree in computer science from the Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS), Beijing, China, in 2004. He has been with ICT since 2002, where he has been an Associate Professor with the Key Laboratory of Intelligent Information Processing since 2005 and is also the Vice Director of the ICT-ISVISION Joint Research and Development Laboratory for Face Recognition. His research interests include image analysis, pattern recognition, and computer vision. He is particularly focusing on face-recognition-related research topics and has published more than 120 papers on related research topics. Dr. Shan received the State Scientific and Technological Progress Awards in 2005 in China for his work on face recognition technologies. One of his coauthored CVPR 2008 papers won the Best Student Poster Award Runner-up. He also won the Silver Medal of the Scopus’ Future Star of Science Award in 2009.
  • Xilin Chen (M’00–SM’09) received the B.S., M.S., and Ph.D. degrees in computer science from the Harbin Institute of Technology, Harbin, China, in 1988, 1991, and 1994, respectively. He was a Professor with the Harbin Institute of Technology from 1999 to 2005. He was a Visiting Scholar with Carnegie Mellon University, Pittsburgh, PA, from 2001 to 2004. Since August 2004, he has been with the Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China, where he is also with the Key Laboratory of Intelligent Information Processing and the ICT-ISVISION Joint Research and Development Laboratory for Face Recognition. His research interests include image processing, pattern recognition, computer vision, and multimodal interfaces. Dr. Chen has served as a program committee member for more than 20 international and national conferences. He has received several awards, including the State Scientific and Technological Progress Award in 2000, 2003, and 2005 in China for his research work.
  • Wen Gao (M’92–SM’05–F’09) received the M.S. degree in computer science from the Harbin Institute of Technology, Harbin, China, in 1985 and the Ph.D. degree in electronics engineering from the University of Tokyo, Tokyo, Japan, in 1991.