
A Comparison Of Methods Of Facial Expression Recognition

2018 WRC SYMPOSIUM ON ADVANCED ROBOTICS AND AUTOMATION (WRC SARA), pp.261-268, (2018)


Abstract

Emotional recognition based on facial expressions is a very active research topic in the field of human-computer interaction. There is a great body of in-depth work in this area. In this paper, we analyze and compare the state-of-the-art facial expression recognition methods, propose some evaluation dimensions and disc...

Introduction
  • Companion robots capable of emotion recognition are attracting extensive attention. Studies indicate that facial expressions play a major role in human emotional expression [1].
  • The authors investigate the works on facial expression recognition in recent years.
  • By analyzing and comparing the related works, the authors aim to find methods suitable for recognizing facial expressions in different scenes, and to make suggestions for future work.
  • This paper has three main contributions: first, the authors carry out detailed research on recent works on facial expression recognition.
  • On the basis of the comparative results, the authors identify the methods that are suitable for recognizing facial expressions in different scenes.
Highlights
  • Companion robots capable of emotion recognition are attracting extensive attention.
  • EVALUATION DIMENSIONS: After analyzing and summarizing the related works, we propose several reasonable dimensions for evaluating different facial expression recognition methods:
    ● Real-time: Due to the complexity and variability of human emotion expression, a facial expression recognition method needs strong real-time performance in order to respond to the user's emotions in time.
    ● Practicality: The generalization ability of a facial expression recognition method is of much significance, especially for companion robots. Real-world scenarios should be taken into careful consideration, such as illumination variation, face occlusion and so on.
    ● Accuracy: Recognition accuracy is an objective indicator when comparing different methods; whatever methods are compared, the ultimate goal is to design a system that achieves higher accuracy on facial expression recognition.
    ● Posed or spontaneous expressions: Facial expression datasets can be broadly divided into two categories, posed and spontaneous.
  • COMPARISON OF RELATED WORKS: This section compares recent work in the field of facial expression recognition based on the proposed evaluation dimensions.
  • Future work should pay more attention to real-time spontaneous expression recognition in real-world scenarios, taking full account of the impact of problems such as illumination variation, head motion and partial occlusion, to enhance the robustness of facial expression recognition systems.
  • Few works consider recognizing facial expressions from images or video sequences obtained in real-world scenarios.
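The "Real-time" dimension above can be made concrete with a small latency harness. This is a minimal sketch, not from the paper: `recognize` is a hypothetical stand-in for any expression classifier, and the 33.3 ms budget is an assumed target corresponding to 30 fps video.

```python
import time

def recognize(frame):
    """Hypothetical stand-in for a facial expression classifier.

    Returns one of 7 basic expression class indices (a toy computation)."""
    return sum(frame) % 7

def mean_latency_ms(frames, classifier):
    """Average wall-clock time per frame, in milliseconds."""
    start = time.perf_counter()
    for frame in frames:
        classifier(frame)
    return (time.perf_counter() - start) * 1000 / len(frames)

frames = [list(range(100))] * 50      # dummy "video" of 50 identical frames
latency = mean_latency_ms(frames, recognize)
REALTIME_BUDGET_MS = 33.3             # assumed ~30 fps budget
print(f"{latency:.3f} ms/frame, real-time: {latency < REALTIME_BUDGET_MS}")
```

In practice the same harness would wrap a real detector + classifier pipeline; the point is that real-time capability is a measurable property, not a qualitative label.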
Methods
  • The selected feature vectors serve as input to the chosen classifier, which assigns them to the basic expression classes.
  • The general type-2 fuzzy set takes full account of these two uncertainties by introducing the secondary membership function, but the computational complexity is obviously greatly increased.
  • These two methods have their own advantages and disadvantages in practical application; the choice is a trade-off between computational complexity and recognition accuracy.
  • For recognition on non-frontal facial images, [9] and [83] both opt to use 3D databases to obtain facial images from different views. [9] uses VGG-Face for classification, with a recognition rate of 78% on the BP4D dataset; [83] uses SVM for classification, with a 78% recognition rate on the BU-3DFE dataset. Both leave considerable room for improvement.
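The complexity gap between interval and general type-2 fuzzy sets can be sketched with toy triangular membership functions (this is an illustration of the general idea, not the construction used in [82]; all shape parameters below are made up). An interval type-2 set is described only by a lower and an upper membership bound, so it costs two evaluations per input, while a general type-2 set additionally weights every point of that interval with a secondary membership function, multiplying the work per input point.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def it2_membership(x):
    """Interval type-2: a [lower, upper] bound pair -- two evaluations."""
    lower = tri(x, 0.2, 0.5, 0.8)   # inner (lower) membership function
    upper = tri(x, 0.1, 0.5, 0.9)   # outer (upper) membership function
    return lower, upper

def gt2_centroid(x, steps=100):
    """General type-2: weight every primary membership u in [lower, upper]
    by a secondary membership f(u) -- O(steps) work per input point."""
    lower, upper = it2_membership(x)
    if upper == lower:
        return lower
    num = den = 0.0
    for i in range(steps + 1):
        u = lower + (upper - lower) * i / steps
        f = tri(u, lower, (lower + upper) / 2, upper)  # toy secondary MF
        num += u * f
        den += f
    return num / den if den else (lower + upper) / 2

lo, hi = it2_membership(0.4)
print(lo, hi, gt2_centroid(0.4))
```

The interval variant collapses the secondary membership to a constant over the footprint of uncertainty, which is exactly why it is cheaper and why the general variant can, in principle, model the uncertainty more faithfully.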
Results
  • On the basis of analyzing and comparing the results, the authors evaluate various representative related works from all aspects and draw conclusions.
Conclusion
  • CONCLUSION & FUTURE WORK

    In recent years, much research has made in-depth progress in the field of facial expression recognition.
  • On the basis of analyzing and summarizing the related works, some reasonable evaluation dimensions for the comparison of facial expression recognition methods are proposed.
  • Various works are compared under the proposed dimensions, and the advantages and disadvantages are analyzed according to the comparative results.
  • Few works consider recognizing facial expressions from images or video sequences obtained in real-world scenarios.
  • Stable facial expression recognition should be achieved in unconstrained real-world scenarios.
Tables
  • Table 1: COMPARISON OF RELATED WORKS ON FACIAL EXPRESSION RECOGNITION
Funding
  • This work is in part supported by the PKU-NTU Joint Research Institute (JRI) sponsored by a donation from the Ng Teng Fong Charitable Foundation
References
  • [1] Marian Stewart Bartlett, Gwen C. Littlewort, Mark G. Frank, Claudia Lainscsek, Ian R. Fasel, Javier R. Movellan. Automatic Recognition of Facial Actions in Spontaneous Expressions. Journal of Multimedia, Vol. 1, No. 6, Sep. 2006, pp. 22-35.
  • [2] Gianluca Donato, Marian Stewart Bartlett, Joseph C. Hager, Paul Ekman, and Terrence J. Sejnowski. Classifying Facial Actions. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 21, No. 10, Oct. 1999, pp. 974-989.
  • [3] James Jenn-Jier Lien, Takeo Kanade, Jeffrey F. Cohn, Ching-Chung Li. Detection, tracking, and classification of action units in facial expression. Robotics and Autonomous Systems 31 (2000), pp. 131-146.
  • [4] Bihan Jiang, Michel F. Valstar and Maja Pantic. Action Unit detection using sparse appearance descriptors in space-time video volumes. 2007, pp. 314-321.
  • [5] Yuqian Zhou, Bertram E. Shi. Action Unit Selective Feature Maps in Deep Networks for Facial Expression Recognition. IEEE, 978-1-5090-6182-2/17, 2017, pp. 2031-2038.
  • [6] Irene Kotsia, Stefanos Zafeiriou, Ioannis Pitas. Texture and shape information fusion for facial expression and facial action unit recognition. Pattern Recognition Society, 0031-3203, 2007, pp. 833-851.
  • [7] Wen-Sheng Chu, Fernando De la Torre, Jeffery F. Cohn. Selective Transfer Machine for Personalized Facial Action Unit Detection. IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 3515-3522.
  • [8] Maja Pantic and Leon J. M. Rothkrantz. Facial Action Recognition for Facial Expression Analysis from Static Face Images. IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, Vol. 34, No. 3, Jun. 2004, pp. 1449-1460.
  • [9] Chuangao Tang, Wenming Zheng, Jingwei Yan, Qiang Li, Yang Li, Tong Zhang, Zhen Cui. View-Independent Facial Action Unit Detection. IEEE 12th International Conference on Automatic Face & Gesture Recognition, 2017, pp. 878-882.
  • [10] Amit Konar and Aruna Chakraborty. Emotion Recognition: A Pattern Analysis Approach, First Edition, 2015.
  • [11] L.F. Chen, Y.S. Yen. Taiwanese Facial Expression Image Database. Brain Mapping Laboratory, Institute of Brain Science, National Yang-Ming University, Taipei, Taiwan, 2007.
  • [12] Peng Yang, Qingshan Liu, Dimitris N. Metaxas. Boosting Coded Dynamic Features for Facial Action Units and Facial Expression Recognition. IEEE, 1-4244-1180-7/07, 2007.
  • [13] Irene Kotsia and Ioannis Pitas. Facial Expression Recognition in Image Sequences Using Geometric Deformation Features and Support Vector Machines. IEEE Transactions on Image Processing, Vol. 16, No. 1, Jan. 2007, pp. 172-187.
  • [14] M. Pantic, L.J.M. Rothkrantz. Expert system for automatic analysis of facial expressions. Image and Vision Computing 18 (2000), pp. 881-905.
  • [15] K. L. Schmidt, J. F. Cohn. Dynamics of Facial Expression: Normative Characteristics and Individual Differences. IEEE International Conference on Multimedia and Expo, ISBN 0-7695-1198-8/01, 2001, pp. 728-731.
  • [16] Yongmian Zhang and Qiang Ji. Active and Dynamic Information Fusion for Facial Expression Understanding from Image Sequences. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 5, May 2005, pp. 699-714.
  • [17] M.F. Valstar and M. Pantic. Biologically vs. Logic Inspired Encoding of Facial Action and Emotions in Video. IEEE, 1-4244-0367-7/06, 2006, pp. 325-328.
  • [18] Shih-Chung Hsu, Hsin-Hui Huang, Chung-Lin Huang. Facial Expression Recognition for Human-Robot Interaction. IEEE International Conference on Robotic Computing, 2017, pp. 1-7.
  • [19] T. Kanade, J. F. Cohn, and Y. Tian, "Comprehensive database for facial expression analysis," in Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition, 2000, pp. 46-53.
  • [20] M. Pantic, M.F. Valstar, R. Rademaker and L. Maat, "Web-based database for facial expression analysis," in ICME'05, 2005, pp. 317-321.
  • [21] M. Bartlett, G. Littlewort, M. Frank, C. Lainscsek, I. Fasel, and J. Movellan. Automatic recognition of facial actions in spontaneous expressions. Journal of Multimedia, 1(6): 22-35, 2006.
  • [22] P. Ekman, W. V. Friesen, and P. Ellsworth, "Emotion in the Human Face," Oxford University Press, 1972.
  • [23] C. M. Whissell, The Dictionary of Affect in Language, Emotion: Theory, Research and Experience, vol. 4, Academic Press, 1989.
  • [24] P. Ekman, W. V. Friesen, and J. C. Hager, Facial Action Coding System: The Manual, Research Nexus Division, Network Information Research Corporation, Salt Lake City, UT, 2002.
  • [25] Guodong Guo and Charles R. Dyer. Simultaneous Feature Selection and Classifier Training via Linear Programming: A Case Study for Face Expression Recognition. Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'03), 1063-6919/03, 2003.
  • [26] M. J. Lyons, S. Akamatsu, M. Kamachi, and J. Gyoba, "Coding facial expressions with Gabor wavelets," in Proceedings of the 3rd IEEE International Conference on Automatic Face and Gesture Recognition, 1998, pp. 200-205.
  • [27] Shichuan Du, Yong Tao, and Aleix M. Martinez. Compound facial expressions of emotion. PNAS, published online March 31, 2014, pp. 1454-1462.
  • [28] P. Ekman, W.V. Friesen, Unmasking the Face, Prentice Hall, New Jersey, 1975.
  • [29] P. Ekman, Emotion in the Human Face, Cambridge University Press, Cambridge, 1982.
  • [30] R. W. Picard, Affective Computing, M.I.T Media Laboratory Perceptual Computing Section Technical Report No. 321, 1995, pp. 126.
  • [31] M. Pantic and M. S. Bartlett, Machine analysis of facial expressions, in Face Recognition, I-Tech Education and Publishing, Vienna, Austria, 2007, pp. 377-416.
  • [32] M. Pantic and I. Patras, Dynamics of facial expression: recognition of facial actions and their temporal segments from face profile image sequences, IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, 36(2): 433-449, April 2006.
  • [33] Y. Zhang and Q. Ji, "Active and dynamic information fusion for facial expression understanding from image sequences," IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(5): 699-714, May 2005.
  • [34] K. B. Korb and A. E. Nicholson, Bayesian Artificial Intelligence, Chapman & Hall/CRC, London, UK, 2004.
  • [35] C.Y. Chang, J. S. Tsai, C. J. Wang, and P. C. Chung, "Emotion recognition with consideration of facial expression and physiological signals," in Proceedings of the IEEE Symposium Series on Computational Intelligence, 2009, pp. 278-283.
  • [36] S. Bashyal and G. K. Venayagamoorthy, "Recognition of facial expressions using Gabor wavelets and learning vector quantization," Eng. Appl. Artif. Intell., 21: 1056-1064, 2008.
  • [37] Y. Zilu, L. Jingwen, and Z. Youwei, "Facial expression recognition based on two dimensional feature extraction," in Proceedings of the 2008 International Conference on Software Process (ICSP 2008), 2008, pp. 1440-1444.
  • [38] S. M. Lajevardi and M. Lech, "Averaged Gabor filter features for facial expression recognition," in Proceedings of the IEEE 2008 Digital Image Computing: Techniques and Applications, 2008, pp. 71-76.
  • [39] R. Xiao, Q. Zhao, D. Zhang, and P. Shi, "Facial expression recognition on multiple manifolds," Pattern Recogn., 44: 107-116, 2011.
  • [40] C. Chao-Fa and F. Y. Shin, "Recognizing facial action units using independent component analysis and support vector machine," Pattern Recogn., 39: 1795-1798, 2006.
  • [41] A. Chakraborty and A. Konar, Emotional Intelligence: A Cybernetic Approach, Springer, 2009.
  • [42] Z. H. Zeng, M. Pantic, G. I. Roisman, and T. S. Huang, "A survey of affect recognition methods: audio, visual, and spontaneous expressions," IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(1): 39-58, 2009.
  • [43] R. Jensen and Q. Shen, "Tolerance-based and fuzzy-rough feature selection," in Proceedings of the 16th International Conference on Fuzzy Systems, 2007.
  • [44] G. Y. Wang and Y. Wang, "3DM: domain-oriented data-driven data mining," Fundam. Inform., 90(4): 395-426, 2009.
  • [45] A. Chakraborty, A. Konar, P. Bhowmik, and A. K. Nagar, "Stability, chaos and limit cycles in recurrent cognitive reasoning systems," in The Handbook on Reasoning-Based Intelligent Systems (eds K. Nakamatsu and L. C. Jain), 2013.
  • [46] A. Chakraborty, A. Konar, U. K. Chakraborty, and A. Chatterjee, "Emotion recognition from facial expressions and its control using fuzzy logic," IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 39(4): 726-743, 2009.
  • [47] G. U. Kharat and S. V. Dudul, "Neural network classifier for human emotion recognition from facial expressions using discrete cosine transform," in Proceedings of the 1st International Conference on Emerging Trends in Engineering and Technology, 2008, pp. 653-658.
  • [48] J. M. Sun, X. S. Pei, and S. S. Zhou, "Facial emotion recognition in modern distant system using SVM," in Proceedings of the Seventh International Conference on Machine Learning and Cybernetics, Kunming, China, 2008, pp. 3545-3548.
  • [49] H. Tsai, Y. Lai, and Y. Zhang, "Using SVM to design facial expression recognition for shape and texture features," in Proceedings of the Ninth International Conference on Machine Learning and Cybernetics, Qingdao, China, July 11-14, 2010, pp. 2697-2704.
  • [50] M. Paleari, R. Chellali, and B. Huet, "Features for multimodal emotion recognition: an extensive study," in Proceedings of the CIS, 2010, pp. 90-95.
  • [51] H. Wu, Y. Wu, and J. Luo, "An interval type-2 fuzzy rough set model for attribute reduction," IEEE Transactions on Fuzzy Systems, 17(2): 301-315, 2009.
  • [52] K. Huang, S. Huang, and Y. Kuo, "Emotion recognition based on a novel triangular facial feature extraction method," in Proceedings of IJCNN, 2010, pp. 1-6.
  • [53] A. Halder, A. Chakraborty, A. Konar, and A. K. Nagar, "Computing with words model for emotion recognition by facial expression analysis using interval type-2 fuzzy sets," in Proceedings of FUZZ-IEEE, Hyderabad, India, 2013.
  • [54] E. Murphy-Chutorian and M. M. Trivedi, "Head pose estimation in computer vision: a survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(4): 607-626, 2009.
  • [55] Y. Hu, Z. Zeng, L. Yin, X. Wei, J. Tu, and T. S. Huang, "A study of non-frontal-view facial expressions recognition," in Proceedings of ICPR, 2008, pp. 1-4.
  • [56] O. Rudovic, I. Patras, and M. Pantic, "Coupled Gaussian process regression for pose-invariant facial expression recognition," European Conference on Computer Vision (ECCV), pp. 350-363, 2010.
  • [57] H. Tang and T. S. Huang, "3D facial expression recognition based on automatically selected features," in CVPR 2008 Workshop on 3D Face Processing (CVPR-3DFP'08), Anchorage, Alaska, June 2008.
  • [58] W. Zheng, H. Tang, Z. Lin, and T. S. Huang, "A novel approach to expression recognition from non-frontal face images," in Proceedings of IEEE ICCV, 2009, pp. 1901-1908.
  • [59] H. Tang and T. S. Huang, "3D facial expression recognition based on properties of line segments connecting facial feature points," in IEEE International Conference on Automatic Face and Gesture Recognition (FG'08), Amsterdam, The Netherlands, September 2008.
  • [60] W. Zheng, H. Tang, Z. Lin, and T. S. Huang, "Emotion recognition from arbitrary view face images," in Proceedings of the European Conference on Computer Vision (ECCV 2010), 2010, pp. 490-503.
  • [61] H. Tang, M. Hasegawa-Johnson, and T. S. Huang, "Non-frontal view facial expression recognition based on ergodic hidden Markov model supervectors," in IEEE Conference on Multimedia & Expo (ICME), 2010, pp. 1202-1207.
  • [62] L. Zhang, D. W. Tjondronegoro, and V. Chandran, "Evaluation of texture and geometry for dimensional facial expression recognition," in 2011 International Conference on Digital Image Computing: Techniques and Applications (DICTA 2011), 2011.
  • [63] H. Soyel and H. Demirel, "3D facial expression recognition with geometrically localized facial features," in 23rd International Symposium on Computer and Information Sciences (ISCIS 2008), 2008, pp. 1-4.
  • [64] G. Zhao and M. Pietikainen, "Dynamic texture recognition using local binary patterns with an application to facial expressions," IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6): 915-928, 2007.
  • [65] L. Yin, X. Chen, Y. Sun, T. Worm, and M. Reale, "A high-resolution 3D dynamic facial expression database," in The Eighth International Conference on Automatic Face and Gesture Recognition (FGR08), 2008.
  • [66] S. Berretti, A. D. Bimbo, P. Pala, B. B. Amor, and M. Daoudi, "A set of selected SIFT features for 3D facial expression recognition," in 2010 International Conference on Pattern Recognition, 2010, pp. 4125-4128.
  • [67] I. Mpiperis, S. Malassiotis, and M. G. Strintzis, "Bilinear models for 3D face and facial expression recognition," IEEE Transactions on Information Forensics and Security, 3(3): 498-511, 2008.
  • [68] S. Moore and R. Bowden, "Local binary patterns for multi-view facial expression recognition," Comput. Vis. Image Underst., 115: 541-558, 2011.
  • [69] Z. Zeng, M. Pantic, and T. S. Huang, "Emotion recognition based on multimodal information," in Affective Information Processing (eds J. Tao and T. Tan), Springer, London, 2009, pp. 241-265.
  • [70] Z. Hammal, L. Couvreur, A. Caplier, and M. Rombaut, "Facial expression classification: an approach based on the fusion of facial deformations using the transferable belief model," Int. J. Approx. Reason., 46(3): 542-567, 2007.
  • [71] I. Hupont, S. Baldassarri, R. Del-Hoyo, and E. Cerezo, "Effective emotional classification combining facial classifiers and user assessment," Articul. Motion Deform. Objects, 5098: 431-440, 2008.
  • [72] M.F. Valstar, I. Patras, and M. Pantic, "Facial Action Unit Detection Using Probabilistic Actively Learned Support Vector Machines on Tracked Facial Point Data," Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition, Workshop Vision for Human-Computer Interaction, 2005.
  • [73] Z. Pawlak, Rough sets, International Journal of Computer and Information Sciences, 11 (1982), pp. 341-356.
  • [74] Z. Pawlak, Rough sets theory and its applications, Journal of Telecommunications and Information Technology, 3/2002, pp. 7-10.
  • [75] Z. Pawlak and A. Skowron, Rudiments of rough sets, Information Sciences 177 (2007), pp. 3-27.
  • [76] Z. Pawlak and A. Skowron, Rough sets: Some extensions, Information Sciences 177 (2007), pp. 28-40.
  • [77] Sergio Ballano, Isabelle Hupont, Eva Cerezo and Sandra Baldassarri, Continuous Facial Affect Recognition from Videos, Actas del XII Congreso Internacional Interacción, 2011, pp. 357-366.
  • [78] Eva Cerezo, Isabelle Hupont, Sandra Baldassarri and Sergio Ballano, Emotional facial sensing and multimodal fusion in a continuous 2D affective space, J. Ambient Intell. Human. Comput. (2012) 3: pp. 31-46.
  • [79] Chuan-Yu Chang, Yan-Chiang Huang, and Chi-Lu Yang, Personalized Facial Expression Recognition in Color Image, 2009 Fourth International Conference on Innovative Computing, Information and Control, pp. 1164-1167.
  • [80] Md. Zia Uddin, Tae-Seong Kim and Byung Cheol Song, An Optical Flow Feature-Based Robust Facial Expression Recognition with HMM from Video, International Journal of Innovative Computing, Information and Control, Volume 9, Number 4, April 2013, pp. 1409-1421.
  • [81] Yong Yang, Guoyin Wang and Hao Kong, Self-Learning Facial Emotional Feature Selection Based on Rough Set Theory, Mathematical Problems in Engineering, Volume 2009, pp. 1-16.
  • [82] Anisha Halder, Amit Konar, Rajshree Mandal, Aruna Chakraborty, Pavel Bhowmik, Nikhil R. Pal and Atulya K. Nagar, General and Interval Type-2 Fuzzy Face-Space Approach to Emotion Recognition, IEEE Transactions on Systems, Man, and Cybernetics: Systems, Vol. 43, No. 3, May 2013, pp. 587-605.
  • [83] Wenming Zheng, Hao Tang, and Thomas S. Huang, "Emotion recognition from non-frontal facial images," in Emotion Recognition: A Pattern Analysis Approach, Amit Konar, Aruna Chakraborty (eds.), Wiley-Blackwell, ISBN: 9781118130667, 2015.
  • [84] Patrick Lucey, Jeffrey F. Cohn, Takeo Kanade, Jason Saragih, Zara Ambadar, Iain Matthews. The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression. IEEE, 978-1-4244-7030-3/10, 2010, pp. 94-101.
  • [85] Zeng, Nianyin, et al. "Facial expression recognition via learning deep sparse autoencoders." Neurocomputing 273 (2018): 643-649.
  • [86] Zhang, Kaihao, et al. "Facial expression recognition based on deep evolutional spatial-temporal networks." IEEE Transactions on Image Processing 26.9 (2017): 4193-4203.
  • [87] Lopes, André Teixeira, et al. "Facial expression recognition with convolutional neural networks: coping with few data and the training sample order." Pattern Recognition 61 (2017): 610-628.