# Kernel-Induced Label Propagation by Mapping for Semi-Supervised Classification

IEEE Transactions on Big Data, pp. 1-1, 2019.



Abstract:

Kernel methods have been successfully applied to the areas of pattern recognition and data mining. In this paper, we mainly discuss the issue of propagating labels in kernel space. A Kernel-Induced Label Propagation (Kernel-LP) framework by mapping is proposed for high-dimensional data classification using the most informative patterns of …


Introduction

- Many types of real data, such as images, often contain high-dimensional attributes, redundant information, and unfavorable features; how to represent and classify such data automatically, efficiently, and effectively by machine learning is still a challenging task.
- A large amount of real data is unlabeled, and the labels need to be estimated.
- Label Propagation (LP), one of the most popular graph-based SSL algorithms [1][35,36,37], has attracted much attention in the areas of data mining and pattern recognition in recent years because of its effectiveness and efficiency.
- LP propagates the label information of labeled data to unlabeled data based on their intrinsic geometric relationships, mainly by trading off a manifold smoothness term over neighborhood preservation against a label fitness term [3,4,5,6,7,8,9,10,11,12,13]. The label fitness term measures the agreement between the predicted soft labels and the initial label states, while the manifold smoothness term enables LP to determine the label of each sample by receiving partial information from its neighbors
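The smoothness/fitness trade-off above is the heart of graph-based LP. As a concrete illustration, here is a minimal LLGC-style propagation sketch (the normalized-affinity iteration of [3]; Kernel-LP itself operates in kernel space, which this Euclidean sketch does not cover):

```python
import numpy as np

def propagate_labels(W, Y, alpha=0.9, n_iter=100):
    """Iterative graph-based label propagation (LLGC-style [3]).

    W : (n, n) symmetric affinity matrix over labeled + unlabeled samples.
    Y : (n, c) initial label matrix; rows of unlabeled samples are all-zero.
    alpha balances the manifold smoothness term (agree with neighbors)
    against the label fitness term (stay close to the initial labels Y).
    """
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt          # symmetrically normalized affinity
    F = Y.copy().astype(float)
    for _ in range(n_iter):
        # each sample receives partial label information from its neighbors,
        # while being pulled back toward its initial label state
        F = alpha * S @ F + (1 - alpha) * Y
    return F.argmax(axis=1)                  # hard labels from soft labels
```

The fixed point of this iteration is exactly the closed-form trade-off solution F* = (1 − α)(I − αS)⁻¹Y discussed in the LLGC literature.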

Highlights

- In practical applications, many types of real data, such as images, often contain high-dimensional attributes, redundant information, and unfavorable features; representing and classifying such data automatically, efficiently, and effectively by machine learning is still a challenging task
- Label Propagation propagates the label information of labeled data to unlabeled data based on their intrinsic geometric relationships, mainly by trading off a manifold smoothness term over neighborhood preservation against a label fitness term [3,4,5,6,7,8,9,10,11,12,13]; the label fitness term measures the agreement between the predicted soft labels and the initial states, and the manifold smoothness term enables Label Propagation to determine the labels of samples by receiving partial information from neighbors
- We mainly evaluate the proposed Kernel-Label Propagation model for image classification and segmentation, and illustrate the comparison results with related methods for transductive and inductive learning
- We have proposed a novel kernel-induced label propagation framework, termed Kernel-Label Propagation by mapping, for semi-supervised classification
- The core idea of our Kernel-Label Propagation is to change the scenario of label propagation from the commonly used Euclidean distance to kernel space, so that more informative patterns and relations of samples can be accurately discovered for learning useful knowledge based on the mapping assumption of the kernel trick
- For kernel-induced label propagation, the seamless integration of adaptive graph-weight construction and kernelized label propagation ensures that the weights are jointly optimal for data representation and classification in kernel space
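Adaptive graph-weight construction of the kind integrated above is commonly done LLE/LNP-style [8,38], by reconstructing each sample from its nearest neighbors; a minimal Euclidean-space sketch (the paper performs this jointly with propagation in kernel space, and `reg` is an illustrative regularization parameter):

```python
import numpy as np

def reconstruction_weights(X, k=3, reg=1e-3):
    """LLE/LNP-style adaptive graph weights (Euclidean-space sketch).

    Each sample x_i is approximated as a weighted combination of its k
    nearest neighbors by solving a small regularized Gram system; the
    weights of each row are constrained to sum to one.
    """
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]      # k nearest neighbors, excluding x_i
        Z = X[nbrs] - X[i]                    # center neighbors on x_i
        G = Z @ Z.T + reg * np.eye(k)         # regularized local Gram matrix
        w = np.linalg.solve(G, np.ones(k))    # minimize ||x_i - sum_j w_j x_j||^2
        W[i, nbrs] = w / w.sum()              # enforce sum-to-one constraint
    return W
```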

Methods

| Method | Setting 1 (Mean/Best/Time) | Setting 2 | Setting 3 | Setting 4 | Setting 5 | Setting 6 |
|---|---|---|---|---|---|---|
| SparseNP | 26.54/58.85/0.4117 | 27.97/64.50/0.4536 | 66.53/74.112/0.9039 | 86.61/94.93/0.8619 | 89.12/96.05/0.8555 | 65.05/71.51/0.6484 |
| ProjLP | 25.78/58.55/0.3923 | 27.22/63.90/0.4343 | 65.23/73.41/0.8712 | 86.05/94.10/0.8415 | 81.33/92.00/0.8239 | 64.29/71.32/0.6262 |
| SLP | 26.56/56.70/0.4015 | 27.90/62.15/0.4476 | 66.01/74.33/0.8835 | 86.05/95.21/0.8503 | 83.90/95.45/0.8228 | 64.62/71.90/0.6411 |
| LNP | 26.14/53.35/0.3260 | 27.32/58.45/0.3527 | 64.38/72.84/0.8494 | 84.76/94.38/0.7747 | 85.89/94.10/0.8235 | 63.48/71.41/0.5441 |
| LLGC | 22.95/46.45/0.4718 | 24.63/49.20/0.5219 | 55.07/61.57/0.9237 | 77.07/85.14/0.8513 | 81.82/92.00/0.8513 | 53.60/59.42/0.6228 |
| LapLDA | 35.02/63.30/0.3993 | 38.45/66.70/0.4470 | 63.80/71.64/0.9131 | 83.91/89.17/0.8524 | 88.20/94.41/0.8717 | 63.04/67.90/0.6439 |
| GFHF | 26.03/56.70/0.2660 | 27.61/62.35/0.2879 | 65.81/73.85/0.8259 | 85.88/95.07/0.7819 | 89.28/95.35/0.8114 | 64.31/71.80/0.5905 |
| CD-LNP | 20.05/31.55/0.3033 | 20.75/34.75/0.3196 | 59.95/67.85/0.9225 | 79.11/86.74/0.7920 | 82.91/91.76/0.8469 | 59.35/66.53/0.5357 |
| PN-LP | 21.49/35.80/0.4146 | 22.64/40.90/0.4558 | 54.99/62.47/0.9146 | 76.13/83.96/0.8639 | 84.56/90.55/0.8625 | 53.95/61.29/0.6536 |
| Kernel-LP | 53.08/75.00/0.5873 | 53.24/74.55/0.6033 | 67.76/75.39/0.8515 | 88.88/98.19/0.8818 | 90.13/96.46/0.8152 | 66.46/73.28/0.6692 |

- The results are averaged over the first 15 best records based on 20 realizations of training and test sets.
- The authors describe the classification results in Table 2, including the mean accuracy, standard deviation, best accuracy, and mean running time of each algorithm in each setting; the corresponding simulation settings are also shown.
- The authors make the following observations: (1) the performance of each method improves as the number of training samples increases.
- SparseNP performs well, delivering better results than the other compared baseline methods

Results

- The authors mainly evaluate the proposed Kernel-LP model for image classification and segmentation, and illustrate the comparison results with related methods for transductive and inductive learning.
- The label prediction power of Kernel-LP is mainly compared with that of LNP, SparseNP, CD-LNP, ProjLP, LLGC, GFHF, SLP and PN-LP.
- The authors mainly evaluate the inclusion performance of I-Kernel-LP-map and I-Kernel-LP-recons by comparing their results with two widely used inductive schemes, i.e., the label reconstruction of LNP [8] and the direct label embedding method (including …

Conclusion

- The authors have proposed a novel kernel-induced label propagation framework, termed Kernel-LP by mapping, for semi-supervised classification.
- The core idea of Kernel-LP is to change the scenario of label propagation from the commonly used Euclidean distance to kernel space, so that more informative patterns and relations of samples can be accurately discovered for learning useful knowledge based on the mapping assumption of the kernel trick.
- To enable Kernel-LP to process new data efficiently, two novel out-of-sample methods, by direct kernel mapping and by kernel-induced label reconstruction, are presented.
- The proposed new-data inclusion methods depend only on the kernel matrix between the training set and the testing set, which makes them simple and efficient
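As a rough illustration of why only the train/test kernel matrix is needed, here is a hedged sketch of kernel-induced label reconstruction: test soft labels are reconstructed as normalized kernel-weighted combinations of training soft labels (an illustrative rule, not the authors' exact formulation; `gamma` is an assumed RBF parameter):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """K[i, j] = exp(-gamma * ||A_i - B_j||^2)."""
    sq = (np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * np.maximum(sq, 0.0))

def predict_new(X_train, F_train, X_test, gamma=1.0):
    """Out-of-sample sketch: each test sample's soft label is a
    kernel-weighted average of the training soft labels, using only the
    kernel matrix between the test set and the training set.
    """
    K = rbf_kernel(X_test, X_train, gamma)   # (n_test, n_train) kernel matrix
    K = K / K.sum(axis=1, keepdims=True)     # normalize reconstruction weights
    F_test = K @ F_train                     # reconstruct test soft labels
    return F_test.argmax(axis=1)
```

Note that no retraining on the enlarged data set is required, which is what makes such inclusion schemes efficient for new data.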

Summary

## Objectives:

The authors aim to solve the problem by updating the variables alternately. Denoting $V$ as $V_t$ at the $t$-th iteration, if the authors aim to calculate $Q_{t+1}$ and $W_{t+1}$ at the $(t+1)$-th iteration, they can show that the following inequality holds.

- Table1: Performance comparison of each algorithm under different settings based on the six face image databases
- Table2: Performance comparison of each algorithm under different settings based on the ETH80 database
- Table3: Descriptions of the used real-world databases
- Table4: Performance comparison of each algorithm under different settings based on the three face databases
- Table5: Performance comparison of each algorithm under different settings based on the three handwriting digit databases
- Table6: Performance comparison of each algorithm under different settings based on the three object databases

Related work

- In this section, we briefly review the works most closely related to our method, i.e., PN-LP [13] and kernel methods [14,15].

A. Positive and Negative Label Propagation (PN-LP)

PN-LP extends the existing LP framework to the scenario of label propagation with both positive and negative labels [13]. The graph weights are defined by the Gaussian function

$$W_{ij} = \exp\left(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\right), \quad (1)$$

where $\sigma$ is the mean edge-length distance among neighbors, and $\|x_i - x_j\|$ is the Euclidean distance between $x_i$ and $x_j$. By extending the traditional LP algorithm to incorporate both positive and negative label information in the process of label estimation, PN-LP defines the following objective function:

$$\min_{F} \; \frac{1}{2}\,\mathrm{tr}\left(F^{T} L F\right) + \mu_1\,\mathrm{tr}\left((F - Y)^{T}(F - Y)\right) + \mu_2\,\mathrm{tr}\left(\cdots\right), \quad (2)$$

where $F = \left[f_1, f_2, \ldots, f_{l+u}\right]^{T} \in \mathbb{R}^{(l+u) \times c}$ is the predicted soft label matrix to be obtained, and $L = I - D^{-1/2} W D^{-1/2}$ is the normalized graph Laplacian.
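Assuming Eq. (1) above is the standard heat-kernel weight with σ set to the mean edge length among neighbors as the text describes, a minimal sketch of the graph construction over a k-NN graph:

```python
import numpy as np

def knn_gaussian_affinity(X, k=5):
    """Gaussian edge weights over a k-NN graph (a common realization of Eq. (1)).

    sigma is set to the mean edge length (Euclidean distance) among the
    k-nearest-neighbor edges, as described in the text; each edge weight
    decays with the squared distance between its endpoints.
    """
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)   # pairwise distances
    nn = np.argsort(D, axis=1)[:, 1:k + 1]                      # k-NN, excluding self
    sigma = np.mean([D[i, j] for i in range(n) for j in nn[i]]) # mean edge length
    W = np.zeros((n, n))
    for i in range(n):
        for j in nn[i]:
            w = np.exp(-D[i, j] ** 2 / (2.0 * sigma ** 2))
            W[i, j] = W[j, i] = w                               # symmetrize
    return W
```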

Funding

- This work is partially supported by National Natural Science Foundation of China (61672365, 61502238, 61622305, 61432019, 61772171, 61601112), Major Program of Natural Science Foundation of Jiangsu Higher Education Institutions of China (15KJA520002), "Qing-Lan Project" of Jiangsu Province, Natural Science Foundation of Jiangsu Province of China (BK20160040), and High-Level Talent of "Six Talent Peak" Project of Jiangsu Province of China (XYDXX-055)

References

- X. Zhu, “Semi-Supervised Learning Literature Survey,” Technical Report, University of Wisconsin - Madison, 2008.
- O. Chapelle, B. Scholkopf, A. Zien, “Semi-Supervised Learning,” Cambridge: MIT Press, 2006.
- D. Zhou, O. Bousquet, T. N. Lal, J. Weston and B. Scholkopf, “Learning with Local and Global Consistency,” Neural Information Processing Systems, vol. 16, no. 4, pp. 321-328, March 2004.
- F. Nie, D. Xu, and W. H. Tsang, “Flexible Manifold Embedding: A Framework for Semi-Supervised and Unsupervised Dimension Reduction,” IEEE Transactions on Image Processing, vol. 19, no. 7, pp. 1921-1932, 2010.
- X. Chang, F. P. Nie, Y. Yang, and H. Huang, “Refined Spectral Clustering via Embedded Label Propagation,” Neural Computation, pp.1-16, Sep 2017.
- H. Tang, T. Fang and P. F. Shi, “Laplacian linear discriminant analysis,” Pattern Recognition, vol.39, no.1, pp.136-139, 2006.
- X. Zhu, Z. Ghahramani, J. Lafferty, “Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions,” In: Proceeding of the 20th International Conference on Machine Learning, vol. 2, pp. 912-919, Jan 2003.
- F. Wang and C. S. Zhang, “Label propagation through linear neighborhoods,” IEEE Trans. on Knowledge and Data Engineering, vol. 20, no. 1, pp. 985-992, Jan 2006.
- C. Zhang, S. Wang and D. Li, “Prior class dissimilarity based linear neighborhood propagation,” Knowledge-Based Systems, vol. 83, pp. 58-65, 2015.
- F. P. Nie, S. M. Xiang and Y. Liu, “A general graph-based semisupervised learning with novel class discovery,” Neural Computing & Applications, vol.19, no. 4, pp. 549-555, 2010.
- Z. Zhang, W. Jiang and F. Li, “Projective Label Propagation by Label Embedding,” In: Proceeding of International Conference on Computer Analysis of Images and Patterns, vol. 9257, pp. 470-481, Sept 2015.
- Z. Zhang, L. Zhang, M. Zhao, Y. Liang, F. Li, “Semi-supervised image classification by nonnegative sparse neighborhood propagation,” In: Proceedings of ACM International Conference on Multimedia Retrieval, Shanghai, pp.139-146, 2015.
- O. Zoidi, A. Tefas, and N. Nikolaidis, “Positive and negative label propagation,” IEEE Trans. on Circuits and Systems for Video Technology, 2016. DOI: 10.1109/TCSVT.2016.2598671.
- J. Shawe-Taylor and N. Cristianini, “Kernel Methods for Pattern Analysis,” Cambridge University Press, 2004.
- D. Zhang, Z. Zhou H and S. Chen, “Adaptive Kernel Principal Component Analysis with Unsupervised Learning of Kernels,” IEEE International Conf. on Data Mining, pp.1178-1182, 2006.
- S. A. Nene, S. K. Nayar, and H. Murase. Columbia object image library (COIL-20), Technical Report CUCS-005-96, Columbia University, New York, NY, 1996.
- Z. Zhang, M. Zhao, T. Chow, “Graph based Constrained SemiSupervised Learning Framework via Label Propagation over Adaptive Neighborhood,” IEEE Transactions on Knowledge & Data Engineering, vol. 27, no. 9, pp.2362-2376, 2015.
- F. Nie, S. Xiang, Y. Jia, C. Zhang, “Semi-supervised orthogonal discriminant analysis via label propagation,” Pattern Recognition, vol.42, no.11, pp.2615-2627, 2009.
- N. Yang, Y. Sang, R. He, X. Wang, “Label propagation algorithm based on non-negative sparse representation,” Lecture Notes in Computer Science, pp.348-357, 2010.
- H. Cheng, Z. Liu, J. Yang, “Sparsity induced similarity measure for label propagation,” In: Proceedings of IEEE International Conf.on Computer Vision. IEEE, pp. 317-324, 2009.
- Z. Zhang, F. Z. Li, and M. B. Zhao, “Transformed Neighborhood Propagation”. In: Proceedings of the International Conference on Pattern Recognition. IEEE, pp. 3792-3797, 2014.
- L. Jia, Z. Zhang and Y. Zhang, “ Semi-Supervised Classification by Nuclear-norm based Transductive Label Propagation”, In: Proceedings of International Conference on Neural Information Processing, Kyoto, Japan, Oct 2016.
- L. Jia, Z. Zhang and W. M. Jiang, “Transductive Classification by Robust Linear Neighborhood Propagation,” In: Proceedings of the Pacific Rim Conference on Multimedia, Xi'an, China, 2016.
- F. Zang, and J. S. Zhang, “Label propagation through sparse neighborhood and its applications,” Neurocomputing, vol.97, no.1, pp.267-277, 2012.
- F. Nie, H. Huang, X. Cai, C. Ding, “Efficient and robust feature selection via joint l2,1-norms minimization,” In: Proceeding of the Neural Information Processing Systems (NIPS), 2010.
- Y. Yang, H.T. Shen, Z.G. Ma, Z. Huang, X.F. Zhou, “L2, 1-Norm Regularized Discriminative Feature Selection for Unsupervised Learning,” In: Proceeding of the International Joint Conferences on Artificial Intelligence, 2011.
- D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A Database of Human Segmented Natural Images and Its Application to Evaluating Segmentation Algorithms and Measuring Ecological Statistics,” Proc. IEEE Int’l Conf. Computer Vision (ICCV), pp. 416-423, 2001.
- D. Graham, N. Allinson, “Characterizing virtual eigensignatures for general purpose face recognition,” NATO ASI Series F, pp. 446–456, 1998.
- A. Georghiades, P. Belhumeur, and D. Kriegman, “From few to many: Illumination cone models for face recognition under variable lighting and pose,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 6, pp. 643–660, 2001.
- A. Martinez and R. Benavente, “The AR Face Database,” CVC Technical Report 24, 1998.
- B. Leibe and B. Schiele, “Analyzing appearance and contour based methods for object categorization,” In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp.409-415, 2003.
- C. Liu, F. Yin, D. Wang, Q. Wang, “Online and offline handwritten Chinese character recognition: Benchmarking on new databases,” Pattern Recognition, vol.46, no.1, pp. 155-162, 2013.
- Z. Zhang, C. Liu and M. Zhao, “A Sparse Projection and Low-Rank Recovery Framework for Handwriting Representation and Salient Stroke Feature Extraction.” ACM Trans. Intelligent Systems and Technology, vol.6, no.1, pp.9:1-9:26, April 2015.
- C. Hou, F. Nie, X. Li, D. Yi, Y. Wu, “Joint Embedding Learning and Sparse Regression: A Framework for Unsupervised Feature Selection. IEEE Trans. Cybernetics, vol.44, no.6, pp.793-804, 2014.
- F. Wu, Z. Wang, Z. Zhang, Y. Yang, J. Luo, W. Zhu, Y. Zhuang, "Weakly Semi-Supervised Deep Learning for Multi-Label Image Annotation," IEEE Transactions on Big Data, vol.1, no.3, pp.109-122, 2015.
- F. Dornaika, and Y. El Traboulsi, "Learning Flexible Graph-Based Semi-Supervised Embedding," IEEE Transactions on Cybernetics, vol.46, no.1, pp.206-218, 2016.
- Z. Zhang, M. Zhao and T. W. S. Chow, "Graph based Constrained Semi-Supervised Learning Framework via Label Propagation over Adaptive Neighborhood," IEEE Transactions on Knowledge and Data Engineering, vol.27, no.9, pp.2362-2376, Sep 2015.
- S. T. Roweis, L. K. Saul, “Nonlinear Dimensionality Reduction by Locally Linear Embedding,” Science, vol.290, no.5500, pp.2323-2326, 2000.
- D. Singh, D. Roy, and C. K. Mohan, “DiP-SVM: Distribution Preserving Kernel Support Vector Machine for Big Data”, IEEE Transactions on Big Data, vol.3, no.1, pp.79-90, 2017.
- J. Yang, A. F. Frangi, J. Yang, D. Zhang, and J. Zhong, “KPCA plus LDA: a Complete Kernel Fisher Discriminant Framework for Feature Extraction and Recognition,” IEEE Transactions on Pattern analysis and machine intelligence, vol.27, no.2, pp.230 -244, 2005.
- O. Delalleau, Y. Bengio, N. Le Roux, “Non-Parametric Function Induction in Semi-Supervised Learning,” In: Proc. 10th Int’l Workshop Artificial Intelligence and Statistics, pp.96-103, 2005.
- F. P. Nie, D. Xu, X. L. Li, and S. M. Xiang, “Semi-supervised dimensionality reduction and classification through virtual label regression,” IEEE Trans. Syst. Man Cybernet. Part B, vol.41, no.3, pp.675–685, 2011.
- M. Sugiyama, "Dimensionality Reduction of Multimodal Labeled Data by Local Fisher Discriminant Analysis," Journal of Machine Learning Research, vol.8, pp.1027-1061, 2007.
- L. Zelnik-Manor, P. Perona, “Self-tuning spectral clustering,” In: Advances in Neural Information Processing Systems, vol.17, pp. 1601-1608, MIT Press, Cambridge, MA, 2005.
- F. Alimoglu, “Combining Multiple Classifiers for Pen-Based Handwritten Digit Recognition,” MSc Thesis, Institute of Graduate Studies in Science and Engineering, Bogazici University, 1996.
- J. Hull, “A database for handwritten text recognition research,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.16, no.5, pp.550-554, 1994.
- Z. Zhang, Y. Zhang, F. Li, M. Zhao, L. Zhang and S. Yan, “Discriminative Sparse Flexible Manifold Embedding with Novel Graph for Robust Visual Representation and Label Propagation,” Pattern Recognition, vol.61, pp.492-510, 2017.
- M. Wang, W. Fu, Sh. Hao, H. Liu, X. Wu, “Learning on Big Graph: Label Inference and Regularization with Anchor Hierarchy,” IEEE Transactions on Knowledge and Data Engineering, vol. 29, no. 5, pp. 1101-1114, 2017.
- J. H. Krijthe, and M. Loog, “Projected estimators for robust semi-supervised classification,” Machine Learning, vol.106, no.7, pp.993-1008, 2017.
- M. Wang, X. Liu, X. Wu, “Visual Classification by l1-Hypergraph Modeling,” IEEE Transactions on Knowledge and Data Engineering, vol. 27, no. 9, pp. 2564-2574, 2015.
- W. Hu, J. Gao, J. Xing, C. Zhang, S. Maybank, “Semi-Supervised Tensor-Based Graph Embedding Learning and Its Application to Visual Discriminant Tracking,” IEEE Trans. Pattern Anal. Mach. Intell., vol.39, no.1, pp.172-188, 2017.

Authors

- Lei Jia is currently working toward the research degree at the School of Computer Science and Technology, Soochow University, China. His current research interests include machine learning, pattern recognition, data mining and their real applications. More specifically, he is interested in designing effective semi-supervised learning algorithms for classification. He has published several papers in Neural Networks, and conference proceedings of ICDM, ICONIP and PCM.
- Mingbo Zhao (M’13- ) received the Ph.D. degree from the Department of Electronic Engineering at City University of Hong Kong, Kowloon, Hong Kong SAR, in 2013. He is now a Senior Researcher at the City University of Hong Kong. His current research interests mainly include pattern recognition, machine learning and data mining. He has authored/co-authored more than 50 papers published at prestigious international journals and conferences, e.g., IEEE TIP, IEEE TII, IEEE TKDE, ACM TIST, Pattern Recognition, Computer Vision and Image Understanding (CVIU), Information Sciences, Neural Networks, etc.
- Guangcan Liu (M’11- ) received the bachelor's degree in mathematics and the Ph.D. degree in computer science and engineering from Shanghai Jiao Tong University, Shanghai, China, in 2004 and 2010, respectively. He was a PostDoctoral Researcher with the National University of Singapore, Singapore, from 2011 to 2012, the University of Illinois at Urbana-Champaign, Champaign, IL, USA, from 2012 to 2013, Cornell University, Ithaca, NY, USA, from 2013 to 2014, and Rutgers University, Piscataway, NJ, USA, in 2014. Since 2014, he has been a Professor with the School of Information and Control, Nanjing University of Information Science and Technology, Nanjing, China. His research interests touch on the areas of pattern recognition and signal processing. He obtained the National Excellent Youth Fund in 2016 and was designated as the global Highly Cited Researchers in 2017.
