Kernel Reverse Neighborhood Discriminant Analysis

Electronics (2023)

Abstract
Currently, neighborhood linear discriminant analysis (nLDA) exploits reverse nearest neighbors (RNNs) to avoid the assumption of linear discriminant analysis (LDA) that all samples from the same class are independently and identically distributed (i.i.d.). nLDA performs well when a dataset contains multimodal classes. However, in complex pattern recognition tasks such as visual classification, appearance variations caused by deformation, illumination, and viewing angle often introduce non-linearity, and multimodal classes are not easily separated in a lower-dimensional feature space. One solution to these problems is to map the features to a higher-dimensional feature space for discriminant learning. Hence, in this paper, we employ kernel functions to map the original data to a higher-dimensional feature space, where the nonlinear multimodal classes can be better classified. We present the detailed derivation of the proposed kernel reverse neighborhood discriminant analysis (KRNDA) using the kernel trick. The proposed KRNDA outperforms the original nLDA on most datasets of the UCI benchmark database. In high-dimensional visual recognition tasks, including handwritten digit recognition, object categorization, and face recognition, KRNDA achieves the best recognition results compared to several sophisticated LDA-based discriminators.
Keywords
linear discriminant analysis, kernel trick, reverse nearest neighbors, Gaussian kernel
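
To illustrate the two building blocks the abstract combines, the sketch below computes a Gaussian kernel matrix and then finds each sample's reverse nearest neighbors using kernel-space distances obtained via the kernel trick, so no explicit high-dimensional mapping is ever formed. This is a minimal sketch under stated assumptions, not the paper's KRNDA implementation; the function names and the parameters gamma and k are illustrative choices.

```python
import numpy as np

def gaussian_kernel(X, gamma=1.0):
    """Gaussian (RBF) kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * d2)

def reverse_nearest_neighbors(K, k=3):
    """Reverse nearest neighbors in the kernel-induced feature space.

    Distances come from the kernel trick:
        ||phi(x_i) - phi(x_j)||^2 = K[i, i] + K[j, j] - 2 * K[i, j]
    RNN(i) = { j : i is among the k nearest neighbors of j }.
    """
    diag = np.diag(K)
    d2 = diag[:, None] + diag[None, :] - 2.0 * K
    np.fill_diagonal(d2, np.inf)          # a sample is not its own neighbor
    knn = np.argsort(d2, axis=1)[:, :k]   # kNN index set of each sample
    n = K.shape[0]
    return [np.flatnonzero((knn == i).any(axis=1)) for i in range(n)]

# Toy usage: two Gaussian blobs standing in for a multimodal dataset.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (20, 2)), rng.normal(5.0, 1.0, (20, 2))])
K = gaussian_kernel(X, gamma=0.5)
rnn_sets = reverse_nearest_neighbors(K, k=3)
print(rnn_sets[0])  # indices of samples whose 3-NN lists contain sample 0
```

Note that unlike kNN sets, RNN sets can be empty or larger than k, which is what lets RNN-based scatter estimates adapt to multimodal class shapes; the full KRNDA objective built on these sets is derived in the paper itself.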