Robust Low Rank Representation via Feature and Sample Scaling

Neurocomputing (2020)

Abstract
Low-rank representation (LRR) is a highly competitive technique in many real-world applications owing to its powerful capability to discover the latent structure of noisy or corrupted data sets. However, traditional low-rank models treat every data point and feature equally, so noisy data cannot be detected and suppressed effectively, and performance deteriorates markedly, especially in heavily noisy scenarios. In this paper, to address this problem, we develop a feature and sample scaling method for low-rank representation. The importance of data points and their features is considered in both the feature and sample spaces, so that clean data points can be distinguished from noisy data points and their features. In addition, based on the observation that noisy data points usually deviate far from the principal projection of the data set, a cosine similarity metric between each data vector and the principal projection vector is developed to measure the importance of each sample. Applying our method to two classical low-rank models, Low Rank Representation (LRR) and Bilinear Factorization (BF), we can learn a better low-rank structure of the clean data while suppressing outliers and missing data. Extensive experimental results on ORL, COIL20, and video surveillance data demonstrate that the proposed method outperforms state-of-the-art low-rank methods in image clustering tasks under various levels of corruption, especially in heavily noisy scenarios.
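To make the sample-scaling idea concrete, below is a minimal Python sketch of how such cosine-similarity weights could be computed: each sample is weighted by the absolute cosine of the angle between its centered feature vector and the leading principal direction of the data, so samples far from the principal projection receive small weights. The function name and the exact weighting scheme are illustrative assumptions, not the authors' formulation; the resulting weights would be used to rescale samples before solving a low-rank model such as the classical LRR objective min_{Z,E} ||Z||_* + lambda ||E||_{2,1} subject to X = XZ + E.

import numpy as np

def sample_weights_by_principal_cosine(X):
    # Illustrative sketch (not the paper's exact method): weight each sample
    # by the absolute cosine similarity between its centered feature vector
    # and the first principal direction of the data set.
    # X: (n_samples, n_features); returns weights in [0, 1].
    Xc = X - X.mean(axis=0)                   # center the data
    # First right-singular vector = leading principal direction.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    u = Vt[0]                                 # principal projection vector
    norms = np.linalg.norm(Xc, axis=1)
    norms[norms == 0] = 1.0                   # guard against division by zero
    return np.abs(Xc @ u) / norms             # |cos| per sample

# Usage: down-weight presumed outliers before an LRR / BF solver.
X = np.random.randn(100, 20)
w = sample_weights_by_principal_cosine(X)
X_scaled = X * w[:, None]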
Keywords
Low rank representation, Feature and sample scaling, Cosine similarity metric, Bilinear factorization