Deep Latent Low-Rank Fusion Network for Progressive Subspace Discovery

In: IJCAI 2020, pp. 2762-2768.

DOI: https://doi.org/10.24963/ijcai.2020/383

Abstract:

Low-rank representation is powerful for recovering and clustering the subspace structures, but it cannot obtain deep hierarchical information due to its single-layer mode. In this paper, we present a new and effective strategy to extend the single-layer latent low-rank models into multiple layers, and propose a new and progressive Deep Latent Low-Rank Fusion Network (DLRF-Net) to uncover deep features and structures embedded in input data.

Introduction
  • Representation learning is a fundamental problem for obtaining the underlying explanatory factors and features used in subsequent data classification or clustering tasks [Chen et al., 2019; Chen et al., 2020; Zhang et al., 2016; Zhang et al., 2019].
  • Low-Rank Representation (LRR) [Liu et al., 2013] is one of the most classical algorithms for discovering multiple subspaces, but it is essentially a transductive method that fails to handle new data efficiently.
  • Although LatLRR resolves the insufficient sampling issue and obtains enhanced performance over LRR, it still suffers from a high computational cost: it uses the Nuclear-norm to approximate the rank function and constrain the subspaces, and computing the Nuclear-norm requires a time-consuming Singular Value Decomposition (SVD) of the matrices at each iteration, especially for large-scale datasets.
  • The Frobenius-norm, however, is sensitive to noise and outliers, which may produce inaccurate representations (the cost contrast between the two norms is illustrated in the sketch below).
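To make the cost contrast concrete: the Frobenius-norm is a closed-form entrywise quantity, while the Nuclear-norm is the sum of singular values and therefore requires an SVD each time it is evaluated. A minimal NumPy illustration (the matrix size here is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((500, 500))

# Frobenius-norm: entrywise, no matrix factorization needed
fro = np.sqrt((M ** 2).sum())                    # == np.linalg.norm(M, 'fro')

# Nuclear-norm: sum of singular values, needs a full SVD
nuc = np.linalg.svd(M, compute_uv=False).sum()   # == np.linalg.norm(M, 'nuc')

print(f"Frobenius: {fro:.2f}, Nuclear: {nuc:.2f}")
```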
Highlights
  • Representation learning is a fundamental problem for obtaining the underlying explanatory factors and features used in subsequent data classification or clustering tasks [Chen et al., 2019; Chen et al., 2020; Zhang et al., 2016; Zhang et al., 2019].
  • It is clear that when we use the Frobenius-norm to constrain the matrices Z1 and L1, the problem reduces to Frobenius-norm-based LatLRR; when we use the Nuclear-norm as the constraint, the resulting problem is identical to Latent LRR.
  • Both Frobenius-norm-based LatLRR and Latent LRR are special cases of our DLRF-Net framework.
  • We conduct experiments to evaluate the effectiveness of our fDLRF-Net and nDLRF-Net, and compare them with other related methods, including Frobenius-norm-based LatLRR, Latent LRR, Low-Rank Representation, Robust Latent LRR [Zhang et al., 2014b], Laplacian Regularized Low-Rank Representation [Zhang et al., 2014a], Sa-LatLRR [Wang et al., 2018], and PLrSC [Li et al., 2017b].
  • We also evaluate fDLRF-Net and nDLRF-Net by visualizing the recovered deep features XZ; a minimal visualization sketch follows this list.
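Since each column of X is a vectorized image, the recovered features XZ can be rendered by reshaping columns back to the image grid. A minimal sketch, assuming 32×32 face images and matplotlib (neither is specified by the paper for this step):

```python
import numpy as np
import matplotlib.pyplot as plt

def show_recovered(X, Z, n_show=8, side=32):
    """Render recovered deep features XZ column-by-column as images."""
    XZ = X @ Z                                   # recovered (principal) features
    fig, axes = plt.subplots(2, n_show, figsize=(2 * n_show, 4))
    for i in range(n_show):
        axes[0, i].imshow(X[:, i].reshape(side, side), cmap='gray')
        axes[0, i].set_title('input'); axes[0, i].axis('off')
        axes[1, i].imshow(XZ[:, i].reshape(side, side), cmap='gray')
        axes[1, i].set_title('XZ'); axes[1, i].axis('off')
    plt.tight_layout(); plt.show()
```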
Methods
  • Compared methods include LRR, rLatLRR, SA-LatLRR, rLRR, and PLrSC.
  • Clustering accuracy (%) is reported on UMIST with K = 2, 4, 6, and 8 clusters.
  • Clustering accuracy (%) is also reported on Fashion MNIST.
  • fDLRF-Net with 2, 3, 4, and 5 layers is compared against its single-layer counterpart FLLRR.
  • nDLRF-Net with 2, 3, 4, and 5 layers is compared against LatLRR.
  • [Figure: panels (a) fDLRF-Net, (b) nDLRF-Net, (c) examples, (d) fDLRF-Net, (e) nDLRF-Net.] Section 6.3: Noisy Image Clustering Against Corruptions.
Results
  • Experimental Results and Analysis

    The authors conduct experiments to evaluate the effectiveness of fDLRF-Net and nDLRF-Net, and compare them with other related methods, including FLLRR, LatLRR, LRR, Robust LatLRR [Zhang et al., 2014b], Laplacian Regularized LRR [Zhang et al., 2014a], Sa-LatLRR [Wang et al., 2018], and PLrSC [Li et al., 2017b].
  • Three real image databases are involved: two face datasets (CMU PIE [Sim et al., 2003] and UMIST [Graham et al., 1998]) and the Fashion MNIST database [Xiao et al., 2017].
  • Following common practice, each face image is resized to 32×32 pixels and each Fashion MNIST image to 28×28 pixels.
  • Each block of the coefficient matrix denotes the coefficients for a certain subject, so that each sample can be reconstructed as much as possible from the samples of its own class.
  • The authors follow [Liu et al., 2011] to construct 10 independent subspaces; a sketch of this protocol is given below.
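A minimal NumPy sketch of the synthetic protocol in the spirit of [Liu et al., 2011], where subspace bases are generated by repeatedly rotating an initial random basis; all sizes below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def independent_subspaces(k=10, dim=5, per_sub=20, ambient=100, seed=0):
    """Generate samples from k subspaces whose bases are related by a
    random rotation, following the style of [Liu et al., 2011]."""
    rng = np.random.default_rng(seed)
    U = np.linalg.qr(rng.standard_normal((ambient, dim)))[0]      # basis of subspace 1
    R = np.linalg.qr(rng.standard_normal((ambient, ambient)))[0]  # random rotation
    blocks, labels = [], []
    for c in range(k):
        blocks.append(U @ rng.standard_normal((dim, per_sub)))  # samples in span(U)
        labels.extend([c] * per_sub)
        U = R @ U                          # rotate basis to get the next subspace
    return np.hstack(blocks), np.array(labels)

X, y = independent_subspaces()
print(X.shape)   # (100, 200): 10 subspaces x 20 samples each
```

In the clustering experiments, learned coefficients Z are then typically symmetrized into an affinity matrix |Z| + |Z^T| and segmented with Normalized Cuts [Shi et al., 2000] before computing accuracy.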
Conclusion
  • 5.1 Relationship Analysis

    The authors mainly discuss the relations of DLRF-Net to LatLRR and FLLRR.
  • It is clear that when the Frobenius-norm is used to constrain the matrices Z1 and L1, the problem reduces to FLLRR; when the Nuclear-norm is used as the constraint, the resulting problem is identical to LatLRR.
  • That is, both FLLRR and LatLRR are special cases of the DLRF-Net framework; in solver terms, the choice of norm swaps one proximal step, as sketched below.
  • More effective deep low-rank fusion strategies will be explored in future work.
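A minimal sketch of the two proximal operators that the norm choice selects between (standard results, not the authors' code; the squared-Frobenius form of the FLLRR penalty is an assumption here):

```python
import numpy as np

def prox_nuclear(M, tau):
    """prox of tau*||.||_*: singular value thresholding [Cai et al., 2010].
    Requires a full SVD -- the nDLRF-Net / LatLRR case."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def prox_frobenius_sq(M, tau):
    """prox of (tau/2)*||.||_F^2: a closed-form rescaling with no SVD --
    the fDLRF-Net / FLLRR case."""
    return M / (1.0 + tau)
```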
Tables
  • Table 1: Descriptions of the used image datasets
  • Table 2: Numerical clustering evaluation results on the UMIST and Fashion MNIST databases
Related work
  • We describe the closely-related low-rank coding algorithms.

    2.1 LatLRR and FLLRR

    Given a data matrix $X = [x_1, x_2, \ldots, x_N] \in \mathbb{R}^{n \times N}$, where each $x_i \in \mathbb{R}^n$ is a sample represented as an $n$-dimensional vector and $N$ is the number of samples, LatLRR improves LRR by using the unobserved hidden data $X_H$ to extend the dictionary and overcome the insufficient data sampling issue. Specifically, LatLRR considers the following coding formulation:

    $$\min_{Z}\ \operatorname{rank}(Z), \quad \text{s.t.}\ X_O = [X_O, X_H]\,Z, \tag{1}$$

    where $\operatorname{rank}(\cdot)$ is the rank function and $X_O$ is the observed data matrix. Supposing that $X_O$ and $X_H$ are sampled from the same collection of low-rank subspaces, using the Nuclear-norm to approximate the rank function and the sparse $L_1$-norm on the error term $E$, LatLRR recovers the hidden effects by

    $$\min_{Z, L, E}\ \|Z\|_* + \|L\|_* + \lambda\|E\|_1, \quad \text{s.t.}\ X = XZ + LX + E, \tag{2}$$

    where $\lambda > 0$ is a trade-off parameter and $L$ is the projection that extracts the salient features $LX$.
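Problem (2) is commonly solved with the inexact Augmented Lagrange Multiplier method [Lin et al., 2009]; each iteration calls singular value thresholding [Cai et al., 2010] and hence a full SVD, which is exactly the cost noted in the Introduction. Below is a minimal NumPy sketch of that standard solver, not the authors' released code; the hyperparameter values (lam, mu, rho) are illustrative assumptions:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding, the prox of tau*||.||_* [Cai et al., 2010].
    This full SVD is the per-iteration cost bottleneck discussed above."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def shrink(M, tau):
    """Entrywise soft-thresholding, the prox of tau*||.||_1."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def latlrr(X, lam=0.1, mu=1e-2, rho=1.5, mu_max=1e6, tol=1e-6, max_iter=500):
    """Inexact-ALM sketch for problem (2):
    min ||Z||_* + ||L||_* + lam*||E||_1  s.t.  X = X Z + L X + E."""
    n, N = X.shape
    Z, L, E = np.zeros((N, N)), np.zeros((n, n)), np.zeros((n, N))
    Y1, Y2, Y3 = np.zeros((n, N)), np.zeros((N, N)), np.zeros((n, n))
    In, IN = np.eye(n), np.eye(N)
    XtX, XXt = X.T @ X, X @ X.T
    for _ in range(max_iter):
        # auxiliary variables J = Z and S = L take the nuclear-norm proxes
        J = svt(Z + Y2 / mu, 1.0 / mu)
        S = svt(L + Y3 / mu, 1.0 / mu)
        # closed-form least-squares updates for Z and L
        Z = np.linalg.solve(IN + XtX,
                            X.T @ (X - L @ X - E) + J + (X.T @ Y1 - Y2) / mu)
        L = np.linalg.solve(In + XXt,
                            ((X - X @ Z - E) @ X.T + S + (Y1 @ X.T - Y3) / mu).T).T
        # sparse error term
        E = shrink(X - X @ Z - L @ X + Y1 / mu, lam / mu)
        # dual updates and penalty growth
        R = X - X @ Z - L @ X - E
        Y1 += mu * R
        Y2 += mu * (Z - J)
        Y3 += mu * (L - S)
        mu = min(rho * mu, mu_max)
        if max(np.abs(R).max(), np.abs(Z - J).max(), np.abs(L - S).max()) < tol:
            break
    return Z, L, E   # XZ: principal features, LX: salient features
```

Replacing the two svt calls with the closed-form Frobenius rescaling M / (1 + tau) removes the per-iteration SVD, which is the FLLRR-style speedup that motivates fDLRF-Net.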
Funding
  • This work is supported by the NSFC (61672365, 61806035, and U1936217), the Fundamental Research Funds for the Central Universities of China (JZ2019HGPA0102), and the New Generation AI Major Project of the Ministry of Science and Technology of China (2018AAA0100601).
References
  • [Acharya et al., 2019] Anish Acharya, Rahul Goel, Angeliki Metallinou, Inderjit S. Dhillon. Online Embedding Compression for Text Classification Using Low Rank Matrix Factorization. In: AAAI, Honolulu, Hawaii, 2019.
  • [Bao et al., 2012] Bingkun Bao, Guangcan Liu, Changsheng Xu, Shuicheng Yan. Inductive robust principal component analysis. IEEE TIP, 21(8):3794-3800, 2012.
  • [Cai et al., 2005] Deng Cai, Xiaofei He, Jiawei Han. Document clustering using locality preserving indexing. IEEE TKDE, 17(12):1624-1637, 2005.
  • [Cai et al., 2010] Jianfeng Cai, Emmanuel J. Candes, Zuowei Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956-1982, 2010.
  • [Chen et al., 2019] Xi Chen, Jie Li, Yun Song, Feng Li, Jianjun Chen, Kun Yang. Low-Rank Tensor Completion for Image and Video Recovery via Capped Nuclear Norm. IEEE Access, 7:112142-112153, 2019.
  • [Chen et al., 2020] Yongyong Chen, Xiaolin Xiao, Yicong Zhou. Low-Rank Quaternion Approximation for Color Image Processing. IEEE TIP, 29:1426-1439, 2020.
  • [Ding et al., 2018] Zhengming Ding, Yun Fu. Deep Transfer Low-Rank Coding for Cross-Domain Learning. IEEE TNNLS, 30(6):1768-1779, 2018.
  • [Graham et al., 1998] Daniel B. Graham, Nigel M. Allinson. Characterizing virtual eigensignatures for general purpose face recognition. In: NATO ASI Series F, Computer and Systems Sciences, Berlin, Heidelberg, 1998.
  • [Kim et al., 2019] Byungju Kim, Hyunwoo Kim, Kyungsu Kim, Sungjin Kim, Junmo Kim. Learning Not to Learn: Training Deep Neural Networks With Biased Data. In: IEEE CVPR, pp. 9012-9020, 2019.
  • [Li et al., 2015] Jun Li, Heyou Chang, Jian Yang. Sparse deep stacking network for image classification. In: AAAI, Austin, Texas, USA, pp. 3804-3810, 2015.
  • [Li et al., 2017a] Zechao Li, Jinhui Tang. Weakly-supervised deep nonnegative low-rank model for social image tag refinement and assignment. In: AAAI, San Francisco, California, USA, pp. 4154-4160, 2017.
  • [Li et al., 2017b] Jun Li, Hongfu Liu, Handong Zhao, Yun Fu. Projective low-rank subspace clustering via learning deep encoder. In: IJCAI, Melbourne, Australia, pp. 2145-2151, 2017.
  • [Lin et al., 2009] Zhouchen Lin, Minming Chen, Leqin Wu, Yi Ma. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. Tech. Rep., 2009.
  • [Liu et al., 2011] Guangcan Liu, Shuicheng Yan. Latent low-rank representation for subspace segmentation and feature extraction. In: ICCV, Barcelona, Spain, 2011.
  • [Liu et al., 2013] Guangcan Liu, Zhouchen Lin, Shuicheng Yan, Ju Sun, Yong Yu, Yi Ma. Robust Recovery of Subspace Structures by Low-Rank Representation. IEEE TPAMI, 35(1):171-184, 2013.
  • [Liu et al., 2019] Guangcan Liu, Zhao Zhang, Qingshan Liu, Hongkai Xiong. Robust Subspace Clustering with Compressed Data. IEEE TIP, 28(10):5161-5170, 2019.
  • [Lu et al., 2019] Canyi Lu, Xi Peng, Yunchao Wei. Low-Rank Tensor Completion With a New Tensor Nuclear Norm Induced by Invertible Linear Transforms. In: IEEE CVPR, Long Beach, USA, pp. 5996-6004, 2019.
  • [Ren et al., 2020] Jiahuan Ren, Zhao Zhang, Sheng Li, Yang Wang, Guangcan Liu, Shuicheng Yan, Meng Wang. Learning Hybrid Representation by Robust Dictionary Learning in Factorized Compressed Space. IEEE TIP, 29(1):3941-3956, 2020.
  • [Shi et al., 2000] Jianbo Shi, Jitendra Malik. Normalized cuts and image segmentation. IEEE TPAMI, 22(8):888-905, 2000.
  • [Sim et al., 2003] Terence Sim, Simon Baker, Maan Bsat. The CMU pose, illumination, and expression database. IEEE TPAMI, 25(12):1615-1618, 2003.
  • [Su et al., 2019] Fang Su, Haiyang Shang, Jingyan Wang. Low-Rank Deep Convolutional Neural Network for Multitask Learning. Computational Intelligence and Neuroscience, 7410701:1-7410701:10, 2019.
  • [Wang et al., 2018] Lei Wang, Zhao Zhang, Sheng Li, Guangcan Liu, Chenping Hou, Jie Qin. Similarity-Adaptive Latent Low-Rank Representation for Robust Data Representation. In: PRICAI, Nanjing, China, 2018.
  • [Xiao et al., 2017] Han Xiao, Kashif Rasul, Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv:1708.07747v2, 2017.
  • [Xie et al., 2017] Jianchun Xie, Jian Yang, Jianjun Qian, Ying Tai, Hengmin Zhang. Robust Nuclear Norm-Based Matrix Regression With Applications to Robust Face Recognition. IEEE TIP, 26(5):2286-2295, 2017.
  • [Xue et al., 2019] Zhe Xue, Junping Du, Dawei Du, Siwei Lyu. Deep low-rank subspace ensemble for multi-view clustering. Information Sciences, 482:210-227, 2019.
  • [Yu et al., 2018] Yu Song, Yiquan Wu. Subspace clustering based on latent low rank representation with Frobenius norm. Neurocomputing, 275:2479-2489, 2018.
  • [Zhang et al., 2014a] Zhao Zhang, Shuicheng Yan, Mingbo Zhao. Similarity preserving low-rank representation for enhanced data representation and effective subspace learning. Neural Networks, 53:81-94, 2014.
  • [Zhang et al., 2014b] Hongyang Zhang, Zhouchen Lin, Chao Zhang, Junbin Gao. Robust latent low rank representation for subspace clustering. Neurocomputing, 145:369-373, 2014.
  • [Zhang et al., 2016] Zhao Zhang, Fanzhang Li, Mingbo Zhao, Li Zhang, Shuicheng Yan. Joint Low-rank and Sparse Principal Feature Coding for Enhanced Robust Representation and Visual Classification. IEEE TIP, 25(6):2429-2443, 2016.
  • [Zhang et al., 2019] Zhao Zhang, Jiahuan Ren, Sheng Li, Richang Hong, Zhengjun Zha, Meng Wang. Robust subspace discovery by block-diagonal adaptive locality-constrained representation. In: ACM MM, 2019.