Non-negative Contrastive Learning
arXiv (2024)
Abstract
Deep representations have shown promising performance when transferred to
downstream tasks in a black-box manner. Yet, their inherent lack of
interpretability remains a significant challenge, as these features are often
opaque to human understanding. In this paper, we propose Non-negative
Contrastive Learning (NCL), a renaissance of Non-negative Matrix Factorization
(NMF) aimed at deriving interpretable features. The power of NCL lies in its
enforcement of non-negativity constraints on features, reminiscent of NMF's
capability to extract features that align closely with sample clusters. NCL not
only aligns mathematically well with an NMF objective but also preserves NMF's
interpretability attributes, resulting in a sparser and more disentangled
representation compared to standard contrastive learning (CL). Theoretically,
we establish guarantees on the identifiability and downstream generalization of
NCL. Empirically, we show that these advantages enable NCL to outperform CL
significantly on feature disentanglement, feature selection, as well as
downstream classification tasks. Finally, we show that NCL can be easily
extended to other learning scenarios and benefit supervised learning as well.
Code is available at https://github.com/PKU-ML/non_neg.
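The abstract's central idea, enforcing a non-negativity constraint on the features used in a contrastive objective, can be illustrated with a minimal NumPy sketch. Everything below is an assumption for illustration: we use a ReLU as the non-negativity map and a standard InfoNCE loss between two augmented views; the paper's actual formulation may differ, so consult the linked repository for the authors' implementation.

```python
import numpy as np

def nonneg_features(z):
    # Assumed non-negativity map: a simple ReLU clamps features to be >= 0,
    # mirroring the non-negativity constraint NMF places on its factors.
    return np.maximum(z, 0.0)

def info_nce(z1, z2, temperature=0.5):
    # Standard InfoNCE contrastive loss between two views of the same batch.
    # Row i of z1 and row i of z2 form a positive pair; all other rows are
    # negatives. A small epsilon guards against all-zero rows after the ReLU.
    eps = 1e-8
    z1 = z1 / (np.linalg.norm(z1, axis=1, keepdims=True) + eps)
    z2 = z2 / (np.linalg.norm(z2, axis=1, keepdims=True) + eps)
    logits = z1 @ z2.T / temperature              # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # positives on the diagonal

rng = np.random.default_rng(0)
raw1 = rng.normal(size=(8, 16))   # stand-ins for encoder outputs (view 1)
raw2 = rng.normal(size=(8, 16))   # stand-ins for encoder outputs (view 2)
loss = info_nce(nonneg_features(raw1), nonneg_features(raw2))
```

The only change relative to ordinary CL in this sketch is the `nonneg_features` wrapper; the abstract attributes the sparsity and disentanglement benefits to exactly this kind of constraint.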