LEARNING SPARSE GRAPH LAPLACIAN WITH K EIGENVECTOR PRIOR VIA ITERATIVE GLASSO AND PROJECTION

2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021)

Abstract
Learning a suitable graph is an important precursor to many graph signal processing (GSP) pipelines, such as graph signal compression and denoising. Previous graph learning algorithms either i) make assumptions on graph connectivity (e.g., graph sparsity), or ii) make edge weight assumptions such as positive edges only. In this paper, given an empirical covariance matrix $\bar{C}$ computed from data as input, we consider an eigen-structural assumption on the graph Laplacian matrix $L$: the first $K$ eigenvectors of $L$ are pre-selected, e.g., based on domain-specific criteria, and the remaining eigenvectors are then learned from data. One example use case is image coding, where the first eigenvector is pre-chosen to be constant, regardless of available observed data. We first prove that the subspace $\mathcal{H}_u^+$ of symmetric positive semi-definite (PSD) matrices with the first $K$ eigenvectors being $\{u_k\}$, in a defined Hilbert space, is a convex cone. We then construct an operator to project a given positive definite (PD) matrix $L$ to $\mathcal{H}_u^+$, inspired by the Gram-Schmidt procedure. Finally, we design an efficient hybrid graphical lasso / projection algorithm to compute the most suitable graph Laplacian matrix $L^* \in \mathcal{H}_u^+$ given $\bar{C}$. Experimental results show that, given the first $K$ eigenvectors as a prior, our algorithm outperforms competing graph learning schemes using a variety of graph comparison metrics.
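To make the two ingredients of the abstract concrete, below is a minimal Python sketch of i) a projection that pins $K$ pre-selected orthonormal vectors as exact eigenvectors of a symmetric PD matrix, and ii) an alternation between graphical lasso and that projection. This is an illustrative reading, not the paper's method: the function name project_to_prior_subspace, the regularization weight alpha, the feedback of the current estimate into scikit-learn's graphical_lasso, and the omission of eigenvalue ordering (which would be needed to make the $u_k$ specifically the *first* eigenvectors) are all assumptions.

```python
# Hypothetical sketch of iterative GLASSO + eigen-subspace projection.
# Treats the sparse precision matrix as a (generalized) Laplacian estimate,
# a simplification relative to the paper's formulation.
import numpy as np
from sklearn.covariance import graphical_lasso

def project_to_prior_subspace(L, U):
    """Return a symmetric matrix close to L whose exact eigenvectors include
    the orthonormal columns of U (n x K).

    Simplified stand-in for the paper's Gram-Schmidt-inspired operator:
    each u_k is pinned as an eigenvector with its Rayleigh-quotient
    eigenvalue, and the rest of L is restricted to span{u_k}^perp.
    """
    lam = np.diag(U.T @ L @ U)                 # Rayleigh quotients u_k^T L u_k
    P = np.eye(L.shape[0]) - U @ U.T           # projector onto span{u_k}^perp
    return U @ np.diag(lam) @ U.T + P @ L @ P  # L_proj u_k = lam_k u_k

# --- toy usage ---
rng = np.random.default_rng(0)
n, K = 8, 1
X = rng.standard_normal((200, n))
C_bar = np.cov(X, rowvar=False)                # empirical covariance \bar{C}

# Prior: first eigenvector is constant (the image-coding example).
U = np.ones((n, K)) / np.sqrt(n)

L = np.linalg.inv(C_bar + 1e-3 * np.eye(n))    # initial PD precision estimate
for _ in range(5):                             # hybrid GLASSO / projection loop
    S = np.linalg.inv(L)                       # covariance implied by current L
    S = (S + S.T) / 2                          # symmetrize for numerical safety
    _, L = graphical_lasso(S, alpha=0.1)       # sparsify the precision matrix
    L = project_to_prior_subspace(L, U)        # re-impose the K-eigenvector prior

u0 = U[:, 0]
print(np.linalg.norm(L @ u0 - (u0 @ L @ u0) * u0))  # ~0: u0 is an eigenvector
```

One design note: if L is PD, the projected matrix stays PD (the Rayleigh quotients are positive and $PLP$ is PD on the complement), so the inverse feeding the next graphical lasso call is well defined; how the paper couples the two steps, and how it orders eigenvalues, may differ.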
Keywords
Graph learning, graph signal processing