DAGs with No Fears: A Closer Look at Continuous Optimization for Learning Bayesian Networks

Yue Yu

NeurIPS 2020.

Highlight:
We provide a characterization involving the gradient of functions in this class, which is essential to proving later results and has an intuitive graphical interpretation.

Abstract:

This paper re-examines a continuous optimization framework dubbed NOTEARS for learning Bayesian networks. We first generalize existing algebraic characterizations of acyclicity to a class of matrix polynomials. Next, focusing on a one-parameter-per-edge setting, it is shown that the Karush-Kuhn-Tucker (KKT) optimality conditions for the…
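For context, the NOTEARS framework that the paper re-examines enforces acyclicity through the smooth constraint h(W) = tr(exp(W ∘ W)) − d = 0 on the d×d weighted adjacency matrix W (Zheng et al., 2018); the abstract's "class of matrix polynomials" generalizes this kind of algebraic characterization, and the highlighted sentence presumably refers to the gradient of such functions h. Below is a minimal NumPy/SciPy sketch of the standard constraint and its gradient; the function name and example matrices are ours for illustration, not from the paper.

import numpy as np
from scipy.linalg import expm

def notears_h(W):
    """NOTEARS acyclicity value h(W) = tr(exp(W ∘ W)) - d and its gradient.

    h(W) = 0 exactly when the graph of W has no directed cycles.
    Gradient (Zheng et al., 2018): grad h(W) = exp(W ∘ W)^T ∘ 2W.
    """
    E = expm(W * W)                  # matrix exponential of the Hadamard square
    h = np.trace(E) - W.shape[0]     # zero iff the graph is acyclic
    grad = E.T * (2.0 * W)           # chain rule through the Hadamard product
    return h, grad

# A 3-node chain 0 -> 1 -> 2 is acyclic, so h is (numerically) zero.
W_dag = np.array([[0.0, 1.5,  0.0],
                  [0.0, 0.0, -0.7],
                  [0.0, 0.0,  0.0]])
print(notears_h(W_dag)[0])   # ~0.0

# Adding the edge 2 -> 0 creates a cycle, making h strictly positive.
W_cyc = W_dag.copy()
W_cyc[2, 0] = 0.9
print(notears_h(W_cyc)[0])   # > 0

Why this works: for a DAG, W ∘ W is nilpotent (strictly triangular up to a permutation of the nodes), so all powers of order d and above vanish and tr(exp(W ∘ W)) = tr(I) = d; any directed cycle puts positive entries on the diagonal of some power of W ∘ W and makes h strictly positive.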

