DAGs with No Fears: A Closer Look at Continuous Optimization for Learning Bayesian Networks
NeurIPS 2020.
This paper re-examines a continuous optimization framework dubbed NOTEARS for learning Bayesian networks. We first generalize existing algebraic characterizations of acyclicity to a class of matrix polynomials. We provide a characterization involving the gradient of functions in this class, which is essential to proving later results and has an intuitive graphical interpretation. Next, focusing on a one-parameter-per-edge setting, it is shown that the Karush-Kuhn-Tucker (KKT) optimality conditions for the …
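To make the abstract's notion of an algebraic acyclicity characterization concrete, the original NOTEARS constraint (which the paper generalizes to a class of matrix polynomials) is h(W) = tr(exp(W ∘ W)) − d, where ∘ is the elementwise product and d is the number of nodes; h(W) = 0 exactly when the weighted adjacency matrix W encodes a DAG. The sketch below is illustrative only (it uses NumPy/SciPy and hypothetical names, not code from the paper):

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

def notears_acyclicity(W):
    """h(W) = tr(exp(W ∘ W)) - d.

    W ∘ W is nilpotent iff W is the adjacency matrix of a DAG,
    in which case tr(exp(W ∘ W)) = d and h(W) = 0; any cycle
    contributes a strictly positive term to the trace.
    """
    d = W.shape[0]
    return np.trace(expm(W * W)) - d

# Acyclic graph: 0 -> 1 -> 2
W_dag = np.array([[0., 1., 0.],
                  [0., 0., 1.],
                  [0., 0., 0.]])

# Cyclic graph: 0 -> 1 -> 0
W_cyc = np.array([[0., 1., 0.],
                  [1., 0., 0.],
                  [0., 0., 0.]])

print(abs(notears_acyclicity(W_dag)))  # ~0 for a DAG
print(notears_acyclicity(W_cyc))       # > 0 when a cycle exists
```

The gradient of h, ∇h(W) = 2 · exp(W ∘ W)ᵀ ∘ W, is the object whose graphical interpretation the abstract alludes to: its (i, j) entry weights edge i→j by walks closing a cycle through it.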