Ranking by Dependence - A Fair Criteria

Uncertainty in Artificial Intelligence (2012)

Cited by 23 | Viewed 10
Abstract
Estimating the dependences between random variables, and ranking them accordingly, is a prevalent problem in machine learning. Pursuing frequentist and information-theoretic approaches, we first show that the p-value and the mutual information can fail even in simplistic situations. We then propose two conditions for regularizing an estimator of dependence, which leads to a simple yet effective new measure. We discuss its advantages and compare it to well-established model-selection criteria. Apart from that, we derive a simple constraint for regularizing parameter estimates in a graphical model. This results in an analytical approximation for the optimal value of the equivalent sample size, which agrees very well with the more involved Bayesian approach in our experiments.
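The failure of the mutual information mentioned in the abstract can be illustrated with the naive plug-in (maximum-likelihood) estimator, which overestimates dependence badly on small samples. The sketch below is our own illustration, not the paper's regularized measure; the function name and the use of NumPy are assumptions for the example.

```python
import numpy as np

def plug_in_mutual_information(x, y):
    """Plug-in (maximum-likelihood) estimate of the mutual information
    between two discrete samples x and y, in nats."""
    # Map each sample to integer category indices.
    _, xi = np.unique(np.asarray(x), return_inverse=True)
    _, yi = np.unique(np.asarray(y), return_inverse=True)
    # Build the empirical joint distribution from a contingency table.
    joint = np.zeros((xi.max() + 1, yi.max() + 1))
    np.add.at(joint, (xi, yi), 1)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0  # avoid log(0) on empty cells
    return float((p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])).sum())
```

For example, on four samples where every observed value is distinct, the estimator returns the maximal value log 4 even when the variables are independent, since each pair occurs exactly once; this is the kind of simplistic situation in which an unregularized estimator fails.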
Keywords
random variable, sample size, graphical model, mutual information, machine learning, Bayesian approach