Robust Forward Algorithms via PAC-Bayes and Laplace Distributions.
JMLR Workshop and Conference Proceedings (2014)
Abstract
Laplace random variables are commonly used to model extreme noise in many fields, and systems trained to handle such noise are often characterized by robustness properties. We introduce new learning algorithms that minimize objectives derived directly from PAC-Bayes bounds incorporating Laplace distributions. The resulting algorithms are regularized by the Huber loss function and are robust to noise, as the Laplace distribution integrates large deviations of the parameters. We analyze the convexity properties of the objective and propose several bounds that are fully convex, two of which are jointly convex in the mean and standard deviation under certain conditions. We derive new forward algorithms analogous to recent boosting algorithms, establishing novel relations between boosting and PAC-Bayes analysis. Experiments show that our algorithms outperform AdaBoost, L1-LogBoost [10], and RobustBoost [11] over a wide range of input noise.
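The robustness mechanism mentioned above rests on the Huber loss, which is quadratic for small deviations and only linear for large ones, so outliers contribute at most linearly to the objective. A minimal NumPy sketch of the standard Huber loss (the function name and the transition threshold `delta` are illustrative, not from the paper):

```python
import numpy as np

def huber_loss(a, delta=1.0):
    """Huber loss: quadratic for |a| <= delta, linear beyond it.

    The linear branch delta * (|a| - delta / 2) is chosen so the two
    pieces meet with matching value and slope at |a| = delta.
    """
    a = np.asarray(a, dtype=float)
    quadratic = 0.5 * a ** 2
    linear = delta * (np.abs(a) - 0.5 * delta)
    return np.where(np.abs(a) <= delta, quadratic, linear)
```

For example, `huber_loss(0.5)` falls on the quadratic branch (0.125), while `huber_loss(2.0)` falls on the linear branch (1.5), growing far more slowly than the squared loss (2.0) on the same deviation.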