Geometry Aware Constrained Optimization Techniques For Deep Learning

2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018

Abstract
In this paper, we generalize the Stochastic Gradient Descent (SGD) and RMSProp algorithms to the setting of Riemannian optimization. SGD is a popular method for large-scale optimization. In particular, it is widely used to train the weights of Deep Neural Networks. However, gradients computed using standard SGD can have large variance, which is detrimental to the convergence rate of the algorithm. Other methods, such as RMSProp and ADAM, address this issue. Nevertheless, these methods cannot be directly applied to constrained optimization problems. In this paper, we extend these popular optimization algorithms to the Riemannian (constrained) setting. We substantiate our proposed extensions with a range of relevant problems in machine learning, such as incremental Principal Component Analysis, computing the Riemannian centroids of SPD matrices, and Deep Metric Learning. We achieve competitive results against the state of the art on fine-grained object recognition datasets.
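The abstract does not spell out the update rule, but a standard way to generalize SGD to a Riemannian manifold is to project the Euclidean gradient onto the tangent space at the current iterate and then retract the update back onto the manifold. The NumPy sketch below illustrates this on the unit sphere as a simplified stand-in for the manifolds used in the paper (e.g., SPD matrices); the function names, the toy objective, the learning rate, and the normalization-based retraction are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def project_to_tangent(x, g):
    """Project a Euclidean gradient g onto the tangent space of the unit sphere at x."""
    return g - np.dot(x, g) * x

def retract(x, v):
    """Retraction: map the tangent step v taken at x back onto the unit sphere."""
    y = x + v
    return y / np.linalg.norm(y)

def riemannian_sgd_step(x, euclidean_grad, lr):
    """One Riemannian SGD step: project the gradient, take the step, retract to the manifold."""
    rgrad = project_to_tangent(x, euclidean_grad)
    return retract(x, -lr * rgrad)

# Toy usage: minimize f(x) = -x^T A x on the unit sphere, whose minimizer is the
# leading eigenvector of A (a simplified stand-in for the incremental PCA setting).
rng = np.random.default_rng(0)
B = rng.standard_normal((20, 5))
A = B.T @ B                       # symmetric positive semi-definite "covariance"
x = rng.standard_normal(5)
x /= np.linalg.norm(x)
for _ in range(500):
    grad = -2.0 * A @ x           # Euclidean gradient of -x^T A x
    x = riemannian_sgd_step(x, grad, lr=0.01)
print(x)                          # approaches the top eigenvector of A (up to sign)
```

The projection keeps the step in the tangent space, where the Riemannian gradient lives, and the retraction replaces the exponential map with a cheap approximation; the same project-then-retract pattern underlies Riemannian variants of RMSProp and ADAM, with the second-moment statistics accumulated on the tangent spaces.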
Keywords
geometry aware constrained optimization techniques, Riemannian optimization, standard SGD, convergence rate, constrained optimization problems, Riemannian centroids, deep learning, deep neural networks, optimization algorithm, deep metric learning, stochastic gradient descent, RMSProp algorithms