A Representer Theorem for Deep Kernel Learning

arXiv (Cornell University) (2017)

Cited by 54 | Views 43
Abstract
In this paper we provide a finite-sample and an infinite-sample representer theorem for the concatenation of (linear combinations of) kernel functions of reproducing kernel Hilbert spaces. These results serve as a mathematical foundation for the analysis of machine learning algorithms based on compositions of functions. As a direct consequence in the finite-sample case, the corresponding infinite-dimensional minimization problems can be recast as (nonlinear) finite-dimensional minimization problems, which can be tackled with nonlinear optimization algorithms. Moreover, we show how concatenated machine learning problems can be reformulated as neural networks and how our representer theorem applies to a broad class of state-of-the-art deep learning methods.
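
The finite-dimensional reformulation the abstract points to can be made concrete. Loosely, the finite-sample theorem says a minimizer of the concatenated problem can be written layer-wise as a kernel expansion over the (mapped) training data, f_l = \sum_{i=1}^{N} c_{l,i} \, k_l(\cdot, (f_{l-1} \circ \cdots \circ f_1)(x_i)), so only the coefficient vectors c_{l,i} remain unknown. Below is a minimal sketch of the resulting nonlinear finite-dimensional problem for a two-layer composition; the Gaussian kernels, squared-error loss, quadratic RKHS penalties, and the L-BFGS solver are illustrative assumptions rather than choices taken from the paper, and names such as c1, c2, lam1, lam2 are ours.

```python
# Sketch: two-layer deep kernel learning after the representer-theorem
# reduction. Each layer is a kernel expansion anchored at the (mapped)
# training points; only the coefficients are optimized.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, d_in, d_hid = 30, 1, 2            # sample count and layer widths (illustrative)
X = rng.uniform(-3, 3, (N, d_in))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(N)

def gauss_gram(A, B, gamma=1.0):
    """Gaussian kernel matrix k(a, b) = exp(-gamma * ||a - b||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

lam1, lam2 = 1e-3, 1e-3              # per-layer regularization weights (assumed form)

def objective(theta):
    # Unpack layer coefficients: c1 maps inputs into R^d_hid, c2 into R.
    c1 = theta[: N * d_hid].reshape(N, d_hid)
    c2 = theta[N * d_hid:].reshape(N, 1)
    K1 = gauss_gram(X, X)            # inner layer: kernels centered at the x_i
    Z = K1 @ c1                      # hidden representation f1(x_i)
    K2 = gauss_gram(Z, Z)            # outer layer: kernels centered at f1(x_i)
    pred = (K2 @ c2).ravel()
    # Empirical risk plus RKHS-norm penalties ||f_l||^2 = c_l' K_l c_l.
    loss = np.mean((pred - y) ** 2)
    reg = lam1 * np.trace(c1.T @ K1 @ c1) + lam2 * (c2.T @ K2 @ c2).item()
    return loss + reg

theta0 = 0.1 * rng.standard_normal(N * d_hid + N)
res = minimize(objective, theta0, method="L-BFGS-B")
print("final objective:", res.fun)
```

Note that the outer Gram matrix K2 is rebuilt at every objective evaluation, since the centers f1(x_i) move with the inner coefficients; this is precisely where the nonlinearity of the reduced problem comes from.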
Keywords
deep kernel learning,representer theorem,artificial neural networks,multi-layer kernel,regularized least-squares regression