Efficiency of Shallow Cascades for Improving Deep Learning AI Systems

2018 International Joint Conference on Neural Networks (IJCNN)

Cited by 1 | Views 40
Abstract
This paper presents a technology for simple, non-iterative improvement of multilayer and Deep Learning neural networks and Artificial Intelligence (AI) systems. The improvements are, in essence, shallow networks constructed on top of the existing Deep Learning architecture. The theoretical foundation of the technology is based on Stochastic Separation Theorems and ideas from measure concentration. We show that, subject to mild technical assumptions on the statistical properties of internal signals in a Deep Learning AI system, the technology enables, with probability close to one, instantaneous "learning away" of spurious and systematic errors. The method is illustrated with numerical examples.
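As an illustrative sketch only (not the paper's implementation): the stochastic separation idea is that in high dimension, a single Fisher-type linear functional can, with probability close to one, separate one erroneous sample from a large set of correct samples in the network's internal feature space, so the error can be "learned away" non-iteratively. The dimensions, data, and variable names below are hypothetical stand-ins for real deep-network features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for internal deep-network signals: points in high dimension.
d, n = 200, 1000
correct = rng.standard_normal((n, d))   # features of correctly handled inputs
error = rng.standard_normal(d)          # feature vector of one misclassified input

# One-shot linear corrector: a single functional pointing from the bulk of
# correct points toward the error point (no iterative training involved).
mean = correct.mean(axis=0)
w = error - mean
w /= np.linalg.norm(w)

# Threshold at the midpoint between the cloud center and the error point.
threshold = 0.5 * (w @ (error - mean))
scores_correct = (correct - mean) @ w
score_error = (error - mean) @ w

# Measure concentration: almost all correct points project below the threshold,
# while the single error point lies above it.
separated = np.mean(scores_correct < threshold)
print(f"fraction of correct points separated: {separated:.3f}")
print(f"error point above threshold: {score_error > threshold}")
```

A shallow "cascade" in this spirit would attach such a functional to the existing network and override its output only when the corrector fires, leaving the deep architecture itself untouched.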
Keywords
Deep Learning, Stochastic Separation Theorems, Linear Separability, Perceptron, Shallow Networks