Compressing Deep Neural Networks: A New Hashing Pipeline Using Kac's Random Walk Matrices.

arXiv: Learning (2018)

Abstract
The popularity of deep learning is increasing by the day. However, despite recent advancements in hardware, deep neural networks remain computationally intensive. Recent work has shown that, by preserving the angular distance between vectors, random feature maps can reduce dimensionality without introducing bias into the estimator. We test a variety of established hashing pipelines as well as a new approach using Kac's random walk matrices, and demonstrate that this method achieves accuracy comparable to existing pipelines.
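As a rough illustration of the angle-preserving property the abstract refers to, the sketch below uses a standard sign-random-projection (SimHash-style) scheme: the fraction of disagreeing hash bits is an unbiased estimate of theta/pi, where theta is the angle between the two input vectors. This is only a generic sketch, not the paper's Kac's-random-walk pipeline; all dimensions and names here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: dense Gaussian sign-random-projection hashing,
# NOT the paper's Kac's-random-walk construction.
rng = np.random.default_rng(0)

d, k = 256, 2048                   # input dimension, number of hash bits
R = rng.standard_normal((k, d))    # random projection matrix

x = rng.standard_normal(d)
y = rng.standard_normal(d)

hx = np.sign(R @ x)                # k-bit signature of x
hy = np.sign(R @ y)                # k-bit signature of y

# Each bit disagrees with probability theta / pi, so the disagreement
# fraction times pi is an unbiased estimate of the angle theta.
est_angle = np.mean(hx != hy) * np.pi
true_angle = np.arccos(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

print(est_angle, true_angle)
```

The estimator's variance shrinks as the number of hash bits k grows; the appeal of structured constructions such as Kac's random walk matrices is reducing the cost of applying the projection compared with a dense Gaussian matrix.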