Winograd Convolution for DNNs: Beyond Linear Polynomials

AI*IA (2019)

Abstract
Winograd convolution is widely used in deep neural networks (DNNs). Existing work for DNNs considers only the subset of Winograd algorithms that are equivalent to Toom-Cook convolution. We investigate a wider range of Winograd algorithms for DNNs and show that these additional algorithms can significantly improve floating point (FP) accuracy in many cases. We present results for three FP formats: fp32, fp16 and bf16 (a truncated form of fp32), using 2000 inputs from the ImageNet dataset. We found that in fp16 this approach gives up to 6.5 times better image recognition accuracy in one important case, while maintaining the same number of elementwise multiplication operations in the innermost loop. In bf16 the convolution can be computed using 5% fewer innermost-loop multiplications than with currently used Winograd algorithms, while keeping image recognition accuracy the same as with the direct convolution method.
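
For context, below is a minimal NumPy sketch of the standard Toom-Cook-equivalent Winograd algorithm F(2,3) that this paper generalizes: it computes 2 outputs of a 3-tap filter with 4 elementwise multiplications instead of 6. The transform matrices BT, G, AT (derived from the interpolation points 0, 1, -1 and infinity) and the function name winograd_f23 are illustrative assumptions, not taken from the paper, which instead considers algorithms built from higher-degree polynomial factors.

```python
import numpy as np

# Standard Winograd F(2,3) transform matrices (Toom-Cook-equivalent,
# i.e. built from linear polynomial factors). Illustrative only.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=np.float32)   # input transform
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]], dtype=np.float32)   # filter transform
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=np.float32)    # output transform

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 convolution outputs."""
    # Only this elementwise product counts toward the innermost-loop
    # multiplications discussed in the abstract: 4 instead of 6.
    m = (G @ g) * (BT @ d)
    return AT @ m

d = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
g = np.array([0.5, 1.0, -0.25], dtype=np.float32)
print(winograd_f23(d, g))                    # Winograd result
print(np.correlate(d, g, mode="valid"))      # matches direct convolution
```

Choosing different (e.g. higher-degree) polynomial factors changes the entries of BT, G and AT, and thereby the FP rounding behavior, without necessarily changing the number of elementwise multiplications, which is the trade-off the paper explores.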
Keywords
DNN, Convolution, Winograd convolution, Accuracy, Floating point