
Deferred Dropout: An Algorithm-Hardware Co-Design DNN Training Method Provisioning Consistent High Activation Sparsity

2021 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 2021

Abstract
This paper proposes a deep neural network training method that provisions consistently high activation sparsity and the ability to adjust that sparsity. To improve training performance, prior work reduces the memory footprint of training by exploiting the input activation sparsity that arises from the ReLU function. However, the previous approach relies solely on the inherent sparsity caused by ...
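The prior-work idea the abstract refers to, keeping only the nonzero ReLU outputs for the backward pass instead of the dense activation tensor, can be sketched roughly as below. This is an illustrative assumption in PyTorch, not the paper's Deferred Dropout mechanism; the SparseReLU function and its index-based storage are hypothetical and only demonstrate how ReLU-induced activation sparsity could shrink the saved-activation footprint.

```python
# Illustrative sketch only: store the indices of nonzero ReLU outputs for the
# backward pass rather than the dense activation tensor. This mirrors the
# generic "exploit ReLU-induced activation sparsity" idea from the abstract,
# not the Deferred Dropout method proposed by the paper.
import torch

class SparseReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        y = torch.relu(x)
        flat = y.flatten()
        # Positions of surviving (nonzero) activations; only these are saved.
        idx = flat.nonzero(as_tuple=False).squeeze(1)
        ctx.save_for_backward(idx)
        ctx.input_shape = x.shape
        ctx.total = flat.numel()
        return y

    @staticmethod
    def backward(ctx, grad_out):
        (idx,) = ctx.saved_tensors
        # ReLU gradient: pass gradients only where the activation was nonzero.
        mask = torch.zeros(ctx.total, device=grad_out.device, dtype=grad_out.dtype)
        mask[idx] = 1.0
        return grad_out * mask.view(ctx.input_shape)

x = torch.randn(4, 8, requires_grad=True)
y = SparseReLU.apply(x)
y.sum().backward()
print(f"stored {int((x > 0).sum())} of {x.numel()} activations for backward")
```

In this sketch the memory saving comes from replacing the dense saved activation with a list of surviving indices; a hardware-aware scheme such as the one the paper targets would additionally need a compressed storage format and a way to keep the sparsity level consistent and adjustable across layers and training iterations.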
Keywords
Training, Deep learning, Design automation, Computational modeling, Neural networks, Memory management, Hardware