HASS: Hardware-Aware Sparsity Search for Dataflow DNN Accelerator
2024 34th International Conference on Field-Programmable Logic and Applications (FPL 2024)
Keywords
Deep Neural Network, Computational Efficiency, Set Of Models, Energy Efficiency, Memory Storage, Deep Neural Network Model, Hardware Accelerators, Objective Function, Convolutional Layers, Network Layer, Simulated Annealing, Accuracy Loss, Green Curve, Hardware Implementation, Multiple Technologies, Network Throughput, Parallel Data, Hardware Resources, Clock Frequency, Hardware Architecture, Design Space Exploration, Sparse Weight, Hardware Performance, Dot Product Of Vector, Performance Of Pipelines, Blue Nodes, Performance Bottleneck, Iteration Step, Tree Structure, Achievable Throughput