
End-to-End Neural Network Compression via L1/L2 Regularized Latency Surrogates

Computer Vision and Pattern Recognition (2024)

Cited: 0 | Views: 64
Key words
Neural Network, Network Compression, Neural Compression, Neural Network Compression, Amount of Time, Transfer Learning, Floating-Point Operations, Accuracy Drop, Neural Architecture Search, Low-Rank Factorization, Optimization Problem, Building Blocks, Source Code, Sparsity, Weight Matrix, Search Space, ImageNet, Feed-Forward Network, Normalization Layer, Accuracy Trade-off, L1-Norm, Recent Techniques, Vision Tasks, Edge Devices, Low-Rank Structure, Black Box, Standard Language, Optimization Issues, Accuracy Loss
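As a rough illustration of the technique named in the title, the sketch below shows an L1/L2 (L1-over-L2 ratio) group-sparsity penalty used as a differentiable surrogate for FLOPs or latency. This is an assumption-labeled sketch for intuition only, not the authors' implementation; the gate variables, per-layer costs, and function names are all hypothetical.

```python
import math

# Hedged sketch (assumption, not the paper's code): each layer has
# per-channel gate variables `alpha`; driving gates to zero prunes
# channels, so a cost-weighted sparsity penalty acts as a latency surrogate.

def l1_over_l2(alpha):
    """Ratio ||alpha||_1 / ||alpha||_2 - a scale-invariant sparsity measure.

    Equals sqrt(n) for a dense vector of n equal entries and 1.0 for a
    one-hot vector, so minimizing it encourages few nonzero gates.
    """
    l1 = sum(abs(a) for a in alpha)
    l2 = math.sqrt(sum(a * a for a in alpha))
    return l1 / l2 if l2 > 0 else 0.0

def latency_surrogate(layer_gates, layer_costs):
    """Weight each layer's sparsity term by its per-channel cost (e.g. FLOPs)."""
    return sum(cost * l1_over_l2(gates)
               for gates, cost in zip(layer_gates, layer_costs))

# A dense gate vector incurs a larger penalty than a sparse one of equal norm:
dense = [1.0, 1.0, 1.0, 1.0]   # l1/l2 = 4 / 2 = 2.0
sparse = [2.0, 0.0, 0.0, 0.0]  # l1/l2 = 2 / 2 = 1.0
```

In training, a term like `lambda_ * latency_surrogate(...)` would be added to the task loss so that gradient descent trades accuracy against the compute proxy end to end.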