Inference on the Edge: Performance Analysis of an Image Classification Task Using Off-The-Shelf CPUs and Open-Source ConvNets

2019 Sixth International Conference on Social Networks Analysis, Management and Security (SNAMS) (2019)

Abstract
The portability of Convolutional Neural Networks (ConvNets) to the mobile edge of the Internet has proven extremely challenging. Embedded CPUs commonly adopted in portable devices were designed and optimized for different kinds of applications, hence they suffer high latency when dealing with the parallel workload of ConvNets. Reduction techniques operating at the algorithmic level are viable options to improve performance, e.g. topology optimization using alternative forms of convolution and arithmetic relaxation via fixed-point quantization. However, their efficacy is hardware sensitive. This paper provides an overview of these issues using as a case study an image classification task implemented with open-source resources, namely different architectures of MobileNet (v1), scaled, trained and quantized for the ImageNet dataset. In this work, we quantify the accuracy-performance trade-off on a commercial board hosting an ARM Cortex-A big.LITTLE system-on-chip. Experimental results reveal performance mismatches that arise from the underlying hardware.
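To make the "arithmetic relaxation via fixed-point quantization" mentioned above concrete, the sketch below shows a uniform affine (scale + zero-point) quantization of a float tensor to 8-bit integers, the scheme commonly used when deploying MobileNet-style networks on embedded CPUs. This is an illustrative assumption, not the exact procedure used in the paper; the function names are ours.

```python
import numpy as np

def quantize_uniform(x, num_bits=8):
    """Uniform affine quantization of a float array to num_bits unsigned ints.

    Illustrative sketch: maps the observed range [min, max] of x onto
    [0, 2^num_bits - 1] via a real-valued scale and an integer zero-point.
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min, x_max = float(x.min()), float(x.max())
    # Guard against a degenerate (constant) tensor.
    scale = (x_max - x_min) / (qmax - qmin) if x_max > x_min else 1.0
    zero_point = int(round(qmin - x_min / scale))
    zero_point = max(qmin, min(qmax, zero_point))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximate float tensor from its quantized form."""
    return scale * (q.astype(np.float32) - zero_point)

# Round-trip a small synthetic weight tensor and measure the error.
rng = np.random.default_rng(0)
weights = rng.standard_normal(64).astype(np.float32)
q, scale, zp = quantize_uniform(weights)
reconstructed = dequantize(q, scale, zp)
max_error = float(np.abs(weights - reconstructed).max())
```

The maximum round-trip error is bounded by the scale (half a quantization step per rounding, plus clamping at the range edges), which is why the accuracy drop reported for quantized MobileNet variants depends on the dynamic range of each layer's weights and activations.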
Keywords
fixed-point quantization, image classification task, open-source resources, performance analysis, off-the-shelf CPUs, open-source ConvNets, convolutional neural networks, mobile edge, embedded CPUs, portable devices, topology optimization, arithmetic relaxation, Internet, ARM Cortex-A big.LITTLE system-on-chip