Optimized CNN Architectures Benchmarking in Hardware-Constrained Edge Devices in IoT Environments

IEEE Internet of Things Journal (2024)

Abstract
Internet of Things (IoT) and edge devices have expanded their application fields thanks to machine learning (ML) models and their capacity to classify images into previously known labels while working close to the end user. However, a model can be trained with several convolutional neural network (CNN) architectures, which can affect its performance when deployed in hardware-constrained environments such as edge devices. In addition, new training trends suggest using transfer learning techniques to obtain an excellent feature extractor from one domain and reuse it in a new domain that does not have enough images to train the whole model. In light of these trends, this work benchmarks the most representative CNN architectures on emerging edge devices, some of which have hardware accelerators. The ML models were trained and optimized with a small set of images obtained in IoT environments using transfer learning. Our results show that, depending on the CNN architecture, unfreezing up to the last 20 layers allows the model to be fine-tuned correctly to the new set of IoT images. Additionally, quantization is a suitable optimization technique that shrinks the model by 2x or 3x, leading to a lighter memory footprint, lower execution time, and lower battery consumption. Finally, the Coral Dev Board can speed up the inference process by 100x, and the EfficientNet architecture keeps the same classification accuracy even when the model is adapted to a hardware-constrained environment.
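To make the workflow described in the abstract concrete, the following is a minimal sketch of transfer learning with only the last 20 layers unfrozen, followed by post-training full-integer quantization for an Edge TPU target such as the Coral Dev Board. It assumes TensorFlow/Keras with an ImageNet-pretrained EfficientNetB0 backbone; the dataset path, image size, class count, and epoch count are hypothetical placeholders, not values taken from the paper.

```python
# Sketch only: EfficientNetB0 fine-tuning (last 20 layers unfrozen) plus
# post-training int8 quantization. Paths and hyperparameters are assumptions.
import tensorflow as tf

NUM_CLASSES = 5          # hypothetical number of IoT image labels
IMG_SIZE = (224, 224)

# Load a pretrained feature extractor and attach a new classification head.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = True
# Freeze everything except the last 20 layers, as explored in the paper.
for layer in base.layers[:-20]:
    layer.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical small IoT dataset organised as one sub-folder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "iot_images/train", image_size=IMG_SIZE, batch_size=16)
model.fit(train_ds, epochs=10)

# Post-training full-integer quantization: this is the step that shrinks the
# model by roughly 2-3x and makes it compilable for the Coral Edge TPU.
def representative_data():
    for images, _ in train_ds.take(50):
        yield [tf.cast(images, tf.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```

The resulting `model_int8.tflite` would still need to be compiled with the Edge TPU compiler before deployment on the Coral Dev Board; the exact architectures, layer counts, and datasets benchmarked are those reported in the paper itself.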
Keywords
Edge Devices, Edge Computing, Transfer Learning, Convolutional Neural Network Architectures, Model Optimization