A Multimodel Edge Computing Offloading Framework for Deep-Learning Application Based on Bayesian Optimization

IEEE Internet of Things Journal (2023)

With the rapid development of the Internet of Things (IoT), the data generated by IoT devices are growing exponentially. Edge computing alleviates the limited-bandwidth and transmission-delay problems that arise when the tasks of IoT devices are processed in traditional cloud computing. Moreover, with the popularity of deep learning, more and more terminal devices embed artificial intelligence (AI) processors for higher processing capability at the edge. However, the problem of deep-learning task offloading in a heterogeneous edge computing environment has not been fully investigated. In this article, a multimodel edge computing offloading framework is proposed: NVIDIA Jetson edge devices (Jetson TX2, Jetson Xavier NX, and Jetson Nano) and GeForce RTX GPU servers (RTX 3080 and RTX 2080) simulate the edge computing environment, and binary computational offloading decisions are made for face-detection tasks. We also introduce a Bayesian optimization algorithm, the modified tree-structured Parzen estimator (MTPE), to reduce the total cost of edge computation within a time slot, comprising response time and energy consumption, while meeting the accuracy requirements of face detection. In addition, we employ a Lyapunov model to determine the energy harvested between time slots and keep the energy queue stable. Experiments show that the MTPE algorithm reaches the globally optimal solution in fewer iterations. The total cost of the multimodel edge computing framework is on average 17.94% lower than that of a single-model framework, and compared with the double deep Q-network (DDQN), the proposed algorithm reduces the computational consumption of obtaining the offloading decision by 23.01%.
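The abstract does not spell out the MTPE algorithm itself. As a rough illustration of the idea, the toy sketch below applies a tree-structured-Parzen-estimator-style loop to a binary offloading decision vector. All cost numbers, the per-task-independent cost model, and the helper names (`total_cost`, `tpe_offload`) are invented for this example and are much simpler than the paper's cost model (response time plus energy consumption under accuracy and energy-queue constraints).

```python
import math
import random

# Hypothetical per-task costs (invented for the example).
N = 8
random.seed(0)
local_cost   = [random.uniform(1.0, 3.0) for _ in range(N)]  # cost of running task i locally
offload_cost = [random.uniform(0.5, 3.5) for _ in range(N)]  # transmission + remote compute cost

def total_cost(decision):
    """Total cost of a binary offloading decision (1 = offload, 0 = local)."""
    return sum(o if d else l for d, l, o in zip(decision, local_cost, offload_cost))

def tpe_offload(n_iter=60, gamma=0.25, n_candidates=32, n_warmup=10):
    """Minimize total_cost with a Parzen-estimator-style Bayesian loop."""
    history = []  # list of (decision, cost) observations
    for _ in range(n_iter):
        if len(history) < n_warmup:
            x = [random.randint(0, 1) for _ in range(N)]  # random warm-up samples
        else:
            history.sort(key=lambda h: h[1])
            split = max(1, int(gamma * len(history)))
            good = [h[0] for h in history[:split]]  # low-cost observations
            bad  = [h[0] for h in history[split:]]  # the rest
            # Per-dimension Bernoulli densities with Laplace smoothing.
            p_good = [(sum(g[i] for g in good) + 1) / (len(good) + 2) for i in range(N)]
            p_bad  = [(sum(b[i] for b in bad) + 1) / (len(bad) + 2) for i in range(N)]
            def score(x):  # log l(x)/g(x): prefer decisions likely under "good"
                return sum(math.log((p_good[i] if xi else 1 - p_good[i]) /
                                    (p_bad[i] if xi else 1 - p_bad[i]))
                           for i, xi in enumerate(x))
            cands = [[1 if random.random() < p_good[i] else 0 for i in range(N)]
                     for _ in range(n_candidates)]
            x = max(cands, key=score)  # candidate maximizing the density ratio
        history.append((x, total_cost(x)))
    return min(history, key=lambda h: h[1])

best_x, best_c = tpe_offload()
# With independent tasks the true optimum is just the per-task minimum:
optimal_c = sum(min(l, o) for l, o in zip(local_cost, offload_cost))
```

In the paper's actual setting the per-task costs couple through shared bandwidth and the energy queue, so the decision cannot be decomposed task by task; that coupling is what makes a black-box optimizer such as MTPE worthwhile there.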
Bayesian optimization, deep-learning, edge computing, Lyapunov drift function, modified tree-structured Parzen estimator (MTPE), multimodel