Joint multi-user DNN partitioning and task offloading in mobile edge computing.

Ad Hoc Networks (2023)

Abstract
Mobile edge computing brings artificial intelligence computation close to terminals; to reduce latency and save energy, Deep Neural Networks (DNNs) should be partitioned so that part of each task is offloaded to the edge for execution. Most existing studies assume that all tasks are of the same type or that all servers have identical computing resources. In practice, however, Mobile Devices (MDs) and Edge Servers (ESs) are heterogeneous in both type and computing capacity, which makes it challenging to find the optimal partition point for each DNN and to offload it to an appropriate ES. To fill this gap, we propose a partitioning-and-offloading scheme for a heterogeneous task-server system that reduces the overall system latency and energy consumption of DNN inference. The scheme has four steps. First, it establishes a partitioning and task-offloading model that adapts to the DNN model. Second, to reduce the solution space, it applies a Partition Point Retain (PPR) algorithm. Third, an Optimal Partition Point (OPP) algorithm finds the minimum-cost partition point for each ES with respect to each MD. Finally, based on these partition points, the DNN tasks of each MD are offloaded to complete the scheme. Simulations show that, in a heterogeneous edge computing environment, the proposed scheme reduces the total cost by 77.9% and 59.9% on average compared to Only-Local and Only-Server execution, respectively.
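The partition-point search described in the abstract can be illustrated with a minimal sketch. The cost model below (FLOPs-proportional latency, MD-side energy, a weighted latency/energy sum, and negligible result-return time) is an assumption for illustration only; the paper's actual model, the PPR pruning step, and the multi-user offloading assignment are not reproduced here.

```python
# Hypothetical sketch: brute-force optimal-partition-point search for one
# MD-ES pair. A DNN is a list of layers, each (flops, output_bytes); the
# first k layers run on the mobile device, the rest on the edge server.
# All parameters and the cost model are illustrative assumptions.

def partition_cost(layers, input_bytes, k, md_speed, es_speed, bandwidth,
                   tx_power=1.0, md_power=2.0, alpha=0.5):
    """Weighted latency/energy cost of cutting the DNN after layer k."""
    local_lat = sum(f for f, _ in layers[:k]) / md_speed
    edge_lat = sum(f for f, _ in layers[k:]) / es_speed
    if k == len(layers):                 # fully local: nothing transmitted
        tx_lat = 0.0
    else:                                # send the cut layer's activations
        tx_bytes = input_bytes if k == 0 else layers[k - 1][1]
        tx_lat = tx_bytes / bandwidth
    latency = local_lat + tx_lat + edge_lat
    energy = md_power * local_lat + tx_power * tx_lat  # MD-side energy only
    return alpha * latency + (1 - alpha) * energy

def optimal_partition(layers, input_bytes, md_speed, es_speed, bandwidth):
    """Return (best_k, best_cost) over all candidate partition points."""
    costs = [partition_cost(layers, input_bytes, k,
                            md_speed, es_speed, bandwidth)
             for k in range(len(layers) + 1)]
    best_k = min(range(len(costs)), key=costs.__getitem__)
    return best_k, costs[best_k]
```

In a multi-user setting, such a search would be repeated for every MD-ES pair, after which each MD is assigned to the ES whose optimal partition point yields the lowest cost.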
Key words
Mobile edge computing, Deep neural network (DNN), DNN partitioning and offloading, Heterogeneous edge computing