
EosDNN: an Efficient Offloading Scheme for DNN Inference Acceleration in Local-Edge-Cloud Collaborative Environments

IEEE Transactions on Green Communications and Networking (2022)

Cited: 18 | Views: 35
Keywords
Servers, Computational modeling, Delays, Collaboration, Partitioning algorithms, Mobile handsets, Genetic algorithms, Mobile computing, local-edge-cloud collaboration, computation offloading, DNN inference, intelligent applications