Hardware Resource Analysis in Distributed Training with Edge Devices

Sihyeong Park

Electronics, 9(1), p. 28, 2019.

DOI: https://doi.org/10.3390/electronics9010028

Abstract:

When a deep learning model is trained with distributed training, the hardware resource utilization of each device depends on the model structure and on the number of devices used for training. Distributed training has recently been applied to edge computing. Since edge devices have hardware resource limitations, such as limited memory, there is a need …
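As a rough, hypothetical sketch of the setting the abstract describes (not code from the paper), the example below runs a small data-parallel training loop with PyTorch DistributedDataParallel on the CPU gloo backend and reports each process's resident memory via psutil; the model, batch size, and step count are placeholder assumptions.

# Hypothetical sketch: data-parallel training with per-process memory reporting.
# Not code from the paper; the model, data, and backend choices are assumptions.
import os

import psutil
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def train(rank: int, world_size: int) -> None:
    # gloo runs on CPU, which loosely approximates a resource-constrained edge setup.
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
    ddp_model = DDP(model)  # gradients are all-reduced across processes
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(10):
        inputs = torch.randn(32, 128)          # synthetic batch
        targets = torch.randint(0, 10, (32,))  # synthetic labels
        optimizer.zero_grad()
        loss_fn(ddp_model(inputs), targets).backward()
        optimizer.step()

    # Report this process's resident memory after a few training steps.
    rss_mib = psutil.Process().memory_info().rss / 2**20
    print(f"rank {rank}/{world_size}: resident memory {rss_mib:.1f} MiB")

    dist.destroy_process_group()


if __name__ == "__main__":
    # torchrun sets RANK and WORLD_SIZE for each spawned process.
    train(int(os.environ["RANK"]), int(os.environ["WORLD_SIZE"]))

Launched with, for example, torchrun --nproc_per_node=2 sketch.py, each process prints its own memory footprint, roughly the kind of per-device measurement the abstract refers to when it says utilization depends on the model structure and the number of devices.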
