On Model Parallelization and Scheduling Strategies for Distributed Machine Learning

Advances in Neural Information Processing Systems 27 (NIPS 2014), pp. 2834–2842, 2014.

Abstract:

Distributed machine learning has typically been approached from a data parallel perspective, where big data are partitioned to multiple workers and an algorithm is executed concurrently over different data subsets under various synchronization schemes to ensure speed-up and/or correctness. A sibling problem that has received relatively less attention …
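
The abstract contrasts data parallelism (partitioning the data across workers) with model parallelism (partitioning the model parameters across workers). The sketch below is only an illustrative assumption, not the paper's system or API: it shows the two partitioning schemes on plain least-squares gradient descent, and the names data_parallel_step and model_parallel_step are hypothetical.

```python
# Illustrative sketch only -- not the paper's system or its API.
# Contrasts the two partitioning schemes the abstract describes, using
# plain least-squares gradient descent so the example is self-contained.
import numpy as np

def data_parallel_step(X, y, w, n_workers, lr=0.1):
    """Data parallelism: rows (examples) are partitioned across workers.
    Each worker computes a gradient on its own data shard; the shard
    gradients are then aggregated (here, a synchronous average)."""
    shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    grads = [Xi.T @ (Xi @ w - yi) / len(yi) for Xi, yi in shards]
    return w - lr * np.mean(grads, axis=0)

def model_parallel_step(X, y, w, n_workers, lr=0.1):
    """Model parallelism: coordinates (parameters) are partitioned across
    workers. Each worker updates only its own block of w; concurrently
    updated blocks interact through the shared residual, which is why
    which parameters are updated together (the schedule) matters."""
    residual = X @ w - y
    blocks = np.array_split(np.arange(X.shape[1]), n_workers)
    w_new = w.copy()
    for block in blocks:  # conceptually concurrent, one block per worker
        g_block = X[:, block].T @ residual / len(y)
        w_new[block] = w[block] - lr * g_block
    return w_new

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))
    w_true = rng.normal(size=50)
    y = X @ w_true + 0.01 * rng.normal(size=200)
    w_dp = w_mp = np.zeros(50)
    for _ in range(500):
        w_dp = data_parallel_step(X, y, w_dp, n_workers=4)
        w_mp = model_parallel_step(X, y, w_mp, n_workers=4)
    print("data-parallel error :", np.linalg.norm(w_dp - w_true))
    print("model-parallel error:", np.linalg.norm(w_mp - w_true))
```

In this toy synchronous setting the two schemes compute the same update; the model-parallel case only becomes interesting once parameter blocks are updated concurrently and asynchronously, which is the dependency problem that motivates scheduling strategies in the model-parallel setting.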
