Primitives for Dynamic Big Model Parallelism

CoRR, 2014.

Abstract:

When training large machine learning models with many variables or parameters, a single machine is often inadequate, since the model may be too large to fit in memory and training can take a long time even with stochastic updates. A natural recourse is to turn to distributed cluster computing, in order to harness additional memory and...
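The abstract refers to splitting a model that is too large for one machine across a cluster. As a rough illustration of that idea (not the paper's actual primitives or API), the sketch below partitions the parameter vector of a least-squares model into blocks and lets each "worker" update only its own block via coordinate descent; the names `partition_params` and `Worker` are hypothetical, and real workers would run on separate machines rather than in one process.

```python
# Minimal sketch of block-partitioned (model-parallel) parameter updates.
# Assumption: simple least-squares objective, exact coordinate descent per block.
import numpy as np

def partition_params(num_params, num_workers):
    """Split parameter indices into contiguous blocks, one per worker."""
    return np.array_split(np.arange(num_params), num_workers)

class Worker:
    """Owns one block of coordinates and updates only those coordinates."""
    def __init__(self, block, X, y):
        self.block = block   # indices of the parameters this worker owns
        self.X = X
        self.y = y

    def update(self, w):
        """One pass of exact coordinate descent over the owned block."""
        for j in self.block:
            # Residual with coordinate j's current contribution removed.
            residual = self.y - self.X @ w + self.X[:, j] * w[j]
            w[j] = (self.X[:, j] @ residual) / (self.X[:, j] @ self.X[:, j])
        return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, num_workers = 200, 50, 4
    X = rng.standard_normal((n, d))
    w_true = rng.standard_normal(d)
    y = X @ w_true + 0.01 * rng.standard_normal(n)

    w = np.zeros(d)
    workers = [Worker(b, X, y) for b in partition_params(d, num_workers)]
    for _ in range(20):            # outer iterations
        for worker in workers:     # in a cluster, these would run remotely
            w = worker.update(w)
    print("parameter error:", np.linalg.norm(w - w_true))
```

In this toy version the blocks are updated sequentially; the scheduling and communication needed to update blocks concurrently and consistently is exactly the kind of problem the paper's primitives target.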
