FEC: Efficient Deep Recommendation Model Training with Flexible Embedding Communication.

Proc. ACM Manag. Data (2023)

Abstract
Embedding-based deep recommendation models (EDRMs), which contain small dense models and large embedding tables, are widely used in industry. Embedding communication constitutes the main cost for the distributed training of EDRMs, and thus we propose two strategies to improve its efficiency, i.e., embedding tiering and pre-fetching. In particular, embedding tiering uses AllReduce to communicate popular embeddings that are accessed frequently. This is counter-intuitive as embeddings belong to the sparse embedding tables, but reasonable because the access pattern of popular embeddings resembles dense models. Pre-fetching starts communication early for embeddings that receive no updates such that they are removed from the critical path of training. We implement embedding tiering and pre-fetching in a system called FEC and compare it with state-of-the-art systems on real datasets. The results show that FEC consistently outperforms the existing methods on all datasets, with speedups of up to 6.65x in embedding communication time and 2.42x in training throughput over the best-performing baseline.
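The core of embedding tiering is deciding which embedding rows are "hot" enough to replicate and synchronize with AllReduce (like dense parameters) versus keeping in sparse tables. The abstract does not specify how FEC classifies embeddings, so the following is a minimal illustrative sketch of one plausible approach, ranking IDs by observed access frequency; the function name and the hot_fraction threshold are assumptions, not FEC's actual mechanism.

```python
from collections import Counter

def tier_embeddings(access_log, hot_fraction=0.01):
    """Split embedding IDs into a hot tier (candidates for replication
    and AllReduce sync) and a cold tier (left in the sparse tables),
    ranked by access frequency in the observed log.
    Hypothetical sketch -- not FEC's actual classification logic."""
    counts = Counter(access_log)
    n_hot = max(1, int(len(counts) * hot_fraction))
    ranked = [eid for eid, _ in counts.most_common()]
    return set(ranked[:n_hot]), set(ranked[n_hot:])

# Skewed access pattern: two IDs dominate, the long tail is touched once.
log = [0] * 50 + [1] * 30 + list(range(2, 102))
hot, cold = tier_embeddings(log, hot_fraction=0.02)
# hot contains the two most frequently accessed IDs (0 and 1).
```

The sketch reflects the intuition stated in the abstract: under a skewed (power-law) access distribution, a small hot set is touched on almost every batch, so its communication pattern resembles that of dense model parameters.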
Keywords
fec,recommendation