Parallel Latent Dirichlet Allocation on GPUs

Computational Science - ICCS 2018, Part II (2018)

Cited by 3
Abstract
Latent Dirichlet Allocation (LDA) is a statistical technique for topic modeling. Since it is very computationally demanding, its parallelization has garnered considerable interest. In this paper, we systematically analyze the data access patterns for LDA and devise suitable algorithmic adaptations and parallelization strategies for GPUs. Experiments on large-scale datasets show the effectiveness of the new parallel implementation on GPUs.
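The abstract points to LDA's data access patterns as the key obstacle to parallelization. As a hedged illustration of what those accesses look like, below is a minimal serial collapsed Gibbs sampler for LDA, a standard inference method for this model; the paper's specific GPU adaptations are not reproduced here, and all function and variable names are illustrative. The per-token reads and writes to the document-topic and topic-word count matrices are the irregular, conflicting accesses that a GPU implementation must reorganize.

```python
import numpy as np

def init_counts(docs, K, V, rng):
    """Randomly assign a topic to every token and build the count matrices."""
    z = [[int(rng.integers(K)) for _ in doc] for doc in docs]
    ndk = np.zeros((len(docs), K), dtype=int)  # document-topic counts
    nkw = np.zeros((K, V), dtype=int)          # topic-word counts
    nk = np.zeros(K, dtype=int)                # per-topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return z, ndk, nkw, nk

def gibbs_sweep(docs, z, ndk, nkw, nk, alpha, beta, rng):
    """One collapsed Gibbs sweep: resample the topic of every token.

    Each token touches one row of ndk, one column of nkw, and all of nk;
    these scattered updates are the contention a parallel version must manage.
    """
    K = nk.shape[0]
    V = nkw.shape[1]
    for d, words in enumerate(docs):
        for i, w in enumerate(words):
            k = z[d][i]
            # Remove the token's current assignment from the counts.
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
            # p(k | rest) ∝ (ndk + alpha) * (nkw + beta) / (nk + V*beta)
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            k = int(rng.choice(K, p=p / p.sum()))
            z[d][i] = k
            # Add the new assignment back.
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
```

A typical use is a few sweeps over a small toy corpus, e.g. `z, ndk, nkw, nk = init_counts(docs, K, V, rng)` followed by repeated `gibbs_sweep(...)` calls; the count matrices always stay consistent with the assignments, which is the invariant a parallel scheme must preserve across threads.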
Keywords
Parallel topic modeling, Parallel Latent Dirichlet Allocation, Parallel machine learning