COMET: A Recipe for Learning and Using Large Ensembles on Massive Data

Data Mining (2011)

Abstract
COMET is a single-pass MapReduce algorithm for learning on large-scale data. It builds multiple random forest ensembles on distributed blocks of data and merges them into a mega-ensemble. This approach is appropriate when learning from massive-scale data that is too large to fit on a single machine. To get the best accuracy, IVoting should be used instead of bagging to generate the training subset for each decision tree in the random forest. Experiments with two large datasets (5GB and 50GB compressed) show that COMET compares favorably (in both accuracy and training time) to learning on a subsample of data using a serial algorithm. Finally, we propose a new Gaussian approach for lazy ensemble evaluation which dynamically decides how many ensemble members to evaluate per data point; this can reduce evaluation cost by 100X or more.
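The lazy ensemble evaluation idea can be sketched in code: evaluate ensemble members one at a time for a given data point, maintain a running mean of their votes, and stop once a Gaussian confidence interval around that mean clearly excludes the decision threshold. The sketch below is a minimal illustration of this stopping rule, not the paper's exact procedure; the function name `lazy_ensemble_predict` and the parameters `z` and `min_votes` are hypothetical choices for this example.

```python
import math

def lazy_ensemble_predict(members, x, z=3.0, min_votes=10):
    """Evaluate binary-voting ensemble members lazily.

    Hypothetical sketch of Gaussian early stopping: after each vote,
    form a normal confidence interval for the mean vote and stop as
    soon as the interval lies entirely on one side of 0.5.
    Returns (predicted_class, number_of_members_evaluated).
    """
    votes = []
    for predict in members:
        votes.append(predict(x))  # each member returns 0 or 1
        n = len(votes)
        if n < min_votes:
            continue  # require a few votes before trusting the estimate
        mean = sum(votes) / n
        var = mean * (1.0 - mean)          # Bernoulli variance of the votes
        half_width = z * math.sqrt(var / n)  # Gaussian interval half-width
        # Stop early when the interval excludes the 0.5 threshold.
        if mean - half_width > 0.5 or mean + half_width < 0.5:
            break
    prediction = 1 if sum(votes) / len(votes) > 0.5 else 0
    return prediction, len(votes)
```

When most members agree, the interval collapses away from 0.5 after only a handful of votes, so only a small fraction of a large ensemble is ever evaluated; this is the source of the large evaluation-cost reductions the abstract describes.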
Keywords
Gaussian processes, cost reduction, data handling, decision trees, distributed processing, learning (artificial intelligence), COMET, Gaussian approach, IVoting, data distributed blocks, decision tree, evaluation cost reduction, importance-sampled voting, lazy ensemble evaluation, massive data learning, multiple random forest, serial algorithm, single-pass MapReduce algorithm, training subset generation, Decision Tree Ensembles, Lazy Ensemble Evaluation, MapReduce, Massive Data