
Hadoop Characterization

Trust, Security And Privacy In Computing And Communications(2015)

Cited 8 | Views 14
Abstract
In the last decade, Warehouse Scale Computers (WSCs) have grown in number and capacity, while Hadoop has become the de facto standard framework for Big Data processing. Despite the existence of several benchmark suites, sizing guides, and characterization studies, there are few concrete guidelines for WSC designers and engineers who need to know how real Hadoop workloads will stress the different hardware subsystems of their servers. Available studies have reported execution statistics of Hadoop benchmarks but have not been able to extract meaningful, reusable results. Moreover, existing sizing guides provide hardware acquisition lists without considering the workloads. In this study, we propose a simple Big Data workload differentiation, deliver general and specific conclusions about how demanding the different types of Hadoop workloads are for several hardware subsystems, and show how power consumption is affected in each case. The HiBench and Big-Bench suites were used to capture real-time memory traces as well as CPU, disk, and power consumption statistics of Hadoop. Our results show that CPU-intensive and disk-intensive workloads behave differently: CPU-intensive workloads consume more power and memory bandwidth, while disk-intensive workloads usually require more memory. These and other conclusions presented in the paper are expected to help WSC designers decide the hardware characteristics of their Hadoop systems and better understand the behavior of Big Data workloads in Hadoop.
Keywords
hadoop,big data,characterization,power consumption,workloads,benchmarks,hibench,big-bench