MespaConfig: Memory-Sparing Configuration Auto-Tuning for Co-Located In-Memory Cluster Computing Jobs

IEEE Transactions on Services Computing (2022)

Abstract
Distributed in-memory computing frameworks usually expose many parameters (e.g., the shuffle buffer size) that together form a configuration for each execution. A well-tuned configuration can bring large performance improvements. However, to improve resource utilization, jobs often share the same cluster, which leads to dynamic cluster load conditions. According to our observations, variation in cluster load reduces the effectiveness of configuration tuning. In addition, resource overestimation, a common problem for cluster computing jobs, also occurs during configuration tuning. It is therefore challenging to efficiently find the optimal configuration in a shared cluster while also sparing memory. In this article, we introduce MespaConfig, a job-level configuration optimizer for distributed in-memory computing jobs. MespaConfig advances previous work by being both memory-sparing and load-sensitive. We evaluate MespaConfig with six typical Spark programs under different load conditions. The results show that MespaConfig improves the performance of these programs by up to 12× compared with the default configuration. MespaConfig also reduces configuration memory usage by up to 41 percent and lowers the optimization time overhead by 10.8× compared with the state-of-the-art approach.
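To make the notion of a job-level configuration concrete, the sketch below shows how a handful of standard Spark properties might be set for a single job. This is an illustration only, assuming a plain Spark deployment; the parameter names are ordinary Spark settings and the values are placeholders, not the parameter set or values chosen by MespaConfig.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// Hypothetical job-level configuration: standard Spark properties with
// placeholder values, not the tuning decisions made by MespaConfig.
val conf = new SparkConf()
  .setAppName("WordCount")
  .set("spark.shuffle.file.buffer", "64k")      // shuffle write buffer per writer
  .set("spark.reducer.maxSizeInFlight", "96m")  // shuffle fetch buffer
  .set("spark.executor.memory", "4g")           // per-executor heap size
  .set("spark.memory.fraction", "0.6")          // heap share for execution/storage
  .set("spark.sql.shuffle.partitions", "200")   // shuffle parallelism

val spark = SparkSession.builder().config(conf).getOrCreate()
```

Each execution of the same program can use a different such configuration, which is why per-job tuning under changing cluster load matters.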
Keywords
Configuration tuning, in-memory computing, memory-sparing, performance optimization, co-location