Suppressing the Interference Within a Datacenter: Theorems, Metric and Strategy
IEEE Transactions on Parallel and Distributed Systems (2024)
Abstract
As the backbone of cloud computing, a datacenter accommodates many co-running applications that share system resources. Although high concurrency improves resource utilization, the resulting resource contention increases the uncertainty of quality of service (QoS). Previous studies have shown that achieving high resource utilization and high QoS simultaneously is challenging. Moreover, quantifying the intensity of interference among multiple concurrent applications in a datacenter, where applications can be either latency-critical (LC) or best-effort (BE), poses a significant challenge. To address these issues, we propose Ah-Q, which comprises two theorems, a metric, and a scheduling strategy. First, we present the necessary and sufficient conditions for precisely testing whether a datacenter both guarantees QoS and achieves high throughput. We also present a theorem that reveals the relationship between tail latency and throughput. These theoretical results are insightful and useful for building datacenters with desirable performance. Second, we propose "System Entropy" (E-S) to quantitatively measure the interference within a datacenter. Interference arises from resource scarcity or irrational scheduling, and effective scheduling can alleviate resource scarcity. To assess the effectiveness of a resource scheduling strategy, we introduce the concept of "resource equivalence". We evaluate various resource scheduling strategies to demonstrate the correctness and effectiveness of the proposed theory. Third, we introduce a new resource scheduling strategy, ARQ, that leverages both isolation and sharing of resources. Our evaluations show that ARQ significantly outperforms the state-of-the-art strategies PARTIES and CLITE in reducing the tail latency of LC applications and increasing the IPC of BE applications.
Key words
Interference, Tail, Quality of service, Entropy, Throughput, Cloud computing, Resource management, Datacenter, High-throughput, Performance uncertainty, Quality of services (QoS), Resource contention