Fast Reconstruction for Large Disk Enclosures Based on RAID2.0
ICPP 2021
Abstract
In the era of explosive data growth, the RAID2.0 architecture, with dozens or even hundreds of disks, is commonly used to provide large-capacity data storage. Due to limited resources such as memory and CPU, reconstruction after a disk failure in RAID2.0 is executed in batches. The traditional random data placement and recovery scheme makes I/O access highly skewed within a batch, which slows down reconstruction. We propose DR-RAID, an efficient reconstruction scheme that balances local rebuilding workloads across all surviving disks within a batch. Instead of executing pending recovery tasks sequentially, we dynamically select a batch of tasks with a nearly balanced read load. Furthermore, we transform the problem of distributing the reconstructed data to surviving disks into a bipartite graph model and achieve a uniform write load distribution by finding a maximum matching in the graph. DR-RAID can be applied to a large disk pool with homogeneous or heterogeneous rebuilding bandwidth. We implemented DR-RAID on a pool of 50 disks and conducted extensive experiments. DR-RAID increases rebuilding throughput by up to 61.90% compared with the random data placement scheme in offline rebuilding, and by up to 59.28% with varied rebuilding bandwidth. DR-RAID accelerates rebuilding by effectively eliminating local load imbalance within a batch and greatly shortens the period of interference with user requests.
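As a rough illustration of the write-placement idea described in the abstract, the sketch below models reconstructed chunks and surviving disks as the two sides of a bipartite graph and computes a maximum matching with Kuhn's augmenting-path algorithm. The eligibility rules, the example chunk/disk layout, and the maximum_matching helper are assumptions made for illustration; they are not the paper's actual implementation.

# Minimal sketch (not the authors' code) of distributing reconstructed chunks
# to surviving disks via maximum bipartite matching. Left side: chunks to be
# rebuilt in this batch; right side: surviving disks. An edge means the disk
# may legally receive the chunk (e.g., it holds no chunk of that stripe and
# still has spare write capacity in the batch).

def maximum_matching(adj, num_disks):
    """Kuhn's augmenting-path algorithm.
    adj[c] = list of disk ids that chunk c may be written to.
    Returns (match_disk, matched): match_disk[d] is the chunk assigned
    to disk d, or -1 if disk d receives nothing in this batch."""
    match_disk = [-1] * num_disks

    def try_assign(chunk, visited):
        for disk in adj[chunk]:
            if disk in visited:
                continue
            visited.add(disk)
            # Disk is free, or its current chunk can be re-routed elsewhere.
            if match_disk[disk] == -1 or try_assign(match_disk[disk], visited):
                match_disk[disk] = chunk
                return True
        return False

    matched = 0
    for chunk in range(len(adj)):
        if try_assign(chunk, set()):
            matched += 1
    return match_disk, matched


# Hypothetical example: 4 reconstructed chunks, 5 surviving disks.
adj = [
    [0, 1, 2],   # chunk 0 may be written to disks 0, 1, or 2
    [1, 3],      # chunk 1 may be written to disks 1 or 3
    [0, 4],      # chunk 2 may be written to disks 0 or 4
    [2, 3, 4],   # chunk 3 may be written to disks 2, 3, or 4
]
placement, matched = maximum_matching(adj, num_disks=5)
print(f"{matched} chunks placed; disk -> chunk map: {placement}")

Because every disk appears at most once in the matching, no surviving disk receives more than one reconstructed chunk per batch, which is the intuition behind the uniform write load distribution claimed in the abstract.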
Key words
RAID, RAID2.0, Disk Failure, Reconstruction