While mean time to data loss (MTTDL) provides an easy way to estimate the reliability of redundant disk arrays, it fails to take into account the relatively short lifetime of these arrays. We analyzed five different disk array organizations and compared the reliability estimates obtained from their mean times to data loss with the more exact values obtained by directly solving their corresponding Markov models. We observed that the conventional MTTDL approach generally provided a good estimate of the long-term reliability of arrays, with the exception of non-repairable arrays, while significantly underestimating their short-term reliability.
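As an illustration of the comparison described above, the sketch below contrasts the exponential estimate exp(-t/MTTDL) with the exact reliability obtained by solving a small Markov model, here for a single RAID-5-style array that survives any one disk failure. This is a minimal stand-in for the paper's five organizations; the array size n, failure rate lam, and repair rate mu are assumed values.

    # Minimal sketch: MTTDL-based estimate vs. exact Markov reliability.
    # States: 0 = all disks working, 1 = one disk failed, 2 = data loss.
    import numpy as np
    from scipy.linalg import expm

    n = 10               # disks in the array (assumed)
    lam = 1e-5           # per-disk failure rate, per hour (assumed)
    mu = 1 / 24          # repair rate, per hour (assumed)

    # Generator matrix: Q[i, j] is the transition rate from state i to j.
    Q = np.array([
        [-n * lam,  n * lam,                0.0],
        [mu,       -(mu + (n - 1) * lam),   (n - 1) * lam],
        [0.0,       0.0,                    0.0],  # data loss is absorbing
    ])

    # Closed-form mean time to absorption (data loss) for this 3-state model.
    mttdl = ((2 * n - 1) * lam + mu) / (n * (n - 1) * lam ** 2)

    for t in (24.0, 24 * 365.0, 24 * 365 * 10.0):   # 1 day, 1 year, 10 years
        exact = expm(Q * t)[0, :2].sum()            # P(no data loss by t)
        print(f"t = {t:>8.0f} h   exact R(t) = {exact:.10f}   "
              f"exp(-t/MTTDL) = {np.exp(-t / mttdl):.10f}")

At short horizons the exponential estimate decays linearly while the exact curve decays quadratically (two failures must overlap before data is lost), which is the short-term underestimation the abstract reports.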
Disk arrays (RAID) have been proposed as a possible approach to solving the emerging I/O bottleneck...
Disk failure rates vary so widely among different makes and models that designing storage s...
Disk scrubbing periodically scans the contents of a disk array to detect the presence of irrecoverab...
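For concreteness, the scrubbing loop described above amounts to something like the following. This is a minimal sketch assuming a hypothetical device object; read_block, stored_checksum, write_block, and rebuild_from_parity are illustrative stand-ins, not a real API.

    import hashlib

    def scrub(device, num_blocks):
        """Scan every block and repair any whose checksum no longer matches."""
        repaired = 0
        for b in range(num_blocks):
            data = device.read_block(b)
            # A mismatch signals a latent (irrecoverable) read error.
            if hashlib.sha256(data).digest() != device.stored_checksum(b):
                device.write_block(b, device.rebuild_from_parity(b))
                repaired += 1
        return repaired

Detecting these errors while all redundancy is still intact is what lets scrubbing repair them before a disk failure would make them unrecoverable.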
We investigate the impact of irrecoverable read errors - also known as bad blocks - on the MTTDL of ...
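To see why bad blocks matter for MTTDL, consider the chance of encountering at least one irrecoverable read error while rebuilding an array after a disk failure, when every surviving disk must be read in full. All figures below are assumed, back-of-the-envelope values, not taken from the paper.

    ber = 1e-15            # irrecoverable errors per bit read (assumed)
    disk_bytes = 4e12      # 4 TB disks (assumed)
    surviving = 9          # disks read in full during the rebuild (assumed)

    bits_read = surviving * disk_bytes * 8
    p_bad_rebuild = 1 - (1 - ber) ** bits_read
    print(f"P(rebuild hits a bad block) ~ {p_bad_rebuild:.2f}")   # ~0.25

Even a modest per-bit error rate thus translates into a non-negligible chance that any given rebuild is compromised.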
Several researchers [1], [4] have noted that the original RAID reliability equation formulated by G...
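For reference, the equation being revisited is usually written as MTTDL = MTTF^2 / (N (G - 1) MTTR) for N disks organized into groups of G disks, each group tolerating a single failure. A quick evaluation with assumed figures:

    mttf = 100_000.0     # per-disk mean time to failure, hours (assumed)
    mttr = 24.0          # per-disk mean time to repair, hours (assumed)
    N, G = 100, 10       # 100 disks in groups of 10 (assumed)

    mttdl = mttf ** 2 / (N * (G - 1) * mttr)
    print(f"MTTDL ~ {mttdl:,.0f} hours ({mttdl / (24 * 365):,.1f} years)")

This gives a sense of scale; the equation's well-known limitations stem from its assumptions of constant, independent failure and repair rates rather than from its algebra.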
Component failure in large-scale IT installations such as cluster supercomputers or internet service...
Today's most reliable data storage systems are made of redundant arrays of inexpensive disks (RAID)....
Redundancy based on a parity encoding has been proposed for ensuring that disk arrays provide highly...
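The parity encoding referred to above is, in its simplest single-failure form, a bitwise XOR across the data blocks of a stripe, which makes any one lost block recoverable from the survivors; a minimal sketch:

    from functools import reduce

    def xor_blocks(blocks):
        """Bitwise XOR of equal-length byte strings."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    data = [b"\x01\x02", b"\x0f\x00", b"\xa0\x55"]   # one stripe's data blocks
    parity = xor_blocks(data)

    # Lose data[1]; rebuild it from the parity block and the survivors.
    recovered = xor_blocks([data[0], data[2], parity])
    assert recovered == data[1]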
We present a general method for estimating the risk of data loss in arbitrary two-dimension...
Archiving and systematic backup of large digital data generates rapidly growing demand for multi-petabyte sc...
We consider the problem of data durability in low-bandwidth large-scale distri...
Magnetic disks are the least reliable component of most computer systems. In addition, their failure...
Disk drives are known to fail at a higher rate during their first year of operation than du...
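One common way to capture the elevated first-year failure rate described here is a Weibull lifetime model with a shape parameter below one, whose hazard rate declines with age. The parameters below are purely illustrative, not fitted to any drive population.

    shape, scale = 0.6, 300_000.0     # assumed Weibull parameters (hours)

    def hazard(t):
        """Instantaneous failure rate at age t hours."""
        return (shape / scale) * (t / scale) ** (shape - 1)

    for months in (1, 6, 12, 36):
        t = months * 730.0            # ~730 hours per month
        print(f"age {months:>2} months: hazard ~ {hazard(t):.2e} /hour")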
We present a disk array architecture that does not require users to perform any maintenance tasks ov...