We survey a set of algorithmic techniques that make it possible to build a high-performance storage server from a network of cheap components. Such a storage server offers a very simple programming model: to its clients it looks like a single, very large disk that can handle many requests in parallel with minimal interference between them. The algorithms use randomization, redundant storage, and sophisticated scheduling strategies to achieve this goal. The focus is on algorithmic techniques and open questions. The paper summarizes several previous papers and presents a new strategy for handling heterogeneous disks.
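The combination of randomization and redundancy described above can be illustrated with a minimal sketch: store two copies of every block on independently chosen random disks, then serve each read from whichever replica's disk currently has the shorter queue. This is only an assumed, simplified rendering of the general idea (the "two random choices" load-balancing pattern), not the paper's actual algorithm; the function names `place_blocks` and `schedule_reads` are illustrative.

```python
import random

def place_blocks(num_blocks, num_disks, seed=0):
    """Redundant random placement: two copies of each block on distinct disks."""
    rng = random.Random(seed)
    return {b: rng.sample(range(num_disks), 2) for b in range(num_blocks)}

def schedule_reads(requests, placement, num_disks):
    """Greedy scheduling: serve each request from the less-loaded replica disk."""
    load = [0] * num_disks          # pending requests per disk
    assignment = {}                 # block -> disk chosen to serve it
    for block in requests:
        d1, d2 = placement[block]
        chosen = d1 if load[d1] <= load[d2] else d2
        assignment[block] = chosen
        load[chosen] += 1
    return assignment, load
```

Even this simple greedy rule balances load far better than a single random copy would: with two choices per block, the maximum queue length stays close to the average with high probability, which is what lets the server behave like "a single very large disk" with minimal interference between requests.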
Several algorithms for parallel disk systems have appeared in the literature recently, and they are ...
This paper jointly addresses the issues of load balancing, fault tolerance, responsiveness, agility,...
The ever-growing amount of data requires highly scalable storage solutions. The most flexible approa...
In this paper, we investigate the composition of cheap network storage resources to meet specific av...
Heterogeneity in cloud environments is a fact of life—from workload skews and network path changes, ...
For the design and analysis of algorithms that process huge data sets, a machine model is needed tha...
Declustering is a well known strategy to achieve maximum I/O parallelism in multi-disk systems. Many...
For the design and analysis of algorithms that process huge data sets, a machine model is needed tha...
Emerging applications such as data warehousing, multimedia content distribution, electronic commerce...
A multimedia storage system may consist of heterogeneous disks as a result of system configuration a...
Abstract—We consider a distributed storage system where the storage nodes have heterogeneous access ...
This paper explores three algorithms for high-performance downloads of wide-area, replicated data. T...
IBM estimates that 2.5 quintillion bytes are being created every day and that 90% of the data in the...