In this paper, we present aggressive, proactive mechanisms that tailor file system resource management to the needs of I/O-intensive applications. In particular, we show how to use application-disclosed access patterns (hints) to expose and exploit I/O parallelism, and to dynamically allocate file buffers among three competing demands: prefetching hinted blocks, caching hinted blocks for reuse, and caching recently used data for unhinted accesses. Our approach estimates the impact of alternative buffer allocations on application execution time and applies cost-benefit analysis to allocate buffers where they will have the greatest impact. We have implemented informed prefetching and caching in Digital's OSF/1 operating system and measur...
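The cost-benefit allocation described above can be sketched as follows. This is a minimal illustration, not the system's actual estimators: the timing constants, the diminishing-returns benefit model, and all function names are simplifying assumptions made for this example.

```python
# Sketch of cost-benefit buffer allocation: move a buffer from the LRU
# cache to prefetching only when the estimated reduction in execution
# time (benefit) exceeds the estimated increase (cost).
# All constants and estimators below are illustrative assumptions.

T_DISK = 15.0   # assumed average disk access time (ms)
T_HIT = 0.1     # assumed time to read a block already in the cache (ms)

def estimate_prefetch_benefit(prefetch_distance: int) -> float:
    """Benefit of dedicating one buffer to prefetch a hinted block.
    Prefetching masks disk latency, but a block accessed far in the
    future gains little from being fetched now (diminishing returns)."""
    return (T_DISK - T_HIT) / max(prefetch_distance, 1)

def estimate_lru_cost(lru_tail_hit_rate: float) -> float:
    """Cost of shrinking the LRU cache by one buffer: expected extra
    disk time from misses on the least-valuable cached block."""
    return lru_tail_hit_rate * (T_DISK - T_HIT)

def allocate_one_buffer(prefetch_distance: int, lru_tail_hit_rate: float) -> str:
    """Apply the cost-benefit comparison for a single buffer."""
    benefit = estimate_prefetch_benefit(prefetch_distance)
    cost = estimate_lru_cost(lru_tail_hit_rate)
    return "prefetch" if benefit > cost else "lru_cache"

# An imminent hinted access justifies taking a buffer from the cache;
# a distant one does not:
print(allocate_one_buffer(prefetch_distance=1, lru_tail_hit_rate=0.05))
print(allocate_one_buffer(prefetch_distance=50, lru_tail_hit_rate=0.20))
```

The key design point mirrored here is that both estimates are expressed in the same currency (change in application execution time per buffer), so the allocator can compare otherwise unlike demands directly.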
High performance computing has become one of the fundamental contributors to the progress of science...
Device independent I/O has been a holy grail to OS designers since the early days of UNIX. Unfortuna...
Informed prefetching and caching based on application disclosure of future I/O accesses (hints) c...
This paper focuses on extending the power of caching and prefetching to reduce file read latencies b...
Aggressive prefetching is an effective technique for reducing the execution times of disk-bound appl...
Despite impressive advances in file system throughput resulting from technologies such as high-bandw...
We have previously shown that the patterns in which files are accessed offer information that can ac...
I/O performance is lagging. No current solution fully addresses read latency. TIP to reduce latency: ...
Improvements in the processing speed of multiprocessors are outpacing improvements in the speed of d...
Although file caching and prefetching are known techniques to improve the performance of file system...
In parallel I/O systems the I/O buffer can be used to improve I/O parallelism by improving I/O laten...
Recently two groups of researchers have proposed systems that exploit application knowledge to impro...
High-performance I/O systems depend on prefetching and caching in order to deliver good performance ...