Recent increases in CPU performance have outpaced increases in hard drive performance. As a result, disk operations have become more expensive in terms of CPU cycles spent waiting for them to complete. File prediction can mitigate this problem by prefetching files into cache before they are accessed. However, incorrect prediction is to a certain degree both unavoidable and costly. We present the Program-based Last N Successors (PLNS) file prediction model, which identifies relationships between files through the names of the programs accessing them. Our simulation results show that PLNS makes at least 21.11% fewer incorrect predictions and roughly the same number of correct predictions as the last-successor model. We also examine the ...
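The abstract above only names the idea; a minimal sketch of how a PLNS-style predictor could work, reconstructed from the description rather than the paper's actual implementation, might keep the last N successors per (program, file) pair. All class and method names here are hypothetical:

```python
from collections import defaultdict, deque

class PLNSPredictor:
    """Illustrative Program-based Last-N-Successors file predictor.

    For each (program, file) pair, keep the last N distinct files that
    followed `file` when that program accessed it. Those files are the
    prefetch candidates the next time the same program opens the file.
    This is a sketch inferred from the abstract, not the authors' code.
    """

    def __init__(self, n=3):
        self.n = n
        self.successors = defaultdict(deque)  # (program, file) -> last N successors
        self.last_file = {}                   # program -> file it accessed last

    def access(self, program, filename):
        """Record one file access and return the predicted prefetch set."""
        prev = self.last_file.get(program)
        if prev is not None and prev != filename:
            succ = self.successors[(program, prev)]
            if filename in succ:
                succ.remove(filename)         # move to most-recent position
            succ.appendleft(filename)
            while len(succ) > self.n:
                succ.pop()                    # evict the oldest successor
        self.last_file[program] = filename
        return list(self.successors[(program, filename)])
```

Keying successor lists on the accessing program's name is what distinguishes this from the plain last-successor model, which tracks only one global successor per file; separating access streams by program is one plausible way to reduce the mispredictions the abstract reports.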
Neural networks have been widely applied to various research and production fields. However, most re...
When picking a cache replacement policy for file systems, LRU (Least Recently Used) has always been ...
Modern processors use high-performance cache replacement policies that outperform traditional altern...
Recent increases in CPU performance have outpaced increases in hard drive performance. As a result, ...
Recent increases in CPU performance have surpassed those in hard drives. As a result, disk operation...
Prefetching multiple files per prediction can improve the predictive accuracy. However, it comes wit...
We have adapted a multi-order context modeling technique used in the data compression method Predict...
Traditional caches employ the LRU management policy to drive replacement decisions. However, previou...
Memory latency is a key bottleneck for many programs. Caching and prefetching are two popular hardwa...
This paper describes the design, implementation, and evaluation of a predictive file caching approac...
Despite impressive advances in file system throughput resulting from technologies such as high-bandw...
File prefetching based on previous file access patterns has been shown to be an effective means of r...
Memory latency has become an important performance bottleneck in current microprocessors. This probl...