Buffer caching is an integral part of the operating system. In this paper, we propose a scheme that integrates buffer cache management and prefetching via cache partitioning. The scheme, which we call SA-W²R, is simple to implement, making it a feasible solution in real systems. In its basic form, it uses the LRU policy for buffer replacement; however, its modular design allows any replacement policy to be incorporated into the scheme. For prefetching, it uses the LRU-One Block Lookahead (LRU-OBL) approach, eliminating the extra burden that is generally necessary in other prefetching approaches. Implementation studies based on the GNU/Linux kernel version 2.2.14 show that SA-W²R performs better than the scheme currently used,...
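As a rough illustration of the two ingredients the abstract names, LRU replacement combined with One Block Lookahead (OBL) prefetching can be sketched as follows. This is a minimal toy model, not the SA-W²R implementation; the class and variable names are invented for the example, and the real scheme additionally partitions the cache, which this sketch omits.

```python
from collections import OrderedDict

class LRUOBLCache:
    """Toy buffer cache: LRU replacement plus One Block Lookahead.

    On every access to block b, block b+1 is also brought into the
    cache if absent, so sequential reads hit on prefetched blocks.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()  # block number -> block data
        self.hits = 0
        self.misses = 0

    def _insert(self, block, data):
        if block in self.cache:
            self.cache.move_to_end(block)   # mark as most recently used
            return
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        self.cache[block] = data

    def access(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)
        else:
            self.misses += 1
            self._insert(block, f"data-{block}")
        # OBL: prefetch the next sequential block on every access
        if block + 1 not in self.cache:
            self._insert(block + 1, f"data-{block + 1}")
```

On a purely sequential scan, only the first block misses: every later block was prefetched by the previous access, which is exactly the workload OBL targets.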
In traditional file system implementations, the Least Recently Used (LRU) block replacement scheme i...
Recent studies have shown that cache partitioning is an efficient technique to improve throughput, f...
Task-based dataflow programming models and runtimes emerge as promising candidates for programming ...
Many replacement and prefetching policies have recently been proposed for buffer cache management. H...
Data prefetching is an effective technique to hide memory latency and thus bridge the increasing pro...
Although file caching and prefetching are known techniques to improve the performance of file system...
As the performance gap between disks and microprocessors continues to increase, effective utilizatio...
Part 6: Poster Sessions. This paper presents a new access-density-based prefetch...
A common mechanism to perform hardware-based prefetching for regular accesses to arrays and chained...
The full text of this article is not available on SOAR. WSU users can access the article via IEEE Xp...
The memory system remains a major performance bottleneck in modern and future architectures. In this...
Despite large caches, main-memory access latencies still cause significant performance losses in man...
We introduce Selfish-LRU, a variant of the LRU (least recently used) cache replacement policy that i...