Cache compression improves the performance of a multi-core system by storing more cache blocks in a compressed format. Compression is achieved by exploiting data patterns present within a block. For a given cache space, compression increases the effective cache capacity. However, this increase is limited by the number of tags that can be accommodated in the cache. Prefetching is another technique that improves system performance by fetching cache blocks into the cache ahead of time, hiding the off-chip latency. Commonly used hardware prefetchers, such as stream and stride, fetch multiple contiguous blocks into the cache. In this paper we propose prefetched blocks compaction (PBC), wherein we exploit the data patte...
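The abstract above explains that compression works by exploiting data patterns within a block, but it is cut off before any concrete scheme is described. Purely as an assumed illustration (not PBC and not any specific compressor cited here), the sketch below classifies a 64-byte block by two common patterns, all-zero and single-repeated-word; the block geometry, names, and sizes are illustrative choices.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

/* Illustrative only: not PBC and not any specific compressor from the
 * papers above. A 64-byte block is viewed as 16 32-bit words; blocks
 * matching a simple pattern could be stored in far fewer bytes. */

#define BLOCK_WORDS 16u   /* 64-byte block as 32-bit words (assumed) */

typedef enum {
    PATTERN_NONE,      /* no simple pattern: keep the block uncompressed */
    PATTERN_ALL_ZERO,  /* every word is zero: metadata alone suffices    */
    PATTERN_REPEATED   /* one word repeated: store that single word      */
} block_pattern_t;

block_pattern_t classify_block(const uint32_t block[BLOCK_WORDS])
{
    bool all_zero = true;
    bool repeated = true;

    for (size_t i = 0; i < BLOCK_WORDS; i++) {
        if (block[i] != 0)
            all_zero = false;
        if (block[i] != block[0])
            repeated = false;
    }

    return all_zero ? PATTERN_ALL_ZERO
         : repeated ? PATTERN_REPEATED
                    : PATTERN_NONE;
}

/* Size of the stored data under this toy scheme, in bytes. */
size_t compressed_size_bytes(block_pattern_t p)
{
    switch (p) {
    case PATTERN_ALL_ZERO: return 0;                              /* flag only */
    case PATTERN_REPEATED: return sizeof(uint32_t);               /* one word  */
    default:               return BLOCK_WORDS * sizeof(uint32_t); /* 64 bytes  */
    }
}
```

As the abstract notes, packing more such compressed blocks into the same data space raises effective capacity only if the cache also provides enough extra tags to track them.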
In this dissertation, we provide hardware solutions to increase the efficiency of the cache hierarch...
Prefetching disk blocks to main memory will become increasingly important to overcome the widening g...
Processors face steep penalties, in the form of high latency, when accessing on-chip memory. On-chip c...
The speed gap between CPU and memory is impairing performance. Cache compression and hardware prefet...
We introduce a set of new Compression-Aware Management Policies (CAMP) for on-chip caches that emplo...
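The CAMP abstract is cut off before it says what the policies actually employ, so the following is not CAMP itself but an assumed sketch of the general idea of compression-aware management: when choosing a victim, weigh a block's expected reuse against the compressed space it occupies. The value function, counter widths, and names are hypothetical.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of a compression-aware eviction heuristic; it is
 * NOT the CAMP policy from the paper above. Each way tracks a small
 * reuse counter and the block's compressed size; the victim is the way
 * with the lowest reuse per byte of space that evicting it would free. */

#define WAYS 16u   /* associativity of one set (assumed) */

typedef struct {
    uint8_t  reuse;            /* saturating reuse counter (assumed)        */
    uint16_t compressed_size;  /* bytes this block occupies when compressed */
    int      valid;
} way_state_t;

/* Pick a victim in one set: prefer large, rarely reused blocks. */
size_t pick_victim(const way_state_t set[WAYS])
{
    size_t victim = 0;
    double best = 1e30;

    for (size_t w = 0; w < WAYS; w++) {
        if (!set[w].valid)
            return w;  /* empty way: no eviction needed */

        double value = (double)set[w].reuse /
                       (double)(set[w].compressed_size ? set[w].compressed_size : 1);
        if (value < best) {
            best = value;
            victim = w;
        }
    }
    return victim;
}
```

Under equal reuse, a larger compressed block frees more space and is evicted first; the actual policies in the paper may of course differ from this toy heuristic.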
A well-known performance bottleneck in computer architecture is the so-called memory wall. This term...
Chip multiprocessors (CMPs) combine multiple processors on a single die, typically with private leve...
The effectiveness of a compressed cache depends on three features: i) th...
Cache compression seeks the benefits of a larger cache with the area and power of a small...
Prefetching has proven to be a useful technique for reducing cache misses in multiprocessors at the...
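Since the abstracts here only name stream and stride prefetchers without detailing them, below is an assumed minimal stride-prefetcher sketch: per-PC entries learn a stride and, once it repeats, fetch the next few addresses ahead of the demand stream. Table size, prefetch degree, and function names are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

/* Minimal stride-prefetcher sketch (assumed; not from any paper above).
 * Each load PC maps to an entry that remembers the last address and
 * stride; once the same stride repeats, the next DEGREE addresses along
 * that stride are prefetched. */

#define TABLE_SIZE 64u   /* stride-table entries (assumed)      */
#define DEGREE      4u   /* blocks fetched ahead (assumed)      */
#define BLOCK_SIZE 64u   /* cache block size in bytes (assumed) */

typedef struct {
    uint64_t pc;          /* load instruction address        */
    uint64_t last_addr;   /* last demand address for this pc */
    int64_t  last_stride; /* last observed stride            */
    int      confident;   /* stride has repeated             */
} stride_entry_t;

static stride_entry_t table[TABLE_SIZE];

static void issue_prefetch(uint64_t addr)
{
    /* Stand-in for enqueueing a fill request to the next cache level. */
    printf("prefetch block %llu\n", (unsigned long long)(addr / BLOCK_SIZE));
}

void on_demand_access(uint64_t pc, uint64_t addr)
{
    stride_entry_t *e = &table[pc % TABLE_SIZE];

    if (e->pc == pc) {
        int64_t stride = (int64_t)(addr - e->last_addr);
        e->confident = (stride != 0 && stride == e->last_stride);
        e->last_stride = stride;

        if (e->confident) {
            /* For a block-sized stride this fetches DEGREE contiguous blocks. */
            for (int64_t i = 1; i <= (int64_t)DEGREE; i++)
                issue_prefetch((uint64_t)((int64_t)addr + i * stride));
        }
    } else {              /* new pc: (re)allocate the entry */
        e->pc = pc;
        e->last_stride = 0;
        e->confident = 0;
    }
    e->last_addr = addr;
}
```

When the learned stride equals the block size, this produces the run of contiguous blocks that the first abstract describes for stream and stride prefetchers.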
Cache compression seeks the benefits of a larger cache with the area and power...
In the last century, great progress was achieved in developing processors with extremely high computa...
As the trends of process scaling make the memory system an even more crucial bottleneck, the importance of ...