Processors face steep latency penalties when accessing off-chip memory. On-chip caches help mitigate this latency by storing the most frequently used data close to the processor, but limited cache capacity constrains their effectiveness. Compressing data in caches can improve performance by increasing effective cache capacity, though it can incur additional access latency. Current cache compression schemes focus on just one granularity: either a single cache line or a small number of contiguous lines. In this thesis, we propose Precompression, a novel technique that transforms cache data, making it more amenable to compression. In contrast to prior compression techniques, our work takes advantage of the redundancy existing across mu...
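To make the idea of line-granularity cache compression concrete, here is a minimal sketch of one well-known scheme of this kind, base+delta encoding: a 64-byte line is stored as one base word plus small per-word deltas when all values are numerically close. This is an illustrative example only, not the Precompression technique from the thesis above; the function names and the 4-byte-word layout are assumptions.

```python
import struct

def compress_line(line: bytes):
    """Try to compress a 64-byte cache line viewed as sixteen 4-byte words.

    Returns (base, deltas) if every word fits base + signed 8-bit delta,
    i.e. 4 + 16 = 20 bytes instead of 64; returns None if the line must
    stay uncompressed.
    """
    words = struct.unpack("<16I", line)
    base = words[0]
    deltas = [w - base for w in words]
    if all(-128 <= d <= 127 for d in deltas):
        return base, deltas
    return None

def decompress_line(base, deltas) -> bytes:
    """Reconstruct the original 64-byte line from base + deltas."""
    return struct.pack("<16I", *[(base + d) & 0xFFFFFFFF for d in deltas])

# Nearby values (e.g. an array of pointers or counters) compress well.
line = struct.pack("<16I", *range(1000, 1016))
packed = compress_line(line)
assert packed is not None
assert decompress_line(*packed) == line
```

The scheme works because many cache lines hold arrays of similar integers or pointers, so the high-order bytes repeat across words; lines that fail the delta-range check are simply stored uncompressed, which is why such schemes are low-latency.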
With the widening gap between processor and memory speeds, memory system designers may find cache co...
We propose a depth cache that keeps the depth data in compressed format, when possible. Compared to...
Increasing cache latencies limit L1 cache sizes. In this paper we investigate restrictive compressio...
Caches are essential to today's microprocessors. They close the huge speed gap between processors an...
Chip multiprocessors (CMPs) combine multiple processors on a single die, typically with private leve...
Cache compression improves the performance of a multi-core system by being able to store mo...
Cache compression algorithms must abide by hardware constraints; thus, their e...
This synthesis lecture presents the current state-of-the-art in applying low-latency, lossless hardw...
On-chip caches are essential as they bridge the growing speed-gap between off-chip memory and proces...
Hardware cache compression derives from software-compression research; yet, it...
Chip Multiprocessors (CMPs) combine multiple cores on a single die, typically with privat...
Storing data in compressed form is becoming common practice in high-performance systems, where memor...
On-chip cache memories are instrumental in tackling several performance and energy issues facing con...
The speed gap between CPU and memory is impairing performance. Cache compression and hardware prefet...