Cache compression algorithms must abide by hardware constraints; thus, their efficiency ends up being low, and most cache lines end up barely compressed. Moreover, schemes that compress relatively well often decompress slowly, and vice versa. This paper proposes a compression scheme achieving a high compaction ratio and fast decompression latency. The key observation is that by further subdividing the chunks of data being compressed, one can tailor the algorithms. This concept is orthogonal to most existing compressors and reduces their average compressed size. In particular, we leverage this concept to boost a single-cycle-decompression compressor to reach a compressibility level competitive to ...
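The sub-line tailoring idea can be illustrated with a toy pattern-based word encoder in the spirit of frequent-pattern compression: each 32-bit word of a cache line is encoded with the cheapest of a few fixed patterns. This is a generic sketch under assumed patterns and sizes, not the scheme actually proposed in the paper.

```python
import struct

# Toy illustration of sub-line tailoring: each 32-bit word in a cache
# line is encoded with the cheapest of a few fixed patterns.
# Generic FPC-style sketch, NOT the paper's algorithm.

def encode_word(w: int):
    """Return (pattern_name, payload_bits) for one 32-bit word."""
    if w == 0:
        return ("zero", 0)                     # all-zero word: prefix only
    if -128 <= struct.unpack("<i", struct.pack("<I", w))[0] < 128:
        return ("sext8", 8)                    # sign-extended byte
    if w == (w & 0xFF) * 0x01010101:
        return ("repbyte", 8)                  # one byte repeated 4 times
    return ("raw", 32)                         # incompressible word

def compressed_bits(line: bytes) -> int:
    """Compressed size of a line: 2-bit pattern prefix + payload per word."""
    assert len(line) % 4 == 0
    words = struct.unpack("<%dI" % (len(line) // 4), line)
    return sum(2 + encode_word(w)[1] for w in words)

line = bytes([0]*8 + [7, 0, 0, 0] + [0xAA]*4) + (42).to_bytes(4, "little")
print(compressed_bits(line), "bits vs", len(line) * 8, "uncompressed")
```

Working at word rather than line granularity is what lets each chunk pick its own cheapest encoding; a finer subdivision simply widens that per-chunk choice.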
Hardware compression techniques are generally simplifications of software compression...
The effective size of an L2 cache can be increased by using a dictionary-based compression scheme. N...
Lempel-Ziv's LZ77 algorithm is the de facto choice for compressing massive datasets (see e.g., Snapp...
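The LZ77 scheme mentioned above can be sketched with a minimal triple-based encoder: matches against a sliding window are emitted as (offset, length, next-byte) tokens. This is an illustrative toy, not the actual format used by Snappy or other production compressors.

```python
# Toy LZ77 sketch: emit (offset, length, next_byte) triples over a
# sliding window. Illustrative only; real formats differ substantially.

def lz77_compress(data: bytes, window: int = 255):
    out, i = [], 0
    while i < len(data):
        best_off, best_len = 0, 0
        for off in range(max(0, i - window), i):
            l = 0
            # Matches may overlap the current position (self-reference).
            while i + l < len(data) and data[off + l] == data[i + l]:
                l += 1
            if l > best_len:
                best_off, best_len = i - off, l
        nxt = data[i + best_len] if i + best_len < len(data) else None
        out.append((best_off, best_len, nxt))
        i += best_len + 1
    return out

def lz77_decompress(tokens):
    out = bytearray()
    for off, length, nxt in tokens:
        for _ in range(length):
            out.append(out[-off])      # copy from window, handles overlap
        if nxt is not None:
            out.append(nxt)
    return bytes(out)
```

Decompression is a straight sequence of copies, which is why LZ77-family decoders are fast even when the encoder spends considerable effort searching for matches.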
Hardware cache compression derives from software-compression research; yet, it...
Hardware compression techniques are typically simplifications of software compression methods. They ...
Compressed cache layouts require adding the block's size inf...
The effectiveness of a compressed cache depends on three features: i) th...
The last few years have seen an exponential increase, driven by many disparate fields such as big da...
With the widening gap between processor and memory speeds, memory system designers may find cache co...
Cache compression seeks the benefits of a larger cache with the area and power...
On-chip caches are essential as they bridge the growing speed-gap between off-chip memory and proces...
This synthesis lecture presents the current state-of-the-art in applying low-latency, lossless hardw...
Increasing cache latencies limit L1 cache sizes. In this paper we investigate restrictive compressio...