Proposed cache compression schemes make design-time assumptions on value locality to reduce decompression latency. For example, some schemes assume that common values are spatially close whereas other schemes assume that null blocks are common. Most schemes, however, assume that value locality is best exploited by fixed-size data types (e.g., 32-bit integers). This assumption falls short when other data types, such as floating-point numbers, are common. This paper makes two contributions. First, HyComp - a hybrid cache compression scheme - selects the best-performing compression scheme, based on heuristics that predict data types. Data types considered are pointers, integers, floating-point numbers and the special (and trivial) case of null...
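The type-prediction idea described in the HyComp abstract can be sketched as follows. This is a minimal illustration only: the heuristics, thresholds, and compressor names below are assumptions invented for the sketch, not HyComp's actual rules.

```python
def classify_block(words):
    """Hypothetical heuristic: guess the dominant data type of a cache
    block from the bit patterns of its 32-bit words.

    The rules below (high-bit clustering for pointers, exponent-field
    clustering for floats) are illustrative assumptions, not the
    published HyComp heuristics.
    """
    if all(w == 0 for w in words):
        return "null"
    # Pointer-like: non-zero high-order bits identical across words,
    # suggesting addresses into the same memory region.
    high = {w >> 16 for w in words if w != 0}
    if len(high) == 1 and 0 not in high:
        return "pointer"
    # Float-like: IEEE-754 exponent fields (bits 30..23) clustered
    # in a narrow, non-zero range.
    exps = [(w >> 23) & 0xFF for w in words if w != 0]
    if exps and min(exps) > 0 and max(exps) - min(exps) <= 8:
        return "float"
    return "integer"

def select_scheme(words):
    """Map the predicted type to a compressor (placeholder names)."""
    return {"null": "zero-block",
            "pointer": "base-delta",
            "float": "fp-specific",
            "integer": "dictionary"}[classify_block(words)]
```

A selector like this runs once per block at compression time, so the prediction cost is paid off the critical read path; decompression latency depends only on the scheme actually chosen.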
International audienceHardware cache compression derives from software-compression research; yet, it...
Increasing cache latencies limit L1 cache sizes. In this paper we investigate restrictive compressio...
On-chip caches are essential as they bridge the growing speed-gap between off-chip memory and proces...
A challenge in the design of high performance computer systems is how to transfer data efficiently be...
This synthesis lecture presents the current state-of-the-art in applying low-latency, lossless hardw...
On-chip cache memories are instrumental in tackling several performance and energy issues facing con...
Low utilization of on-chip cache capacity limits performance and wastes energy because of the long ...
With the widening gap between processor and memory speeds, memory system designers may find cache co...
processor architecture, memory system and management, cache memory, hardware and software technique,...