Energy is an increasingly important consideration in memory system design. Although caches can save energy in several ways, such as by decreasing execution time and reducing the number of main memory accesses, they also suffer from known inefficiencies: the last-level cache (LLC) tends to have a high miss ratio while storing many blocks that are never referenced after being written back to the LLC. These blocks consume dynamic energy and cause cache pollution. Because such blocks are never referenced before they are evicted, they can be written directly to memory rather than to the LLC. Doing so requires predicting which blocks will not be referenced. Previous approaches rely on additional state at the LLC and...
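The abstract above describes bypassing the LLC for blocks predicted to be dead on arrival, but does not specify a predictor. One common family of mechanisms (an assumption here, not the paper's stated design) indexes a table of saturating counters by the address of the instruction that caused the writeback: a minimal, hypothetical sketch follows.

```python
# Illustrative sketch of a PC-indexed LLC-bypass predictor.
# NOTE: this is an assumed mechanism for exposition, not the
# design described in the abstract. Each table entry is a 2-bit
# saturating counter; high values mean "blocks inserted by this
# PC tend to be dead, so bypass the LLC and write to memory".

class BypassPredictor:
    def __init__(self, entries=1024, threshold=2):
        self.counters = [0] * entries   # 2-bit saturating counters
        self.entries = entries
        self.threshold = threshold

    def _index(self, pc):
        return pc % self.entries

    def should_bypass(self, pc):
        # Predict "dead on arrival" once the counter reaches the threshold.
        return self.counters[self._index(pc)] >= self.threshold

    def train(self, pc, was_dead):
        # On eviction: increment if the block was never re-referenced
        # while in the LLC, decrement otherwise (saturate at 0 and 3).
        i = self._index(pc)
        if was_dead:
            self.counters[i] = min(self.counters[i] + 1, 3)
        else:
            self.counters[i] = max(self.counters[i] - 1, 0)


# Usage: a writeback PC whose blocks repeatedly die untouched trains
# the predictor toward bypassing future writebacks from that PC.
p = BypassPredictor()
pc = 0x400123
assert not p.should_bypass(pc)   # cold counters: allocate in LLC
p.train(pc, was_dead=True)
p.train(pc, was_dead=True)
assert p.should_bypass(pc)       # two dead evictions -> bypass
```

The counter threshold trades off aggressiveness: a higher threshold bypasses less often but makes fewer mispredictions that would evict useful blocks from the LLC.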
The ever-increasing computational power of contemporary microprocessors reduces the execution time s...
Last level caches (LLCs) account for a substantial fraction of the area and power budget in many mod...
We introduce a set of new Compression-Aware Management Policies (CAMP) for on-chip caches that emplo...
Emerging Non-Volatile Memory (NVM) technologies are explored as potential alternatives to traditiona...
Last-Level Cache (LLC) represents the bulk of a modern CPU processor's transistor budget and is esse...
Distinguishing transient blocks from frequently used blocks enables servicing references to transien...
We introduce a novel approach to predict whether a block should be allocated in the cache or not upo...
Recent increases in CPU performance have outpaced increases in hard drive performance. As a result,...
The last level cache (LLC) is critical for mobile computer systems in terms of both energy consumpti...
As per-core CPU performance plateaus and data-bound applications like graph analytics and key-value ...
On-chip cache memories are instrumental in tackling several performance and energy issues facing con...
Memory latency has become an important performance bottleneck in current microprocessors. This probl...
Cache memories are commonly implemented through multiple memory banks to improve bandwidth and laten...