On-chip caches in modern processors account for a sizable fraction of dynamic and leakage power. Much of this power is wasted, required only because the memory cells farthest from the sense amplifiers must discharge a large capacitance on the bitlines. We reduce this capacitance by segmenting the memory cells along the bitlines and turning off the segmenters to lower the overall bitline capacitance. The success of this cache relies on accessing segments near the sense amplifiers much more often than remote segments. We show that the access pattern to the first-level data and instruction caches is extremely skewed: only a small set of cache lines is accessed frequently. We exploit this nonuniform cache access pattern by mapping t...
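To make the argument above concrete, the following is a minimal, purely illustrative sketch in Python (not from the paper; the Zipf access model, segment count, and energy weights are all assumptions) of how a skewed access pattern translates into less switched bitline capacitance once frequently accessed lines are mapped to the segments nearest the sense amplifiers:

    # Illustrative energy model: relative dynamic bitline energy of a
    # conventional cache vs. a segmented-bitline cache in which hot lines
    # sit in the segment nearest the sense amplifiers. All constants are
    # assumptions for illustration, not values from the paper.
    import random

    NUM_LINES = 256          # cache lines (rows) sharing one bitline (assumed)
    NUM_SEGMENTS = 8         # bitline segments separated by segmenters (assumed)
    LINES_PER_SEGMENT = NUM_LINES // NUM_SEGMENTS
    ACCESSES = 100_000
    SKEW = 1.2               # Zipf exponent modelling the skewed access pattern

    # Skewed (Zipf-like) access frequencies: a few lines receive most accesses.
    weights = [1.0 / (rank + 1) ** SKEW for rank in range(NUM_LINES)]
    trace = random.choices(range(NUM_LINES), weights=weights, k=ACCESSES)

    def conventional_energy(line: int) -> float:
        # Every access discharges the full bitline capacitance (normalised to 1).
        return 1.0

    def segmented_energy(line: int) -> float:
        # Hot lines (low rank) are placed in the segment nearest the sense amps,
        # so only the segments between the sense amps and the accessed segment
        # are connected; the rest stay isolated by turned-off segmenters.
        segment = line // LINES_PER_SEGMENT        # 0 = nearest segment
        return (segment + 1) / NUM_SEGMENTS        # fraction of bitline switched

    base = sum(conventional_energy(x) for x in trace)
    seg = sum(segmented_energy(x) for x in trace)
    print(f"relative bitline energy with segmentation: {seg / base:.2f}")

Under this toy model most accesses hit the near segments and switch only a small fraction of the bitline, which is the behaviour the segmented cache depends on.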
The line size/performance trade-offs in off-chip second-level caches in light of energy-efficiency a...
One of the major limiters to computer systems and systems on chip (SOC) designs is accessing the...
This paper presents a technique for minimizing chip-area cost of implementing an on-chip cache memor...
Most microprocessors employ on-chip caches to bridge the performance gap between the processor a...
Minimizing power consumption continues to grow as a critical design issue for many platforms, from e...
Power dissipation is increasingly important in CPUs ranging from those intended for mobile use, all...
Energy efficiency plays a crucial role in the design of embedded processors, especially for ...
Reducing the supply voltage to reduce dynamic power consumption in CMOS devices inadverten...
Leakage power in data cache memories represents a sizable fraction of total power consumption, and m...
If current technology scaling trends hold, leakage power dissipation will soon become the dominant s...
Caches contribute to much of a microprocessor system's ... set-associative cache. However...
Previous work has shown that cache line sizes impact performance differently for different desktop p...
Recent studies have shown that peripheral circuits, including decoders, wordline drivers, input and ...
Minimizing power consumption continues to grow as a critical design issue for many platforms, from ...