This paper presents a technique for minimizing the chip-area cost of implementing an on-chip cache memory for microprocessors. The main idea of the technique is Caching Address Tags, or CAT cache for short. The CAT cache exploits the locality that exists among the addresses of memory references to minimize the chip-area cost of address tags. By keeping only a limited number of distinct tags of cached data, rather than one tag per cache line, the CAT cache can reduce the cost of implementing tag memory by an order of magnitude without a noticeable performance difference from ordinary caches. CAT therefore represents another level of caching for cache memories. Simulation experiments are carried out to evaluate performance...
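To make the idea concrete, the sketch below models a direct-mapped cache whose lines hold short pointers into a small table of distinct tags (the CAT) instead of full address tags. It is only an illustration under assumed parameters: the class name CatCache, the 16-entry tag table, and the simplified eviction in _evict_tag_slot are assumptions for this sketch, not details given by the paper excerpt above.

```python
# Minimal sketch of a CAT-style cache lookup (illustrative assumptions only).
class CatCache:
    """Direct-mapped data cache whose lines store a short pointer into a
    small table of distinct tags instead of a full address tag."""

    def __init__(self, num_lines=256, line_bytes=32, tag_table_size=16):
        self.num_lines = num_lines
        self.line_bytes = line_bytes
        self.tag_table = [None] * tag_table_size    # the CAT: a few distinct tags
        self.line_to_tag_idx = [None] * num_lines   # per-line pointer, log2(tag_table_size) bits
        self.valid = [False] * num_lines

    def _split(self, addr):
        # Split an address into (tag, line index) for a direct-mapped cache.
        offset_bits = (self.line_bytes - 1).bit_length()
        index_bits = (self.num_lines - 1).bit_length()
        index = (addr >> offset_bits) & (self.num_lines - 1)
        tag = addr >> (offset_bits + index_bits)
        return tag, index

    def access(self, addr):
        """Return True on hit, False on miss (the miss path also fills the line)."""
        tag, index = self._split(addr)
        if self.valid[index]:
            stored_tag = self.tag_table[self.line_to_tag_idx[index]]
            if stored_tag == tag:
                return True                          # pointer resolves to a matching tag
        # Miss: make sure the tag is present in the CAT, then repoint the line.
        if tag in self.tag_table:
            slot = self.tag_table.index(tag)
        else:
            slot = self._evict_tag_slot()
            self.tag_table[slot] = tag
        self.line_to_tag_idx[index] = slot
        self.valid[index] = True
        return False

    def _evict_tag_slot(self):
        # Simplification: reuse an empty slot, else evict slot 0. A real design
        # must also invalidate every line whose pointer references the evicted tag.
        if None in self.tag_table:
            return self.tag_table.index(None)
        victim = 0
        for i in range(self.num_lines):
            if self.line_to_tag_idx[i] == victim:
                self.valid[i] = False
        return victim


cache = CatCache()
cache.access(0x12345678)   # first touch: miss, tag installed in the CAT
cache.access(0x12345678)   # same line again: hit via the tag pointer
```

The point of the sketch is the storage trade-off: each line keeps a 4-bit pointer rather than a full tag, so tag storage grows with the number of distinct tags actually in use instead of with the number of cache lines.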
In current processors, the cache controller, which contains the cache directory and other logic such...
Most embedded processors utilize cache memory in order to minimize the performance gap betwee...
The gap between CPU and main memory speeds has long been a performance bottleneck. As we move toward...
Cache memory is an important level of the memory hierarchy, and its performance and implementation c...
Most newly announced microprocessors manipulate 64-bit virtual addresses and the width of physical a...
Most newly announced high-performance microprocessors support 64-bit virtual addresses and the wi...
In the embedded domain, the gap between memory and processor performance and the increase in applica...
A new dynamic cache resizing scheme for low-power CAM-tag caches is introduced. A control algorithm ...
We propose a novel energy-efficient memory architecture which relies on the use of cache with a redu...
Caches contribute to much of a microprocessor system's power and energy consumption. However...
In embedded systems, caches are invaluable for keeping memory bandwidth low and for allowing emplo...
Energy consumption in caches is a widely studied topic. Accessing a cache line consumes energy. Th...
Power consumption in current high-performance chip multiprocessors (CMPs) has become a major de...