Conventional set-associative data cache accesses waste energy because the tag and data arrays of several ways are accessed simultaneously to sustain pipeline speed. Several access techniques that avoid activating all cache ways have been proposed to reduce energy usage. However, many of these techniques share a common problem: they must access different portions of the cache memory sequentially, which is difficult to support with standard synchronous SRAM. We propose the speculative halt-tag access (SHA) approach, which accesses the low-order tag bits, i.e., the halt tag, in the address generation stage instead of the SRAM access stage to eliminate accesses to cache ways that cannot possibly c...
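The filtering idea behind halt tags can be illustrated with a small, self-contained sketch. This is a behavioral model only, not the paper's implementation: the cache geometry, the `Way` class, and the 4-bit halt-tag width are illustrative assumptions. Each way keeps a small separately readable array of low-order tag bits; on an access, only ways whose halt tag matches the address proceed to the full tag/data array access.

```python
# Behavioral sketch of halt-tag way filtering (illustrative parameters,
# not taken from the paper).
WAYS = 4
SETS = 64          # number of sets
LINE = 32          # line size in bytes
HALT_BITS = 4      # low-order tag bits used as the halt tag

class Way:
    def __init__(self):
        self.valid = [False] * SETS
        self.tag = [0] * SETS    # full tag array
        self.halt = [0] * SETS   # small halt-tag array, read early

ways = [Way() for _ in range(WAYS)]

def split(addr):
    """Decompose a byte address into (set index, full tag)."""
    index = (addr // LINE) % SETS
    tag = addr // (LINE * SETS)
    return index, tag

def fill(addr, way):
    """Install a line, recording both the full tag and its halt tag."""
    index, tag = split(addr)
    w = ways[way]
    w.valid[index] = True
    w.tag[index] = tag
    w.halt[index] = tag & ((1 << HALT_BITS) - 1)

def access(addr):
    """Return (hit, ways_activated). Only ways whose halt tag matches
    the address are 'activated', i.e., have their full tag and data
    arrays accessed; the rest are halted early."""
    index, tag = split(addr)
    halt = tag & ((1 << HALT_BITS) - 1)
    activated = 0
    hit = False
    for w in ways:
        if w.valid[index] and w.halt[index] == halt:  # early filter
            activated += 1                            # full tag+data access
            if w.tag[index] == tag:
                hit = True
    return hit, activated

addr = 0x1234 * LINE * SETS + 5 * LINE   # tag 0x1234, set 5
fill(addr, way=0)
print(access(addr))   # (True, 1): one way activated instead of all 4
```

A conventional cache would access all four ways in parallel for this load; the halt-tag filter reduces that to the single way whose low-order tag bits match, which is where the energy saving comes from.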
Fast set-associative level-one data caches (L1 DCs) access all ways in parallel during load operatio...
SUMMARY Energy consumption has become an important design consideration in modern processors. Theref...
We propose a novel energy-efficient memory architecture which relies on the use of cache with a redu...
Caches contribute to much of a microprocessor system's power and energy consumption. We have de...
Due to performance reasons, all ways in set-associative level-one (L1) data caches are accessed in p...
In recent years, CPU performance has become energy constrained. If performance is to continue increa...
L1 data caches in high-performance processors continue to grow in set associativity. Higher associat...
Set-associative caches achieve low miss rates for typical applications but result in significant ene...
© 2018 ACM. Level-one data cache (L1 DC) accesses impact energy usage as they frequently occur and u...