This thesis presents DRAC, a novel dynamically reconfigurable active L1 instruction and data cache model. Employing a cache, particularly at L1, speeds up memory accesses, mitigates the memory bottleneck, and consequently improves system performance; however, efficient cache design for embedded systems requires fast and early performance modeling. Our proposed model is a cycle-accurate instruction and data cache emulator designed as an on-chip hardware peripheral on an FPGA. The model can also be integrated into a multicore emulation system to emulate the caches of multiple cores. The DRAC model is implemented on a Xilinx Virtex 5 FPGA and validated using several benchmarks. Our experimental results show the model can accura...
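The core bookkeeping such a cache model performs (splitting an address into tag, set index, and line offset, then counting hits and misses) can be sketched in a few lines. This is a minimal illustrative direct-mapped model, not DRAC itself; the line size and set count here are assumed parameters chosen for the example.

```python
# Minimal direct-mapped cache model sketch (illustrative only; not DRAC).
# Assumed geometry: 32-byte lines (5 offset bits), 64 sets (6 index bits).

LINE_BITS = 5   # log2(line size in bytes)
SET_BITS = 6    # log2(number of sets)

class DirectMappedCache:
    def __init__(self):
        self.tags = [None] * (1 << SET_BITS)  # one tag per set
        self.hits = 0
        self.misses = 0

    def access(self, addr):
        """Look up a byte address; fill the line on a miss."""
        index = (addr >> LINE_BITS) & ((1 << SET_BITS) - 1)
        tag = addr >> (LINE_BITS + SET_BITS)
        if self.tags[index] == tag:
            self.hits += 1
            return True
        self.tags[index] = tag  # allocate on miss, evicting any prior line
        self.misses += 1
        return False

cache = DirectMappedCache()
# 0x0000 misses (cold), 0x0004 hits (same line), 0x0800 conflicts with
# set 0 and evicts it, so the final 0x0000 access misses again.
for a in (0x0000, 0x0004, 0x0800, 0x0000):
    cache.access(a)
print(cache.hits, cache.misses)  # → 1 3
```

A cycle-accurate emulator layers timing onto this same lookup: each hit or miss outcome is charged its latency in cycles, which is what lets the hardware model report performance rather than just hit rates.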
Multi-core platforms have entered the realm of the embedded systems to meet the ever growing perform...
The design of an ALU and a Cache memory for use in a high performance processor was examined in this...
Two important parameters for DRAM cache are the miss rate and the hit latency, as they strongly infl...
This work presents the design of a configurable and observable model of L1 data cache memory and a novel...
This dissertation addresses two sets of challenges facing processor design as the industry enters th...
In this thesis, we propose two optimization techniques to reduce power consumption in L1 caches (dat...
This thesis proposes a buffered dual access mode cache to reduce power consumption in multicore cach...
The speed at which microprocessors can perform computations is increasing faster than the speed of a...
The increasing speed gap between microprocessors and off-chip DRAM makes last-level caches (LLCs) a ...
This paper presents a flexible multi-core cache memory simulator to design and evaluate memory hiera...
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer...
In modern multicore processors, various resources (such as memory bandwidth and caches) are designed...
For forty years, transistor counts on integrated circuits have doubled roughly every two years, enab...
The current trend in processor design has moved from multicore to manycore (tens to hundreds, or m...
The rapid development of computing platforms has widened the gap between the computing system and me...