Since the dawn of computing, CPU performance has continually grown, buoyed by Moore's Law. Execution speed for parallelizable programs in particular has increased massively with the now-widespread use of GPUs, TPUs, and FPGAs for data processing, hardware capable of performing hundreds of computations simultaneously. A major bottleneck for further performance gains, one that has impeded the speedup of sequential programs in particular, is the processor-memory performance gap. One approach to addressing this bottleneck is improving cache management algorithms. Caching is transparent to software, but traditional caching algorithms forgo hardware-software collaboration. Previous work introduced the idea of assigning leases to cache blocks a...
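The lease idea mentioned above can be illustrated with a toy model. This is a minimal sketch under assumed semantics (a lease counted in logical accesses, reassigned on every touch), not the cited work's actual hardware design; the class and method names are illustrative:

```python
# Hypothetical sketch of lease-based caching: each cached block carries a
# lease, a number of future accesses for which it stays resident. Blocks
# whose leases expire are evicted, so residency is decided by the
# software-supplied lease rather than by a hardware heuristic like LRU.

class LeaseCache:
    def __init__(self):
        self.clock = 0    # logical time: one tick per access
        self.expiry = {}  # block address -> time at which its lease ends

    def access(self, addr, lease):
        self.clock += 1
        # Evict every block whose lease has run out.
        self.expiry = {a: t for a, t in self.expiry.items() if t >= self.clock}
        hit = addr in self.expiry
        # (Re)assign the lease on each access.
        self.expiry[addr] = self.clock + lease
        return hit

cache = LeaseCache()
trace = [("A", 3), ("B", 1), ("A", 3), ("C", 2), ("B", 1)]
hits = [cache.access(addr, lease) for addr, lease in trace]
# "A" is still under lease at its second access; "B"'s short lease has
# expired by the time it is touched again, so that access misses.
```

In this model, software (e.g. a compiler analyzing reuse intervals) would choose each block's lease, which is the hardware-software collaboration the abstract alludes to.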
This dissertation addresses two sets of challenges facing processor design as the industry enters th...
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer...
Caching is a well-known technique for speeding up computation. We cache data from file systems and d...
Caching is a common solution to the data movement performance bottleneck of today’s computational sy...
Today’s real-time systems need to be faster and more powerful than ever before. Caches are an archit...
The cache interference is found to play a critical role in optimizing cache allocation among concurr...
Memory (cache, DRAM, and disk) is in charge of providing data and instructions to a computer's pr...
The increasing speed gap between microprocessors and off-chip DRAM makes last-level caches (LLCs) a ...
With contemporary research focusing its attention primarily on benchmark-driven performance evaluati...
This report evaluates two distinct methods of improving the performance of GPU memory systems. Over ...
The memory system remains a major performance bottleneck in modern and future architectures. In this...
Cache memory is one of the most important components of a computer system. The cache allows quickly...
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Compute...