Recent technological advances have widened the gap between processor and memory cycle times, so techniques that reduce or tolerate large memory latencies are essential for achieving high processor utilization. In this dissertation, we propose and evaluate data prefetching techniques that address the data-access penalty. First, we propose a hardware-based data prefetching approach for reducing memory latency. The basic idea of the prefetching scheme is to keep track of data access patterns in a reference prediction table (RPT) organized like an instruction cache. We present three variations of the RPT design and its associated logic: a generic design, a lookahead mechanism, and a correlated scheme. They differ ...
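The RPT mechanism described above can be sketched in software. This is a minimal illustration, not the dissertation's actual hardware design: the table is indexed by the load instruction's address (PC), each entry records the last data address and observed stride, and a simple state field (initial/transient/steady, a simplification of the real state machine) gates when a prefetch is issued. Class names, the table size, and the eviction policy are all assumptions for the sketch.

```python
class RPTEntry:
    """One reference prediction table entry for a single load instruction."""
    def __init__(self, addr):
        self.prev_addr = addr   # last data address this load touched
        self.stride = 0         # last observed stride
        self.state = "initial"  # simplified state machine

class RPT:
    """Sketch of a reference prediction table, indexed by load PC."""
    def __init__(self, size=64):
        self.size = size
        self.table = {}  # PC -> RPTEntry, like an instruction-cache lookup

    def access(self, pc, addr):
        """Record a load at `pc` touching `addr`; return a prefetch
        address when the entry's stride prediction has stabilized."""
        entry = self.table.get(pc)
        if entry is None:
            if len(self.table) >= self.size:
                self.table.pop(next(iter(self.table)))  # crude FIFO eviction
            self.table[pc] = RPTEntry(addr)
            return None
        stride = addr - entry.prev_addr
        if stride == entry.stride:
            entry.state = "steady"           # prediction confirmed
        else:
            entry.state = "transient" if entry.state == "steady" else "initial"
            entry.stride = stride            # learn the new stride
        entry.prev_addr = addr
        if entry.state == "steady":
            return addr + entry.stride       # predicted next reference
        return None
```

For a load walking an array with a fixed stride, the entry reaches the steady state on its third access and prefetches one stride ahead from then on; the lookahead and correlated variants mentioned above would extend this basic loop.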
Processor performance has increased far faster than memories have been able to keep up with, forcing...
grantor: University of Toronto. The latency of accessing instructions and data from the memo...
As the trends of process scaling make the memory system an even more crucial bottleneck, the importance of ...
Conventional cache prefetching approaches can be either hardware-based, generally by using a one-blo...
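The hardware scheme this snippet alludes to is conventional one-block-lookahead (OBL) prefetching: when the cache fetches block b on a demand miss, it also fetches block b+1. The sketch below uses a hypothetical set-based cache model and an illustrative block size; the `fetch` callback stands in for the memory system.

```python
BLOCK_SIZE = 64  # bytes per cache block (illustrative)

def block_of(addr):
    """Map a byte address to its cache block number."""
    return addr // BLOCK_SIZE

def access(cache, addr, fetch):
    """Prefetch-on-miss OBL: on a demand miss to block b,
    fetch b and also prefetch the sequentially next block b+1."""
    b = block_of(addr)
    if b not in cache:
        cache.add(fetch(b))      # demand fetch
        cache.add(fetch(b + 1))  # one-block-lookahead prefetch
```

Variants trigger the lookahead on every access ("always prefetch") or only on prefetched blocks' first use ("tagged prefetch"); the prefetch-on-miss form shown here is the simplest.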
Memory latency has always been a major issue in shared-memory multiprocessors and high-speed systems...
Data prefetching has been considered an effective way to mask data access latency caused by cache mi...
Prefetching, i.e., exploiting the overlap of processor computations with data accesses, is one of s...
Data-intensive applications often exhibit memory referencing patterns with little data reuse, result...
CPU speeds double approximately every eighteen months, while main memory speeds double only about ev...
In this dissertation, we provide hardware solutions to increase the efficiency of the cache hierarch...
A well known performance bottleneck in computer architecture is the so-called memory wall. This term...
Abstract. Given the increasing gap between processors and memory, prefetching data into cache become...
Abstract Data prefetching is an effective data access latency hiding technique to mask the CPU stall...
Despite rapid increases in CPU performance, the primary obstacles to achieving higher performance in...
Data prefetching has been considered an effective way to cross the performance gap between processor...