Processor speed is improving at a faster rate than main memory speed, which makes memory accesses increasingly expensive. One way to mitigate this problem is to reduce the miss ratio of the processor's last-level cache by improving its replacement policy. We approach the problem by co-designing the runtime system and the hardware, exploiting the semantics of applications written in data-flow task-based programming models to provide the hardware with information about task types and task data dependencies. We propose the Task-Type aware Insertion Policy, TTIP, which uses the runtime system to dynamically determine the best per-task-type probability for bimodal insertion in the recency stack, and the static Dependency-Type aware Insertion P...
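To make the bimodal-insertion idea behind TTIP concrete, the following C++ fragment is a minimal sketch, not the paper's implementation (which is a hardware/runtime co-design): it models one cache set as a recency stack and, on a miss, inserts at the MRU position with a per-task-type probability. The task-type table and the names ttip_prob, gemm_task, and copy_task are hypothetical stand-ins for the values the runtime system would supply.

```cpp
// Minimal sketch of per-task-type bimodal insertion into one set of an
// LRU recency stack. Assumed/hypothetical: the task-type -> probability
// table and the task names; these are illustrative only.
#include <cstdio>
#include <deque>
#include <random>
#include <string>
#include <unordered_map>

struct CacheSet {
    std::deque<long> stack;   // front = MRU, back = LRU
    std::size_t ways;

    explicit CacheSet(std::size_t w) : ways(w) {}

    // Bimodal insertion: with probability p insert at MRU, otherwise at LRU.
    void insert(long line, double p, std::mt19937 &rng) {
        if (stack.size() == ways) stack.pop_back();   // evict LRU victim
        std::bernoulli_distribution mru(p);
        if (mru(rng)) stack.push_front(line);         // treated as likely to be reused
        else          stack.push_back(line);          // treated as likely dead on arrival
    }

    // On a hit, promote the line to MRU exactly as plain LRU would.
    bool access(long line) {
        for (auto it = stack.begin(); it != stack.end(); ++it) {
            if (*it == line) {
                stack.erase(it);
                stack.push_front(line);
                return true;
            }
        }
        return false;
    }
};

int main() {
    // Hypothetical per-task-type insertion probabilities, standing in for
    // the values the runtime system would communicate to the cache.
    std::unordered_map<std::string, double> ttip_prob = {
        {"gemm_task", 0.9}, {"copy_task", 0.05}
    };

    std::mt19937 rng(42);
    CacheSet set(8);
    for (long addr = 0; addr < 16; ++addr)
        if (!set.access(addr))
            set.insert(addr, ttip_prob["copy_task"], rng);   // streaming-style task

    std::printf("lines resident after streaming accesses: %zu\n", set.stack.size());
    return 0;
}
```

Lines inserted at the LRU position are evicted almost immediately unless they are reused soon, so a low probability protects the cache from streaming tasks while a high probability preserves LRU-like behavior for reuse-heavy tasks; the per-task-type probability is the knob that a runtime-informed policy of this kind tunes.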
The presence of shared caches in current multicore processors may generate a lot of performance vari...
Memory (cache, DRAM, and disk) is in charge of providing data and instructions to a computer's pr...
Since different companies are introducing new capabilities and features on their products, the dema...
Architects have adopted the shared memory model that implicitly manages cache coherence and cache ca...
High performance cache mechanisms have a great impact on overall performance of computer systems by ...
One of the dominant approaches towards implementing fast and high performance computer architectures...
The performance gap between processors and main memory has been growing over the last decades. Fast ...
Emerging task-based parallel programming models shield programmers from the daunting task of paralle...
Modern processors use high-performance cache replacement policies that outperform traditional altern...
With recent advances in processor technology, the LRU-based shared last-level cache (LLC) has been w...
Parallel applications are becoming mainstream and architectural techniques for multicores t...
Missing the deadline of an application task can be catastrophic in real-time systems. Therefore, to ...