Abstract—We introduce Selfish-LRU, a variant of the LRU (least recently used) cache replacement policy that improves performance and predictability in preemptive scheduling scenarios. In multitasking systems with conventional caches, a single memory access by a preempting task can trigger a chain reaction leading to a large number of additional cache misses in the preempted task. Selfish-LRU prevents such chain reactions by first evicting cache blocks that do not belong to the currently active task. Simulations confirm that Selfish-LRU reduces the CRPD (cache-related preemption delay) as well as the overall number of cache misses. At the same time, it simplifies CRPD analysis and results in smaller CRPD bounds.
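The eviction rule described in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: on a miss in a full set, a block owned by a task other than the currently active one is evicted first (in LRU order); only when every block in the set belongs to the active task does the policy fall back to plain LRU. All names (`SelfishLRUSet`, `access`, etc.) are illustrative.

```python
from collections import OrderedDict

class SelfishLRUSet:
    """One cache set; keys are block addresses, values are owning task IDs.
    OrderedDict insertion/move order serves as LRU -> MRU recency order."""

    def __init__(self, ways):
        self.ways = ways
        self.blocks = OrderedDict()

    def access(self, addr, task_id):
        """Simulate an access by `task_id`; returns True on hit, False on miss."""
        if addr in self.blocks:
            self.blocks.move_to_end(addr)   # refresh recency on a hit
            self.blocks[addr] = task_id
            return True
        if len(self.blocks) >= self.ways:
            self._evict(task_id)
        self.blocks[addr] = task_id         # insert missed block as MRU
        return False

    def _evict(self, active_task):
        # Selfish step: evict the least recently used block that does NOT
        # belong to the currently active task, if one exists ...
        for addr, owner in self.blocks.items():   # iterates LRU -> MRU
            if owner != active_task:
                del self.blocks[addr]
                return
        # ... otherwise fall back to plain LRU among the task's own blocks.
        self.blocks.popitem(last=False)
```

Under this rule, a single access by a preempting task evicts at most one of the preempted task's blocks, which is exactly what blocks the miss chain reaction the abstract describes.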
The trend in today's real-time embedded systems is to use commercial off-the-shelf components, and...
A task can be preempted by several jobs of higher priority tasks during its re...
The cache interference is found to play a critical role in optimizing cache allocation among concurr...
We describe and evaluate explicit reservation of cache memory to reduce the cache-related preemption...
In preemptive real-time systems, scheduling analyses need - in addition to the worst-case execution ...
Dependable real-time systems are essential to time-critical applications. The systems that run these...
Tasks running on microprocessors with cache memories are often subjected to cache related preemption...
Cache memory is used in almost all computer systems today to bridge the ever increasing speed gap be...
Recent studies have shown that in highly associative caches, the performance gap between the Least ...
In modern embedded systems, real-time applications are often executed on multi-core systems that als...
Caches have become invaluable for higher-end architectures to hide, in part, the increasing gap betw...
With the advancement of technology, multi-cores with shared cache have been used in real-time applic...