We introduce Selfish-LRU, a variant of the LRU (least recently used) cache replacement policy that improves performance and predictability in preemptive scheduling scenarios. In multitasking systems with conventional caches, a single memory access by a preempting task can trigger a chain reaction leading to a large number of additional cache misses in the preempted task. Selfish-LRU prevents such chain reactions by first evicting cache blocks that do not belong to the currently active task. Simulations confirm that Selfish-LRU reduces the CRPD (cache-related preemption delay) as well as the overall number of cache misses. At the same time, it simplifies CRPD analysis and results in smaller CRPD bounds.
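The eviction rule described above can be sketched in a few lines. The following is a minimal, illustrative model of a single Selfish-LRU cache set (the class name, method names, and task-ownership bookkeeping are assumptions for illustration, not the paper's implementation): on a miss with a full set, the victim is the least recently used block *not* owned by the currently running task; plain LRU applies only when every block already belongs to that task.

```python
from collections import OrderedDict

class SelfishLRUSet:
    """Illustrative sketch of one Selfish-LRU cache set (hypothetical API).

    Blocks owned by tasks other than the currently running task are
    evicted first, in LRU order among them; only if every resident block
    belongs to the current task does ordinary LRU replacement apply.
    """

    def __init__(self, associativity):
        self.associativity = associativity
        # Maps block address -> owning task id, ordered from LRU to MRU.
        self.blocks = OrderedDict()

    def access(self, address, task_id):
        """Access `address` on behalf of `task_id`. Returns True on a hit."""
        if address in self.blocks:
            self.blocks.move_to_end(address)   # refresh LRU position
            self.blocks[address] = task_id     # block now belongs to accessor
            return True
        if len(self.blocks) >= self.associativity:
            # Prefer the LRU block NOT owned by the current task;
            # fall back to the globally LRU block (plain LRU) otherwise.
            victim = next(
                (a for a, owner in self.blocks.items() if owner != task_id),
                next(iter(self.blocks)),
            )
            del self.blocks[victim]
        self.blocks[address] = task_id
        return False
```

The point of the policy shows up after a preemption: when the preempted task resumes and misses, it evicts the preempting task's leftover blocks instead of its own still-useful ones, so a single interfering access does not cascade into a chain of self-evictions.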