We describe and evaluate explicit reservation of cache memory to reduce the cache-related preemption delay (CRPD) observed when tasks share a cache in a preemptive multitasking hard real-time system. We demonstrate the approach using measurements obtained from a hardware prototype, and present schedulability analyses for systems that share a cache by explicit reservation. These analyses form the basis for a series of experiments to further evaluate the approach. We find that explicit reservation is most useful for larger task sets with high utilization. Some task sets cannot be scheduled with a conventional cache, but are schedulable with explicit reservation.
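The CRPD effect and the benefit of explicit reservation described above can be illustrated with a toy model. The following sketch is not taken from the paper; it is a hypothetical direct-mapped cache simulation in which a preempting task evicts the preempted task's lines in a shared cache, while giving each task its own reserved partition eliminates those extra misses.

```python
# Toy direct-mapped cache model (illustrative sketch, not the paper's method):
# shows cache-related preemption delay (CRPD) in a shared cache versus
# explicit per-task reservation (partitioning).

class DirectMappedCache:
    def __init__(self, num_sets):
        self.num_sets = num_sets
        self.lines = [None] * num_sets  # one tag stored per set

    def access(self, addr):
        """Return True on hit, False on miss; fill the line on a miss."""
        s = addr % self.num_sets
        tag = addr // self.num_sets
        if self.lines[s] == tag:
            return True
        self.lines[s] = tag
        return False

def misses(cache, addrs):
    """Count misses for a sequence of block addresses."""
    return sum(0 if cache.access(a) else 1 for a in addrs)

# Shared cache: task A warms the cache, task B preempts and evicts it,
# then A resumes and re-misses on every line it had loaded.
shared = DirectMappedCache(num_sets=8)
task_a = list(range(0, 8))    # A's working set fills all 8 sets
task_b = list(range(8, 16))   # B maps to the same 8 sets, evicting A
misses(shared, task_a)        # A's cold misses
misses(shared, task_b)        # preemption: B replaces A's lines
crpd_misses = misses(shared, task_a)  # extra misses caused by preemption

# Explicit reservation: each task runs in its own partition, so B's
# preemption cannot evict A's lines.
part_a = DirectMappedCache(num_sets=4)
part_b = DirectMappedCache(num_sets=4)
task_a_small = list(range(0, 4))
misses(part_a, task_a_small)  # A's cold misses in its partition
misses(part_b, task_b)        # B confined to its own partition
reserved_misses = misses(part_a, task_a_small)  # A resumes with all hits

print(crpd_misses, reserved_misses)  # → 8 0
```

With a shared cache, every one of A's 8 lines is re-fetched after preemption; with reserved partitions, A resumes with zero preemption-induced misses. The trade-off the abstract evaluates is that each task now runs with a smaller cache, so the baseline (non-preempted) miss count may rise.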
Growing processing demand on multi-tasking real-time systems can be met by employing scalable multi-...
23rd IEEE Real-Time and Embedded Technology and Applications Symposium, RTAS 2017, Pittsburgh, PA, US...
Abstract — Schedulability analysis for real-time systems has been the subject of prominent research ...
Cache locking improves timing predictability at the cost of performance. We explore a novel approach...
The trend in today's real-time embedded systems is to use commercial off-the-shelf components, and...
Abstract—In hard real-time systems, cache partitioning is often suggested as a means of increasing t...
Dependable real-time systems are essential to time-critical applications. The systems that run these...
We observe the cache misses introduced by scheduling and preemptions and their effects on the worst ...
Multitasked real-time systems often employ caches to boost performance. However, the unpredictable dy...
Tasks running on microprocessors with cache memories are often subjected to cache-related preemption...
In hard real-time systems, cache partitioning is often suggested as a means of increasing the predic...
We introduce Selfish-LRU, a variant of the LRU (least recently used) cache replacement policy that i...
Traditionally, caches have been used to reduce the average case memory latency in computer systems....
The assumption of task independence has long been consubstantial with the formulation of many schedu...
Schedulability analyses for preemptive real-time systems need to take into account cache-related pre...