Multicore architectures can provide highly predictable performance through parallel processing. Unfortunately, computing the makespan of parallel applications is overly pessimistic, either due to load-imbalance issues plaguing static scheduling methods or due to timing anomalies plaguing dynamic scheduling methods. This paper contributes an anomaly-free dynamic scheduling method, called Lazy, which is non-preemptive and non-greedy in the sense that some ready tasks may not be dispatched for execution even if some processors are idle. Assuming parallel applications using contemporary task-based parallel programming models, such as OpenMP, the general idea of Lazy is to avoid timing anomalies by assigning fixed priorities to the tasks and th...
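The abstract above describes a non-greedy, fixed-priority dispatch rule: a ready task may be held back even while cores sit idle. As a rough illustration of what such a rule could look like (the function name, the priority-ordered release condition, and the data layout are assumptions for this sketch, not the paper's actual algorithm):

```python
import heapq

def lazy_dispatch(ready, running, num_cores, next_expected):
    """Hypothetical non-greedy dispatch step.

    Tasks carry fixed integer priorities. A task is released to an
    idle core only in strict priority order: if the highest-priority
    ready task is not the next one expected, nothing is dispatched,
    even though cores are idle. This is the anomaly-avoidance idea
    sketched in the abstract, not the published algorithm.
    """
    dispatched = []
    while ready and len(running) + len(dispatched) < num_cores:
        prio, task = ready[0]  # peek at highest-priority ready task
        if prio != next_expected:
            # Non-greedy rule (assumption): refuse to dispatch out of
            # the fixed priority order; leave cores idle and wait.
            break
        heapq.heappop(ready)
        dispatched.append(task)
        next_expected += 1
    return dispatched, next_expected
```

Under this rule the dispatch order is fully determined by the fixed priorities, so finishing a task early can never reorder later dispatches, which is the property that rules out the classic scheduling anomalies.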
A runtime support is necessary for parallel computations with irregular and dynamic structures. One ...
The optimization of parallel applications is difficult to achieve by classical optimization ...
There has been significant progress in understanding the parallelism inherent to iterative sequentia...
Lazy scheduling is a runtime scheduler for task-parallel codes that effectively coarsens parallelism...
To use multiprocessors in hard real-time systems, schedulability analysis is needed to provide forma...
Task scheduling is concerned with the sequence in which tasks entering a multiprocessor ...
Effective multicore computing requires to make efficient usage of the computational resourc...
Preemptive scheduling of periodically arriving tasks on a multiprocessor is considered. We show that...
High-level parallel languages offer a simple way for application programmers to specify parallelism ...
Shared resource interference is observed by applications as dynamic performance asymmetry. Prior art...
Multi-core systems are increasingly interesting candidates for executing paral...
Many parallel algorithms are naturally expressed at a fine level of granularity, often finer than a ...