Abstract—In task parallel languages, an important factor for achieving good performance is the use of a cut-off technique to reduce the number of tasks created. Using a cut-off to avoid an excessive number of tasks helps the runtime system reduce the total overhead associated with task creation, particularly when the tasks are fine-grained. Unfortunately, the best cut-off technique is usually dependent on the application's structure or even its input data. We propose a new cut-off technique that, using information from the application collected at runtime, decides which tasks should be pruned to improve the performance of the application. This technique does not rely on the programmer to determine the cut-off technique th...
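For context, the following is a minimal sketch of the kind of manual, static cut-off that such adaptive approaches aim to replace: a depth-based threshold on a recursive task decomposition, expressed with OpenMP's if clause so that tasks beyond the threshold are executed immediately instead of being deferred. The Fibonacci kernel and the CUTOFF_DEPTH value are illustrative assumptions, not the runtime-driven policy proposed in the paper.

```c
#include <stdio.h>

/* Illustrative static cut-off: once recursion is deeper than CUTOFF_DEPTH,
 * the if() clause evaluates to false and the task is executed immediately
 * by the encountering thread, pruning task-creation/deferral overhead. */
#define CUTOFF_DEPTH 8

static long fib(int n, int depth)
{
    long x, y;
    if (n < 2)
        return n;

    #pragma omp task shared(x) if(depth < CUTOFF_DEPTH)
    x = fib(n - 1, depth + 1);

    #pragma omp task shared(y) if(depth < CUTOFF_DEPTH)
    y = fib(n - 2, depth + 1);

    #pragma omp taskwait
    return x + y;
}

int main(void)
{
    long result;
    #pragma omp parallel
    #pragma omp single
    result = fib(30, 0);
    printf("fib(30) = %ld\n", result);
    return 0;
}
```

The fixed threshold illustrates the problem the abstract describes: the value that works well for one application structure or input size may create too many tasks, or too few, for another.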