MPI (Message Passing Interface) is the de facto standard in High Performance Computing. By using some MPI-2 new features, such as the dynamic creation of processes, it is possible to implement highly efficient parallel programs that can run on dynamic and/or heterogeneous resources, provided a good schedule of the processes can be computed at run-time. A classical solution to schedule parallel programs on-line is Work Stealing. However, its use with MPI-2 is complicated by a restricted communication scheme between the processes: namely, spawned processes in MPI-2 can only communicate with their direct parents. This work presents an on-line scheduling algorithm, called Hierarchical Work Stealing, to obtain good load-balancing of MPI-2 pr...
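The key constraint described above is that an MPI-2 process created via MPI_Comm_spawn can communicate only with its direct parent, so steals must travel along parent-child links of the spawn tree. The following is a minimal Python toy simulation of that idea, not the paper's actual algorithm; the names Node and steal_once, the tree shape, and the steal-half policy are all illustrative assumptions:

```python
from collections import deque

class Node:
    """One simulated MPI process holding a local task deque."""
    def __init__(self, name, tasks=0):
        self.name = name
        self.tasks = deque(range(tasks))
        self.parent = None
        self.children = []

    def spawn(self, child):
        # Mirrors MPI_Comm_spawn: the new process is linked only to its parent.
        child.parent = self
        self.children.append(child)
        return child

    def neighbours(self):
        # The MPI-2 restriction: steal targets are the parent and the children only.
        return ([self.parent] if self.parent else []) + self.children

def steal_once(thief):
    """An idle `thief` steals half the tasks of its most loaded neighbour."""
    victims = [n for n in thief.neighbours() if len(n.tasks) > 1]
    if not victims:
        return False
    victim = max(victims, key=lambda n: len(n.tasks))
    for _ in range(len(victim.tasks) // 2):
        thief.tasks.append(victim.tasks.popleft())  # take from the "old" end
    return True

# A root holding all the work spawns two idle children, which pull work up-tree.
root = Node("root", tasks=8)
a = root.spawn(Node("a"))
b = root.spawn(Node("b"))
steal_once(a)
steal_once(b)
print([len(n.tasks) for n in (root, a, b)])  # → [2, 4, 2]
```

In a real MPI-2 program the deques would live in separate processes and the steal requests would be MPI messages over the intercommunicators returned by MPI_Comm_spawn / MPI_Comm_get_parent; the simulation only shows how the spawn tree constrains which processes a steal can target.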
To parallelize an application program for a distributed memory architecture, we can use a precedence...
Computers across all domains increasingly rely on multiple processors/cores, with processors startin...
While task-based programming, such as OpenMP, is a promising solution to explo...
Abstract. The Message Passing Interface is one of the most well known parallel programming libraries...
With the goal of being portable and efficient on current HPC architectures, the execution of a parallel progra...
Computationally-intensive loops are the primary source of parallelism in scientific applications. Su...
In this paper a hierarchical task scheduling strategy for assigning parallel computations with dynam...
This paper presents a complete framework for the parallelization of nested loops by applying tiling ...
Heading towards exascale, the challenges for process management with respect to flexibility and effi...
High-level parallel languages offer a simple way for application programmers to specify parallelism ...
Abstract. We present a work-stealing algorithm for runtime scheduling of data-parallel operations in...
This paper studies the problem of efficiently scheduling fully strict (i.e., well-structured) multithre...
In this paper we propose new insights into the problem of concurrently scheduling threads through ma...
Robert D. Blumofe, Dionisios Papadopoulos, Department of Computer Sciences, The University of Texas...