Many parallel applications do not completely fit into the data parallel model. Although these applications contain data parallelism, task parallelism is needed to represent the natural computation structure or to enhance performance. Combining the ease of programming of the data parallel model with the efficiency of the task parallel model allows parallel forms to be nested, giving nested parallelism. In this work, we examine the solutions provided for nested parallelism in two standard parallel programming platforms, HPF and MPI. Both their expressive capacity and their efficiency are compared on a Cray T3E, a distributed-memory machine. Finally, an additional discussion of the use of the methodology proposed for MPI is given o...
Over the past decade, many programming languages and systems for parallel-comp...
Fortran and C++ are the dominant programming languages used in scientific computation. Consequently,...
In this work, we show how parallel applications can be implemented efficiently using task parallelis...
Data parallelism is one of the more successful efforts to introduce explicit parallelism to high le...
The data-parallel language High Performance Fortran (HPF) does not allow efficient expression of mix...
High Performance Fortran (HPF) has emerged as a standard dialect of Fortran for data-parallel comput...
The bulk synchronous parallel (BSP) communication model can hinder performance increases. This is due to...
Programming parallel machines as effectively as sequential ones would i...
Data-parallel languages such as High Performance Fortran (HPF) present a simple execution model in w...
Two paradigms for distributed-memory parallel computation that free the application programmer from ...
Three paradigms for distributed-memory parallel computation that free the application programmer fro...
It has become common knowledge that parallel programming is needed for scientific applications, part...
Large applications for parallel computers and more specifically unstructured C...