In this paper we discuss the application of a hybrid programming paradigm that combines message passing (MPI) with shared-memory programming (OpenMP). We apply this model to the parallel solution of two basic problems: the sparse matrix-vector product and a dynamic programming problem. We compare the results of the hybrid model with those of a pure MPI model on a cluster of dual Intel Xeon processors. The experimental results show that the behavior of both models depends, among other factors, on the application and on the size of the problem. While we obtain very good speedups with the dynamic programming problem, in the case of the matrix-vector product the algorithms do not take full advantage of the dual processors.
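As a rough illustration of the hybrid model described above (not the paper's actual code), the sparse matrix-vector product can be sketched as follows: MPI distributes blocks of rows across processes, and OpenMP threads share the work on the local block. The CSR layout, the replicated input vector, and all names here are assumptions made for the sketch.

```c
/* Minimal hybrid MPI+OpenMP sparse matrix-vector product sketch.
 * Assumes a CSR matrix whose rows are block-distributed over MPI ranks
 * and a fully replicated input vector x on every rank. */
#include <mpi.h>
#include <omp.h>

/* y_local = A_local * x, where A_local holds this rank's rows in CSR form. */
void spmv_hybrid(int local_rows, const int *row_ptr, const int *col_idx,
                 const double *val, const double *x, double *y_local)
{
    /* OpenMP splits the local rows among the threads of this MPI process. */
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < local_rows; i++) {
        double sum = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
            sum += val[k] * x[col_idx[k]];
        y_local[i] = sum;
    }
}

int main(int argc, char **argv)
{
    int provided;
    /* Only the master thread makes MPI calls, so FUNNELED support suffices. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    /* ... build the local CSR block and the replicated vector x ... */

    /* After the local products, the pieces of y can be reassembled on every
     * rank with, e.g., MPI_Allgatherv. */

    MPI_Finalize();
    return 0;
}
```

The pure MPI variant would instead run one process per core with no OpenMP pragma, communicating the vector among all processes on the node as well as across nodes.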