Most current HPC applications are built around complex and irregular data structures and rely on techniques such as linear algebra, graph algorithms, and resource management, which demand new platforms with varying computation-unit capacity and features. Platforms that combine several cores with different performance characteristics make selecting the best programming model for a given algorithm a challenge. To address this question, approaches in the literature range from comparing the programming models' primitives in isolation to evaluating complete benchmark suites. Our study shows that none of them may provide enough information for an HPC application to make ...
Abstract—Big and complex applications need many resources and long computation time to execute seque...
Most HPC systems are clusters of shared memory nodes. To use such systems efficiently both memory co...
With the end of Dennard scaling, future high performance computers are expected to consist of distri...
Most HPC systems are clusters of shared memory nodes. Parallel programming must combine the distribu...
Parallel programming models are a challenging and emerging topic in the parallel computing era. ...
Over the last few decades, Message Passing Interface (MPI) has become the parallel-communication sta...
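For context on the message-passing style these entries refer to, the following is a minimal sketch, not taken from any of the cited works: it assumes a standard MPI implementation (e.g. MPICH or Open MPI) compiled with mpicc, and uses only core MPI calls. Each process contributes its rank to a reduction whose result rank 0 prints.

    /* Minimal MPI sketch (assumed setup: compile with `mpicc`, run with
     * `mpirun -np 4 ./a.out`). Each rank contributes its rank number to a
     * collective sum that rank 0 prints. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int sum = 0;
        /* Collective reduction: every rank contributes its rank id. */
        MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("Sum of ranks across %d processes: %d\n", size, sum);

        MPI_Finalize();
        return 0;
    }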
The demand for ever-growing computing capabilities in scientific computing and simulation has led to...
Abstract: The developments of multi-core technology have induced big challenges to software structur...
Nowadays, shared-memory parallel architectures have evolved and new programming frameworks have appe...
The mixing of shared memory and message passing programming models within a single application has o...
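A minimal sketch of such a hybrid setup (message passing between processes, shared memory within each process) is given below; it assumes MPI with MPI_THREAD_FUNNELED support and an OpenMP-capable compiler (e.g. mpicc -fopenmp), and the problem size N and the cyclic work distribution are illustrative choices rather than anything from the cited works. Each process computes a partial sum with OpenMP threads, then MPI_Reduce combines the per-process results.

    /* Hybrid MPI+OpenMP sketch (assumed setup: `mpicc -fopenmp`). Each MPI
     * process computes a local partial sum with OpenMP threads; the partial
     * sums are then combined across processes with MPI_Reduce. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    #define N 1000000   /* illustrative problem size */

    int main(int argc, char **argv) {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Shared-memory parallelism inside the node: OpenMP reduction over
         * the terms assigned to this rank (cyclic distribution). */
        double local = 0.0;
        #pragma omp parallel for reduction(+:local)
        for (int i = rank; i < N; i += size)
            local += 1.0 / (i + 1);

        /* Message passing across nodes: combine the per-process results. */
        double global = 0.0;
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("Partial harmonic sum over %d terms: %f (%d ranks)\n",
                   N, global, size);

        MPI_Finalize();
        return 0;
    }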
In this master's thesis we explore past work trying to classify algorithmic problems. These classificat...
The goal of this work was to examine existing shared memory parallel programming models, figure out ...
HPC applications are often very complex and their behavior depends on a wide range of factors from a...