In an ideal world, scientific applications would be expressed as high-level compositions of abstractions that encapsulate parallelism and deliver near-optimal performance at low maintenance cost. The alternative, where such abstractions are unavailable, is for application programmers to control execution using an appropriate explicitly parallel programming model. In this thesis we explore both approaches, represented by the Firedrake framework and the OpenMP programming model respectively. We also explore how OpenMP can support high-level abstractions such as Firedrake. Firedrake is designed as a composition of domain-specific abstractions for solving partial differential equations via the finite element method. We extend Firedrake ...
Finding numerical solutions to partial differential equations (PDEs) is an essential task in the dis...
Shared memory parallel programming, for instance by inserting OpenMP pragmas into program code, migh...
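The pragma-based style mentioned above can be illustrated with a minimal sketch (the loop body and variable names are invented for the example, not taken from the cited work): a sequential loop is parallelised by adding a single directive, with a reduction clause handling the shared accumulator.

    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double a[N], b[N];   /* static: too large for the stack */
        double sum = 0.0;

        /* The pragma asks the runtime to split the loop iterations across a
           team of threads; reduction(+:sum) gives each thread a private
           partial sum that is combined when the loop finishes. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            a[i] = 0.5 * b[i] + 1.0;
            sum += a[i];
        }

        printf("sum = %f\n", sum);
        return 0;
    }

Compiled without OpenMP support the pragma is simply ignored and the loop runs sequentially, which is one reason this incremental, directive-based style is attractive for existing codes.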
We are dealing here with the parallelization of fire spreading simulations fol...
OpenMP [13] is the dominant programming model for shared-memory parallelism in C, C++ and Fortran du...
This paper advances the state-of-the-art in programming models for exploiting task-level parallelism...
This paper advances the state-of-the-art in programming models for exploiting task-level parallelis...
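The two entries above concern task-level parallelism, for which OpenMP provides the task and taskwait constructs. As a small hedged sketch (the recursive Fibonacci decomposition is a textbook illustration, not code from either paper), each recursive call becomes a task and the parent waits for its children before combining their results:

    #include <stdio.h>

    /* Each recursive call is wrapped in an OpenMP task; taskwait makes the
       parent block until both child tasks have produced x and y. A real code
       would add a serial cutoff to avoid creating millions of tiny tasks. */
    long fib(int n) {
        if (n < 2) return n;
        long x, y;
        #pragma omp task shared(x)
        x = fib(n - 1);
        #pragma omp task shared(y)
        y = fib(n - 2);
        #pragma omp taskwait
        return x + y;
    }

    int main(void) {
        long result;
        #pragma omp parallel      /* create a team of threads         */
        #pragma omp single        /* one thread starts the task tree  */
        result = fib(30);
        printf("fib(30) = %ld\n", result);
        return 0;
    }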
With the introduction of more powerful and massively parallel embedded processors, embedded systems ...
Today’s High Performance Computing architectures exhibit significant compute power within each node ...
As chip manufacturing processes are getting ever closer to what is physically possible, the projecti...
OpenMP enables productive software development that targets shared-memory general purpose systems. H...
With the introduction of more powerful and massively parallel embedded processors, embedded systems ...
During the past decade, accelerators, such as NVIDIA CUDA GPUs and Intel Xeon Phis, have seen an inc...
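For the accelerator trend described in the previous entry, OpenMP added device constructs (target, teams, distribute) from version 4.0 onwards. The fragment below is a hedged sketch of offloading a simple vector update (array names and sizes are illustrative); if no device is available, the region falls back to execution on the host:

    #include <stdio.h>

    #define N 4096

    int main(void) {
        double x[N], y[N];
        for (int i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

        /* map(to:) copies x to the device, map(tofrom:) copies y there and
           back; teams/distribute/parallel for spread the iterations over the
           device's compute units. */
        #pragma omp target teams distribute parallel for map(to: x) map(tofrom: y)
        for (int i = 0; i < N; i++)
            y[i] += 3.0 * x[i];

        printf("y[0] = %f (expected 5.0)\n", y[0]);
        return 0;
    }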
With a large variety and complexity of existing HPC machines and uncertainty regarding exact future ...
Graphics Processing Units (GPU) have been widely adopted to accelerate the execution of HPC workload...
Heterogeneous computing is increasingly being used in a diversity of computing systems, ranging from...