In this paper we present the work and results obtained during the one-year PRACE project Cim128Ki [1]. This project aims to run computations on full Tier-0 supercomputers, or at least on up to 131,072 cores. The goal of the project was to validate, at such a large scale, the scalability strategy used to develop our application. The development context is a finite element formulation with an implicit time discretization, which ultimately requires solving very large linear systems. Another main axis of our strategy is mesh adaptation, which reduces the size of the spatial discretization while keeping the precision of the simulation unchanged. The main idea behind this is to combine the benefits of every numerical technique rather than choosing one while negl...
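The implicit time discretization mentioned above reduces each time step to solving a large symmetric linear system A x = b. As a minimal, illustrative sketch (not the authors' solver, which runs distributed on up to 131,072 cores), the conjugate gradient method below shows the kind of Krylov iteration typically used for such systems; the `conjugate_gradient` function and the tiny 3x3 matrix are assumptions for demonstration only.

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for a symmetric positive definite A (dense list-of-lists).

    Illustrative only: production FEM codes use distributed sparse
    matrices and preconditioning, not dense Python lists.
    """
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x, with x = 0 initially
    p = r[:]                      # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:   # converged: residual norm below tolerance
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# Tiny SPD system standing in for an assembled FEM matrix.
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [1.0, 2.0, 3.0]
x = conjugate_gradient(A, b)
```

For a symmetric positive definite matrix of size n, conjugate gradient converges in at most n iterations in exact arithmetic, which is why implicit FEM solvers favor it over direct factorization at extreme scale.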
A commonly held view in the turbomachinery community is that finite element methods are not well-sui...
The efficient solution of many large-scale scientific calculations depends on unstructured mesh stra...
Benchmarks for parallel processing of large models are an urgent need for High Performance Computing ...
In this paper, we present developments done to obtain efficient parallel compu...
We first define the meaning of "massively parallel computation": considering o...
Making multigrid algorithms run efficiently on large parallel computers is a challenge. Wi...
We develop scalable algorithms and object-oriented code frameworks for terascale scientific simulati...
Today’s largest supercomputers have 100,000s of processor cores and offer the potential to solve par...
The majority of finite element models in structural engineering are composed of unstructured meshes....
Processor technology is still dramatically advancing and promises enormous improvements in processin...
The focus of the subject DOE sponsored research concerns parallel methods, algorithms, and software ...
The introduction of parallel supercomputers has given a new dimension to scientific computi...
The team of the Research Programme Supercomputing for Industry at IT4Innovations National Superco...
Computational methods based on the use of adaptively constructed nonuniform meshes reduce the amount...
In order to run CFD codes more efficiently at large scale, parallel computing has to be employe...