This report presents Data-aware Process Networks (DPN), a new parallel execution model adapted to the hardware constraints of high-level synthesis, in which data transfers are made explicit. We show that the DPN model is consistent, in the sense that any translation of a sequential program produces an equivalent DPN without deadlocks. Finally, we show how to compile a sequential program to a DPN and how to optimize the input/output and the parallelism.
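To make the idea of explicit data transfers concrete, the following is a minimal sketch of a dataflow process network in Python, with dedicated load and store processes standing in for the explicit input/output transfers. This is only an illustration under assumed names (load, compute, store, DONE); it is not the DPN formalism defined in the report.

# Illustrative sketch only: a tiny FIFO-channel process network.
# The load/store processes model explicit transfers to/from external memory.
import threading
import queue

DONE = object()  # end-of-stream marker (an assumption of this sketch)

def load(src, out_ch):
    """Explicit transfer from external memory into the network."""
    for x in src:
        out_ch.put(x)
    out_ch.put(DONE)

def compute(in_ch, out_ch):
    """Pure dataflow process: reads one token, writes one token."""
    while True:
        x = in_ch.get()
        if x is DONE:
            out_ch.put(DONE)
            break
        out_ch.put(x * x)

def store(in_ch, dst):
    """Explicit transfer from the network back to external memory."""
    while True:
        x = in_ch.get()
        if x is DONE:
            break
        dst.append(x)

if __name__ == "__main__":
    src = list(range(8))
    dst = []
    c1, c2 = queue.Queue(), queue.Queue()  # FIFO channels between processes
    procs = [
        threading.Thread(target=load, args=(src, c1)),
        threading.Thread(target=compute, args=(c1, c2)),
        threading.Thread(target=store, args=(c2, dst)),
    ]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(dst)  # [0, 1, 4, 9, 16, 25, 36, 49]

Running the script prints the squared input stream; for this simple three-stage pipeline the network is deterministic and deadlock-free, which is the property the DPN construction guarantees in general.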
With the goal of increasing performance, processor architectures have evolved toward plat...
In order to exploit the capabilities of parallel architectures such as clusters, grids, ...
Formal process languages inheriting the concurrency and communication features...
Since the end of Dennard scaling, power efficiency has been the limiting factor for large-scale computing....
The training phase in Deep Neural Networks has become an important source of computing resource usag...
Inter-node communication has turned out to be one of the determining factors of the performance on m...
We place ourselves in the context of mapping process networks of the model...
A procedural parallel process representation, known as data-driven nets, is described...
The power and scalability of distributed-memory parallel architectures (...
This thesis provides a fully automatic translation from synchronous programs to parallel software fo...
[Excerpt from the introduction] The spreading of Distributed Memory Parallel C...
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.39.6074&rep=rep1&type=pdf
The complexity and diversity of parallel programming languages and computer architect...
Data locality is a critical issue for achieving performance on today's high-end parallel machi...