DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks, and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof.
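The block-parallel idiom described in this abstract can be illustrated with a short, self-contained sketch. The snippet below is hypothetical and is not DIY2's actual API: the names Block and foreach_block are invented here, and the "runtime" is reduced to a fixed thread pool with a static block-to-thread assignment and no out-of-core block movement.

```cpp
// Hypothetical illustration of block-structured data parallelism; the names
// (Block, foreach_block) are invented for this sketch and are not DIY2's API.
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// One block of the decomposed data, owned by exactly one processing element.
struct Block {
    int gid;                  // global block id
    std::vector<float> data;  // this block's portion of the decomposed data
};

// Run a per-block callback over all blocks, spreading blocks across threads.
// A fuller runtime could also evict blocks to disk and reload them on demand.
void foreach_block(std::vector<Block>& blocks,
                   const std::function<void(Block&)>& f,
                   unsigned nthreads = std::thread::hardware_concurrency())
{
    if (nthreads == 0) nthreads = 1;
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < nthreads; ++t)
        workers.emplace_back([&blocks, &f, t, nthreads] {
            // Static round-robin assignment of blocks to this thread.
            for (std::size_t b = t; b < blocks.size(); b += nthreads)
                f(blocks[b]);
        });
    for (auto& w : workers)
        w.join();
}

int main()
{
    // Decompose a toy domain into 8 blocks of 1024 values each.
    std::vector<Block> blocks;
    for (int gid = 0; gid < 8; ++gid)
        blocks.push_back(Block{gid, std::vector<float>(1024, 1.0f)});

    // The computation is written per block; how and where blocks execute
    // (serial, multithreaded, in-core, out-of-core) is the runtime's choice.
    foreach_block(blocks, [](Block& b) {
        for (float& v : b.data) v *= 2.0f;
    });
}
```

In the actual library, communication between blocks and the movement of blocks between DRAM and disk would also be handled behind this per-block interface, as the abstract describes.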
Parallelizing compiler technology has improved in recent years. One area in which compilers have ma...
A methodology for designing pipelined data-parallel algorithms on multicomputers is stud...
The thesis offers a comparison of OpenMP and Intel Threading Building Blocks. The two are threading ...
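As a rough illustration of the two APIs being compared (not code from the thesis itself), the same element-wise loop might look as follows: OpenMP parallelizes an annotated loop via a compiler directive, while TBB expresses the loop as a library call whose iterations are scheduled by a work-stealing runtime.

```cpp
// Illustrative only: one element-wise loop written with OpenMP and with
// Intel TBB, to show the style of each API.
#include <cstddef>
#include <vector>
#include <tbb/parallel_for.h>

void scale_openmp(std::vector<double>& v, double a)
{
    // OpenMP: the compiler splits the annotated loop across a team of threads.
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(v.size()); ++i)
        v[i] *= a;
}

void scale_tbb(std::vector<double>& v, double a)
{
    // TBB: a library call partitions the index range into tasks that the
    // work-stealing scheduler distributes over worker threads.
    tbb::parallel_for(std::size_t(0), v.size(),
                      [&](std::size_t i) { v[i] *= a; });
}

int main()
{
    std::vector<double> v(1 << 20, 1.0);
    scale_openmp(v, 2.0);
    scale_tbb(v, 0.5);
}
```

Typically this would be built with -fopenmp and linked against -ltbb; both variants perform the same scaling in parallel.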
In recent years, a new category of data analysis applications has evolved, known as data pipelining ...
Research on programming distributed memory multiprocessors has resulted in a well-understood program...
This paper describes a tool using one or more executions of a sequential progr...
This work presents the first thorough quantitative study of the available instruction-level parallel...
Data-parallel programming languages have many desirable features, such as single-thread s...
Dataflow architecture as a concept has been around since the 1970s for parallel computation. In data...
In this paper, we introduce a model for managing abstract data structures that map to arbitrary dist...
The demand for ever-growing computing capabilities in scientific computing and simulation has led to...
While the chip multiprocessor (CMP) has quickly become the predominant processor architecture, its c...
Data mining is the process of extracting useful information or patterns from large raw sets of data....
This paper presents a framework for characterizing the distribution of fine-grained parallelism, dat...