A variety of historically proven computer languages have recently been extended to support parallel computation in a data-parallel framework. The performance capabilities of modern microprocessors have made the "cluster-of-workstations" model of parallel computing more attractive by permitting organizations to network workstations together to solve problems in concert, without the need to buy specialized and expensive supercomputers or mainframes. For the most part, research on these extended languages has focused on compile-time analyses that detect data dependencies and use user-provided hints to distribute data and encode the necessary communication operations between nodes in a multiprocessor system. These analyses have shown their va...
Multicomputers (distributed-memory MIMD machines) have emerged as inexpensive, yet powerful parallel...
Distributed-memory multicomputers, such as the Intel iPSC/860, the Intel Paragon, the IBM SP-1/SP-2...
Performing run-time parallelization on general networks of workstations (NOWs) without s...
Data parallel languages are gaining interest as it becomes clear that they support a wider range of ...
Data-parallel languages allow programmers to use the familiar machine-independent programming style ...
This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/19...
Widespread adoption of parallel computing depends on the availability of improved software environme...
We describe the compilation and execution of data-parallel languages for networks of workstations. E...
In recent years, distributed memory parallel machines have been widely recognized as the most likely...
Advances in computing and networking infrastructure have enabled an increasing number of application...
In this paper, we describe experiments comparing the communication times for a number of different...
Significant progress has been made in the development of programming languages and tools that are su...
Efficiently using multicore architectures demands an increasing degree of fluency in parallel progra...
Traditionally, languages were created and intended for sequential machines and were, naturally, sequ...
We present an overview of research at the Center for Research on Parallel Computation designed to pr...