Modern parallel codes are often written as a collection of several diverse modules. Different programming languages might be the best or most natural fit for each of these modules, or for different libraries used together in an application. For such applications, the restriction of implementing the entire application in a single parallel language may negatively impact both the application's performance and the programmer's productivity. This paper studies interoperation among parallel languages that differ with respect to the driver of program execution. We describe the challenges in enabling interoperation between user-driven and system-driven languages, and present techniques for managing important attributes of a program, such as the control ...
The programming of parallel and distributed applications is difficult. The proliferation of networ...
Widespread adoption of parallel computing depends on the availability of improved software environme...
Applications are increasingly being executed on computational systems that have hierarchical paralle...
Abstract—MPI and Charm++ embody two distinct perspectives for writing parallel programs. While MPI ...
We discuss an object-based, multi-paradigm approach to the development of large-scale, high performa...
Recent developments in supercomputing have brought us massively parallel machines. With the number o...
Interoperability of programming languages is the ability for two or more languages to interact as pa...
The inevitable transition to parallel programming can be facilitated by appropriate tools, including...
Due to power constraints, future growth in computing capability must explicitly leverage parallelism...
This paper describes a framework for providing the ability to use multiple specialized data parallel...
Advances in computing and networking infrastructure have enabled an increasing number of application...
This paper describes a framework for providing the ability to use multiple specialized data parallel...
Abstract—A range of tools, from parallel debuggers to performance analysis/visualization to simulat...
The data-parallel language High Performance Fortran (HPF) does not allow efficient expression of mix...