In a previous report the design concepts of Charon were presented. Charon is a toolkit that aids engineers in developing scientific programs for structured-grid applications to be run on MIMD parallel computers. It constitutes an augmentation of the general-purpose MPI-based message-passing layer, and provides the user with a hierarchy of tools for rapid prototyping and validation of parallel programs, and subsequent piecemeal performance tuning. Here we describe the implementation of the domain decomposition tools used for creating data distributions across sets of processors. We also present the hierarchy of parallelization tools that allows smooth translation of legacy code (or a serial design) into a parallel program. Along with the act...
In this section we discuss the approach of using unified block structures as the basis for paralleli...
A programming paradigm is a method for structuring programs in order to reduce the complexity of the...
We present an overview of research at the Center for Research on Parallel Computation designed to pr...
Message passing is among the most popular techniques for parallelizing scientific programs on distri...
In this paper we present a simple language for expressing divide and conquer computations. The langu...
The chare kernel is a runtime support system for executing parallel programs. It is responsible for ...
Three paradigms for distributed-memory parallel computation that free the application programmer fro...
Scientific and engineering applications often involve structured meshes. These meshes may be nested ...
We propose a parallel specialized language that ensures portable and cost-predictable implementation...
After at least a decade of parallel tool development, parallelization of scientific applications rem...
Many programming models for massively parallel machines exist, and each has its advantages and disad...
Over the past few decades, scientific research has grown to rely increasingly on simulation and othe...
Parallelising serial software systems presents many challenges. In particular, the tas...
High Performance Fortran (HPF) has emerged as a standard dialect of Fortran for data-parallel comput...
Writing applications for high performance computers is a challenging task. Although writing code by ...