In many existing and planned parallel machines, memory cannot be considered as a single homogeneous resource. Instead, each processor has a "local" section of memory which is more accessible than others. Because of this ease of access, it is necessary to distribute the data across the system so that most references are made to local data. In this paper, we give a mathematical description of data distribution in parallel machines. We then show its application to strip mining, a common transformation for converting sequential programs to run on parallel hardware. Strip mining using data distribution information enhances the locality of reference in the resulting program, thus speeding performance.
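To make the idea concrete, the sketch below shows strip mining of a simple loop so that strip boundaries coincide with the boundaries of a block data distribution; it is a minimal illustration, not the paper's formulation. The array size N, processor count P, block size B, and the arrays a and b are all illustrative assumptions.

```c
/* Minimal sketch (assumed example, not the paper's formulation):
 * strip-mine a loop so each strip covers exactly the block of the
 * arrays owned by one processor under a block distribution.
 * Assumed layout: N elements over P processors, processor p owning
 * indices [p*B, min((p+1)*B, N)) with B = ceil(N/P).
 */
#include <stdio.h>

#define N 1000
#define P 4                      /* assumed number of processors */

double a[N], b[N];

int main(void) {
    int B = (N + P - 1) / P;     /* block size of the assumed distribution */

    /* Original loop:  for (i = 0; i < N; i++) a[i] = 2.0 * b[i];
     * Strip-mined so the outer loop enumerates owning processors and
     * the inner loop touches only that processor's local block,
     * keeping references local under the assumed distribution. */
    for (int p = 0; p < P; p++) {                    /* one strip per owner */
        int lo = p * B;
        int hi = (p + 1) * B < N ? (p + 1) * B : N;  /* clamp last strip */
        for (int i = lo; i < hi; i++)                /* local references only */
            a[i] = 2.0 * b[i];
    }

    printf("a[0] = %g, a[N-1] = %g\n", a[0], a[N - 1]);
    return 0;
}
```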