The role of a vectorising compiler for an imperative language is to transform the for-loops of a program into the vector instructions of a data-parallel machine. In a functional language, a constant-complexity map is the essence of data-parallelism: a function is applied to every element of a data structure at the same time. Since map can be regarded as an abstraction of an imperative for-loop, the goal of vectorising a functional language is to transform map expressions into vector operations. This paper presents the vectorisation process as a series of transformations on programs expressed in an extended λ-calculus. Of particular interest is the way in which algebraic data types are transformed into a form that is susceptible to the vectorisation transformation.
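As a minimal illustration of the correspondence the abstract describes (a sketch in Haskell using the Data.Vector library, not the paper's extended λ-calculus; the function names are hypothetical), the same computation can be written as a map over an array or as an explicit indexed loop. Because the per-element applications are independent, the map form can be executed as a single data-parallel vector operation:

    -- Illustrative sketch only: shows map as an abstraction of a for-loop,
    -- not the paper's formalism.
    import qualified Data.Vector as V

    -- Map form: apply the function to every element of the structure.
    scaleAll :: V.Vector Double -> V.Vector Double
    scaleAll xs = V.map (* 2.0) xs

    -- The indexed loop that map abstracts: visit each position in turn.
    scaleAllLoop :: V.Vector Double -> V.Vector Double
    scaleAllLoop xs = V.generate (V.length xs) (\i -> xs V.! i * 2.0)

    -- Both definitions compute the same result.
    main :: IO ()
    main = print (scaleAll (V.fromList [1, 2, 3]) == scaleAllLoop (V.fromList [1, 2, 3]))

The point of the map form is that it makes the independence of the element-wise applications explicit, which is precisely the property a vectorising compiler needs in order to emit vector instructions.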