The most widely available high performance platforms today are hierarchical, with shared memory leaves, e.g. clusters of multi-cores, or NUMA with multiple regions. The Glasgow Haskell Compiler (GHC) provides a number of parallel Haskell implementations targeting different parallel architectures. In particular, GHC-SMP supports shared memory architectures, and GHC-GUM supports distributed memory machines. Both implementations use different, but related, runtime system (RTS) mechanisms and achieve good performance. A specialised RTS for the ubiquitous hierarchical architectures is lacking. This thesis presents the design, implementation, and evaluation of a new parallel Haskell RTS, GUMSMP, that combines shared and distributed memor...
As the number of cores increases, Non-Uniform Memory Access (NUMA) is becoming increasingly prevalent...
In principle, pure functional languages promise straightforward architecture-independent parallelism...
The statelessness of functional computations facilitates both parallelism and fault recovery. Faults...
General purpose computing architectures are evolving quickly to become manycore and hierarchical: i...
Over time, several competing approaches to parallel Haskell programming have emerged. Different appr...
In this paper, we investigate the differences and tradeoffs imposed by two parallel Haskell dialects...
GUM is a portable, parallel implementation of the Haskell functional language which has been publicl...
If you want to program a parallel computer, a purely functional language like Haskell is a promising...
GUM is a portable, parallel implementation of the Haskell functional language. Despite sustained res...
Computational GRIDs potentially offer low-cost, readily available, and large-scale high-performance ...
With the emergence of commodity multicore architectures, exploiting tightly-coupled paralle...
Non-uniform memory access (NUMA) architectures are modern shared-memory, multi-core machines offerin...
Conventional parallel programming is complex and error-prone. To improve programmer productivity, w...
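Several of the abstracts above concern GpH-style semi-explicit parallelism, in which lightweight annotations mark subexpressions that may be evaluated in parallel ("sparks") while the runtime system decides actual scheduling. A minimal sketch of this idea, using `par` and `pseq` as exported from GHC's `GHC.Conc` module (an illustrative example, not code from any of the cited systems):

```haskell
import GHC.Conc (par, pseq)

-- Naive Fibonacci-style recursion with a parallel annotation:
-- `x `par` ...` sparks x for possible parallel evaluation, while
-- `y `pseq` (x + y)` forces y in the current thread before summing.
nfib :: Int -> Int
nfib n
  | n < 2     = 1
  | otherwise = x `par` (y `pseq` (x + y))
  where
    x = nfib (n - 1)
    y = nfib (n - 2)

main :: IO ()
main = print (nfib 20)  -- prints 10946
```

Compiled with `-threaded` and run with `+RTS -N`, the sparks created by `par` can be picked up by idle worker threads; without those flags the program still runs correctly, just sequentially, which is the architecture-independence the abstracts emphasise.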