General purpose computing architectures are evolving quickly to become manycore and hierarchical: i.e. a core can communicate more quickly locally than globally. To be effective on such architectures, programming models must be aware of the communication hierarchy. This thesis investigates a programming model that aims to share the responsibility of task placement, load balancing, thread creation, and synchronisation between the application developer and the runtime system. The main contribution of this thesis is the development of four new architecture-aware constructs for Glasgow parallel Haskell that exploit information about task size and aim to reduce communication for small tasks, preserve data locality, or to distribute large...
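The four architecture-aware constructs themselves are cut off in the abstract above, but the baseline Glasgow parallel Haskell (GpH) sparking idiom they build on can be sketched as follows. This is a minimal sketch using `par` and `pseq` as exported from GHC's base library; `nfib` is a standard illustrative benchmark, not one of the thesis's new constructs:

```haskell
import GHC.Conc (par, pseq)

-- Classic GpH-style divide-and-conquer: spark the first recursive
-- call with `par` so the runtime may evaluate it on another core,
-- force the second with `pseq`, then combine the results.
-- Architecture-aware variants would additionally decide where
-- (and whether) to spark, based on task size and data locality.
nfib :: Int -> Int
nfib n
  | n < 2     = 1
  | otherwise = x `par` (y `pseq` x + y)
  where
    x = nfib (n - 1)
    y = nfib (n - 2)

main :: IO ()
main = print (nfib 20)
```

Because `par` only advises the runtime to create a spark, the program's result is identical with or without parallel evaluation; only the schedule changes.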
We propose a refactoring tool for the Haskell programming language, capable of introducing paralleli...
As the number of cores in manycore systems grows exponentially, the number of failures is also pred...
High performance architectures are increasingly heterogeneous with shared and distributed memory co...
With the emergence of commodity multicore architectures, exploiting tightly-coupled paralle...
The most widely available high performance platforms today are hierarchical, with shared memory lea...
In this paper, we investigate the differences and tradeoffs imposed by two parallel Haskell dialects...
Over time, several competing approaches to parallel Haskell programming have emerged. Different appr...
Conventional parallel programming is complex and error prone. To improve programmer productivity, w...
We investigate the claim that functional languages offer low-cost parallelism in the context of symb...
If you want to program a parallel computer, a purely functional language like Haskell is a promising...
In principle, pure functional languages promise straightforward architecture-independent parallelism...
Intel Concurrent Collections (CnC) is a parallel programming model in which a network of steps (func...
The statelessness of functional computations facilitates both parallelism and fault recovery. Faults...
Computational GRIDs potentially offer low-cost, readily available, and large-scale high-performance ...