Higher-level parallel programming languages can be difficult to implement efficiently on parallel machines. This paper shows how a flexible, compiler-controlled memory system can help achieve good performance for language constructs that previously appeared too costly to be practical. Our compiler-controlled memory system is called Loosely Coherent Memory (LCM). It is an example of a larger class of Reconcilable Shared Memory (RSM) systems, which generalize the replication and merge policies of cache-coherent shared-memory. RSM protocols differ in the action taken by a processor in response to a request for a location and the way in which a processor reconciles multiple outstanding copies of a location. LCM memory becomes temporarily inconsisten...
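To make the request/reconcile distinction above concrete, here is a minimal C sketch (my own illustration; names such as reconcile_sum are hypothetical and the paper does not define this interface): each processor requests a copy of a location, updates it while memory is temporarily inconsistent, and a later reconcile step merges the outstanding copies under a user-chosen policy.

/*
 * Illustrative-only sketch of an RSM-style request/reconcile cycle.
 * The merge policy is user-supplied; here it accumulates each copy's delta.
 */
#include <stdio.h>

#define NCOPIES 3

/* Hypothetical reconcile policy: fold every copy's change into the home value. */
static int reconcile_sum(int original, const int *copies, int n) {
    int merged = original;
    for (int i = 0; i < n; i++) {
        merged += copies[i] - original;   /* add this copy's delta */
    }
    return merged;
}

int main(void) {
    int home = 10;                        /* "home" value of a shared location */
    int copy[NCOPIES];

    /* "Request": each processor obtains its own copy of the location. */
    for (int p = 0; p < NCOPIES; p++) {
        copy[p] = home;
    }

    /* Independent updates: memory is temporarily inconsistent. */
    copy[0] += 1;
    copy[1] += 2;
    copy[2] += 4;

    /* "Reconcile": merge the outstanding copies back into the home location. */
    home = reconcile_sum(home, copy, NCOPIES);
    printf("reconciled value = %d\n", home);   /* prints 17 */
    return 0;
}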
A memory consistency model for a language defines the order of memory operations performed by ea...
Manual memory management is error prone. Some of the errors it causes, in particular memory leaks an...
Many parallel languages presume a shared address space in which any portion of a computation can acc...
Distributed Shared Memory (DSM) has become an accepted abstraction for programming distributed syst...
In this article, we consider the semantic design and verified compilation of a...
Sequential consistency (SC) is the simplest programming interface for shared-memory systems but imp...
We present a new mechanism-oriented memory model called Commit-Reconcile & Fences (CRF) and defi...
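Since this abstract is cut off, the following single-location C sketch is only my reading of the commit/reconcile idea (the names storel, loadl, commit, and reconcile are illustrative, not the paper's formal definitions): stores stay in a local copy until a commit writes a dirty copy back to shared memory, and a reconcile discards a clean copy so the next load refetches from memory.

/*
 * Minimal single-location sketch of CRF-style commit/reconcile on a
 * local "semantic cache". Assumptions are mine; this is not the paper's
 * formal model.
 */
#include <stdbool.h>
#include <stdio.h>

static int  shared_mem = 0;      /* the shared memory location  */
static int  sache_val;           /* locally cached value        */
static bool sache_valid = false; /* do we hold a copy?          */
static bool sache_dirty = false; /* has the copy been modified? */

static void storel(int v) {      /* write into the local copy only */
    sache_val = v;
    sache_valid = true;
    sache_dirty = true;
}

static int loadl(void) {         /* read the local copy, fetching on a miss */
    if (!sache_valid) {
        sache_val = shared_mem;
        sache_valid = true;
        sache_dirty = false;
    }
    return sache_val;
}

static void commit(void) {       /* make a dirty copy visible in shared memory */
    if (sache_valid && sache_dirty) {
        shared_mem = sache_val;
        sache_dirty = false;
    }
}

static void reconcile(void) {    /* drop a clean copy so the next loadl refetches */
    if (sache_valid && !sache_dirty) {
        sache_valid = false;
    }
}

int main(void) {
    storel(42);                  /* write stays local until committed      */
    commit();                    /* now visible: shared_mem == 42          */
    shared_mem = 7;              /* simulate another processor's committed write */
    reconcile();                 /* invalidate the stale clean copy        */
    printf("loadl() = %d\n", loadl());  /* refetches 7 from shared memory  */
    return 0;
}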
Distributed Shared Memory (DSM) is becoming an accepted abstraction for programming distributed sy...
The most intuitive memory model for shared-memory multi-threaded programming is sequenti...
Many hardware and compiler optimisations introduced to speed up single-threaded programs also introd...
Coherent Parallel C (CPC) is an extension of C for parallelism. The extensions are not simply parall...
This paper discusses memory consistency models and their influence on software in the context of par...
Shared memory concurrency is the pervasive programming model for multicore architectures such as x8...
This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/16...