Locks are used in shared-memory parallel programs to achieve a variety of synchronization objectives. They may provide mutual exclusion for a critical section, or they may provide synchronized access to a task queue. In the former case, no ordering is implied between data operations outside of the critical sections; in the latter case, the operations preceding the enqueue of a task are ordered before the operations following a dequeue of that task. In this paper we argue that many uses of locks can be replaced by higher-level primitives that directly express the intended behavior, such as enqueue and dequeue operations on a task queue. This approach not only simplifies the programming model, but also allows a more efficie...
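A minimal sketch of the idea in this abstract (not code from the paper): the user program calls higher-level enqueue/dequeue operations instead of taking a lock explicitly, and the queue primitive itself carries the ordering guarantee that operations before the enqueue of a task happen before operations following its dequeue. Here Python's `queue.Queue` stands in for such a primitive.

```python
# Sketch: a higher-level task-queue primitive replacing explicit locks.
# put() is the enqueue and get() the dequeue; the user code contains no
# lock acquisitions of its own.
import queue
import threading

tasks = queue.Queue()
results = []

def worker():
    while True:
        task = tasks.get()        # dequeue; blocks until a task is available
        if task is None:          # sentinel value signals shutdown
            break
        results.append(task * 2)  # "process" the task
        tasks.task_done()

t = threading.Thread(target=worker)
t.start()
for i in range(3):
    tasks.put(i)                  # enqueue; ordering guarantee comes from the queue
tasks.put(None)                   # enqueue the shutdown sentinel
t.join()
print(sorted(results))            # [0, 2, 4]
```

The point of the transformation is that the runtime, knowing the primitive's semantics, can implement it with whatever synchronization is cheapest, rather than being forced to honor a general-purpose lock.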
Abstract: "An important class of concurrent objects are those that are lock-free, that is, whose ope...
Modern multi-core processors provide primitives to allow parallel programs to atomically perform sel...
The past few years have marked the start of a historic transition from sequential to parallel comput...
Efficient synchronization primitives are essential for achieving high performance in fine-grain, shared...
The only reason to parallelize a program is to gain performance. However, the synchronization primit...
On shared memory multiprocessors, synchronization often turns out to be a performance bottleneck and...
As parallel machines become part of the mainstream computing environment, compilers will need to app...
Locks are a frequently used synchronisation mechanism in shared memory concurrent programs. They are...
Mutual-exclusion locks are currently the most popular mechanism for interprocess synchronisation, la...
Busy-wait techniques are heavily used for mutual exclusion and barrier synchronization in shared-mem...
A concurrent data object is lock-free if it guarantees that at least one, among all concurrent opera...
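The lock-free progress property described above can be sketched as a retry loop around a compare-and-swap (a hedged illustration, not code from the cited work; `compare_and_swap` is a hypothetical helper, since Python exposes no user-level CAS, and the lock inside it merely models the atomicity that hardware provides in a single instruction). The key point: if several increments race, at least one CAS succeeds, so the object as a whole always makes progress even though individual threads may retry.

```python
# Sketch of a lock-free counter's retry loop. compare_and_swap models a
# hardware CAS instruction; the internal lock only simulates its atomicity.
import threading

_cas_guard = threading.Lock()     # stands in for hardware atomicity
value = [0]                       # one-element list acts as a mutable cell

def compare_and_swap(cell, expected, new):
    with _cas_guard:              # hardware would do this in one instruction
        if cell[0] == expected:
            cell[0] = new
            return True
        return False

def lock_free_increment(cell):
    while True:                   # retry until our CAS wins the race
        old = cell[0]
        if compare_and_swap(cell, old, old + 1):
            return

threads = [threading.Thread(target=lock_free_increment, args=(value,))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(value[0])                   # 8
```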
The advent of chip multi-processors has led to an increase in computational performance in recent ye...
Abstract. Synchronization in parallel programs is a major performance bottleneck. Shared data is pro...
Large-scale shared-memory multiprocessors typically have long latencies for remote data accesses. A...
Abstract. We present a code transformation for concurrent data structures, which increases their sc...