The assignment of resources or tasks to processors in a distributed or parallel system must be done in a way that balances load and scales to large configurations. In an architectural model that distinguishes between local and remote data access, it is important to base these allocation functions on a mechanism that preserves locality and avoids high-latency remote references. This paper explores performance considerations affecting the design of such a mechanism, the Concurrent Pools data structure. We evaluate the effectiveness of three different implementations of concurrent pools under a variety of stressful workloads. Our experiments expose several interesting effects with strong implications for practical concurrent pool design.
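To make the data structure concrete, the following is a minimal sketch (in Java) of a locality-preserving concurrent pool, assuming a simple two-level design: each processor inserts into and removes from its own partition, and falls back to stealing from remote partitions only when the local one is empty. The class and method names (ConcurrentPool, put, get) are illustrative assumptions, not the three implementations evaluated in the paper.

    import java.util.concurrent.ConcurrentLinkedDeque;
    import java.util.concurrent.ThreadLocalRandom;

    // Sketch of a locality-preserving concurrent pool: one partition per
    // processor, with remote partitions consulted only as a fallback.
    // Illustrative only; not the paper's implementations.
    public final class ConcurrentPool<T> {
        private final ConcurrentLinkedDeque<T>[] perProcessor;

        @SuppressWarnings("unchecked")
        public ConcurrentPool(int processors) {
            perProcessor = new ConcurrentLinkedDeque[processors];
            for (int i = 0; i < processors; i++) {
                perProcessor[i] = new ConcurrentLinkedDeque<>();
            }
        }

        // Insert into the caller's local partition so later accesses stay local.
        public void put(int myId, T item) {
            perProcessor[myId].addLast(item);
        }

        // Prefer the local partition (low-latency path); otherwise scan remote
        // partitions from a random starting point (a simple steal).
        public T get(int myId) {
            T item = perProcessor[myId].pollLast();
            if (item != null) return item;
            int n = perProcessor.length;
            int start = ThreadLocalRandom.current().nextInt(n);
            for (int i = 0; i < n; i++) {
                item = perProcessor[(start + i) % n].pollFirst();
                if (item != null) return item;
            }
            return null; // pool is (momentarily) empty
        }
    }

In this sketch, a get that succeeds locally never touches remote memory, which is exactly the locality property the allocation mechanism is meant to preserve; the randomized starting point for stealing spreads the unavoidable remote references across partitions rather than concentrating them on a single processor.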
In this thesis, we examine an important issue in the execution of parallel programs on multicomputer...
In any system in which concurrent processes share resources, mutual exclusion refers to the problem ...
The multicore revolution means that programmers have many cores at their disposal in everything from...
One common cause of poor performance in large-scale shared-memory multiprocessors is limited memory ...
The major impediment to scaling concurrent data structures is memory contention when accessing share...
Running programs across multiple nodes in a cluster of networked computers, such as in a supercomput...
Distributed shared-memory systems provide scalable performance and a convenient model for parallel p...
This paper studies the locality analysis problem for shared-memory multiprocessors, a class of para...
In this report, we propose new concurrent data structures and load balancing strategies for Branch-a...
We define a set of overhead functions that capture the salient artifacts representing the interact...
A concurrent system is a collection of processors that communicate by reading and writing from a sha...
It is often assumed that computational load balance cannot be achieved in parallel and distributed s...
We examine the task of concurrently computing alternative solutions to a problem. We restrict our in...
We present a new model for distributed shared memory systems, based on remote ...