Abstract of the Dissertation: A Compiler-Directed Distributed Shared Memory System, by Manish Verma, Doctor of Philosophy in Computer Science, State University of New York at Stony Brook, 1996. This dissertation presents Locust, a compiler-directed distributed shared memory system for parallel computing in a network of workstations (NOW) environment that is shared with other routine applications. The Locust project focuses on scientific and engineering numerical applications with regular computation structures. The main drawback of a commodity NOW platform is the high cost of communication in this environment. Locust strives to overcome this handicap by reducing both the total amount of communication and the overhead associated with every indivi...
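The case for reducing per-message overhead on a NOW can be made concrete with the standard latency-bandwidth (alpha-beta) cost model: sending n small messages costs n*(alpha + beta*s), while coalescing them into one message costs alpha + beta*n*s, so fixed startup cost is paid once instead of n times. A minimal sketch of this arithmetic — the constants below are purely illustrative, not measurements from Locust or any particular network:

```python
def transfer_time(num_messages, bytes_per_message, alpha, beta):
    """Alpha-beta model: each message pays a fixed startup cost alpha
    (seconds) plus beta (seconds/byte) per byte of payload."""
    return num_messages * (alpha + beta * bytes_per_message)

# Hypothetical constants: 1 ms startup per message,
# 10 ns per byte (roughly a 100 MB/s commodity network).
alpha = 1e-3
beta = 1e-8

# 1000 separate 8-byte sends vs. one coalesced 8000-byte send.
scattered = transfer_time(1000, 8, alpha, beta)   # dominated by 1000 startups
coalesced = transfer_time(1, 8000, alpha, beta)   # one startup, same payload

print(scattered, coalesced)
```

Under these assumed constants the scattered sends are several hundred times slower than the coalesced one, which is why compiler-directed aggregation of communication pays off most on commodity networks with high per-message latency.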
Data-parallel languages allow programmers to use the familiar machine-independent programming style ...
Reducing memory latency is critical to the performance of large-scale parallel systems. Due to the t...
Distributed systems receive much attention because parallelism and scalability are achieved with rel...
Many scientific applications are iterative and specify repetitive communication patterns. This paper...
Multiprocessors with shared memory are considered more general and easier to program than message-pa...
This was a two-page overview of my NSF-funded project Supercomputing on a Cluster of Workstations v...
Thesis (Ph.D.), University of Washington, 1997. Two recent trends are affecting the design of medium-...
We are developing Munin, a system that allows programs written for shared memory multiprocessors t...
In this thesis we propose and evaluate an architecture to build large scale distributed shared memor...
We are developing Munin, a system that allows programs written for shared memory multiprocessors to ...
137 p. Thesis (Ph.D.), University of Illinois at Urbana-Champaign, 1997. These techniques have been im...
In this thesis, we explore the use of software distributed shared memory (SDSM) as a target communic...
This paper presents a new approach towards solving the combination and communication problems betwee...
This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/19...
The increasing number of cores in manycore architectures causes important power and scalability prob...