Abstract—We present a distributed transactional memory system that exploits a new opportunity to automatically hide network latency by speculatively prefetching and caching objects. The system includes an object caching framework, language extensions to support our approach, and symbolic prefetches. To our knowledge, this is the first prefetching approach that can prefetch objects whose addresses have not been computed or predicted. Our approach makes aggressive use of both prefetching and caching of remote objects to hide network latency while relying on the transaction commit mechanism to preserve the simple transactional consistency model that we present to the developer. We have evaluated this approach on three distributed benchmarks, f...
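The distinguishing idea in this abstract is prefetching objects whose addresses have not yet been computed: the client names an object by a symbolic path of field accesses rather than by address, and the server resolves the path. A minimal sketch of that idea follows; the names (`RemoteObject`, `serve_symbolic_prefetch`) are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of a symbolic prefetch: instead of a concrete
# address, the client sends a *path* of field accesses (e.g.
# "head.next.next") and the server traverses it, shipping back every
# object on the path in a single round trip.

class RemoteObject:
    """A server-side object whose named fields refer to other objects."""
    def __init__(self, oid, **fields):
        self.oid = oid
        self.fields = fields

def serve_symbolic_prefetch(root, path):
    """Resolve a symbolic path on the server, returning every object
    encountered so the client can cache the whole chain at once."""
    shipped = [root]
    obj = root
    for field in path:
        obj = obj.fields.get(field)
        if obj is None:  # the path runs off the end of the structure
            break
        shipped.append(obj)
    return shipped

# A three-node linked list: the client can request "next.next" before
# it has computed the address of any node beyond the head.
c = RemoteObject(3)
b = RemoteObject(2, next=c)
a = RemoteObject(1, next=b)
objs = serve_symbolic_prefetch(a, ["next", "next"])
print([o.oid for o in objs])  # [1, 2, 3]
```

Because the response is speculative, a transactional commit check (as the abstract describes) would still validate the cached copies before the transaction commits.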
Abstract—Prefetch engines working on distributed memory systems behave independently by analyzing the...
Grantor: University of Toronto. A key obstacle to achieving high performance on software dis...
Proxy prefetch caching aims to reduce the latency in serving web requests by prefetching...
Developing efficient distributed applications while ...
We present a static analysis for the automatic generation of symbolic prefetches in a transactional...
We have developed a transaction-based approach to distributed shared memory (DSM) that supports objec...
Memory access latency is the primary performance bottleneck in modern computer systems. Prefetching...
In this paper, we examine the way in which prefetching can exploit parallelism. Prefetching has been st...
This paper presents cooperative prefetching and caching — the use of network-wide global resources (...
Abstract—In this paper, we present an informed prefetching technique called IPODS that makes use of ...
This paper presents our studies on the connectivity between objects and traversal behavior over the ...
Recent advances in integrating logic and DRAM on the same chip potentially open up new avenues for a...
As an innovative distributed computing technique for sharing the memory resources in high-speed netw...
Although shared memory programming models show good programmability compared to message passing prog...