We consider cache replacement algorithms at a shared cache in a multicore system that receives an arbitrary interleaving of requests from processes with full knowledge of their individual request sequences. We establish tight bounds on the competitive ratio of deterministic and randomized cache replacement strategies when processes share memory blocks. Our main result for this case is a deterministic algorithm called GLOBAL-MAXIMA, which is optimal up to a constant factor. Our framework is a generalization of the application-controlled caching framework, in which processes access disjoint sets of memory blocks. We also present a deterministic algorithm called RR-PROC-MARK which exactly matches t...
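As background for the marking-style policies named in the abstract above, the following is a minimal sketch of a generic marking cache replacement algorithm. It is not the paper's RR-PROC-MARK; the phase structure follows the standard marking scheme, and the choice of evicting a uniformly random unmarked block is an illustrative assumption.

```python
# Hypothetical sketch of a generic marking-style cache replacement policy.
# NOT the paper's RR-PROC-MARK; evicting a random unmarked block is an
# illustrative assumption.
import random


class MarkingCache:
    def __init__(self, k):
        self.k = k          # cache capacity in blocks
        self.cache = set()  # resident blocks
        self.marked = set() # blocks marked in the current phase

    def access(self, block):
        """Serve one request; return True on a hit, False on a miss."""
        if block in self.cache:
            self.marked.add(block)
            return True
        # Miss: if the cache is full and every resident block is marked,
        # the current phase ends and all marks are cleared.
        if len(self.cache) == self.k and len(self.marked) == self.k:
            self.marked.clear()
        if len(self.cache) == self.k:
            victim = random.choice(list(self.cache - self.marked))
            self.cache.remove(victim)
        self.cache.add(block)
        self.marked.add(block)
        return False


if __name__ == "__main__":
    cache = MarkingCache(k=3)
    requests = ["a", "b", "c", "a", "d", "b", "a"]
    hits = sum(cache.access(r) for r in requests)
    print(f"{hits} hits out of {len(requests)} requests")
```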
On multicore processors, applications are run sharing the cache. This paper presents online optimiza...
In a large-scale information system such as a digital library or the web, a set of distributed cach...
Modern processors use high-performance cache replacement policies that outperform traditional altern...
We propose a provably efficient application-controlled global strategy for organizing a cache of siz...
We pro...
Reordering instructions and data layout can bring significant performance improvement for memory bou...
Multi-core x86_64 processors introduced an important change in architecture, a shared last level cac...
Reconsider the competitiveness of on-line strategies using k servers versus the optimal off-line stra...
Memory efficiency and locality have substantial impact on the performance of programs, particularly ...
When a cache is shared by multiple cores, its space may be allocated either by sharing, partitioning...
An optimal replacement policy that minimizes the miss rate in a private cache was proposed several d...
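The optimal private-cache policy referenced above is commonly attributed to Belady (MIN/OPT): on a miss with a full cache, evict the resident block whose next use lies farthest in the future. Below is a minimal illustrative sketch; the fully associative, unit-size-block model and the simple forward scan for the next use are assumptions, not the cited paper's formulation.

```python
# Minimal sketch of the offline optimal (Belady/MIN) replacement policy.
# Illustrative assumptions: fully associative cache, unit-size blocks,
# full knowledge of the future request sequence.
def min_misses(requests, k):
    cache = set()
    misses = 0
    for i, block in enumerate(requests):
        if block in cache:
            continue
        misses += 1
        if len(cache) == k:
            # Evict the resident block reused farthest in the future
            # (a block never used again is the best victim).
            def next_use(b):
                for j in range(i + 1, len(requests)):
                    if requests[j] == b:
                        return j
                return float("inf")
            victim = max(cache, key=next_use)
            cache.remove(victim)
        cache.add(block)
    return misses


if __name__ == "__main__":
    trace = ["a", "b", "c", "d", "a", "b", "e", "a", "b", "c", "d", "e"]
    print(min_misses(trace, k=3))  # minimum achievable misses for this trace
```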
Effective sharing of the last level cache has a significant influence on the overall performance of ...
The cache interference is found to play a critical role in optimizing cache allocation among concurr...
Multicore processors have become ubiquitous, both in general-purpose and special-purpose application...
Thesis (Ph.D.)--University of Rochester, Department of Computer Science, 2018. Advancements in comput...