Abstract—This paper presents the design, implementation, and evaluation of BAG, a system that manages the GPU as a buffer cache in operating systems. Unlike previous uses of GPUs, which have focused on their computational capabilities, BAG explores a new dimension of managing GPUs in heterogeneous systems, where GPU memory is an exploitable but often-ignored resource. With carefully designed data structures and algorithms, such as a concurrent hash table and a log-structured data store for managing GPU memory, and highly parallel GPU kernels for garbage collection, BAG achieves good performance under various workloads. In addition, leveraging the existing abstractions of the operating system not only makes the imple...
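The abstract above pairs a hash-table index with a log-structured data store for GPU memory. A minimal host-side sketch of that combination (illustrative only; BAG's actual implementation runs these structures concurrently on the GPU, and all names here are hypothetical):

```python
# Hypothetical sketch: a log-structured store indexed by a hash table, in the
# spirit of the design described above. Writes always append to the log; the
# index points at the newest copy, and superseded copies become garbage for a
# collector to reclaim (in BAG, a highly parallel GPU kernel).

class LogStructuredStore:
    """Append-only data log plus a key -> (offset, length) index."""

    def __init__(self):
        self.log = bytearray()   # append-only log (stands in for GPU memory)
        self.index = {}          # hash-table index; concurrent on a real GPU

    def put(self, key, data: bytes):
        # Append the new copy and repoint the index; the old copy, if any,
        # is left in the log as garbage until collection reclaims it.
        offset = len(self.log)
        self.log.extend(data)
        self.index[key] = (offset, len(data))

    def get(self, key):
        if key not in self.index:
            return None          # cache miss: fall back to the backing store
        offset, length = self.index[key]
        return bytes(self.log[offset:offset + length])

store = LogStructuredStore()
store.put("block-42", b"cached page contents")
store.put("block-42", b"updated page contents")  # old copy becomes garbage
print(store.get("block-42"))                     # b'updated page contents'
```

The append-only discipline is what makes the layout GPU-friendly: writers never overwrite in place, so many threads can claim disjoint log regions with a single atomic offset bump.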
Massively parallel, throughput-oriented systems such as graphics processing units (GPUs) offer high ...
The usage of Graphics Processing Units (GPUs) as an application accelerator has become increasingly ...
Abstract—With the SIMT execution model, GPUs can hide memory latency through massive multithreading ...
Initially introduced as special-purpose accelerators for graphics applications...
Abstract—Cloud computing has become an emerging virtualization-based computing paradigm for various a...
The continued growth of the computational capability of throughput processors has made throughput...
GPGPUs are evolving from dedicated accelerators towards mainstream commodity computing res...
The graphics processing unit (GPU) is becoming a very powerful platform to accelerate graphics and d...
Graphics processing units (GPUs) have become a very powerful platform embracing a concept of heterog...
Integrated Heterogeneous System (IHS) processors pack throughput-oriented General-Purpose Graphics P...
Graphics processing units (GPUs) have become ubiquitous for general purpose applications due to thei...
Graphics Processing Units (GPUs) offer massive, highly efficient parallelism, making them an attrac...
Hardware caches are widely employed in GPGPUs to achieve higher performance and energy efficiency. I...
Abstract—Programmer-managed GPU memory is a major challenge in writing GPU applications. Programmers...