This paper presents the design, implementation, and evaluation of IO-Lite, a unified I/O buffering and caching system. IO-Lite unifies all buffering and caching in the system, to the extent permitted by the hardware. In particular, it allows applications, interprocess communication, the file system, the file cache, and the network subsystem to share a single physical copy of the data safely and concurrently. Protection and security are maintained through a combination of access control and read-only sharing. The various subsystems use (mutable) buffer aggregates to access the data according to their needs. IO-Lite eliminates all copying and multiple buffering of I/O data, and enables various cross-subsystem optimizations. Performance measurem...
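To make the "single physical copy, mutable buffer aggregates" idea concrete, here is a minimal illustrative sketch in C, not the IO-Lite sources: it assumes buffer aggregates are represented as ordered lists of <pointer, length> slices into immutable, reference-counted buffers, and all names (iol_buf, iol_agg, iol_agg_append) are hypothetical.

```c
/* Illustrative sketch only: shared immutable buffers + a mutable aggregate
 * of <pointer, length> slices. Names and layout are assumptions, not the
 * actual IO-Lite implementation. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* An immutable, reference-counted I/O buffer shared across subsystems. */
struct iol_buf {
    char  *data;
    size_t len;
    int    refcnt;
};

/* One slice of an aggregate: a <pointer, length> pair into an iol_buf. */
struct iol_slice {
    struct iol_buf *buf;
    size_t off;
    size_t len;
};

/* A mutable buffer aggregate: an ordered list of slices. Mutating the
 * aggregate (appending, reordering) never touches the underlying buffers. */
struct iol_agg {
    struct iol_slice *slices;
    size_t count, cap;
};

static struct iol_buf *iol_buf_new(const char *src, size_t len) {
    struct iol_buf *b = malloc(sizeof *b);
    b->data = malloc(len);
    memcpy(b->data, src, len);      /* the only copy: filling the buffer once */
    b->len = len;
    b->refcnt = 1;
    return b;
}

static void iol_agg_append(struct iol_agg *a, struct iol_buf *b,
                           size_t off, size_t len) {
    if (a->count == a->cap) {
        a->cap = a->cap ? a->cap * 2 : 4;
        a->slices = realloc(a->slices, a->cap * sizeof *a->slices);
    }
    b->refcnt++;                    /* share the buffer rather than copy it */
    a->slices[a->count++] = (struct iol_slice){ b, off, len };
}

int main(void) {
    /* Stand-ins for a file-cache buffer and a network-header buffer. */
    struct iol_buf *body = iol_buf_new("cached file contents", 20);
    struct iol_buf *hdr  = iol_buf_new("HTTP/1.0 200 OK\r\n\r\n", 19);

    /* An application-level response: header slice + body slice, no copies. */
    struct iol_agg resp = {0};
    iol_agg_append(&resp, hdr, 0, hdr->len);
    iol_agg_append(&resp, body, 0, body->len);

    for (size_t i = 0; i < resp.count; i++)
        fwrite(resp.slices[i].buf->data + resp.slices[i].off, 1,
               resp.slices[i].len, stdout);
    putchar('\n');
    return 0;
}
```

Under this reading, an application, the file cache, and the network stack can each hold an aggregate that references the same underlying buffer; because the payload buffers themselves are immutable and only the slice lists change, the data can be shared safely and concurrently without copying.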
For historical reasons, today's computer systems treat I/O devices as second-class citizens, supp...
Proceedings of the First PhD Symposium on Sustainable Ultrascale Computing Systems (NESUS PhD 2016) ...
While IO bandwidth continues to grow, processor speeds have stagnated. As such, the need to maximize th...
This paper presents the design, implementation, and evaluation of IO-Lite, a unified I/O buffering a...
This article presents the design, implementation, and evaluation of IO-Lite, a unified I/O bufferin...
This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/17...
Memory copy speed is known to be a significant barrier to high-speed communication. We perform an an...
Client-side file caching has long been recognized as a file system enhancement to reduce the amount ...
We present a novel taxonomy that characterizes in a structured way the software and hardware tradeof...
In parallel I/O systems the I/O buffer can be used to improve I/O parallelism by improving I/O laten...
Client-side file caching is one of many I/O strategies adopted by today’s parallel file systems that...
Massively parallel applications often require periodic data checkpointing for program resta...
Traditionally storage has not been part of a programming model’s semantics and is added only as an I...
This thesis comprises an in-depth investigation of issues related to high performance I/O archite...
The basic block I/O interface used for interacting with storage devices hasn't changed much in 30 y...