The ability to write programs that execute efficiently on modern parallel computers has not been fully studied. In a DARPA-sponsored project, we are looking at measuring the development time for programs written for high performance computers (HPC). To attack this relatively novel measurement problem, our goal is to initially measure such development time in student programming to evaluate our own experimental protocols. Based on these results, we will generate a set of feasible experimental methods that can then be applied with more confidence to professional expert programmers. This paper describes a first pilot study addressing those goals. We ran an observational study with 15 students in a graduate level High Performance Computing clas...
Substantial time is spent on building, optimizing and maintaining large-scale software that is run o...
We describe a successful approach to designing and implementing a High Performance Computing (HPC) c...
We performed semistructured, open-ended interviews with 11 professional developers of parallel, scie...
In developing High-Performance Computing (HPC) software, time to solution is an important metric. Th...
In order to understand how high performance computing (HPC) programs are developed, a serie...
In the high performance computing domain, the speed of execution of a program has typically been the...
Over the past three years we have been developing a methodology for running HPC experiments in a cla...
In this thesis, we quantitatively study the effect of High Performance Computing (HPC) novice progra...
There is widespread belief in the computer science community that MPI is a difficult and ...
We evaluate the claim that a PRAM-like parallel programming model (XMTC) requires less effort than a...
One key to improving high performance computing (HPC) productivity is to find better ways to measure...