Abstract. Large-scale computation on graphs and other discrete structures is becoming increasingly important in many applications, including computational biology, web search, and knowledge discovery. High-performance combinatorial computing is an infant field, in sharp contrast with numerical scientific computing. We argue that many of the tools of high-performance numerical computing (in particular, parallel algorithms and data structures for computation with sparse matrices) can form the nucleus of a robust infrastructure for parallel computing on graphs. We demonstrate this with an implementation of a graph analysis benchmark using the sparse matrix infrastructure in Star-P, our parallel dialect of the Matlab programming langua...
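The graph/sparse-matrix duality this abstract builds on can be illustrated with a minimal sketch (not the Star-P implementation): one breadth-first-search frontier expansion is a sparse matrix-vector product over the Boolean semiring. The `bfs_step` name and dict-based adjacency structure below are illustrative assumptions, not from the paper.

```python
# Minimal sketch of the graph / sparse-matrix duality: expanding a BFS
# frontier is y = A^T x over the Boolean (or, and) semiring, where the
# frontier is a sparse 0/1 vector and A is the sparse adjacency matrix.
# Illustrative Python, not the Star-P code described in the abstract.

def bfs_step(adj, frontier):
    """adj: dict mapping vertex -> list of out-neighbors (a sparse
    adjacency structure); frontier: set of current-level vertices.
    Returns the set of vertices reachable in exactly one more hop."""
    nxt = set()
    for u in frontier:            # nonzeros of the frontier vector
        for v in adj.get(u, []):  # nonzeros of column u of A^T
            nxt.add(v)            # Boolean "or" accumulation
    return nxt

# Tiny directed graph: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 3
adj = {0: [1, 2], 1: [2], 2: [3]}
level1 = bfs_step(adj, {0})       # {1, 2}
level2 = bfs_step(adj, level1)    # {2, 3}
```

Running the full BFS is then just iterating this product while masking out already-visited vertices, which is why a sparse-matrix infrastructure can serve as a graph-computation infrastructure.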
Scaling up the sparse matrix-vector multiplication kernel on modern Graphics Processing Units (GPU) ...
How do we develop programs that are easy to express, easy to reason about, and able to achieve high ...
Vector computers have been used extensively for years in matrix algebra to work with large dense ma...
This dissertation advances the state of the art for scalable high-performance graph analytics and da...
Sparse matrices are first class objects in many VHLLs (very high level languages) used for scientifi...
Algorithms are often parallelized based on data dependence analysis manually or by means of parallel...
Abstract. Generalized sparse matrix-matrix multiplication (or SpGEMM) is a key primitive for many hi...
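"Generalized" SpGEMM means the scalar add and multiply are replaced by a user-supplied semiring, which is what lets one matrix-matrix product kernel serve many graph algorithms. The sketch below shows the idea with a Gustavson-style row-by-row formation; the function name, the dict-of-dicts storage, and the `add`/`mul` parameters are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of generalized SpGEMM: C = A (+).(x) B over a
# user-supplied semiring, with matrices stored as {row: {col: val}}.
# Row i of C is formed by accumulating scaled rows of B (Gustavson).

def spgemm(A, B, add=lambda a, b: a + b, mul=lambda a, b: a * b):
    """Return C = A * B under the (add, mul) semiring.
    A, B: sparse matrices as {row: {col: val}} dicts."""
    C = {}
    for i, Arow in A.items():
        acc = {}                                   # sparse accumulator for row i
        for k, a_ik in Arow.items():               # nonzeros in row i of A
            for j, b_kj in B.get(k, {}).items():   # nonzeros in row k of B
                if j in acc:
                    acc[j] = add(acc[j], mul(a_ik, b_kj))
                else:
                    acc[j] = mul(a_ik, b_kj)
        if acc:
            C[i] = acc
    return C

A = {0: {0: 1.0, 1: 2.0}, 1: {1: 3.0}}
B = {0: {1: 4.0}, 1: {0: 5.0}}
C = spgemm(A, B)  # {0: {1: 4.0, 0: 10.0}, 1: {0: 15.0}}
```

Swapping in `add=min, mul=lambda a, b: a + b` turns the same kernel into a min-plus (shortest-path) product, which is the kind of reuse these abstracts refer to.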
A notable characteristic of the scientific computing and machine learning problem domains is the la...
Efficiently processing large graphs is challenging, since parallel graph algorithms suffer from poor...
There has been significant recent interest in parallel graph processing due to the need to quickly a...
Abstract. Sparse matrix-matrix multiplication (or SpGEMM) is a key primitive for many high-performan...
Sparse matrix-vector multiplication is the kernel for many scientific computations. Parallelizing th...
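The sparse matrix-vector multiply kernel these abstracts discuss can be sketched in a few lines for the common CSR (compressed sparse row) layout; each row's dot product is independent, which is why row-wise partitioning is the natural parallelization. This is a serial reference sketch under the standard CSR conventions, not any specific paper's kernel.

```python
# Sketch of y = A @ x with A in CSR form. Row i's nonzeros live at
# positions indptr[i]:indptr[i+1] of indices (column ids) and data
# (values). The outer loop carries no dependence across rows, so rows
# can be distributed across threads, vector lanes, or GPU warps.

def spmv_csr(indptr, indices, data, x):
    """Serial reference CSR sparse matrix-vector product."""
    n = len(indptr) - 1
    y = [0.0] * n
    for i in range(n):                    # independent per-row work
        s = 0.0
        for k in range(indptr[i], indptr[i + 1]):
            s += data[k] * x[indices[k]]  # gather from x via column ids
        y[i] = s
    return y

# 3x3 matrix [[2, 0, 1], [0, 3, 0], [4, 0, 0]]
indptr  = [0, 2, 3, 4]
indices = [0, 2, 1, 0]
data    = [2.0, 1.0, 3.0, 4.0]
y = spmv_csr(indptr, indices, data, [1.0, 1.0, 1.0])  # [3.0, 3.0, 4.0]
```

The irregular `x[indices[k]]` gather is the performance pain point: it defeats caching and coalescing, which is the load-balancing and memory-access challenge the GPU-oriented abstracts above are addressing.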
Combinatorial algorithms have long played a pivotal enabling role in many applications of parallel co...
Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Comp...