Thesis (Ph.D.), Department of Mathematics, Washington State University. A new class of methods for accelerating linear system solving and eigenvalue computations for positive definite matrices using GPUs is presented. These methods use techniques from polynomial approximation theory to construct new types of polynomial spectral transformations that are easy to parallelize and, when combined with GPUs, can reduce run times by a factor of 100 for certain matrices. These methods also require significantly less memory than traditional methods, making it possible to solve large problems on an average workstation.
Many eigenvalue and eigenvector algorithms begin by reducing the input matrix to a tridiagonal ...
Simulations are indispensable for engineering. They make it possible to perform fa...
Computing a matrix polynomial is the basic process in the calculation of functions of matrices by th...
The article describes matrix algebra libraries based on modern technologies of parallel prog...
Modern GPUs are well suited for intensive computational tasks and massively parallel computation. ...
© 2019, Pleiades Publishing, Ltd. The practical applicability of many statistical algorithms is limited ...
Abstract. Linear systems must be solved in many scientific applications, and the solution of t...
We present several algorithms to compute the solution of a linear system of equations on a graphics ...
In recent years, the graphics processing unit (GPU) has emerged as a popular platform for perfor...
We present several algorithms to compute the solution of a linear system of equations on a GPU, as ...
Graphics Processing Units (GPUs) have become more accessible peripheral devices with great computin...
Communicated by Yasuaki Ito. The solution of the large-scale dense nonsymmetric eigenvalue problem is require...
As a recurrent problem in numerical analysis and computational science, eigenvector and eigenvalue d...
This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/17...
Abstract. Approximation of matrices using the Singular Value Decomposition (SVD) plays a central ro...