By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. These ideas can be applied to sparse multifrontal and supernodal direct techniques and to sparse iterative techniques such as Krylov subspace methods. The approach presented here applies not only to conventional processors but also to exotic technologies such as Field Programmable Gate Arrays (FPGAs), Graphics Processing Units (GPUs), and the Cell BE processor.
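The idea sketched in this abstract — run the expensive factorization in 32-bit arithmetic, then recover 64-bit accuracy through iterative refinement in 64-bit arithmetic — can be illustrated with a minimal NumPy/SciPy sketch. This is shown for a small dense system for simplicity (the abstracts target sparse solvers); the function name, tolerance, and iteration cap are illustrative choices, not part of any cited work.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def mixed_precision_solve(A, b, tol=1e-12, max_iter=30):
    """Solve Ax = b: factorize in float32, refine the solution in float64."""
    # The O(n^3) LU factorization is done on a single-precision copy;
    # this is where the speedup over an all-double solve comes from.
    lu, piv = lu_factor(A.astype(np.float32))
    # Initial solve with the 32-bit factors, promoted to double.
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        # Residual is computed in full 64-bit precision.
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        # Each correction reuses the cheap 32-bit factors (O(n^2) per step).
        d = lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
        x += d
    return x
```

For a reasonably well-conditioned matrix, a few refinement steps are enough to bring the relative error down to double-precision level, even though the factors themselves only carry single-precision accuracy.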
Engineering problems involve the solution of large sparse linear systems and therefore require fast...
This report builds on the work done in the deliverable [Nava94]. There it was shown tha...
Vector computers have been extensively used for years in matrix algebra to handle large dense ma...
On modern architectures, the performance of 32-bit operations is often at leas...
By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dens...
Recent versions of microprocessors exhibit performance characteristics for 32 bit floating point ari...
It is well established that mixed precision algorithms that factorize a matrix at a precision lower...
It is well established that reduced precision arithmetic can be exploited to accelerate the solution...
Today's floating-point arithmetic landscape is broader than ever. While scientific computing has tra...
On many current and emerging computing architectures, single-precision calculations are at least twi...
The standard LU factorization-based solution process for linear systems can be enhanced in speed or ...
Manufacturers of computer hardware are able to continuously sustain an unprecedented pace of progres...
Krylov subspace solvers are often the method of choice when solving sparse linear systems i...