Abstract
On modern architectures, 32-bit floating-point operations are often at least twice as fast as their 64-bit counterparts. By using a combination of 32-bit and 64-bit floating-point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here applies not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FPGAs).

Title of program: ITER-REF
Catalogue Id: AECO_v1_0

Nature of problem
On modern architectures, 32-bit floating-point operations are often at least twice as fast as their 64-bit counterparts. By using a combination of 32-bit and 64-bit floating-point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution.
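The scheme the abstract describes, performing the expensive O(n^3) factorization in single precision and then recovering double-precision accuracy with cheap O(n^2) refinement steps, can be sketched as follows. This is a minimal illustration under stated assumptions, not the ITER-REF code itself; it uses SciPy's `lu_factor`/`lu_solve` to stand in for the single-precision LAPACK factorization.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve


def mixed_precision_solve(A, b, tol=1e-12, max_iter=30):
    """Solve Ax = b by single-precision LU plus double-precision refinement."""
    # O(n^3) LU factorization done once, in fast 32-bit arithmetic.
    lu, piv = lu_factor(A.astype(np.float32))
    # Initial single-precision solve, promoted to double precision.
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        # Residual computed in 64-bit arithmetic: O(n^2) per iteration.
        r = b - A @ x
        if (np.linalg.norm(r, np.inf)
                <= tol * np.linalg.norm(A, np.inf) * np.linalg.norm(x, np.inf)):
            break
        # Correction solved with the cached single-precision factors.
        d = lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
        x += d
    return x
```

Provided the matrix is not too ill-conditioned relative to single precision, the iteration converges in a handful of steps, so nearly all the work runs at 32-bit speed while the answer retains 64-bit accuracy.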