Field of study: Electrical engineering. Dr. Michela Becchi, Thesis Supervisor. December 2017. Floating-point computations produce approximate results, possibly leading to inaccuracy and reproducibility problems. Existing work addresses two issues: first, the design of high-precision floating-point representations, and second, the study of methods to support a trade-off between accuracy and performance of central processing unit (CPU) applications. However, a comprehensive study of trade-offs between accuracy and performance on modern graphics processing units (GPUs) is missing. This thesis covers the use of different floating-point precisions (i.e., single and double floating-point precision) in the IEEE 754 standard, the GNU Multiple Precisi...
Since 1985, the IEEE 754 standard defines formats, rounding modes and basic operations for floating-...
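As a small illustration of how the IEEE 754 formats mentioned above differ, the following sketch rounds a Python double (binary64) to the nearest binary32 value using the standard library's struct module; the helper name to_f32 is ours, not taken from any of the works listed here.

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python double to the nearest IEEE 754 binary32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# 0.1 is not exactly representable in either format, and the two
# roundings disagree, so comparing across precisions fails:
print(repr(to_f32(0.1)))   # 0.10000000149011612
print(to_f32(0.1) == 0.1)  # False
```

This round-trip through the 'f' format is a convenient stdlib way to observe single-precision rounding without NumPy or C extensions.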
We present a methodology for generating floating-point arithmetic hardware designs which are, for su...
It has been shown that FPGAs could outperform high-end microprocessors on floating-point computation...
Floating-point computations produce approximate results, which can lead to inaccuracy problems. Exis...
FPGAs and GPUs are increasingly used in a range of high-performance computing applications...
Double-float (df64) and quad-float (qf128) numeric types can be implemented on current GP...
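The double-float idea above represents one value as an unevaluated sum of two native floats. A minimal sketch of the same technique, written here as double-double in Python (pairs of doubles standing in for the GPU's pairs of singles; the names two_sum and dd_add are ours), assuming Knuth's error-free addition:

```python
def two_sum(a: float, b: float):
    """Knuth's error-free transformation: returns (s, e) with s + e == a + b exactly."""
    s = a + b
    bb = s - a                       # the part of b that made it into s
    err = (a - (s - bb)) + (b - bb)  # what rounding discarded
    return s, err

def dd_add(x, y):
    """Add two double-double values, each a (hi, lo) pair of non-overlapping doubles."""
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]                 # fold in the low-order parts
    return two_sum(s, e)             # renormalize the pair

# 1 + 1e-20 is indistinguishable from 1 in plain double precision,
# but the double-double pair keeps the low-order part:
print(dd_add((1.0, 0.0), (1e-20, 0.0)))  # (1.0, 1e-20)
```

The same two_sum building block, compiled for binary32 pairs, is what makes df64 arithmetic feasible on GPUs whose native format is single precision.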
Most mathematical formulae are defined in terms of operations on real numbers, but compute...
In this thesis, we design frameworks for efficient and accurate floating point computation. The p...
As scientific computation continues to scale, it is crucial to use floating-point arithmetic process...
Reducing the precision of floating-point values can improve performance and/or reduce energy expendi...
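To see the accuracy cost of such precision reduction, the stdlib struct module can emulate IEEE 754 binary16 (half-precision) storage via its 'e' format code (Python >= 3.6); the helper name to_f16 is ours, not from the work above.

```python
import struct

def to_f16(x: float) -> float:
    """Round a Python double to the nearest IEEE 754 binary16 (half-precision) value."""
    return struct.unpack('e', struct.pack('e', x))[0]

print(to_f16(0.1))     # 0.0999755859375 -- only about 3 decimal digits survive
print(to_f16(2049.0))  # 2048.0 -- at this magnitude binary16 values are 2 apart
```

The 11-bit significand explains both outputs: small values lose digits, and above 2048 the gap between adjacent representable values is already 2, which is the trade-off any reduced-precision scheme must account for.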
Floating-point numbers have an intuitive meaning when it comes to physics-based numerical computatio...
This handbook is a definitive guide to the effective use of modern floating-point arithmetic, which ...
The precision used in an algorithm affects the error and performance of individual computations, the...
There is a growing interest in the use of reduced-precision arithmetic, exacer...