When performing divisions using Newton-Raphson (or similar) iterations on a processor with a floating-point fused multiply-add instruction, one must sometimes scale the iterations to avoid over/underflow and loss of accuracy. This may lead to double roundings, resulting in output values that are not correctly rounded when the quotient lies in the subnormal range. We show how to avoid this problem.
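The iteration described above can be sketched as follows. This is a minimal illustration, assuming the divisor has already been scaled into [0.5, 1); on hardware, the residual e = 1 - b*x is computed with a single FMA and is therefore exact, whereas plain Python floats round it, so this only shows the shape of the recurrence. The function name and the step count are illustrative.

```python
# Sketch of Newton-Raphson reciprocal refinement for a/b, assuming b
# is pre-scaled into [0.5, 1).  On hardware each residual is one FMA
# and is exact; in pure Python it is rounded, so this is illustrative.

def nr_divide(a, b, steps=4):
    # Classic linear seed for b in [0.5, 1): residual |1 - b*x| <= 1/17.
    x = 48.0 / 17.0 - (32.0 / 17.0) * b
    for _ in range(steps):
        e = 1.0 - b * x   # residual (a single FMA on hardware)
        x = x + x * e     # quadratic step: the residual roughly squares
    return a * x          # back-multiply by the dividend
```

With the 1/17 seed bound, four steps drive the residual below double-precision epsilon, so `nr_divide(1.0, 0.75)` approximates 4/3 to within a few ulps.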
This paper deals with the accuracy of complex division in radix-two floating-p...
We present techniques for accelerating the floating-point computation of x/y when y is known before ...
Back in the 1960s, Goldschmidt presented a variation of Newton–Raphson iterations for divisio...
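Goldschmidt's variation can be sketched as below. This is a hedged illustration, assuming the divisor is pre-scaled into [0.5, 1); the function name and step count are hypothetical. Its appeal over Newton–Raphson is that the two running products are independent, so hardware can issue them in parallel.

```python
# Sketch of Goldschmidt's division recurrence for q = a/b, assuming b
# is pre-scaled into [0.5, 1).  The two products below are independent,
# unlike the serial Newton-Raphson recurrence, so hardware can overlap them.

def goldschmidt_divide(a, b, steps=6):
    n, d = a, b        # invariant: n/d == a/b throughout
    for _ in range(steps):
        f = 2.0 - d    # scaling factor that drives d toward 1
        n *= f
        d *= f         # 1 - d squares each step: quadratic convergence
    return n           # once d ~= 1, n approximates a/b
```

Since 1 - d starts at no more than 0.5 and squares each step, six steps push it below double-precision epsilon for any b in [0.5, 1).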
Since the introduction of the Fused Multiply and Add (FMA) in the IEEE-754-200...
Double rounding occurs when a floating-point value is first rounded to an intermediate precision bef...
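The effect is easy to exhibit in decimal (the papers above treat the binary case). In the hypothetical example below, rounding 9.46 to one decimal first and then to an integer gives 10, while rounding directly to an integer gives 9, even though both steps use the same round-to-nearest-even rule.

```python
# Decimal illustration of double rounding: two successive roundings can
# land on a different value than a single direct rounding, with the same
# round-to-nearest-even rule at every step.
from decimal import Decimal, ROUND_HALF_EVEN

x = Decimal("9.46")
direct = x.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN)        # 9.46 -> 9
intermediate = x.quantize(Decimal("0.1"), rounding=ROUND_HALF_EVEN)  # 9.46 -> 9.5
double = intermediate.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN)  # 9.5 -> 10
```

The intermediate rounding lands exactly on a tie (9.5), which the second rounding then resolves away from the original value.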
This paper describes a study of a class of algorithms for the floating-point divide and square root ...
Many numerical problems require a higher computing precision than that offered...
The advantages of the quadratic convergence of the Newton–Raphson method are combined with the...
Multiplicative Newton–Raphson and Goldschmidt algorithms are widely used in current processors to im...
The authors consider the possibility of designing architectures which combine in the best possible w...
Goldschmidt’s algorithms for division and square root are often characterized as being useful for ha...