This paper presents some work in progress on the development of fast and accurate support for complex floating-point arithmetic on embedded processors. Focusing on the case of multiplication, we describe algorithms and implementations for computing both the real and imaginary parts with high relative accuracy. We show that, in practice, such accuracy guarantees can be achieved with reasonable overhead compared with conventional algorithms (those offered by current implementations, for which the real or imaginary part of a product can have no correct digit at all). For example, the average execution-time overheads when computing an FFT on the ARM Cortex-A53 and -A57 processors range from 1.04x to 1.17x only.
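To make the accuracy issue concrete: with the conventional formulas ac - bd and ad + bc, cancellation can wipe out every correct digit of one part of the product. Below is a minimal sketch of one standard remedy, assuming an FMA-based compensated scheme in the spirit of Kahan's algorithm for expressions of the form x*y + z*w; the function names are illustrative and are not taken from the paper, whose exact algorithms may differ.

#include <math.h>
#include <complex.h>

/* Compensated evaluation of x*y + z*w: one FMA recovers the rounding
   error of z*w exactly, and it is reinjected at the end (Kahan's scheme).
   Illustrative sketch only, not the paper's implementation. */
static double dot2_fma(double x, double y, double z, double w)
{
    double p = z * w;           /* rounded product z*w        */
    double e = fma(z, w, -p);   /* exact rounding error of p  */
    double r = fma(x, y, p);    /* x*y + p, rounded once      */
    return r + e;               /* reinject the error term    */
}

/* Complex product (a + i b)(c + i d), each part computed with
   high relative accuracy by the kernel above. */
static double complex cmul_accurate(double a, double b, double c, double d)
{
    double re = dot2_fma(a, c, -b, d);   /* a*c - b*d */
    double im = dot2_fma(a, d,  b, c);   /* a*d + b*c */
    return re + im * I;
}

On processors with a hardware FMA, such as the ARM cores mentioned above, this adds only a few instructions per part over the conventional two-multiplication scheme.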
The accuracy analysis of complex floating-point multiplication done by Brent, Percival, and...
This paper presents some work in progress on fast and accurate floating-point ...
Numerical codes that require arbitrary precision floating point (APFP) numbers for their core comput...
We deal with accurate complex multiplication in binary floating-point arithmet...
Most mathematical formulae are defined in terms of operations on real numbers, but compute...
It has been shown that FPGAs could outperform high-end microprocessors on floating-point computation...
On modern multi-core, many-core, and heterogeneous architectures, floating-point computations, espec...
Due to non-associativity of floating-point operations and dynamic scheduling...
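As a minimal illustration of the non-associativity referred to here (a textbook example, not code from the cited work): changing the grouping of a floating-point sum changes the result, so any dynamic reordering of operations can change the computed value.

#include <stdio.h>

int main(void)
{
    double a = 1.0e16, b = -1.0e16, c = 1.0;
    /* The two groupings give different results in binary64:
       (a + b) + c = 1, but a + (b + c) = 0. */
    printf("%g\n", (a + b) + c);
    printf("%g\n", a + (b + c));
    return 0;
}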
High-speed computation is a key requirement for today’s generation of processors. To accomplish this maj...
Some important computational problems must use a floating-point (FP) precision...