Abstract—This paper is concerned with the accurate computation of matrix multiplication, where the components of the matrices are represented by sums of floating-point numbers. Recently, an accurate summation algorithm was developed by the latter three of the authors. In this paper, it is specialized to the dot product. Using this, a fast implementation of accurate matrix multiplication is discussed. Finally, numerical results are presented to confirm the effectiveness of the proposed algorithm.