On modern multi-core, many-core, and heterogeneous architectures, floating-point computations may become non-deterministic, and thus non-reproducible, mainly due to the non-associativity of floating-point operations. We introduce an algorithm to compute the product of two floating-point matrices that delivers reproducible results with the best possible accuracy. Our multi-level algorithm relies on fast vectorized floating-point expansions as well as superaccumulators in a high-radix carry-save representation. We present implementations on recent Intel Xeon Phi accelerators and on both AMD and NVIDIA GPUs.
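As a minimal illustration (not code from the paper), the following Python sketch shows why floating-point addition is non-associative and how an error-free transformation such as Knuth's TwoSum, a standard building block of floating-point expansions like those the algorithm relies on, recovers the rounding error exactly:

```python
def two_sum(a, b):
    """Knuth's TwoSum error-free transformation:
    returns (s, e) with s = fl(a + b) and s + e == a + b exactly."""
    s = a + b
    a_virtual = s - b          # the portion of s contributed by a
    b_virtual = s - a_virtual  # the portion of s contributed by b
    e = (a - a_virtual) + (b - b_virtual)  # exact rounding error of a + b
    return s, e

# Non-associativity: the grouping of operations changes the result.
print((1e16 + -1e16) + 1.0)  # 1.0
print(1e16 + (-1e16 + 1.0))  # 0.0

# TwoSum captures the information lost when rounding 1e16 + 1.0:
# s == 1e16 and e == 1.0, so s + e reconstructs the exact sum.
s, e = two_sum(1e16, 1.0)
print(s, e)
```

Carrying such error terms along (an "expansion") keeps partial sums exact, which is one way to make the accumulation order irrelevant and the result reproducible.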