We present a method of computing with matrices over very small finite fields of size larger than 2. We show how the Method of Four Russians can be efficiently adapted to these larger fields, and introduce a row-wise matrix compression scheme that both reduces memory requirements and allows one to vectorize element operations. We also present timings which indicate that these methods equal or exceed the speed of the fastest implementations of linear algebra known to the authors.
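To make the row-wise compression concrete, here is a minimal C sketch of one way to pack GF(3) elements into bit planes so that a single word-wide boolean expression adds 64 field elements at once. The two-plane encoding, the gf3_word and gf3_add names, and the (deliberately unoptimized) boolean formulas are assumptions of this sketch, not the paper's actual layout or its optimized operation counts.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical packed row chunk: 64 GF(3) elements per pair of 64-bit words.
 * Each element x in {0,1,2} is split across two bit planes:
 *   plane p holds the bit "x == 1", plane n holds the bit "x == 2"
 * (2 is identified with -1 mod 3, so the planes act as +1/-1 indicators). */
typedef struct {
    uint64_t p;  /* bit i set  <=>  element i equals 1 */
    uint64_t n;  /* bit i set  <=>  element i equals 2 */
} gf3_word;

/* Add 64 GF(3) elements at once with a handful of word-wide boolean ops.
 * Derived directly from the addition table of GF(3); correct but not
 * gate-count-optimal. */
static gf3_word gf3_add(gf3_word a, gf3_word b)
{
    uint64_t za = ~(a.p | a.n);                   /* lanes where a == 0 */
    uint64_t zb = ~(b.p | b.n);                   /* lanes where b == 0 */
    gf3_word c;
    c.p = (a.p & zb) | (b.p & za) | (a.n & b.n);  /* sum == 1: 1+0, 0+1, 2+2 */
    c.n = (a.n & zb) | (b.n & za) | (a.p & b.p);  /* sum == 2: 2+0, 0+2, 1+1 */
    return c;
}

/* Pack/unpack helpers for testing: elements given as 0, 1 or 2. */
static gf3_word gf3_pack(const unsigned e[64])
{
    gf3_word w = {0, 0};
    for (int i = 0; i < 64; i++) {
        if (e[i] == 1) w.p |= (uint64_t)1 << i;
        if (e[i] == 2) w.n |= (uint64_t)1 << i;
    }
    return w;
}

static unsigned gf3_get(gf3_word w, int i)
{
    if ((w.p >> i) & 1) return 1;
    if ((w.n >> i) & 1) return 2;
    return 0;
}

int main(void)
{
    /* Exhaustive check of the packed adder against (a + b) mod 3. */
    for (unsigned a = 0; a < 3; a++)
        for (unsigned b = 0; b < 3; b++) {
            unsigned ea[64] = {0}, eb[64] = {0};
            ea[0] = a; eb[0] = b;
            gf3_word c = gf3_add(gf3_pack(ea), gf3_pack(eb));
            printf("%u + %u = %u (mod 3)\n", a, b, gf3_get(c, 0));
        }
    return 0;
}
```

With rows stored this way, combining a precomputed table row with a target row, as the Method of Four Russians does, costs a few word operations per 64 columns rather than one arithmetic operation per entry, which is where both the memory savings and the vectorization come from.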
The complexity of matrix multiplication (hereafter MM) has been intensively studied since 1969, when...
Multiple independent matrix problems of very small size appear in a variety of different fields. In ...
The biggest cost of computing with large matrices in any modern computer is related to memory latenc...
In a few applications, a need arises to perform some standard linear algebra operations on a very ...
Black box linear algebra algorithms treat matrices as black boxes that can be applied to input vecto...
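The entry above only hints at the black-box model, so here is a minimal C sketch of what "treating a matrix as a black box" means in practice: the matrix is exposed solely through a y = Ax callback, and black-box methods (Wiedemann, Lanczos, ...) consume the resulting Krylov sequence v, Av, A^2 v, ... The interface, the sparse example, and all names here are illustrative assumptions, not any particular library's API.

```c
#include <stdint.h>
#include <stdio.h>

/* A "black box" matrix is anything that can apply itself to a vector mod p. */
typedef void (*blackbox_apply)(const void *ctx, const uint64_t *x, uint64_t *y,
                               uint64_t n, uint64_t p);

/* Example black box: a sparse matrix in coordinate (COO) form. */
typedef struct { uint64_t row, col, val; } entry;
typedef struct { const entry *e; uint64_t nnz; } sparse;

static void sparse_apply(const void *ctx, const uint64_t *x, uint64_t *y,
                         uint64_t n, uint64_t p)
{
    const sparse *s = ctx;
    for (uint64_t i = 0; i < n; i++) y[i] = 0;
    for (uint64_t k = 0; k < s->nnz; k++)
        y[s->e[k].row] = (y[s->e[k].row] + s->e[k].val * x[s->e[k].col]) % p;
}

/* A black-box algorithm never reads entries; it only requests products,
 * here the first few Krylov vectors A v, A^2 v, A^3 v. */
int main(void)
{
    const uint64_t n = 3, p = 5;
    const entry e[] = {{0, 1, 2}, {1, 2, 3}, {2, 0, 1}, {2, 2, 4}};
    sparse s = {e, 4};
    uint64_t v[3] = {1, 1, 1}, w[3];
    blackbox_apply apply = sparse_apply;
    for (int step = 0; step < 3; step++) {
        apply(&s, v, w, n, p);
        printf("A^%d v = [%llu %llu %llu]\n", step + 1,
               (unsigned long long)w[0], (unsigned long long)w[1],
               (unsigned long long)w[2]);
        for (uint64_t i = 0; i < n; i++) v[i] = w[i];
    }
    return 0;
}
```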
The aim of this paper is to propose an efficient algorithm (with polynomial or lower time complexity...
We present here algorithms for efficient computation of linear algebra problem...
The complexity of matrix multiplication has attracted a lot of attention in the last forty y...
In this work we describe an efficient implementation of a hierarchy of algorithms for the ...
This article deals with the computation of the characteristic polynomial of dense matrices over smal...
Cryptographic computations such as factoring integers and computing discrete logarithms over finite ...
The FFLAS project has established that exact matrix multiplication over finite fields can be p...
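The FFLAS result alluded to above rests on a simple observation: over a word-size prime field, dot products can be accumulated exactly in floating point and reduced modulo p only at the end, so the bulk of the work can be delegated to optimized numerical routines. Below is a minimal C sketch of that delayed-reduction idea under the standard exactness bound n(p-1)^2 < 2^53; it is not FFLAS's implementation (which builds on the BLAS, cache blocking, and Strassen-Winograd), and the function names are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Every intermediate dot product is at most n * (p-1)^2, and doubles
 * represent integers exactly up to 2^53; the check assumes moderate n, p
 * so the product itself does not overflow uint64_t. */
static int delayed_reduction_ok(uint64_t n, uint64_t p)
{
    return n * (p - 1) * (p - 1) < ((uint64_t)1 << 53);
}

/* Multiply n x n matrices with entries in [0, p-1], stored as doubles:
 * accumulate exact integer dot products, reduce once per output entry. */
static void matmul_modp(const double *A, const double *B, double *C,
                        uint64_t n, uint64_t p)
{
    for (uint64_t i = 0; i < n; i++)
        for (uint64_t j = 0; j < n; j++) {
            double acc = 0.0;                /* exact integer accumulation */
            for (uint64_t k = 0; k < n; k++)
                acc += A[i * n + k] * B[k * n + j];
            C[i * n + j] = (double)((uint64_t)acc % p);
        }
}

int main(void)
{
    const uint64_t n = 2, p = 7;
    if (!delayed_reduction_ok(n, p)) return 1;
    double A[4] = {1, 2, 3, 4}, B[4] = {5, 6, 0, 1}, C[4];
    matmul_modp(A, B, C, n, p);
    for (int i = 0; i < 4; i++) printf("%g ", C[i]);
    printf("\n");   /* expected: 5 1 1 1  (mod 7) */
    return 0;
}
```

In a real implementation the inner triple loop is replaced by a call to an optimized numerical matrix multiplication routine; the reduction step and the bound check are what keep the computation exact.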