We propose a sparse arithmetic for kernel matrices, enabling efficient scattered data analysis. Compressing kernel matrices by means of samplets yields sparse matrices, so that assembly, addition, and multiplication of these matrices can be performed with essentially linear cost. Since the inverse of a kernel matrix is compressible as well, we also obtain fast access to the inverse kernel matrix by employing exact sparse selected inversion techniques. As a consequence, we can rapidly evaluate series expansions and contour integrals to approximate, in a data-sparse format, more complicated matrix functions such as $A^\alpha$ and $\exp(A)$. By exploiting this matrix arithmetic, efficient Gaussian process learning also becomes feasible.
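To make the contour-integral route concrete, recall the Cauchy integral representation $f(A) = \frac{1}{2\pi i}\oint_{\Gamma} f(z)\,(zI - A)^{-1}\,\mathrm{d}z$, valid whenever $f$ is analytic on and inside a contour $\Gamma$ enclosing the spectrum of $A$. Discretizing $\Gamma$ as a circle $z_k = c + r\,e^{2\pi i k/N}$ with the trapezoidal rule gives $f(A) \approx \frac{1}{N}\sum_{k=1}^{N} f(z_k)\,(z_k - c)\,(z_k I - A)^{-1}$, so every quadrature node costs one shifted solve; these resolvent evaluations are precisely where sparse representations of the kernel matrix and its inverse pay off. The sketch below illustrates the quadrature with a generic sparse direct solver standing in for the samplet-compressed arithmetic and selected inversion; the test matrix, contour parameters, and node count are illustrative assumptions, not taken from this work.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def matrix_function_apply(A, b, f, num_nodes=32):
    """Approximate f(A) @ b for a real symmetric sparse A by trapezoidal
    quadrature of the Cauchy integral over a circle enclosing spec(A).
    Illustrative sketch: a generic sparse solve replaces the compressed
    samplet arithmetic; f must be analytic on and inside the circle."""
    n = A.shape[0]
    # Gershgorin bounds give a cheap enclosure of the spectrum.
    diag = A.diagonal()
    offdiag = np.asarray(abs(A).sum(axis=1)).ravel() - np.abs(diag)
    lam_min, lam_max = (diag - offdiag).min(), (diag + offdiag).max()
    c = 0.5 * (lam_min + lam_max)          # contour centre
    r = 0.6 * (lam_max - lam_min) + 1.0    # radius strictly enclosing spec(A)
    I = sp.identity(n, format="csc")
    acc = np.zeros(n, dtype=complex)
    for k in range(num_nodes):
        z = c + r * np.exp(2j * np.pi * (k + 0.5) / num_nodes)
        # one shifted sparse solve (resolvent application) per quadrature node
        acc += f(z) * (z - c) * spla.spsolve((z * I - A).tocsc(), b)
    return (acc / num_nodes).real          # exact result is real for real A, b

# Usage: exp(A) @ b for a small sparse SPD test matrix (exp is entire,
# so any enclosing circle is admissible).
A = sp.diags([2.0 * np.ones(50), -0.5 * np.ones(49), -0.5 * np.ones(49)],
             [0, 1, -1], format="csc")
b = np.ones(50)
approx = matrix_function_apply(A, b, np.exp)
```

For fractional powers such as $A^\alpha$, the contour must in addition avoid the branch cut of $z^\alpha$ along the negative real axis, which is why the entire function $\exp$ is used in this toy example.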