We study kernel quadrature rules with convex weights. Our approach combines the spectral properties of the kernel with recombination results about point measures. This yields effective algorithms that construct convex quadrature rules with small worst-case error, using only access to i.i.d. samples from the underlying measure and evaluations of the kernel. In addition to our theoretical results and the benefits of convex weights, our experiments indicate that this construction can compete with the optimal bounds in well-known examples.
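As a rough illustration of the setting above (and not the paper's recombination-based construction), the sketch below fits convex weights for a fixed set of quadrature nodes by minimising the empirical worst-case error, i.e. the MMD against an i.i.d. reference sample, over the probability simplex with Frank-Wolfe. The Gaussian kernel, the node choice, and all function names are assumptions made for this example.

```python
import numpy as np

def gauss_kernel(X, Y, lengthscale=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y. (Illustrative choice.)"""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def convex_quadrature_weights(nodes, reference, n_iter=500, lengthscale=1.0):
    """Convex weights for `nodes` minimising the empirical MMD to the empirical
    measure of `reference`, via Frank-Wolfe over the probability simplex."""
    Knn = gauss_kernel(nodes, nodes, lengthscale)             # kernel among nodes
    b = gauss_kernel(nodes, reference, lengthscale).mean(1)   # empirical kernel mean at nodes
    n = len(nodes)
    w = np.full(n, 1.0 / n)                                   # start from uniform (convex) weights
    for t in range(n_iter):
        grad = Knn @ w - b                 # gradient of 0.5 * w^T K w - b^T w
        j = int(np.argmin(grad))           # best simplex vertex for the linearised objective
        gamma = 2.0 / (t + 2.0)            # standard Frank-Wolfe step size
        w = (1 - gamma) * w
        w[j] += gamma
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample = rng.normal(size=(2000, 2))    # i.i.d. draws from the underlying measure
    nodes = sample[:32]                    # a crude choice of quadrature nodes
    w = convex_quadrature_weights(nodes, sample)
    assert w.min() >= 0 and abs(w.sum() - 1) < 1e-8
    f = lambda X: np.sin(X).sum(1)
    print("quadrature:", w @ f(nodes), " Monte Carlo:", f(sample).mean())
```

Because every Frank-Wolfe iterate is a convex combination of simplex vertices, the returned weights are automatically nonnegative and sum to one.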
This paper studies an intriguing phenomenon related to the good generalization...
The design of sparse quadratures for the approximation of integral operators related to symmetric po...
Batch Bayesian optimisation and Bayesian quadrature have been shown to be sample-efficient methods o...
We analyze the Nyström approximation of a positive definite kernel associated with a probability mea...
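For context, here is a minimal numpy sketch of the standard Nyström approximation of a kernel matrix from a uniformly sampled set of landmark columns. The cited work analyses the continuous (probability-measure) setting; the finite-matrix version, the Gaussian kernel, and the function name below are illustrative assumptions only.

```python
import numpy as np

def nystrom_approximation(K, landmarks):
    """Standard Nyström approximation K ≈ C W^+ C^T, where C = K[:, landmarks]
    and W is the kernel matrix restricted to the landmark points."""
    C = K[:, landmarks]
    W = K[np.ix_(landmarks, landmarks)]
    return C @ np.linalg.pinv(W) @ C.T

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 3))
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2)                              # Gaussian kernel matrix
    idx = rng.choice(len(X), size=50, replace=False)   # uniformly sampled landmarks
    K_hat = nystrom_approximation(K, idx)
    print("relative Frobenius error:",
          np.linalg.norm(K - K_hat) / np.linalg.norm(K))
```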
The kernel herding algorithm is used to construct quadrature rules in a reproducing kernel Hilbert s...
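A minimal sketch of greedy kernel herding over a finite candidate pool, where the kernel mean embedding of the target measure is replaced by its empirical counterpart; the Gaussian kernel, the pool-based setup, and the helper name are assumptions for illustration, not the specific construction studied in the cited paper.

```python
import numpy as np

def kernel_herding(candidates, n_points, lengthscale=1.0):
    """Greedy (empirical) kernel herding: repeatedly pick the candidate that
    maximises the empirical kernel mean embedding minus the average kernel to
    the points selected so far. Returns indices into `candidates`."""
    d2 = ((candidates[:, None, :] - candidates[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2 / lengthscale**2)
    mean_embedding = K.mean(axis=1)          # empirical embedding evaluated at each candidate
    selected = []
    running = np.zeros(len(candidates))      # sum_i k(x, x_i) over already-selected x_i
    for t in range(n_points):
        score = mean_embedding - running / (t + 1)
        j = int(np.argmax(score))
        selected.append(j)
        running += K[:, j]
    return np.array(selected)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    pool = rng.normal(size=(1000, 2))        # i.i.d. sample standing in for the measure
    idx = kernel_herding(pool, n_points=20)
    nodes = pool[idx]                        # equal-weight quadrature nodes
    print(nodes[:5])
```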
The kernel thinning (KT) algorithm of Dwivedi and Mackey (2021) compresses a probability distributio...
The standard Kernel Quadrature method for numerical integration with random point sets (also called ...
This article reviews and studies the properties of Bayesian quadrature weights, which strongly affec...
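For reference, the classical Bayesian quadrature weights solve K w = z, where z collects the kernel mean embedding at the nodes; the sketch below estimates z from an i.i.d. sample and uses an assumed Gaussian kernel plus a small jitter term. Unlike the convex rules above, these weights may be negative and need not sum to one.

```python
import numpy as np

def bayesian_quadrature_weights(nodes, reference, lengthscale=1.0, jitter=1e-10):
    """Classical Bayesian / kernel quadrature weights w = K^{-1} z, with the
    kernel mean z estimated from an i.i.d. reference sample."""
    def k(X, Y):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale**2)
    K = k(nodes, nodes) + jitter * np.eye(len(nodes))  # jitter for numerical stability
    z = k(nodes, reference).mean(axis=1)               # empirical kernel mean at the nodes
    return np.linalg.solve(K, z)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    sample = rng.normal(size=(5000, 1))
    nodes = np.linspace(-3, 3, 15)[:, None]
    w = bayesian_quadrature_weights(nodes, sample)
    print("sum of weights:", w.sum(), " any negative:", bool((w < 0).any()))
    f = lambda X: np.cos(X).ravel()
    print("BQ estimate:", w @ f(nodes), " Monte Carlo:", f(sample).mean())
```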
We show that kernel-based quadrature rules for computing integrals can be seen...
We study quadrature rules for functions living in an RKHS, using nodes sampled...
We introduce kernel thinning, a new procedure for compressing a distribution $\mathbb{P}$ more effec...
In this paper, we study error bounds for {\em Bayesian quadrature} (BQ), with an emphasis on noisy s...
We propose and study kernel conjugate gradient methods (KCGM) with random projections for least-squa...