Optimal transport (OT) and maximum mean discrepancies (MMD) are now routinely used in machine learning to compare probability measures. We focus in this paper on \emph{Sinkhorn divergences} (SDs), a regularized variant of OT distances which can interpolate, depending on the regularization strength $\varepsilon$, between OT ($\varepsilon=0$) and MMD ($\varepsilon=\infty$). Although the tradeoff induced by that regularization is now well understood computationally (OT, SDs and MMD require respectively $O(n^3\log n)$, $O(n^2)$ and $n^2$ operations given a sample size $n$), much less is known in terms of their \emph{sample complexity}, namely the gap between these quantities, when evaluated using finite samples \emph{vs.} ...
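The interpolation described above can be sketched in a few lines. Below is a minimal NumPy illustration (not the authors' implementation) of one common debiased variant, which runs Sinkhorn fixed-point iterations for the entropic OT cost and then forms $\mathrm{SD}_\varepsilon(\alpha,\beta) = \mathrm{OT}_\varepsilon(\alpha,\beta) - \tfrac12\mathrm{OT}_\varepsilon(\alpha,\alpha) - \tfrac12\mathrm{OT}_\varepsilon(\beta,\beta)$; the squared Euclidean cost, the iteration count, and the function names are assumptions for the sketch.

```python
import numpy as np

def sinkhorn_cost(x, y, a, b, eps, n_iters=200):
    """Transport cost of the entropy-regularized plan between weighted
    point clouds (x, a) and (y, b), via Sinkhorn iterations."""
    # Squared Euclidean cost matrix C[i, j] = ||x_i - y_j||^2
    C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones(len(a))
    for _ in range(n_iters):          # alternating marginal projections
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # entropic transport plan
    return np.sum(P * C)

def sinkhorn_divergence(x, y, a, b, eps):
    """Debiased Sinkhorn divergence: cross term minus half of each self term."""
    return (sinkhorn_cost(x, y, a, b, eps)
            - 0.5 * sinkhorn_cost(x, x, a, a, eps)
            - 0.5 * sinkhorn_cost(y, y, b, b, eps))
```

As $\varepsilon \to 0$ the plan concentrates on the unregularized OT coupling, while large $\varepsilon$ blurs it toward an MMD-like quantity; the $O(n^2)$ cost per iteration comes from the dense kernel matrix products.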
This paper presents a unified framework for smooth convex regularization of di...
Applications of optimal transport have recently gained remarkable attention thanks to the computati...
We present several new complexity results for the entropic regularized algorithms that approximately...
Comparing probability distributions is a fundamental problem in data sciences. Simple norms and dive...
This thesis proposes theoretical and numerical contributions to use Entropy-regularized Optimal Tran...
We introduce in this paper a novel strategy for efficiently approximating the ...
The notion of entropy-regularized optimal transport, also known as Sinkhorn divergence, has recently...
The use of optimal transport (OT) distances, and in particular entropic-regularised OT distances, is...
The ability to compare two degenerate probability distributions (i.e. two prob...
Correctly estimating the discrepancy between two data distributions has always...
Distributional reinforcement learning~(RL) is a class of state-of-the-art algorithms that estimate t...