We consider a high-dimensional mixture of two Gaussians in the noisy regime, where even an oracle that knows the centers of the clusters misclassifies a small but finite fraction of the points. We provide a rigorous analysis of the generalization error of regularized convex classifiers, including ridge, hinge, and logistic regression, in the high-dimensional limit where the number of samples $n$ and their dimension $d$ go to infinity while their ratio is fixed to $\alpha = n/d$. We discuss surprising effects of the regularization that in some cases allow one to reach the Bayes-optimal performance. We also illustrate the interpolation peak at low regularization, and analyze the role of the respective sizes of the two clusters.
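As a minimal numerical sketch of this setup (balanced clusters, $\ell_2$-regularized logistic loss minimized by plain gradient descent; all sizes, the separation, and the regularization strength below are illustrative choices, not values from the paper):

```python
import math
import numpy as np

rng = np.random.default_rng(0)

d = 200        # dimension
alpha = 3.0    # sample ratio n / d
n = int(alpha * d)
snr = 1.5      # ||mu||: cluster separation (noisy regime)
lam = 0.05     # L2 (ridge) regularization strength

def q_func(x):
    """Gaussian tail Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Two-cluster Gaussian mixture: x = y * mu + standard Gaussian noise.
# Even the oracle classifier sign(mu . x) errs with probability Q(||mu||).
mu = rng.standard_normal(d)
mu *= snr / np.linalg.norm(mu)
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * mu + rng.standard_normal((n, d))

# L2-regularized logistic regression via gradient descent on
#   (1/n) sum_i log(1 + exp(-y_i w.x_i)) + (lam/2) ||w||^2
w = np.zeros(d)
lr = 0.5
for _ in range(2000):
    margins = y * (X @ w)
    grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0) + lam * w
    w -= lr * grad

# For this model, the test error of the linear classifier sign(w . x)
# is exactly Q(w . mu / ||w||), so no held-out test set is needed.
overlap = (w @ mu) / np.linalg.norm(w)
gen_err = q_func(overlap)
oracle_err = q_func(snr)
print(f"test error: {gen_err:.3f}  (oracle with known centers: {oracle_err:.3f})")
```

By Cauchy-Schwarz, the overlap $w \cdot \mu / \lVert w \rVert$ is at most $\lVert \mu \rVert$, so the estimated error always sits at or above the oracle error $Q(\lVert\mu\rVert)$; varying `lam` shows the regularization effects the abstract describes.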
We establish risk bounds for Regularized Empirical Risk Minimizers (RERM) when the loss is Lipschitz...
We examine the performance of an ensemble of randomly-projected Fisher Linear Discriminant classifie...
Mixtures-of-Experts models and their maximum likelihood estimation (MLE) via the EM algorithm have b...
We consider a commonly studied supervised clas...
Generalised linear models for multi-class classification problems are one of the fundamental buildin...
This paper carries out a large dimensional analysis of the standard regularize...
We analyze the connection between minimizers with good generalizing properties and high local entrop...
We consider the problem of binary classification when the covariates conditioned on each of the ...
Finite Gaussian mixture models are widely used in statistics thanks to their great flexibility. Howe...
We study high-dimensional asymptotic performance limits of binary supervised classification...
We prove theoretical guarantees for an averaging-ensemble of randomly projected Fisher linear discri...
A popular approach for estimating an unknown signal x0 ∈ Rn from noisy, linear measurements y = Ax0 ...
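The snippet above concerns recovering a signal x0 from noisy linear measurements y = Ax0 + noise; since it cuts off before naming its estimator, here is a hedged sketch using one common regularized approach, ridge ($\ell_2$-penalized) least squares, with purely illustrative problem sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 120, 80  # number of measurements, signal dimension (illustrative)

# Noisy linear measurements y = A x0 + z.
A = rng.standard_normal((n, m)) / np.sqrt(n)
x0 = rng.standard_normal(m)
y = A @ x0 + 0.1 * rng.standard_normal(n)

# Ridge estimate: argmin_x (1/2)||y - A x||^2 + (lam/2)||x||^2,
# solved in closed form via the normal equations.
lam = 0.01
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ y)

rel_err = np.linalg.norm(x_hat - x0) / np.linalg.norm(x0)
print(f"relative recovery error: {rel_err:.3f}")
```

The closed-form solve is the standard route for $\ell_2$ penalties; sparsity-inducing penalties such as $\ell_1$ would instead require an iterative solver.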