This paper deals with robust regression and subspace estimation, and more precisely with the problem of minimizing a saturated loss function. In particular, we focus on computational complexity issues and show that an exact algorithm with polynomial time complexity with respect to the number of data points can be devised for robust regression and subspace estimation. This result is obtained by adopting a classification point of view and relating the problems to the search for a linear model that approximates the maximal number of points with a given error. Approximate variants of the algorithms based on random sampling are also discussed, and experiments show that they offer an accuracy gain over the traditional RANSAC for a ...
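As a rough illustration of the random-sampling variant mentioned above (not the exact polynomial-time algorithm, which rests on the classification viewpoint), the following minimal Python sketch minimizes a saturated squared loss by repeatedly fitting minimal point subsets and refitting on the consensus set. The names `saturated_loss`, `fit_by_sampling`, and the threshold `eps` are illustrative assumptions, not the authors' implementation; the connection to the abstract is that minimizing the saturated loss amounts to finding a linear model that approximates as many points as possible within error `eps`.

```python
import numpy as np

def saturated_loss(w, X, y, eps):
    """Sum of squared residuals, each saturated at eps**2."""
    r2 = (y - X @ w) ** 2
    return np.minimum(r2, eps ** 2).sum()

def fit_by_sampling(X, y, eps, n_trials=500, rng=None):
    """Approximate minimizer of the saturated loss via random sampling of
    minimal subsets (RANSAC-like), with a refit on the consensus set."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    best_w, best_loss = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(n, size=d, replace=False)        # minimal subset
        w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        inliers = np.abs(y - X @ w) <= eps                # points approximated within eps
        if inliers.sum() >= d:                            # refit on the consensus set
            w, *_ = np.linalg.lstsq(X[inliers], y[inliers], rcond=None)
        loss = saturated_loss(w, X, y, eps)
        if loss < best_loss:
            best_w, best_loss = w, loss
    return best_w, best_loss

# Usage: recover a linear model from data contaminated by 20% outliers.
rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(size=200), np.ones(200)])
y = X @ np.array([2.0, -1.0]) + 0.05 * rng.normal(size=200)
y[:40] += rng.uniform(5, 10, size=40)                     # gross outliers
w_hat, _ = fit_by_sampling(X, y, eps=0.2, rng=1)
print(w_hat)                                              # close to [2, -1]
```

Unlike this heuristic, the exact algorithm described in the paper enumerates candidate inlier/outlier classifications in polynomial time rather than relying on random trials.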