Machine learning systems based on minimizing average error have been shown to perform inconsistently across notable subsets of the data, a failure that a low average error over the entire dataset does not expose. In consequential social and economic applications, where the data represent people, this can lead to discrimination against underrepresented gender and ethnic groups. Distributionally Robust Optimization (DRO) seemingly addresses this problem by minimizing the worst expected risk across subpopulations. We establish theoretical results that clarify the relation between DRO and the optimization of the same loss averaged on an adequately weighted training dataset. A practical implication of our results is that neither DRO nor curating the training set...
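For concreteness, the two objectives the abstract compares can be written side by side. The notation below (subpopulations \(\mathcal{G}\), per-group distributions \(P_g\), loss \(\ell\), model \(f_\theta\), weights \(w_g\)) is assumed for illustration and is not taken from the paper; this is only a minimal sketch of a group-DRO objective versus a reweighted average-risk objective.

% Hedged sketch: all symbols (\mathcal{G}, P_g, w_g, \ell, f_\theta) are assumed notation, not from the source.
% Group DRO minimizes the worst expected risk over the subpopulations g:
\[
  \min_{\theta} \; \max_{g \in \mathcal{G}} \;
  \mathbb{E}_{(x,y) \sim P_g} \big[ \ell\big(f_\theta(x), y\big) \big]
\]
% A reweighted (curated) training objective instead averages the same loss
% under mixture weights w_g \ge 0 with \sum_{g} w_g = 1 over the subpopulations:
\[
  \min_{\theta} \; \sum_{g \in \mathcal{G}} w_g \,
  \mathbb{E}_{(x,y) \sim P_g} \big[ \ell\big(f_\theta(x), y\big) \big]
\]

Under this reading, the paper's theoretical results concern how solutions of the first (minimax) problem relate to solutions of the second (weighted-average) problem for suitably chosen weights.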