We address the problem of algorithmic fairness: ensuring that sensitive information does not unfairly influence the outcome of a classifier. We present an approach based on empirical risk minimization, which incorporates a fairness constraint into the learning problem. It encourages the conditional risk of the learned classifier to be approximately constant with respect to the sensitive variable. We derive both risk and fairness bounds that support the statistical consistency of our methodology. We specialize our approach to kernel methods and observe that the fairness requirement implies an orthogonality constraint which can easily be added to these methods. We further observe that for linear models the constraint translates into a simple data preprocessing step.
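As a rough illustration of the linear-model case, the sketch below enforces the orthogonality constraint as a preprocessing step: it projects the features onto the subspace orthogonal to u, the difference between the group-conditional means of the positively labeled examples, so any linear model fitted (and evaluated) on the projected features has an effective weight vector w with <w, u> = 0. This is a minimal sketch under assumed conventions (binary labels in {-1, +1}, a binary sensitive attribute in {0, 1}); the function name fair_preprocess and the NumPy implementation are illustrative, not the paper's reference code.

```python
import numpy as np

def fair_preprocess(X, y, s):
    """Project features onto the subspace orthogonal to the direction u
    separating the group-conditional barycenters of the positive class.

    Assumes y in {-1, +1} and a binary sensitive attribute s in {0, 1}.
    """
    # Difference of group-conditional means over positively labeled points
    u = X[(y == 1) & (s == 0)].mean(axis=0) - X[(y == 1) & (s == 1)].mean(axis=0)
    u = u / np.linalg.norm(u)          # unit-norm direction
    # Remove the component of every feature vector along u
    return X - np.outer(X @ u, u)
```

If a regularized linear classifier is trained on fair_preprocess(X, y, s) and new points are projected the same way before scoring, the composition is equivalent to using a weight vector orthogonal to u on the raw features, which is one way to realize the data-preprocessing view of the constraint.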
The concerns regarding ramifications of societal bias targeted at a particular identity group (for e...
This thesis scrutinizes common assumptions underlying traditional machine learning approaches to fai...
Developing learning methods which do not discriminate subgroups in the population is the central goa...
We tackle the problem of algorithmic fairness, where the goal is to avoid the unfair influence of ...
This thesis investigates the problem of fair statistical learning. We argue that critical notions of...
In past work on fairness in machine learning, the focus has been on forcing the prediction of classi...
Unwanted bias is a major concern in machine learning, raising in particular si...
Recently, a parametric family of fairness metrics to quantify algorithmic fairness has been proposed ...
We investigate the problem of algorithmic fairness in the case where sensitive and non-sensitive fea...