Missing values in real-world data pose a significant and unique challenge to algorithmic fairness. Different demographic groups may be unequally affected by missing data, and the standard procedure for handling missing values, in which the data is first imputed and the imputed data is then used for classification -- a procedure referred to as "impute-then-classify" -- can exacerbate discrimination. In this paper, we analyze how missing values affect algorithmic fairness. We first prove that training a classifier on imputed data can significantly worsen the achievable values of group fairness and average accuracy. This is because imputation discards the missingness pattern of the data, which often conveys information about the predictive...
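As a concrete illustration of the "impute-then-classify" pipeline this abstract describes, below is a minimal sketch assuming scikit-learn. The synthetic dataset, the label-dependent missingness rates, and the logistic-regression classifier are illustrative assumptions, not the paper's setup. It contrasts plain mean imputation, which discards the missingness pattern, with the same imputer's `add_indicator=True` option, which keeps the pattern as extra binary features.

```python
# Sketch: impute-then-classify vs. retaining the missingness pattern.
# Assumes scikit-learn; data and classifier choice are illustrative only.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Label-dependent missingness: positive examples lose values more often,
# so the missing pattern itself carries predictive information.
mask = rng.random(X.shape) < np.where(y[:, None] == 1, 0.3, 0.05)
X[mask] = np.nan

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# (a) Impute-then-classify: mean imputation erases the missing pattern.
itc = make_pipeline(SimpleImputer(strategy="mean"),
                    LogisticRegression(max_iter=1000))
itc.fit(X_tr, y_tr)

# (b) Same imputer with add_indicator=True: appends one binary column per
# feature marking where values were missing, so the pattern is retained.
kept = make_pipeline(SimpleImputer(strategy="mean", add_indicator=True),
                     LogisticRegression(max_iter=1000))
kept.fit(X_tr, y_tr)

print("impute-then-classify accuracy:", itc.score(X_te, y_te))
print("with missingness indicators: ", kept.score(X_te, y_te))
```

In the fairness setting the abstract considers, missingness rates also differ across demographic groups, so the pattern discarded in variant (a) is group information as well; the indicator columns in (b) are one simple way to keep that information available to the classifier.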
We investigate fairness in classification, where automated decisions are made for individuals from d...
Algorithmic fairness plays an increasingly critical role in machine learning research. Several group...
How can we control for latent discrimination in predictive models? How can we provably remove it? Su...
As we enter a new decade, more and more governance in our society is assisted by autonomous decision...
Nowadays, there is an increasing concern in machine learning about the causes underlying u...
We investigate the fairness concerns of training a machine learning model using data with missing va...
Analysis of the fairness of machine learning (ML) algorithms has recently attracted many researchers' in...
Training datasets for machine learning often have some form of missingness. For example, to learn a ...
Although many fairness criteria have been proposed to ensure that machine learning algorithms do not...
Predictive algorithms are playing an increasingly prominent role in society, being used to predict r...
Context: Machine learning software can generate models that inappropriately discriminate against spe...
Addressing fairness concerns about machine learning models is a crucial step towards their long-term...
Background: Classifying samples in incomplete datasets is a common aim for machine learning practitio...
Machine learning algorithms called classifiers make discrete predictions about new data by training ...
Machine learning may be oblivious to human bias, but it is not immune to its perpetuation. Marginalis...