Recent research has highlighted the vulnerabilities of modern machine learning based systems to bias, especially for segments of society that are under-represented in training data. In this work, we develop a novel, tunable algorithm for mitigating the hidden, and potentially unknown, biases within training data. Our algorithm fuses the original learning task with a variational autoencoder to learn the latent structure within the dataset and then adaptively uses the learned latent distributions to re-weight the importance of certain data points while training. While our method is generalizable across various data modalities and learning tasks, in this work we use our algorithm to address the issue of racial and gender bias in facial detection systems.
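The adaptive re-weighting described in the abstract above can be illustrated with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the paper's exact formulation: it assumes a VAE encoder has already produced per-sample latent means (the hypothetical latent_means array), approximates each latent dimension's distribution with a histogram, and assigns each training point a sampling weight inversely proportional to its estimated latent density, so under-represented regions of the latent space are sampled more often during training. The function name debiasing_sample_weights and the parameters n_bins and alpha are illustrative choices.

import numpy as np

def debiasing_sample_weights(latent_means, n_bins=10, alpha=0.01):
    """Re-weight training samples inversely to how densely their learned
    latent features are represented in the dataset (illustrative sketch).

    latent_means: (N, D) array of per-sample latent means from a VAE encoder.
    n_bins:       histogram resolution per latent dimension.
    alpha:        smoothing term; larger values flatten the re-weighting.
    Returns a length-N vector of sampling probabilities summing to 1.
    """
    n, d = latent_means.shape
    density = np.ones(n)
    for j in range(d):
        # Approximate the marginal density of latent dimension j with a histogram.
        hist, edges = np.histogram(latent_means[:, j], bins=n_bins, density=True)
        # Map each sample to its bin and look up the estimated density there.
        bin_idx = np.clip(np.digitize(latent_means[:, j], edges[:-1]) - 1, 0, n_bins - 1)
        density *= hist[bin_idx] + alpha
    # Rare latent configurations receive proportionally larger sampling weight.
    weights = 1.0 / density
    return weights / weights.sum()

# Example with synthetic latents: an over-represented cluster near 0 and an
# under-represented cluster near 3.
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0, 1, size=(900, 2)), rng.normal(3, 0.5, size=(100, 2))])
w = debiasing_sample_weights(z)
print(w[:900].mean(), w[900:].mean())  # the minority cluster receives higher average weight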
Trustworthiness, and in particular Algorithmic Fairness, is emerging as one of the most trending top...
The problem of algorithmic bias in machine learning has gained a lot of attention in recent years du...
Measuring algorithmic bias is crucial both to assess algorithmic fairness, and to guide the improvem...
Recent discoveries have revealed that deep neural networks might behave in a biased manner in many r...
This thesis provides a new approach to reduce racial bias issues and inaccuracies caused by unbalanc...
Bias in classifiers is a severe issue of modern deep learning methods, especially for their applicat...
Deep learning has fostered the progress in the field of face analysis, resulting in the integration ...
Computer vision algorithms, e.g. for face recognition, favour groups of individuals that are better ...
In recent years, Face Recognition (FR) systems have achieved human-like (or better) performance...
In spite of the high performance and reliability of deep learning algorithms i...
The problem of algorithmic bias in machine learning has recently gained a lot of attention due to it...
Neural networks achieve the state-of-the-art in image classification tasks. However, they can encode...
How can we control for latent discrimination in predictive models? How can we provably remove it? Su...
Face Recognition (FR) is increasingly influencing our lives: we use it to unlock our phones; police ...
We propose a discrimination-aware learning method to improve both the accuracy and fairness of biase...