In recent years Face Recognition (FR) systems have achieved human-like (or better) performance, leading to extensive deployment in large-scale practical settings. Yet, especially for sensitive domains such as FR, we expect algorithms to work equally well for everyone, regardless of a person's age, gender, skin colour and/or origin. In this paper, we investigate a methodology for quantifying the amount of bias in a trained Convolutional Neural Network (CNN) model for FR that is not only intuitively appealing, but has also already been used in the literature to argue for certain debiasing methods. It works by measuring the "blindness" of the model towards certain face characteristics in the embeddings of faces based on internal cluster valida...
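The abstract's description of the "blindness" measurement is truncated here, but a minimal sketch of the general idea, scoring how strongly face embeddings cluster by a protected attribute using an internal cluster-validity index, might look like the following (the function name `attribute_blindness` and the synthetic data are illustrative assumptions, not the paper's actual method):

```python
# Hypothetical sketch: probe whether a face-embedding model is "blind" to a
# protected attribute by computing an internal cluster-validity index
# (here, the silhouette score) over embeddings grouped by that attribute.
# A score near 0 suggests the embeddings do not separate by the attribute;
# a score near 1 suggests the attribute is strongly encoded (potential bias).
import numpy as np
from sklearn.metrics import silhouette_score

def attribute_blindness(embeddings: np.ndarray, attribute_labels: np.ndarray) -> float:
    """Silhouette score of embeddings grouped by a demographic attribute.

    embeddings       : (n_faces, dim) array of face embeddings.
    attribute_labels : (n_faces,) array of attribute values (e.g. age group).
    """
    return silhouette_score(embeddings, attribute_labels)

# Toy demo with synthetic embeddings: group 1 is shifted away from group 0,
# mimicking an attribute the model has (undesirably) encoded.
rng = np.random.default_rng(0)
group0 = rng.normal(0.0, 0.1, size=(50, 8))
group1 = rng.normal(1.0, 0.1, size=(50, 8))
emb = np.vstack([group0, group1])
labels = np.array([0] * 50 + [1] * 50)

score = attribute_blindness(emb, labels)  # high score -> attribute is separable
```

In this toy setup the two attribute groups are well separated, so the silhouette score is high, indicating the embedding space encodes the attribute rather than being "blind" to it.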
We propose a discrimination-aware learning method to improve both the accuracy and fairness of biase...
Deep learning has fostered the progress in the field of face analysis, resulting in the integration ...
Deep neural networks used in computer vision have been shown to exhibit many social biases such as g...
Face Recognition (FR) is increasingly influencing our lives: we use it to unlock our phones; police ...
Face recognition (FR) systems have a growing effect on critical decision-making processes. Recent ...
In spite of the high performance and reliability of deep learning algorithms i...
Measuring algorithmic bias is crucial both to assess algorithmic fairness, and to guide the improvem...
The presence of bias in deep models leads to unfair outcomes for certain demographic subgroups. Rese...
Recent discoveries have revealed that deep neural networks might behave in a biased manner in many r...
There are demographic biases present in current facial recognition (FR) models. To measure these bia...
Convolutional neural networks (CNNs) give the state-of-the-art performance in many pattern recogniti...
This paper is the first to explore an automatic way to detect bias in deep convolutional neural netw...
Image recognition technology systems have existed in the realm of computer security since nearly the...
Deep learning-based person identification and verification systems have remarkably improved in terms...