Fair representation learning provides an effective way of enforcing fairness constraints without compromising utility for downstream users. A desirable family of such fairness constraints, each requiring similar treatment for similar individuals, is known as individual fairness. In this work, we introduce the first method that enables data consumers to obtain certificates of individual fairness for existing and new data points. The key idea is to map similar individuals to close latent representations and leverage this latent proximity to certify individual fairness. That is, our method enables the data producer to learn and certify a representation where for a data point all similar individuals are at ℓ∞-distance at most ε, thus allowing...
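The certification idea sketched in this abstract — all similar individuals lie within an ℓ∞-ball of radius ε in latent space, so certifying fairness reduces to certifying that a classifier is constant on that ball — can be illustrated with a generic technique such as interval bound propagation. The sketch below is an assumption-laden illustration, not the paper's actual method: the network layout, the `certified` helper, and the toy weights are all hypothetical.

```python
def matvec(W, x):
    # Plain matrix-vector product for a weight matrix given as nested lists.
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def forward(layers, z):
    # Exact forward pass through a ReLU network; each layer is a (W, b) pair,
    # with ReLU applied on all but the final (logit) layer.
    a = z
    for i, (W, b) in enumerate(layers):
        a = [s + bi for s, bi in zip(matvec(W, a), b)]
        if i < len(layers) - 1:
            a = [max(v, 0.0) for v in a]
    return a

def ibp_bounds(layers, z, eps):
    # Interval bound propagation: push the box [z - eps, z + eps] (an
    # l-infinity ball) through the network, tracking per-coordinate bounds.
    lo = [v - eps for v in z]
    hi = [v + eps for v in z]
    for i, (W, b) in enumerate(layers):
        center = [(l + h) / 2 for l, h in zip(lo, hi)]
        radius = [(h - l) / 2 for l, h in zip(lo, hi)]
        c = [s + bi for s, bi in zip(matvec(W, center), b)]
        r = matvec([[abs(w) for w in row] for row in W], radius)
        lo = [ci - ri for ci, ri in zip(c, r)]
        hi = [ci + ri for ci, ri in zip(c, r)]
        if i < len(layers) - 1:
            lo = [max(v, 0.0) for v in lo]
            hi = [max(v, 0.0) for v in hi]
    return lo, hi

def certified(layers, z, eps):
    # Individual fairness certificate for one latent point z: every point
    # within l-infinity distance eps of z provably gets the same class,
    # because the predicted logit's lower bound beats all other upper bounds.
    logits = forward(layers, z)
    pred = max(range(len(logits)), key=logits.__getitem__)
    lo, hi = ibp_bounds(layers, z, eps)
    return all(lo[pred] > hi[j] for j in range(len(hi)) if j != pred)
```

On a toy two-layer identity network, `certified(layers, [2.0, 0.0], 0.5)` holds while `certified(layers, [2.0, 0.0], 1.5)` fails, since the larger ball lets the runner-up logit overtake the prediction.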
Accuracy and individual fairness are both crucial for trustworthy machine learning, but these two as...
Controlling bias in training datasets is vital for ensuring equal treatment, or parity, between diff...
We tackle the problem of algorithmic fairness, where the goal is to avoid the unfair influence of ...
Fair representation learning transforms user data into a representation that ensures fairness and ut...
We revisit the notion of individual fairness proposed by Dwork et al. A central challenge in operati...
Developing learning methods which do not discriminate subgroups in the population is the central goa...
As machine learning (ML) is increasingly used for decision making in scenarios that impact humans, t...
In recent years, a great number of fairness notions have been proposed. Yet, most of them take a reduct...
We consider the problem of certifying the individual fairness (IF) of feed-forward neural networks (...
Fairness in machine learning is attracting rising attention, as it is directly related to real-world app...
We consider the problem of whether a Neural Network (NN) model satisfies global individual fairness....
Unwanted bias is a major concern in machine learning, raising in particular si...
In this work, we propose Fair-CDA, a fine-grained data augmentation strategy for imposing fairness c...
Automated data-driven decision systems are ubiquitous across a wide variety of online services, fro...