Algorithmic fairness plays an increasingly critical role in machine learning research. Several group fairness notions and algorithms have been proposed. However, the fairness guarantee of existing fair classification methods mainly depends on specific data distributional assumptions, often requiring large sample sizes, and fairness could be violated when there is a modest number of samples, which is often the case in practice. In this paper, we propose FaiREE, a fair classification algorithm that can satisfy group fairness constraints with finite-sample and distribution-free theoretical guarantees. FaiREE can be adapted to satisfy various group fairness notions (e.g., Equality of Opportunity, Equalized Odds, Demographic Parity, etc.) and ac...
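For reference, a minimal sketch of the group fairness notions named above, using the standard definitions (FaiREE's exact finite-sample constraints may be stated differently): let \(\hat{Y} \in \{0,1\}\) be the predicted label, \(Y\) the true label, and \(A\) the protected attribute with groups \(a\) and \(b\). Then

\[
\text{Demographic Parity:}\quad \Pr(\hat{Y}=1 \mid A=a) = \Pr(\hat{Y}=1 \mid A=b),
\]
\[
\text{Equality of Opportunity:}\quad \Pr(\hat{Y}=1 \mid A=a,\, Y=1) = \Pr(\hat{Y}=1 \mid A=b,\, Y=1),
\]
\[
\text{Equalized Odds:}\quad \Pr(\hat{Y}=1 \mid A=a,\, Y=y) = \Pr(\hat{Y}=1 \mid A=b,\, Y=y) \quad \text{for } y \in \{0,1\}.
\]

In practice these equalities are relaxed to hold up to a tolerance \(\varepsilon\), which is the form in which finite-sample guarantees are typically stated.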
The notion of individual fairness is a formalization of an ethical principle, "Treating like cases a...
Machine learning systems are increasingly being used to make impactful decisions such as loan applic...
Context: Machine learning software can generate models that inappropriately discriminate against spe...
Fairness in automated decision-making systems has gained increasing attention as their applications ...
Algorithmic Fairness is an established area of machine learning that aims to reduce the influence of ...
Machine learning algorithms have been increasingly deployed in critical automated decision-making sy...
This research seeks to benefit the software engineering community by providing a simple yet effective ...
Machine Learning (ML) software has been widely adopted in modern society, with reported fairness imp...
Training ML models which are fair across different demographic groups is of critical importance due ...
Machine learning algorithms are becoming integrated into more and more high-stakes decision-making p...
Fairness in machine learning is receiving growing attention as it is directly related to real-world app...
We investigate fairness in classification, where automated decisions are made for individuals from d...
Unwanted bias is a major concern in machine learning, raising in particular si...
The adoption of automated, data-driven decision making in an ever expanding range of applications ha...
The field of fair machine learning aims to ensure that decisions guided by algorithms are equitable....