Cultural, gender, ethnic, and other biases have existed for decades and affect many areas of human social interaction. These biases have been shown to impact machine learning (ML) models, and for natural language processing (NLP) this can have severe consequences for downstream tasks. Mitigating gender bias in information retrieval (IR) is important to avoid propagating stereotypes. In this work, we employ a dataset consisting of two components: (1) relevance of a document to a query and (2) the "gender" of a document, in which pronouns are replaced by male, female, and neutral forms. We definitively show that pre-trained models for IR do not perform well in zero-shot retrieval tasks when full fine-tuning of a large pre-trained B...
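As a concrete illustration of the pronoun-replacement idea in that dataset description, the sketch below produces a gendered or neutral variant of a document. The mapping tables and regex tokenization are illustrative assumptions, not the dataset's actual construction procedure.

```python
import re

# Illustrative mapping tables; "her" is treated as possessive here, so object
# uses ("saw her") would be mishandled -- a real pipeline needs POS tagging.
PRONOUN_MAPS = {
    "neutral": {"he": "they", "she": "they", "him": "them", "her": "their",
                "his": "their", "hers": "theirs",
                "himself": "themselves", "herself": "themselves"},
    "male":    {"she": "he", "her": "his", "hers": "his", "herself": "himself"},
    "female":  {"he": "she", "him": "her", "his": "her", "himself": "herself"},
}

def swap_pronouns(text: str, target: str) -> str:
    """Return a copy of `text` with pronouns mapped toward the target gender."""
    mapping = PRONOUN_MAPS[target]

    def replace(match: re.Match) -> str:
        word = match.group(0)
        swapped = mapping.get(word.lower(), word)
        # Keep sentence-initial capitalization intact.
        return swapped.capitalize() if word[0].isupper() else swapped

    return re.sub(r"\b\w+\b", replace, text)

print(swap_pronouns("She finished her thesis and defended it herself.", "neutral"))
# -> "They finished their thesis and defended it themselves."
```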
Language models (LMs) exhibit and amplify many types of undesirable biases learned from the training...
The statistical regularities in language corpora encode well-known social biases into word embedding...
Pre-trained language models encode undesirable social biases, which are further exacerbated in downs...
Despite their impressive performance in a wide range of NLP tasks, Large Language Models (LLMs) have...
Large text corpora used for creating word embeddings (vectors which represent word meanings) often c...
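One common way to surface the kind of stereotype this abstract alludes to is to project word vectors onto a gender direction and compare occupations. A minimal probe, assuming gensim and its pre-trained word2vec-google-news-300 vectors (neither is named in the abstract):

```python
import numpy as np
import gensim.downloader as api

# Downloads the pre-trained embeddings on first use (~1.6 GB).
vectors = api.load("word2vec-google-news-300")

def unit(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

# One simple choice of gender direction: the difference of gendered pronouns.
gender_direction = unit(vectors["he"] - vectors["she"])

for word in ["doctor", "nurse", "engineer", "receptionist"]:
    score = float(unit(vectors[word]) @ gender_direction)
    # Positive scores lean toward "he", negative toward "she".
    print(f"{word:>14s}: {score:+.3f}")
```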
Natural language models and systems have been shown to reflect gender bias existing in training data....
Gender biases in language generation systems are challenging to mitigate. One possible source for th...
Gender bias is a significant issue in machine translation, leading to ongoing research efforts in de...
With widening deployments of natural language processing (NLP) in daily life, inherited social biase...
Implicit and explicit gender biases in media representations of individuals have long existed. Women...
While understanding and removing gender biases in language models has been a long-standing problem i...
We investigate whether it is feasible to remove gendered information from resumes to mitigate potent...
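A minimal sketch of what "removing gendered information" from a resume can look like in its simplest form: masking explicit gender markers. The word list and placeholder below are illustrative assumptions; implicit signals (word choice, activities) would survive such a filter, which is part of what makes the question in this abstract non-trivial.

```python
import re

# Illustrative, not exhaustive: explicit gender markers commonly found in resumes.
GENDERED_TOKENS = {
    "he", "she", "him", "his", "her", "hers", "himself", "herself",
    "mr", "mrs", "ms", "miss", "male", "female",
    "fraternity", "sorority", "waiter", "waitress",
}

def redact(text: str, placeholder: str = "[REDACTED]") -> str:
    """Replace explicit gendered tokens with a neutral placeholder."""
    pattern = re.compile(r"\b(" + "|".join(GENDERED_TOKENS) + r")\b", re.IGNORECASE)
    return pattern.sub(placeholder, text)

print(redact("Mrs. Jane Doe: she led her sorority's mentoring program."))
# -> "[REDACTED]. Jane Doe: [REDACTED] led [REDACTED] [REDACTED]'s mentoring program."
```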
In this work we show how large language models (LLMs) can learn statistical dependencies between oth...
Word embeddings capture the semantic and syntactic meaning of words in dense vectors. They contain ...
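A classic mitigation for bias encoded in such dense vectors is to project out a gender direction ("hard debiasing" in the style of Bolukbasi et al., 2016; this particular method is an assumption here, not necessarily the one this abstract studies). A self-contained sketch with toy vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
he, she = rng.normal(size=50), rng.normal(size=50)   # stand-ins for real vectors
doctor = 0.5 * he + rng.normal(scale=0.1, size=50)   # a vector leaning "male"

g = he - she
g /= np.linalg.norm(g)                               # unit gender direction

def debias(v: np.ndarray) -> np.ndarray:
    """Remove the component of v that lies along the gender direction."""
    return v - (v @ g) * g

print("before:", float(doctor @ g))          # noticeably nonzero
print("after: ", float(debias(doctor) @ g))  # ~0 up to floating-point error
```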