Deep learning-based visual-linguistic multimodal models such as Contrastive Language-Image Pre-training (CLIP) have become increasingly popular and are used within text-to-image generative models such as DALL-E and Stable Diffusion. However, gender and other social biases have been uncovered in these models, and these biases have the potential to be amplified and perpetuated through AI systems. In this paper, we present a methodology for auditing multimodal models with respect to gender, informed by concepts from transnational feminism, including regional and cultural dimensions. Focusing on CLIP, we found evidence of significant gender bias with varying patterns across global regions. Harmful stereotypical associations were also uncovered re...
Artificial intelligence systems copy and amplify existing societal biases, a problem that by now is ...
Large Language Models (LLMs) have made substantial progress in the past several months, shattering s...
Natural language models and systems have been shown to reflect gender bias existing in training data....
Large multimodal deep learning models such as Contrastive Language Image Pretraining (CLIP) have be...
Deep neural networks used in computer vision have been shown to exhibit many social biases such as g...
Gender bias in artificial intelligence (AI) and natural language processing has garnered significant...
Generative multimodal models based on diffusion models have seen tremendous growth and advances in ...
While understanding and removing gender biases in language models has been a long-standing problem i...
Masked Language Models (MLMs) pre-trained by predicting masked tokens on large corpora have been use...
Gender biases in language generation systems are challenging to mitigate. One possible source for th...
As machine learning-enabled Text-to-Image (TTI) systems are becoming increasingly prevalent and seei...
Demographic biases are widely affecting artificial intelligence. In particular, gender bias is clea...
Model-based evaluation metrics (e.g., CLIPScore and GPTScore) have demonstrated decent correlations ...