Large Language Models (LLMs), such as ChatGPT, are fundamentally tools trained on vast amounts of data and therefore reflect a wide range of societal impressions. This paper investigates LLMs' self-perceived bias concerning indigeneity when simulating scenarios of indigenous people performing various roles. By generating and analyzing multiple such scenarios, this work offers a unique perspective on how technology perceives, and potentially amplifies, societal biases related to indigeneity in social computing. The findings offer insights into the broader implications of indigeneity for critical computing.
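To make the scenario-generation step concrete, the following is a minimal sketch, assuming the openai Python client (version 1.x) is used to prompt a chat model for role-based scenarios; the model name, role list, and prompt wording are illustrative assumptions rather than the paper's exact protocol.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical role list; the roles studied in the paper may differ.
ROLES = ["doctor", "teacher", "software engineer", "judge", "farmer"]

def generate_scenario(role: str) -> str:
    """Ask the model for a short scenario of an indigenous person in the given role."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the paper reports using ChatGPT
        messages=[{
            "role": "user",
            "content": f"Write a short scenario describing an indigenous person working as a {role}.",
        }],
    )
    return response.choices[0].message.content

# Collect one scenario per role; these generated texts would then be analyzed for bias.
scenarios = {role: generate_scenario(role) for role in ROLES}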
The information superhighway. The global village. Cyberspace. These are only a few of the metaphors ...
Cultural code-switching concerns how we adjust our overall behaviours, manners of speaking, and appe...
The paper discusses the potential of large vision-language models as objects of interest for empiric...
A growing body of research examining the role of technology in indigenous knowledge production and d...
Generative AI models garnered a large amount of public attention and speculation with the release of...
As the capabilities of generative language models continue to advance, the implications of biases in...
Recent research has revealed undesirable biases in NLP data and models. However, these efforts large...
Assessments of algorithmic bias in large language models (LLMs) are generally catered to uncovering ...
It is well known that AI-based language technology—large language models, mach...
Large neural network-based language models play an increasingly important role in contemporary AI. A...
Large language models (LLMs) have garnered significant attention for their remarkable performance in...
This study shows that the Indigenous population is underrepresented in AI-related industries and tha...
In the 2023-2024 academic year, the widespread availability of generative artificial intelligence, e...
Language data and models demonstrate various types of bias, be it ethnic, religious, gender, or soci...
Large language models offer significant potential for optimising professional activities, such as st...