To support the trustworthiness of AI systems, it is essential to have precise methods for determining who or what is accountable for the behaviour, or the outcome, of an AI system. Assigning responsibility to an AI system is closely related to identifying the individuals or elements that caused its outcome. In this work, we present an overview of approaches that aim at modelling the responsibility of AI systems, discuss their advantages and shortcomings in dealing with various aspects of the notion of responsibility, and present research gaps and ways forward.
When humans interact with intelligent systems, their causal responsibility for outcomes becomes equi...
Systems based on Artificial Intelligence (AI) are increasingly normalized as part of work, leisure, ...
The opaque and incomprehensible nature of artificial intelligence (AI) raises questions about who ca...
Ensuring the trustworthiness of autonomous systems and artificial intelligence is an important inter...
This paper investigates the attribution of responsibility to artificial intelligent systems (AI). It...
To develop and effectively deploy Trustworthy Autonomous Systems (TAS), we face various social, tech...
The development of artificial intelligence (AI) systems has sparked a growing concern rega...
The aim of this thesis is to contribute with new insights on the concept of responsible artificial ...
The notion of "responsibility gap" with artificial intelligence (AI) was originally introduced in th...
Who is responsible for the events and consequences caused by using artificially intelligent tools, a...
Governments are increasingly using sophisticated self-learning algorithms to automate and standardiz...
As of 2021, there were more than 170 guidelines on AI ethics and responsible, trustworthy AI in circ...
I argue for responsibility internalism. That is, moral responsibility (i.e., accountability, or bein...