As Christian List (2021) has recently argued, the increasing arrival of powerful AI systems that operate autonomously in high-stakes contexts creates a need for “future-proofing” our regulatory frameworks, i.e., for reassessing them in the face of these developments. One core part of our regulatory frameworks that dominates our everyday moral interactions is blame. Therefore, “future-proofing” our extant regulatory frameworks in the face of the increasing arrival of powerful AI systems requires, among other things, that we ask whether it makes sense to extend our blaming practices to these systems. In this paper, we argue for the admittedly surprising thesis that this question should be answered in the affirmative: contrary to what one migh...
When a computer system causes harm, who is responsible? This question has renewed significance given...
Recent advances in artificial intelligence (AI) and machine learning have prompted discussion about ...
As human science pushes the boundaries towards the development of artificial intelligence (AI), the ...
Recent research shows – somewhat astonishingly – that people are willing to ascribe moral blame to A...
Artificial Intelligence (AI) systems are ubiquitous. From social media timelines, video recommendati...
This paper investigates the attribution of responsibility to artificially intelligent systems (AI). It...
I argue for responsibility internalism. That is, moral responsibility (i.e., accountability, or bein...
The notion of "responsibility gap" with artificial intelligence (AI) was originally introduced in th...
Intelligence Augmentation Systems (IAS) allow for more efficient and effective corporate processes b...
The article explores the effects increasing automation has on our conceptions of human agency. We co...
Decision-making algorithms are being used in important decisions, such as who should be enrolled in ...