Christian List argues that the responsibility gaps created by viewing artificial intelligence (AI) as intentional agents are problematic enough that regulators should permit the use of autonomous AI in high-stakes settings only where the AI is designed to be moral or where a liability transfer agreement will fill any gaps. This work challenges List's proposed condition. A requirement for 'moral' AI is too onerous given the technical challenges involved and the other available ways to check AI quality. Moreover, transfer agreements plausibly fill responsibility gaps only by applying independently justified norms of group responsibility attribution, such that AI calls for no unique regulatory norms.
This book proposes three liability regimes to combat the wide responsibility gaps caused by AI syste...
The increasing use of AI and autonomous systems will have revolutionary impacts on society. Despite ...
AI is currently capable of making autonomous medical decisions, like diagnosis and prognosis, withou...
The notion of “responsibility gap” with artificial intelligence (AI) was originally introduced in th...
This Comment argues that the unique relationship between manufacturers, consumers, and their reinfor...
Who is responsible for the events and consequences caused by using artificially intelligent tools, a...
The creation and commercialization of these systems raise the question of how liability risks will p...
Recent advances in artificial intelligence (AI) and machine learning have prompted discussion about ...
Artificial Intelligence (AI) has become an increasingly prominent and influential technology in mode...
There are possible artificially intelligent beings who do not differ in any morally relevant respect...
The main challenge that artificial intelligence research is facing nowadays is how to guarantee the ...
I argue for responsibility internalism. That is, moral responsibility (i.e., accountability, or bein...
This paper tackles three misconceptions regarding discussions of the legal responsibility of artific...