Artificial Intelligence (AI) is at a crucial point in its development: stable enough to be used in production systems, and increasingly pervasive in our lives. What does that mean for its safety? In his book Normal Accidents, the sociologist Charles Perrow proposed a framework to analyze new technologies and the risks they entail. He showed that major accidents are nearly unavoidable in complex systems with tightly coupled components if they are run long enough. In this essay, we apply and extend Perrow's framework to AI to assess its potential risks. Today's AI systems are already highly complex, and their complexity is steadily increasing. As they become more ubiquitous, different algorithms will interact directly, leading to tightly coup...
In this short consensus paper, we outline risks from upcoming, advanced AI systems. We examine large...
Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If ...
Artificial intelligence is part of our daily lives. How can we address its limitations and guide its...
As AI technologies increase in capability and ubiquity, AI accidents are becoming more common. Based...
Artificially intelligent (AI) systems have ushered in a transformative era across various domains, y...
If the intelligence of artificial systems were to surpass that of humans significantly, this would c...
What should we do when artificial intelligence (AI) goes wrong? AI has huge potential to improve the...
In AI safety research, the median timing of AGI arrival is often taken as a reference point, which v...
This paper discusses the development of AI and the threat posed by the theoretical achievement of ar...
In AI safety research, the median timing of AGI creation is often taken as a reference point, which ...
In one aspect of our life or another, today we all live with AI. For example, the mech...
Efforts to develop autonomous and intelligent systems (AIS) have exploded across a range of settings...