In the philosophy of artificial intelligence (AI), we are often warned of machines built with the best possible intentions killing everyone on the planet and, in some cases, everything in our light cone. At the same time, however, we are also told of the utopian worlds that could be created by just a single superintelligent mind. If we're ever to live in that utopia (or simply avoid dystopia), we must solve the control problem. The control problem asks how humans could retain control over an arbitrarily capable AI. Nick Bostrom and other AI researchers have proposed various theoretical solutions to the control problem. In this paper, I will not examine the empirical question of how to solve the control problem. Instead, I will ask if we ca...