May lethal autonomous weapons systems—‘killer robots’—be used in war? The majority of writers argue against their use, and those who have argued in favour have done so on a consequentialist basis. We defend the moral permissibility of killer robots, but on the basis of the non-aggregative structure of right assumed by Just War theory. This is necessary because the most important argument against killer robots, the responsibility trilemma proposed by Rob Sparrow, makes the same assumptions. We show that the crucial moral question is not one of responsibility. Rather, it is whether the technology can satisfy the requirements of fairness in the redistribution of risk. Not only is this possible in principle, but some killer robots will actually satisfy these requirements.