In AI safety research, the median timing of AGI arrival is often taken as a reference point; various polls predict it to occur in the middle of the 21st century. For maximum safety, however, we should determine the earliest plausible time of Dangerous AI arrival. Such Dangerous AI could be an AGI capable of acting fully independently in the real world and of winning most real-world conflicts with humans, an AI helping humans to build weapons of mass destruction, or a nation state coupled with an AI-based government system. In this article, I demonstrate that the earliest timing of Dangerous AI, corresponding to a 10 per cent probability of its arrival, is before 2030. Several partly independent sources of information are in agreement:...