In this thesis, we used deep reinforcement learning to train autonomous agents and evaluated the impact of gradually increasing the complexity of the training environment over time, compared to training at a fixed complexity. We also investigated whether using a pre-trained agent as a starting point for training in an environment of a different complexity offers an advantage over starting from an untrained agent. The scope was limited to training and analyzing agents playing a variant of the 2D game Snake, in which obstacles were placed at random on the map; complexity corresponds to the number of obstacles, and performance was measured as the number of fruits eaten. The results showed benefits in overall performance for the agent trained in environments of increasing complexity.