Drawing on `interpretational' accounts of scientific representation, I argue that the use of so-called `toy models' poses no particular philosophical puzzle. More specifically, I argue that once one gives up the idea that models are accurate representations of their targets only if they are appropriately similar, then simple and highly idealised models can be accurate in the same way that more complex models can be. Their differences turn on trading precision for generality, but, if they are appropriately interpreted, toy models should nevertheless be considered accurate representations. A corollary of my discussion is a novel way of thinking about idealisation more generally: idealised models may distort features of their targets,...