I argue that fictional models, construed as models that misrepresent certain ontological aspects of their target systems, can nevertheless explain why the latter exhibit certain behaviour. They can do this by accurately representing whatever it is that the behaviour counterfactually depends on. However, we should be sensitive to the difference between two explanatory questions: ‘why does certain behaviour occur?’ vs. ‘why does the counterfactual dependency invoked to answer that question actually hold?’. With this distinction in mind, I argue that whilst fictional models can answer the first sort of question, they do so in an unmysterious way (contrary to what one might initially think about such models). Moreover, I claim that the second ...