We study time-inconsistent recursive stochastic control problems, i.e., problems for which Bellman's principle of optimality does not hold. For this class of problems, classical optimal controls may fail to exist or may not be relevant in practice, and dynamic programming is not easily applicable. We therefore define optimality through a game-theoretic framework by means of subgame-perfect equilibrium: we interpret the preferences of the decision-maker which, realistically, change inconsistently over time, as distinct players in a game for which we seek a Nash equilibrium. Our approach relies on the stochastic (Pontryagin) maximum principle: we adapt the classical spike variation technique to obtain a characterization of equilibrium strategies.
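As an illustrative sketch (the notation below is not taken from the abstract itself), the spike variation technique typically leads to an equilibrium condition of the following form. For an equilibrium strategy $\hat{u}$, state $\hat{X}$, and the spike perturbation $u^{t,\varepsilon,v}$ that equals a control value $v$ on $[t, t+\varepsilon)$ and coincides with $\hat{u}$ elsewhere, one requires, for every $t$ and admissible $v$,

\[
\liminf_{\varepsilon \downarrow 0} \frac{J\big(t, \hat{X}_t; \hat{u}\big) - J\big(t, \hat{X}_t; u^{t,\varepsilon,v}\big)}{\varepsilon} \;\ge\; 0,
\]

where $J(t, x; u)$ denotes the (possibly time-inconsistent) recursive cost functional evaluated at time $t$ and state $x$ under the control $u$. In words, no player (i.e., no instantaneous incarnation of the decision-maker) can improve the cost to first order by deviating on a vanishingly small time interval.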