The “Value Alignment Problem” is the challenge of aligning the values of artificial intelligence with human values, whatever those may be, so that AI does not pose an existential risk to humanity. Existing approaches appear to conceive of the problem as “how do we ensure that AI solves the problem in the right way”, in order to avoid the possibility of AI turning humans into paperclips to “make more paperclips”, or eradicating the human race to “solve climate change”. This paper proposes an approach to Alignment rooted in the Enactive theory of mind that reconceptualises the problem as “how do we make relevant to AI what is relevant to humans”. This conceptualisation is supported by a discussion of 4E cognition and goes on to sugges...