In symbolic Machine Learning, the incremental setting makes it possible to refine or revise the available model when new evidence proves it inadequate, instead of learning a new model from scratch. In particular, specialization operators revise the model when it covers a negative example. While specialization can be obtained by introducing negated preconditions into concept definitions, the state of the art in Inductive Logic Programming provides only specialization operators that negate single literals. This simplification makes the operator unable to find a solution in some interesting real-world cases. This paper proposes a more powerful specialization operator for Datalog Horn clauses, which can negate conjunctions of preconditions...
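As a minimal illustration of the difference (the predicates and clauses below are hypothetical, chosen only for readability, and written in Prolog-style syntax rather than the paper's own formalism): suppose a learned definition covers a negative example and must be specialized.

    % Hypothetical concept definition that covers a negative example.
    eligible(X) :- applicant(X), resident(X).

    % Single-literal negation (what state-of-the-art operators allow):
    % exclude every applicant who is a minor.
    eligible(X) :- applicant(X), resident(X), \+ minor(X).

    % Negating a conjunction (the kind of refinement considered here):
    % exclude only applicants who are both unemployed and in default,
    % a condition that no single negated literal can express.
    eligible(X) :- applicant(X), resident(X), \+ (unemployed(X), in_default(X)).

When the negative examples are characterized only by the co-occurrence of several conditions, as in the last clause, an operator restricted to negating single literals either over-specializes (discarding positive examples) or fails to find any consistent refinement.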