Abstract. In many security applications a pattern recognition system faces an adversarial classification problem, in which an intelligent, adaptive adversary modifies patterns to evade the classifier. Several strategies have recently been proposed to make a classifier harder to evade, but they are based only on qualitative and intuitive arguments. In this work, we consider a strategy that hides information about the classifier from the adversary by introducing some randomness into the decision function. We focus on an implementation of this strategy in a multiple classifier system, a classification architecture widely used in security applications. We provide formal support for this strategy, based on an anal...