A multi-class perceptron can learn from examples to solve problems whose answer may take one of several different values. Starting from a general formalism, we consider the learning of rules by a Hebbian algorithm and by a Monte-Carlo algorithm at high temperature. On the benchmark "prototype problem" we show that a simple rule can be more than an order of magnitude more efficient than the well-known solution, and is in fact optimal in the conventional limit. A multi-class perceptron is significantly more efficient than a more complicated architecture built from binary perceptrons.
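To make the setting concrete, here is a minimal sketch of Hebbian learning in a multi-class perceptron on a prototype-style task. The data construction, sizes, and noise level are illustrative assumptions, not taken from the paper: each class is represented by one weight vector, training simply reinforces the weight vector of an example's class, and classification picks the class with the largest overlap.

```python
import numpy as np

rng = np.random.default_rng(0)

N, C, P = 50, 4, 400  # input dimension, number of classes, training examples (illustrative)

# Synthetic prototype task (an assumption for illustration): each class has a
# random prototype, and examples are noisy copies of their class prototype.
prototypes = rng.standard_normal((C, N))
labels = rng.integers(0, C, size=P)
X = prototypes[labels] + 0.5 * rng.standard_normal((P, N))

# Hebbian rule for a multi-class perceptron: each example reinforces the
# weight vector of its own class; no error signal is used.
W = np.zeros((C, N))
for x, y in zip(X, labels):
    W[y] += x / np.sqrt(N)

# Classification: the class whose weight vector has the largest overlap wins.
pred = np.argmax(X @ W.T, axis=1)
accuracy = np.mean(pred == labels)
print(f"training accuracy: {accuracy:.2f}")
```

With well-separated prototypes, this rule classifies the training set almost perfectly despite never computing an error, which is what makes it a natural baseline against which more elaborate (e.g. Monte-Carlo) learning algorithms are compared.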