We give an adversary strategy that forces the Perceptron algorithm to make (N - k + 1)/2 mistakes when learning k-literal disjunctions over N variables. Experimentally we see that even for simple random data, the number of mistakes made by the Perceptron algorithm grows almost linearly with N, even if the number k of relevant variables remains a small constant. In contrast, Littlestone's algorithm Winnow makes at most O(k log N) mistakes for the same problem. Both algorithms use linear threshold functions as their hypotheses. However, Winnow does multiplicative updates to its weight vector instead of the additive updates of the Perceptron algorithm.

1 Introduction

This paper addresses the familiar problem of predicting with a l...
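As a concrete illustration of the contrast drawn in the abstract, here is a minimal sketch of the two update rules in Python. The function names, the 0/1 label convention, the thresholds, and the promotion factor alpha are illustrative assumptions, not the paper's exact formulation; the essential point is that both rules predict with a linear threshold function and change the weights only on a mistake.

```python
import numpy as np

def perceptron_step(w, x, y, theta=0.5):
    # Additive update: on a mistake, add x to the weights for a missed
    # positive and subtract x for a false positive.
    y_hat = int(w @ x > theta)
    if y_hat != y:
        w += x if y == 1 else -x
    return int(y_hat != y)  # 1 if this round was a mistake

def winnow_step(w, x, y, theta, alpha=2.0):
    # Multiplicative update: on a missed positive, multiply the weights
    # of the active variables by alpha; on a false positive, divide them.
    y_hat = int(w @ x > theta)
    if y == 1 and y_hat == 0:
        w[x == 1] *= alpha
    elif y == 0 and y_hat == 1:
        w[x == 1] /= alpha
    return int(y_hat != y)
```

A common convention is to start Winnow's weights at 1 (they stay positive throughout) with threshold theta = N/2, while the Perceptron's weights may start at zero and move additively in either direction.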
Valiant (1984) and others have studied the problem of learning various classes of Boolean functions...
In this paper we consider the problem of learning a linear threshold function (a halfspace in n dime...
Perceptron-like learning rules are known to require exponentially many correction steps in order to ...
We give an adversary strategy that forces the Perceptron algorithm to make Ω(kN) mistakes...
Plan for today: Last time we looked at the Winnow algorithm, which has a very nice mistake-bound for...
The problem of learning linear discriminant concepts can be solved by various mistake-driven update ...
Kernel-based linear-threshold algorithms, such as support vector machines and Perceptron-like algori...
We present a new type of multi-class learning algorithm called a linear-max algorithm. Linear-max alg...
Abstract. We analyze the performance of the widely studied Perceptron and Winnow algorithms for learn...
Abstract. It is easy to design on-line learning algorithms for learning k out of n variable monotone d...
This paper addresses the familiar problem of predicting with a linear threshold function. The instan...
Abstract. We reduce learning simple geometric concept classes to learning disjunctions over exponentia...
Abstract. Littlestone developed a simple deterministic on-line learning algorithm for learning k-lit...
The paper studies machine learning problems where each example is described using a set of Boolean f...
In theory, the Winnow multiplicative update has certain advantages over the Perceptron additive upda...
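Several of the excerpts above return to the same comparison between Winnow's multiplicative update and the Perceptron's additive one, so a small end-to-end experiment may help make it concrete. The sketch below is an assumption-laden illustration (random sparse instances, a fixed number of rounds, Winnow with promotion factor 2 and threshold N/2), not a reproduction of any paper's experiments; it simply counts online mistakes of both rules as N grows while k stays fixed.

```python
import numpy as np

def run_online(N, k, rounds, rng):
    # Count online mistakes of the Perceptron (additive updates) and
    # Winnow (multiplicative updates) on random Boolean examples labeled
    # by a k-literal monotone disjunction over N variables.
    relevant = rng.choice(N, size=k, replace=False)
    w_p = np.zeros(N)            # Perceptron weights
    w_w = np.ones(N)             # Winnow weights (positive, start uniform)
    mistakes_p = mistakes_w = 0
    for _ in range(rounds):
        x = (rng.random(N) < 0.2).astype(float)  # sparse random instance
        y = int(x[relevant].any())               # label by the disjunction
        if (w_p @ x > 0.5) != bool(y):           # Perceptron mistake
            w_p += x if y else -x                # additive correction
            mistakes_p += 1
        if (w_w @ x > N / 2) != bool(y):         # Winnow mistake
            if y:
                w_w[x == 1] *= 2.0               # promote active variables
            else:
                w_w[x == 1] /= 2.0               # demote active variables
            mistakes_w += 1
    return mistakes_p, mistakes_w

rng = np.random.default_rng(0)
for N in (50, 100, 200, 400):
    mp, mw = run_online(N, k=3, rounds=5000, rng=rng)
    print(f"N={N:4d}  Perceptron mistakes={mp:5d}  Winnow mistakes={mw:4d}")
```

Under a setup of this kind one expects the Perceptron's mistake count to grow with N while Winnow's grows only logarithmically in N, which is the qualitative behaviour the abstract reports.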