Abstract. Littlestone developed a simple deterministic on-line learning algorithm for learning k-literal disjunctions. This algorithm (called Winnow) keeps one weight for each of the n variables and does multiplicative updates to its weights. We develop a randomized version of Winnow and prove bounds for an adaptation of the algorithm for the case when the disjunction may change over time. In this case a possible target disjunction schedule T is a sequence of disjunctions (one per trial) and the shift size is the total number of literals that are added/removed from the disjunctions as one progresses through the sequence. We develop an algorithm that predicts nearly as well as the best disjunction schedule for an arbitrary sequence of exampl...
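For orientation, the multiplicative-update scheme the abstract describes (one weight per variable, weights promoted or demoted by a constant factor on each mistake) can be sketched as follows. This is an illustrative sketch only: the promotion factor `alpha = 2`, the threshold `n`, and the function names are conventional choices, not taken from the paper.

```python
import itertools

def winnow_predict(w, x, threshold):
    # Predict 1 iff the total weight of the active (x_i = 1) variables
    # reaches the threshold.
    return 1 if sum(wi for wi, xi in zip(w, x) if xi) >= threshold else 0

def winnow_learn(examples, n, alpha=2.0, w=None):
    """One on-line pass of Winnow; returns updated weights and mistake count."""
    if w is None:
        w = [1.0] * n          # all n weights start at 1
    threshold = float(n)       # a standard threshold choice
    mistakes = 0
    for x, y in examples:
        y_hat = winnow_predict(w, x, threshold)
        if y_hat != y:
            mistakes += 1
            # Multiplicative update: promote the active weights on a false
            # negative, demote them on a false positive.
            factor = alpha if y == 1 else 1.0 / alpha
            w = [wi * factor if xi else wi for wi, xi in zip(w, x)]
    return w, mistakes

# Example: learn the 2-literal disjunction x2 OR x4 over n = 8 variables.
n = 8
data = [(x, 1 if (x[1] or x[3]) else 0)
        for x in itertools.product([0, 1], repeat=n)]
w, mistakes = winnow_learn(data, n)
```

Because only weights of active variables change, and only on mistakes, the total number of mistakes for a k-literal disjunction grows as O(k log n) rather than with n, which is the attribute-efficiency property the abstract refers to.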
Abstract. We present efficient on-line algorithms for learning unions of a constant number of tree pat...
We investigate improvements of AdaBoost that can exploit the fact that the weak hypotheses are one-s...
Abstract. It is easy to design on-line learning algorithms for learning k out of n variable monotone d...
We give an adversary strategy that forces the Perceptron algorithm to make Ω(kN) mistakes...
Given some arbitrary distribution D over {0, 1}^n and arbitrary target function c∗, the problem of ag...
Littlestone's Winnow algorithm for learning disjunctions of Boolean attributes where most attri...
Abstract. We reduce learning simple geometric concept classes to learning disjunctions over exponentia...
We present three new filtering algorithms for the Disjunctive constraint that all have a linear runn...
In most on-line learning research the total on-line loss of the algorithm is compared to the total l...
We give a deterministic algorithm for testing satisfiability of formulas in conjunctive normal form ...
The paper studies machine learning problems where each example is described using a set of Boolean f...
In this paper, we examine on-line learning problems in which the target concept is allowed to change...
Abstract. We study the problem of deterministically predicting boolean values by combining t...
Valiant (1984) and others have studied the problem of learning various classes of Boolean functions...