The performance of on-line algorithms for learning dichotomies is studied. In on-line learning, the number of examples P is equivalent to the learning time, since each example is presented only once. The learning curve, or generalization error as a function of P, depends on the schedule at which the learning rate is lowered. For a target that is a perceptron rule, the learning curve of the perceptron algorithm can decrease as fast as P^-1, if the schedule is optimized. If the target is not realizable by a perceptron, the perceptron algorithm does not generally converge to the solution with lowest generalization error. For the case of unrealizability due to a simple output noise, we propose a new on-line algorithm for a ...
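The schedule dependence described in the abstract above can be illustrated with a minimal sketch: an on-line perceptron trained on single-pass examples from a teacher perceptron, with the learning rate annealed as eta_t ~ 1/t. All names, the dimension, and the number of examples here are illustrative assumptions, not taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50      # input dimension (assumed for illustration)
P = 2000    # number of on-line examples; each is presented only once

# Hypothetical teacher perceptron defining the target dichotomy.
w_teacher = rng.standard_normal(N)
w_teacher /= np.linalg.norm(w_teacher)

w = np.zeros(N)  # student weights

def generalization_error(w, w_t):
    """Disagreement probability for isotropic Gaussian inputs:
    eps = arccos(overlap) / pi, where overlap is the normalized
    dot product between student and teacher weight vectors."""
    nw = np.linalg.norm(w)
    if nw == 0.0:
        return 0.5
    overlap = np.clip(w @ w_t / nw, -1.0, 1.0)
    return np.arccos(overlap) / np.pi

for t in range(1, P + 1):
    x = rng.standard_normal(N)
    y = np.sign(w_teacher @ x)      # noise-free perceptron target
    eta = 1.0 / t                   # annealed learning-rate schedule
    if np.sign(w @ x) != y:         # perceptron: update on mistakes only
        w += eta * y * x

err = generalization_error(w, w_teacher)
print(err)
```

The 1/t schedule is one common annealing choice; the abstract's point is that the asymptotic decay of the learning curve depends on how this schedule is tuned.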
We analyze the generalization ability of a simple perceptron acting on a structured input distributi...
Abstract: We study on-line learning in the linear regression framework. Most of the performance bounds...
We study on-line gradient-descent learning in multilayer networks analytically and numerically. The ...
We study on-line learning of a linearly separable rule with a simple perceptron. Training utilizes a...
Plan for today: Last time we looked at the Winnow algorithm, which has a very nice mistake-bound for...
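The mistake-driven behaviour behind Winnow's mistake bound can be sketched as follows. This is a Winnow2-style variant (multiplicative promotion and demotion) learning a monotone disjunction; the dimension, target, and the parameters alpha = 2 and theta = n are illustrative assumptions, not details from the lecture snippet above.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100   # number of Boolean attributes (assumed for illustration)
k = 5     # target: monotone disjunction over the first k attributes

w = np.ones(n)       # Winnow keeps positive weights, initialized to 1
theta = float(n)     # decision threshold
alpha = 2.0          # multiplicative update factor

mistakes = 0
for _ in range(3000):
    x = rng.integers(0, 2, size=n)      # random Boolean example
    y = 1 if x[:k].any() else 0         # label: x1 OR ... OR xk
    yhat = 1 if w @ x >= theta else 0
    if yhat != y:
        mistakes += 1
        if y == 1:                      # false negative: promote active weights
            w[x == 1] *= alpha
        else:                           # false positive: demote active weights
            w[x == 1] /= alpha

print(mistakes)
```

The multiplicative updates are what give Winnow its mistake bound of order k log n for k-literal disjunctions over n attributes, as opposed to the perceptron's dependence on n.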
We study learning from single presentation of examples (incremental or on-line learning) in single-...
In this paper we examine on-line learning within a statistical framework. First, we study the cases wit...
In this dissertation, we consider techniques to improve the performance and applicability of algorit...
A new algorithm for on-line learning linear-threshold functions is proposed which efficiently combin...
Abstract: Within the context of Valiant's protocol for learning, the Perceptron algorithm is sh...
We extend the geometrical approach to the Perceptron and show that, given n examples, learning is of...
In this paper we show how to extract a hypothesis with small risk from the ensemble of hypotheses ge...
Much of modern learning theory has been split between two regimes: the classical offline setting, wh...