We explore the effects of over-specificity in learning algorithms by investigating the behavior of a student, suited to learn optimally from a teacher B, when it instead learns from a teacher B' ≠ B. We consider only the supervised, on-line learning scenario, with teachers selected from a particular family. We find that, in the general case, applying the optimal algorithm to the wrong teacher produces a residual generalization error, even when the correct teacher is harder to learn. By imposing mild conditions on the form of the learning algorithm, we obtain an approximation for this residual generalization error. Simulations carried out in finite networks validate the estimate.
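To make the mismatch scenario concrete, here is a minimal simulation sketch, not the paper's optimal algorithm or its analytical approximation: the student updates with a plain error-driven perceptron rule (standing in for a rule tuned to a noiseless teacher B), while the examples are actually labeled by a noisy teacher B' that flips labels with probability p_flip. All names, the choice of noise as the mismatch, and the parameter values are illustrative assumptions; the generalization error is measured as arccos(ρ)/π, where ρ is the student-teacher overlap.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200          # input dimension
P = 200 * N      # number of on-line examples (alpha = P/N = 200)
p_flip = 0.1     # label-flip rate of the actual teacher B' (the mismatch)

B_prime = rng.standard_normal(N)
B_prime /= np.linalg.norm(B_prime)      # actual teacher direction B'
J = rng.standard_normal(N) * 1e-3       # student weight vector, small random start

def gen_error(J, B):
    """Generalization error arccos(rho)/pi for a perceptron student/teacher pair."""
    rho = J @ B / (np.linalg.norm(J) * np.linalg.norm(B))
    return np.arccos(np.clip(rho, -1.0, 1.0)) / np.pi

for t in range(1, P + 1):
    xi = rng.standard_normal(N)                  # random input pattern
    sigma = np.sign(B_prime @ xi)                # clean label from B'
    if rng.random() < p_flip:                    # B' is a noisy teacher:
        sigma = -sigma                           #   label flipped with prob p_flip
    if np.sign(J @ xi) != sigma:                 # error-driven update, tuned for a
        J += (1.0 / np.sqrt(N)) * sigma * xi     #   noiseless teacher (no annealing)
    if t % (50 * N) == 0:
        print(f"alpha = {t / N:5.0f}   eps_g = {gen_error(J, B_prime):.3f}")
```

With labels flipped at a fixed rate and no annealing of the learning rate, the measured error typically stops improving and settles at a nonzero plateau; this is the qualitative behavior the abstract refers to as a residual generalization error.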
Zero temperature Gibbs learning is considered for a connected committee machine with K hidden units....
The performance of on-line algorithms for learning dichotomies is studied. In on-line learning, the ...
We developed a parallel strategy for learning optimally specific realizable rules by perceptrons, in...
A variational approach to the study of learning a linearly separable rule by a single layer perceptr...
We analyze the generalization performance of a student in a model composed of linear perc...
We present a method for determining the globally optimal on-line learning rule for a soft committee ...
We study learning from single presentation of examples (incremental or on-line learning) in single-...
On-line learning of a rule given by an N-dimensional Ising perceptron is considered for the case wh...
A linearly separable Boolean function is learned by a diluted perceptron with optimal stability. A d...
In this paper we address the question of how closely everyday human teachers match a theore...
It has been shown that, when used for pattern recognition with supervised lear...