We present a new general upper bound on the number of examples required to estimate all of the expectations of a set of random variables uniformly well. The quality of the estimates is measured using a variant of the relative error proposed by Haussler and Pollard. We also show that our bound is within a constant factor of the best possible. Our upper bound implies improved bounds on the sample complexity of learning according to Haussler's decision-theoretic model.
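The relative-error criterion is not spelled out above. As a hedged sketch in our own notation (not necessarily the paper's exact statement), assuming the Haussler–Pollard style metric with offset parameter \nu > 0, the uniform estimation guarantee for a class F of [0, 1]-valued random variables, m i.i.d. examples x_1, ..., x_m, accuracy \alpha, and confidence \delta could be written as

\[
  d_\nu(r, s) \;=\; \frac{|r - s|}{\nu + r + s},
  \qquad
  \Pr\!\Bigl[\,\exists f \in F :\;
     d_\nu\Bigl(\mathbf{E}[f],\; \tfrac{1}{m}\textstyle\sum_{i=1}^{m} f(x_i)\Bigr)
     > \alpha \Bigr] \;\le\; \delta .
\]

Here \nu, \alpha, \delta, F, and m are illustrative symbols we introduce; the paper's upper bound concerns the smallest m for which a guarantee of this kind can hold.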