115 p. Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 1993. This thesis examines issues related to Valiant's Probably Approximately Correct (PAC) model for learning from examples. In this model, a student observes examples that consist of sample points drawn according to a fixed, unknown probability distribution and labeled by a fixed, unknown binary-valued function. Based on this empirical data, the student must select, from a set of candidate functions, a particular function, or "hypothesis," that will accurately predict the labels of future sample points. The expected mismatch between a hypothesis' prediction and the label of a new sample point is called its "generalization error." We treat a more realistic ...
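As a minimal formalization of the quantity described in that abstract, assuming the standard PAC notation (a distribution D over the sample space X, a target binary-valued function f, and a hypothesis h; these symbols are not taken from the thesis itself), the generalization error is

$$\mathrm{err}_D(h) \;=\; \Pr_{x \sim D}\bigl[\,h(x) \neq f(x)\,\bigr],$$

which for binary labels equals the expected mismatch between the hypothesis' prediction and the true label on a fresh sample point.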
We initiate the study of learning from multiple sources of limited data, each of which may be corrup...
The PAC model of learning and its extension to real-valued function classes provides a well-...
We study the interaction between input distributions, learning algorithms and finite sample sizes in...
This paper examines the problem of learning from examples in a framework that is based on, but more ...
How can we select the best performing data-driven model? How can we rigorously estimate its generali...
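One standard way to make the estimation question concrete is a hold-out estimate; the following is a generic sketch, not a method taken from the truncated abstract. Given a hypothesis h and a validation set V of labeled points set aside from training,

$$\widehat{\mathrm{err}}_{V}(h) \;=\; \frac{1}{|V|}\sum_{(x,y)\in V} \mathbf{1}\bigl[\,h(x) \neq y\,\bigr]$$

is an unbiased estimate of the generalization error of h with respect to the data-generating distribution.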
We present a new general upper bound on the number of examples required to estimate all of the expec...
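As a hedged illustration of the form such sample-size guarantees typically take (the notation below is generic and not drawn from the truncated abstract): with m examples drawn independently, one asks that, with probability at least 1 - δ, every function in the class has an empirical average within ε of its expectation,

$$\Pr\Bigl[\,\sup_{f \in \mathcal{F}} \Bigl|\, \mathbf{E}[f] - \frac{1}{m}\sum_{i=1}^{m} f(x_i) \Bigr| > \epsilon \Bigr] \;\le\; \delta,$$

with the required m growing polynomially in 1/ε, logarithmically in 1/δ, and with a capacity measure of the class F.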
We study model selection strategies based on penalized empirical loss minimization. We point out a...
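As a sketch of the generic form penalized empirical loss minimization takes (standard notation, not drawn from the abstract itself): given candidate model classes F_1, F_2, ..., the selected model minimizes empirical loss plus a complexity penalty,

$$\hat{f} \;=\; \arg\min_{k}\; \min_{f \in \mathcal{F}_k} \Bigl[\, \frac{1}{n}\sum_{i=1}^{n} \ell\bigl(f(x_i), y_i\bigr) \;+\; \mathrm{pen}_n(k) \Bigr],$$

where the penalty pen_n(k) grows with the complexity of F_k and typically decreases with the sample size n.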
Wide-ranging digitalization has made it possible to capture increasingly larger amounts of data. In ...
We consider the problem of learning accurate models from multiple sources of “nearby” data. Given d...