This thesis makes contributions to two problems in learning theory: prediction with expert advice and learning mixtures of Gaussians. The problem of prediction with expert advice can be cast as a sequential game between an algorithm and an adversary as follows. At each time step, an algorithm chooses one of n options (or experts) and the adversary sets a cost for each expert. The algorithm's goal is to minimize its regret, i.e., its cost relative to the best expert in hindsight. The celebrated multiplicative weights algorithm is known to be optimal if the game is terminated at a fixed, known time and the number of experts is large. Optimal algorithms are also known when the number of experts is 2, 3, or 4. If the game does not term...
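The expert-advice game described above can be sketched in a few lines. This is a minimal illustration of the standard multiplicative weights update, not the thesis's own algorithm; the function name and the learning rate `eta` are placeholders chosen for the example.

```python
import math

def multiplicative_weights(cost_rounds, eta=0.5):
    """Play the expert-advice game against a fixed sequence of cost
    vectors (one list of per-expert costs in [0, 1] per round) and
    return the algorithm's expected cumulative cost."""
    n = len(cost_rounds[0])
    weights = [1.0] * n  # start with uniform weight on every expert
    total_cost = 0.0
    for costs in cost_rounds:
        w_sum = sum(weights)
        probs = [w / w_sum for w in weights]
        # expected cost of sampling an expert from the current distribution
        total_cost += sum(p * c for p, c in zip(probs, costs))
        # exponential update: experts that incurred high cost lose weight
        weights = [w * math.exp(-eta * c) for w, c in zip(weights, costs)]
    return total_cost
```

With two experts where expert 0 always incurs cost 0, the algorithm's weight on the bad expert decays geometrically, so its total cost stays bounded by a constant while the horizon grows, matching the low-regret behavior the abstract refers to.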
This dissertation considers a particular aspect of sequential decision making under uncertainty in w...
We consider the problem of identifying the parameters of an unknown mixture of two arbitrary d-dime...
Abstract. We introduce a new formal model in which a learning algorithm must combine a collection of...
We consider the problem of prediction with expert advice in the setting where a forecaster is presen...
We provide an algorithm for properly learning mixtures of two single-dimensional Gaussians without ...
In this paper, we examine on-line learning problems in which the target concept is allowed to change...
We investigate the problem of minimizing the excess generalization error with ...
In this dissertation, we explore two fundamental sets of inference problems arising in machine learn...
Abstract. We propose and analyze a new vantage point for the learning of mixtures of Gaussians: nam...
We present an extension to the Mixture of Experts (ME) model, where the individual experts are Gauss...
In the first part of this thesis, we examine the computational complexity of three fundamental stati...
The goal of this thesis is to develop a mathematical framework for optimal, accurate, and affordable...