In this work we study the learnability of stochastic processes with respect to the conditional risk, i.e., the existence of a learning algorithm whose next-step performance improves with the amount of observed data. We introduce a notion of pairwise discrepancy between conditional distributions at different time steps and show how certain properties of these discrepancies can be used to construct a successful learning algorithm. Our main results are two theorems that establish criteria for learnability for many classes of stochastic processes, including all special cases studied previously in the literature.
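The abstract does not spell out the discrepancy's exact definition; as one concrete instance, a pairwise discrepancy between two discrete conditional next-step distributions can be taken to be their total-variation distance. The sketch below is a minimal illustration under that assumption; the function name `pairwise_discrepancy` and the example distributions are hypothetical.

```python
import numpy as np

def pairwise_discrepancy(p, q):
    """Total-variation distance between two discrete conditional
    distributions p and q (nonnegative arrays summing to 1)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    # TV distance: half the L1 distance between the probability vectors.
    return 0.5 * np.abs(p - q).sum()

# Hypothetical next-step conditional distributions at two time steps.
p_t = [0.7, 0.2, 0.1]
p_s = [0.5, 0.3, 0.2]
print(pairwise_discrepancy(p_t, p_s))  # 0.2
```

Small pairwise discrepancies across time steps mean past observations remain informative about the next step, which is the intuition behind using them to certify learnability.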