We consider stochastic bandit problems with a continuum set of arms, where the expected reward is a continuous and unimodal function of the arm. No further assumption is made regarding the smoothness or the structure of the expected reward function. We propose Stochastic Pentachotomy (SP), an algorithm for which we derive finite-time regret upper bounds. In particular, we show that, for any expected reward function µ that behaves as µ(x) = µ(x*) − C|x − x*|^ξ locally around its maximizer x* for some ξ, C > 0, the SP algorithm is order-optimal, i.e., its regret scales as O(√(T log(T))) when the time horizon T grows large. This regret scaling is achieved without knowledge of ξ and C. Our algorithm is based on asymptotically optimal s...
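To make the interval-narrowing idea behind such a pentachotomy scheme concrete, here is a minimal toy sketch: sample five equally spaced points of the interval under noisy feedback, average the rewards, and keep the subinterval around the empirical best point. This is an illustrative sketch only, not the SP algorithm of the paper; the function name, the Gaussian noise model, and all parameters are assumptions.

```python
import random

def pentachotomy(f, a, b, rounds=30, samples=200, noise=0.05, seed=0):
    """Shrink [a, b] around the maximizer of a unimodal f observed with noise.

    Each round samples 5 equally spaced points many times, averages the
    noisy rewards, and keeps the subinterval adjacent to the empirical best.
    Toy sketch only: real algorithms use adaptive, anytime statistical tests.
    """
    rng = random.Random(seed)
    for _ in range(rounds):
        xs = [a + i * (b - a) / 4 for i in range(5)]
        means = [
            sum(f(x) + rng.gauss(0.0, noise) for _ in range(samples)) / samples
            for x in xs
        ]
        best = max(range(5), key=means.__getitem__)
        # keep the neighbourhood of the empirical best point; by unimodality
        # the maximizer is unlikely to lie in the discarded subintervals
        a = xs[max(best - 1, 0)]
        b = xs[min(best + 1, 4)]
    return (a + b) / 2
```

For example, on the unimodal reward x ↦ −(x − 0.3)², the returned point lands near the maximizer 0.3. Note the fixed per-round sample budget here is a crude stand-in for the sequential tests the abstract alludes to.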
We consider the finite-horizon multi-armed bandit problem under the standard stochastic assumption o...
Regret minimisation in stochastic multi-armed bandits is a well-studied problem, for which several o...
In this paper, we consider stochastic multi-armed bandits (MABs) with heavy-tailed rewards, whose p-...
We consider stochastic bandit problems with a continuous set of arms and where...
We consider stochastic multi-armed bandits where the expected reward is a unimodal function over pa...
This paper introduces and addresses a wide class of stochastic bandit problems...
In the classical multi-armed bandit problem, d arms are available to the decis...
We consider stochastic multi-armed bandit problems where the expected reward i...
We consider stochastic multi-armed bandit problems where the expected reward is a Lipschitz function ...
We consider a generalization of stochastic bandits where the set of arms, $\cX...
In this thesis we address the multi-armed bandit (MAB) problem with stochastic rewards and correlate...
We consider a generalization of stochastic bandit problems where the set of ar...
In the classical stochastic k-armed bandit problem, in each of a sequence of rounds, a decision make...
We consider a stochastic bandit problem with infinitely many arms. In this setting, the learner has...