The normalized maximum likelihood distribution achieves minimax coding (log-loss) regret given a fixed sample size, or horizon, n. It generally requires that n be known in advance. Furthermore, extracting the sequential predictions from the normalized maximum likelihood distribution is computationally infeasible for most statistical models. Several computationally feasible alternative strategies have been devised. We characterize the achievability of asymptotic minimaxity by horizon-dependent and horizon-independent strategies. We prove that no horizon-independent strategy can be asymptotically minimax in the multinomial case. A weaker result is given in the general case subject to a condition on the horizon-dependence of the normalized ...
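The minimax regret described above can be made concrete for the simplest parametric model. The sketch below (an illustration assuming a Bernoulli model and the standard Shtarkov-sum formulation of NML; it is not drawn from any of the abstracts listed here) computes the horizon-n NML normalizer and the corresponding minimax log-loss regret, which grows like (1/2) log n for a one-parameter model:

```python
from math import comb, log

def shtarkov_sum(n):
    """NML normalizer for the Bernoulli model at horizon n.

    Sums the maximized likelihood (k/n)^k * ((n-k)/n)^(n-k) over all
    2^n binary sequences, grouped by the number of ones k.
    """
    total = 0.0
    for k in range(n + 1):
        p = k / n
        # Python evaluates 0**0 as 1, which matches the convention here.
        total += comb(n, k) * (p ** k) * ((1 - p) ** (n - k))
    return total

# The minimax log-loss regret is the log of the Shtarkov sum.
for n in (10, 100, 1000):
    print(n, log(shtarkov_sum(n)))
```

Note how the regret depends on the horizon n: the NML strategy built for one horizon is not the NML strategy for another, which is the horizon-dependence issue the first abstract addresses.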
In this paper we present a direct and simple approach to obtain bounds on the asymptotic minimax ris...
We study online learning under logarithmic loss with regular parametric models. Hedayati and Bartlet...
Tasks such as data compression and prediction commonly require choosing a probability distribution o...
The normalized maximum likelihood model achieves the minimax coding (log-loss) regret for data of fi...
We study online prediction of individual sequences under logarithmic loss with parametric experts. T...
We study online learning under logarithmic loss with regular parametric models. In this setting, eac...
Abstract—The normalized maximized likelihood (NML) provides the minimax regret solution in universa...
Abstract—Minimax prediction of binary sequences is investigated for cases in which the predictor is...
The paper considers sequential prediction of individual sequences with log loss (online density esti...
Four related problems are considered in this thesis. The first problem consists of determining an op...
We consider the game of sequentially assigning probabilities to future data based on past observat...
Decision makers must often base their decisions on incomplete (coarse) data. Recent research has sho...
Consider estimating the mean vector θ from data N_n(θ, σ²I) with ℓ_q norm loss, q ≥ 1, when θ ...