Forecast accuracy is typically measured in terms of a given loss function. However, because misspecified models are used in multiple model comparisons, relative forecast rankings are loss function dependent. To address this issue, we introduce a novel criterion for forecast evaluation that utilizes the entire distribution of forecast errors. In particular, we introduce the concepts of general-loss (GL) forecast superiority and convex-loss (CL) forecast superiority, and we develop tests for GL (CL) superiority that are based on an out-of-sample generalization of the tests introduced by Linton, Maasoumi, and Whang (2005, Review of Economic Studies 72, 735–765). Our test statistics are characterized by nonstandard limit...
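To make the idea of loss-function-robust comparison concrete, below is a minimal sketch, in Python with numpy, of a sup-type dominance test over a one-parameter family of convex losses. Everything here is an assumption for illustration: the power-loss family |e|^tau stands in for the GL/CL classes, the moving-block bootstrap is one generic way of obtaining critical values under a nonstandard null limit, and all function names, the block length, and the grid over tau are hypothetical. This is not the authors' actual procedure.

```python
import numpy as np

def superiority_stat(e1, e2, taus):
    """Sup, over a power-loss family L_tau(e) = |e|**tau, of the scaled
    mean loss differential between model 1 and model 2. A large positive
    sup signals at least one loss in the family under which model 1 loses."""
    n = len(e1)
    return max(
        np.sqrt(n) * (np.abs(e1) ** tau - np.abs(e2) ** tau).mean()
        for tau in taus
    )

def block_bootstrap_pvalue(e1, e2, taus, block=10, reps=499, seed=1):
    """Moving-block bootstrap p-value, with the null imposed by recentering
    each loss-differential series (a generic device when the limiting
    distribution of a sup statistic is nonstandard)."""
    rng = np.random.default_rng(seed)
    n = len(e1)
    stat = superiority_stat(e1, e2, taus)
    # recentered loss-differential series, one per loss in the family
    centered = {}
    for tau in taus:
        d = np.abs(e1) ** tau - np.abs(e2) ** tau
        centered[tau] = d - d.mean()
    n_blocks = -(-n // block)  # ceiling division
    exceed = 0
    for _ in range(reps):
        # resample overlapping blocks to preserve serial dependence
        starts = rng.integers(0, n - block + 1, size=n_blocks)
        idx = np.concatenate([np.arange(s, s + block) for s in starts])[:n]
        boot = max(np.sqrt(n) * centered[tau][idx].mean() for tau in taus)
        exceed += boot >= stat
    return (1 + exceed) / (1 + reps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 500
    e1 = rng.standard_normal(n)           # errors from forecast model 1
    e2 = 1.1 * rng.standard_normal(n)     # model 2: slightly noisier errors
    taus = np.linspace(1.0, 3.0, 21)      # grid over the convex loss family
    print(block_bootstrap_pvalue(e1, e2, taus))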
In situations where a sequence of forecasts is observed, a common strategy is to examine "rationalit...
Empirical tests of forecast optimality have traditionally been conducted under the assumption of mea...
We argue that the current framework for predictive ability testing (e.g., West, 1996) is not necessa...
Recent work has emphasized the importance of evaluating estimates of a statistical functional (such ...
A rapidly growing literature emphasizes the importance of evaluating the forecast accuracy of empiri...
A nonparametric method for comparing multiple forecast models is developed and implemented. The hypo...
This paper considers the evaluation of forecasts of a given statistical functional, such as a mean, ...
We develop tests for out-of-sample forecast comparisons based on loss functions that contain shape p...
Heteroskedasticity is a common feature in empirical time series analysis, and in this paper we consi...
This article surveys the most important developments in volatility forecast comparison and model sel...
In recent years, an impressive body of research on predictive accuracy testing and model comparison ...
We consider the issue of forecast failure (or breakdown) and propose methods to assess retrospective...
Evaluation of forecast optimality in economics and finance has almost exclusively been conducted und...
This paper develops bootstrap methods for testing whether, in a finite sample, competing out-of-sam...