Of interest is comparing the out-of-sample forecasting performance of two competing models in the presence of possible instabilities. To that effect, we suggest using simple structural change tests, sup-Wald and UDmax, for changes in the mean of the loss differences. It is shown that the Giacomini and Rossi (2010) tests have undesirable power properties: power that can be low and non-increasing as the alternative moves further from the null hypothesis. In contrast, our statistics are shown to have higher and monotonic power, especially the UDmax version. We use their empirical examples to illustrate the practical relevance of the issues raised.
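For concreteness, a minimal sketch of a single-break sup-Wald statistic for a mean shift in the loss-difference series d_t = L(e_{1t}) - L(e_{2t}) is given below. The Newey-West (Bartlett-kernel) long-run variance, the 15% trimming, and all function names are illustrative assumptions rather than the paper's exact construction, which also covers the UDmax version allowing multiple breaks.

```python
import numpy as np

def sup_wald_mean_shift(d, trim=0.15, bandwidth=None):
    """Sup-Wald test for a single shift in the mean of the loss-difference
    series d_t.  Illustrative sketch only: uses a simple Bartlett-kernel
    (Newey-West) long-run variance estimated from the demeaned full sample;
    the paper's exact construction may differ."""
    d = np.asarray(d, dtype=float)
    T = d.size
    if bandwidth is None:
        bandwidth = int(np.floor(4 * (T / 100.0) ** (2.0 / 9.0)))
    # Newey-West long-run variance of the demeaned loss differences.
    u = d - d.mean()
    lrv = u @ u / T
    for j in range(1, bandwidth + 1):
        w = 1.0 - j / (bandwidth + 1.0)
        lrv += 2.0 * w * (u[j:] @ u[:-j]) / T
    # Search over break dates in the trimmed interval [trim*T, (1-trim)*T].
    lo, hi = int(np.floor(trim * T)), int(np.ceil((1 - trim) * T))
    stats = []
    for k in range(lo, hi):
        m1, m2 = d[:k].mean(), d[k:].mean()
        # Wald statistic for H0: equal mean loss difference before and after k.
        var_diff = lrv * (1.0 / k + 1.0 / (T - k))
        stats.append((m1 - m2) ** 2 / var_diff)
    k_hat = lo + int(np.argmax(stats))
    return max(stats), k_hat

# Simulated example: the loss differences favour model 2 only after t = 150.
rng = np.random.default_rng(0)
d = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(0.8, 1.0, 150)])
stat, k_hat = sup_wald_mean_shift(d)
print(f"sup-Wald = {stat:.2f}, estimated break at t = {k_hat}")
```

The supremum over candidate break dates is what gives the test power against instabilities at an unknown date; critical values are those of the standard sup-Wald distribution for one coefficient, not the chi-squared distribution used for a known break date.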