Predictive Distributions

 

Compute out-of-sample predictive distributions for the observed variables. The distributions can be estimated from the initial parameter values and the posterior mode values, or from a sample from the prior or the posterior distribution of the parameters. The number of parameter draws used is in the latter case determined by the selected maximum number of posterior draws to use for prediction on the posterior sampling frame on the Options tab. The number of simulation paths per parameter value is determined in the forecasting frame on the Miscellaneous tab. Options in the forecasting frame also determine the maximum forecast horizon and whether the paths should be adjusted such that their mean value equals the population mean.
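
As a rough illustration of how such a simulation-based predictive distribution is put together, the sketch below simulates forecast paths from a generic linear state-space model (ignoring any deterministic or exogenous terms); the matrices, dimensions and the end-of-sample state are random stand-ins rather than output from the program, and the outer loop over parameter draws from the prior or the posterior is omitted.

  % Hedged sketch (MATLAB): simulate predictive paths for one parameter value from
  %   y(t) = H'*xi(t) + w(t),  xi(t) = F*xi(t-1) + B0*eta(t),
  % with n observed variables, r state variables and q structural shocks.
  n = 3; r = 5; q = 4; h = 8; P = 500;                   % dimensions, horizon, paths per draw
  H = randn(r,n); F = 0.9*eye(r); B0 = randn(r,q); R = 0.01*eye(n);   % placeholder matrices
  xiT = zeros(r,1);                                      % end-of-sample state estimate (placeholder)
  paths = zeros(n,h,P);
  for p = 1:P
    xi = xiT;
    for t = 1:h
      xi = F*xi + B0*randn(q,1);                         % propagate the state with shock draws
      paths(:,t,p) = H'*xi + chol(R,'lower')*randn(n,1); % add measurement errors
    end
  end

In YADA this inner simulation is repeated for each parameter draw, with the number of draws and the number of paths per draw taken from the Options and Miscellaneous tabs, respectively.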

Distributions can be estimated for unconditional and conditional predictions. Moreover, the observed variables can either be in their original form, annualized using the annualization data in the data construction file, or transformed using the observed variable transformation functions from the same file. Conditional forecasts can be based on direct manipulation of certain structural shocks, on the approach of Waggoner and Zha (1999), which restricts the moments of the structural shocks over the conditioning sample, or on a combination of these two approaches. The third method means that a subset of all the shocks, conditional on the remaining shocks, has a distribution such that the conditioning assumptions are satisfied, while the remaining shocks have their usual distribution. Conditioning assumptions can be used for the observed variables as well as for the state variables.
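
A key ingredient of the Waggoner and Zha (1999) approach is that standard normal structural shocks subject to linear conditioning restrictions remain normally distributed. The sketch below illustrates this property; the restriction matrix K and the conditioning values kbar are random placeholders for the mapping implied by the model and the conditioning assumptions, not YADA's internal quantities.

  % Hedged sketch (MATLAB): with stacked standard normal shocks eta of dimension m
  % and linear conditioning restrictions K*eta = kbar, the restricted shocks satisfy
  %   eta | K*eta = kbar  ~  N( K'*inv(K*K')*kbar , I - K'*inv(K*K')*K ).
  m = 12; nc = 3;
  K = randn(nc,m); kbar = randn(nc,1);           % placeholders for the restriction mapping
  mu  = K'/(K*K')*kbar;                          % conditional mean of the shocks
  Sig = eye(m) - K'/(K*K')*K;                    % conditional covariance (singular)
  [V,D] = eig((Sig + Sig')/2);                   % draw via eigendecomposition
  eta = mu + V*sqrt(max(D,0))*randn(m,1);        % one draw of the restricted shocks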

The conditioning information provided in the data construction file via the Z field (see Table 2 for an example) can be restricted by selecting a subset of the conditioning variables and, when using the direct manipulation approach, a subset of the conditioning shocks. The former is achieved with the select conditioning variables function on the Actions menu, while the latter is handled via the select conditioning shocks function on the same menu.

In addition, YADA can calculate prediction events and marginal predictive densities from the predictive distributions. A prediction event is defined as a variable taking a value between a lower and an upper bound for a certain number of periods. YADA can also perform a risk analysis based on the upper and lower bounds for the prediction events, thereby allowing for an assessment of downside and upside risks, as well as the balance of risks; see, e.g., Kilian and Manganelli (2007). The marginal predictive densities are period-specific (e.g., 2001Q2) kernel density estimates of the marginal predictive distribution.
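
As an indication of how a prediction event probability can be estimated from a simulated predictive distribution, the sketch below computes the share of paths for which a variable stays within the event bounds over the first k forecast periods; the path array, bounds and event length are placeholders (the array has the same layout as in the simulation sketch above).

  % Hedged sketch (MATLAB): probability that variable i lies between the lower and
  % the upper bound in each of the first k forecast periods, across N simulated paths.
  paths = randn(3,8,5000);                       % placeholder n-by-h-by-N path array
  i = 1; k = 4; lowerB = -1; upperB = 1;
  inBand = squeeze(paths(i,1:k,:) >= lowerB & paths(i,1:k,:) <= upperB);   % k-by-N
  eventProb = mean(all(inBand,1));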

Furthermore, when the observed variables are forecasted in their original form using draws from the prior or posterior distribution, then YADA can perform a decomposition of the forecast error variances into state variable, measurement error, shock, and parameter uncertainty. The decomposition is displayed in terms of their shares of the forecast error variances for the different forecast horizons considered.
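
The parameter uncertainty share in such a decomposition can be understood through the law of total variance: the total predictive variance at a given horizon equals the average of the variances conditional on the parameters plus the variance of the conditional means across parameter draws. A minimal sketch, using a placeholder matrix of simulated values for one variable and one horizon:

  % Hedged sketch (MATLAB): yDraws is D-by-P with P simulated values for each of D
  % parameter draws; the within-draw part reflects state, shock and measurement error
  % uncertainty, while the between-draw part reflects parameter uncertainty.
  yDraws = randn(500,200) + repmat(randn(500,1),1,200);   % placeholder data
  withinVar  = mean(var(yDraws,0,2));            % average conditional variance
  betweenVar = var(mean(yDraws,2));              % variance of conditional means
  shares = [withinVar betweenVar]/(withinVar + betweenVar);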

When conditional predictions are calculated for the original variables, then YADA will also compute modesty statistics and write the results to a text file. These results can then be retrieved from the View menu. For the conditional predictions, YADA also allows the user to add a small number to the upper bound and subtract the same number from the lower bound. This solves an issue with the patch function for HG2 graphics in MATLAB (the default graphics engine since R2014b), where coinciding upper and lower bounds may lead to erroneously plotted bounds.

In the case of unconditional forecasts, it is also possible to focus on the calculation of the predictive likelihood for any subset of the observed variables, with a choice between the marginal (T+h forecasts only) and the joint (T+1, T+2, ..., T+h forecasts) predictive likelihood. The predictive likelihood can, for example, be used to select between models in an out-of-sample forecasting comparison. The user can choose between fixed parameter calculations and Monte Carlo integration. In addition, the fixed parameter cases (initial parameter values and posterior mode values) can make use of a plug-in estimate of the predictive likelihood as well as a Laplace approximation estimate. Monte Carlo integration is based on either the prior or the posterior parameter draws and the procedure is taken from Warne, Coenen and Christoffel (2017). If there are missing observations at the end of the historical sample, i.e., a ragged edge dataset, then the marginal predictive likelihood can also be estimated for back- and nowcasts.
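
For the Monte Carlo integration case, the predictive likelihood is the average of the conditional predictive densities of the realized values across the parameter draws. The sketch below shows this averaging on the log scale; the vector of log conditional predictive densities is a placeholder (in practice it would be obtained from, e.g., the Kalman filter for each draw).

  % Hedged sketch (MATLAB): log predictive likelihood via Monte Carlo integration,
  %   p(y*|Y_T) is approximated by (1/N) * sum over draws of p(y*|theta_i, Y_T),
  % using the log-sum-exp trick for numerical stability.
  logCondPL = -3 + 0.5*randn(1000,1);            % placeholder log conditional densities
  c = max(logCondPL);
  logPL = c + log(mean(exp(logCondPL - c)));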

Furthermore, for the unconditional forecasts and the original data, it is also possible to compute the marginal predictive moments using the prior or the posterior draws, as well as the stationary predictive variances and their decomposition into state, shock, measurement error and parameter uncertainty. The stationary predictive variance is computed from the steady-state update estimate of the state covariance matrix and the steady state of the observed variables. This means that parameter uncertainty is constant over the full prediction horizon since the point forecast in this case is equal to the steady state.

The probability integral transform (PIT) can be computed based on the marginal predictive moments conditional on the parameters using the prior or the posterior draws. Optionally, one can also compute these statistics based on the simulated paths from the unconditional forecasts.
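
Conditional on the parameters, the marginal predictive distribution of a single observed variable at a given horizon is normal in the linear Gaussian state-space model, so a PIT value can be obtained by averaging normal CDF values across the parameter draws. A minimal sketch with placeholder predictive moments is shown below (normcdf requires the Statistics Toolbox).

  % Hedged sketch (MATLAB): PIT value for one variable and horizon, averaging the
  % normal CDF over parameter draws with per-draw predictive means and std deviations.
  mu  = 0.2 + 0.1*randn(1000,1);                 % placeholder predictive means
  sig = 0.5 + 0.1*rand(1000,1);                  % placeholder predictive std deviations
  yObs = 0.3;                                    % realized value
  pit = mean(normcdf((yObs - mu)./sig));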

The predictive likelihood can also be computed for the conditional forecasts of the individual observed variables as well as of any selected subset of these variables. This is possible for the prior or the posterior parameter draws and the calculations are based on simulated paths from the predictive distributions using the kernel smoothing functions in MATLAB. A requirement for these calculations is the Statistics Toolbox, which includes the function mvksdensity (introduced in R2016a).
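
A minimal sketch of such a kernel-based evaluation is given below; the matrix of simulated forecasts, the realized values and the rule-of-thumb bandwidth are placeholders and not YADA's actual settings.

  % Hedged sketch (MATLAB): evaluate a joint predictive density at the realized
  % outcome by kernel smoothing over simulated paths (Statistics Toolbox, R2016a+).
  X = randn(5000,2);                             % placeholder simulated T+h forecasts
  yObs = [0.2 -0.1];                             % placeholder realized values at T+h
  d = size(X,2); N = size(X,1);
  bw = std(X)*(4/((d + 2)*N))^(1/(d + 4));       % Silverman-type rule-of-thumb bandwidth
  logPL = log(mvksdensity(X, yObs, 'Bandwidth', bw));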

The continuous ranked probability score (CRPS) for individual observed variables and the energy score (ES) for a joint set of variables can be estimated via simulated paths for the unconditional or the conditional forecasts based on the prior or the posterior draws. These scores cover marginal forecast horizons and are both proper scoring rules. In contrast to the log predictive score, which only uses the log predictive likelihood (the predictive density evaluated at the observed value), the CRPS and the ES are not local scoring rules. For further details on scoring rules, see Gneiting and Raftery (2007).
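
Both scores have simple sample-based estimators given simulated draws from the predictive distribution. The sketch below uses the standard form CRPS(F, y) = E|X - y| - 0.5*E|X - X'| and its multivariate energy score analogue with Euclidean norms; the draws and realized values are placeholders.

  % Hedged sketch (MATLAB): sample-based CRPS for a single variable and energy score
  % for a vector of variables, approximating E|X - X'| with a random pairing of draws.
  x = randn(2000,1); y = 0.4;                    % univariate draws and realized value
  crps = mean(abs(x - y)) - 0.5*mean(abs(x - x(randperm(numel(x)))));
  X = randn(2000,3); Y = [0.4 -0.2 1.0];         % multivariate draws and realized values
  Xp = X(randperm(size(X,1)),:);                 % independently ordered copy of the draws
  es = mean(sqrt(sum((X - repmat(Y,size(X,1),1)).^2,2))) - 0.5*mean(sqrt(sum((X - Xp).^2,2)));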

For unconditional forecasts, the predictive distributions subject to the zero lower bound can be computed at the initial parameter values and at the posterior mode values, as well as for the prior and the posterior draws. Such predictions are also supported for the conditional forecasts. Both forecast types furthermore support centering of the forecasts around observed values under the assumption that the lower bound is not binding.

Concerning the unconditional and conditional predictive distributions subject to the zero lower bound, the predictive likelihood as well as the CRPS and ES can be calculated using simulated paths based on either the prior or the posterior parameter draws.

The unconditional point forecasts can be decomposed into the informational content of the individual observed variables or of groups of such variables. The underlying decomposition is based on the observation weight computations for the state variables in the last historical period.

YADA also supports multiple sets of actuals, i.e., the outcomes for the predicted observed variables. The default is the original data. For details, see the data construction file.

 

Additional Information

A more detailed description of prediction using Bayesian techniques can be found in Section 12 of the YADA Manual.
Unconditional predictive distributions for the DSGE model are described in Section 12.1.1, while conditional predictive distributions are discussed in Section 12.2 for conditioning assumptions on observed variables and in Section 12.4 for conditioning assumptions on state variables.
A more detailed description of modesty statistics for the DSGE model is provided in Section 12.3.
A more detailed description of prediction events and risk analysis is given in Section 12.5.
Additional details on the predictive likelihood are provided in Section 12.6.
Details on the continuous ranked probability score (CRPS) and energy score (ES) are located in Section 12.8.
Details on the probability integral transform (PIT) are given in Section 12.9.
A more detailed discussion about transformations of the data is provided in Section 18.5.1.
For details about solving the model subject to the zero lower bound using anticipated shocks, see Section 3.4 of the manual.

 

 

