Wednesday, November 29, 2023

From Bańbura et al. (2010) to Cascaldi-Garcia’s (2022) Pandemic Priors

Guest post by Ole Rummel and Davaajargal Luvsannyam

This is the second in a series of blog posts presenting the EViews add-in LBVAR, which estimates and forecasts large Bayesian VAR models in the spirit of Banbura, Giannone and Reichlin (2010; henceforth BGR). In this post we discuss and replicate Cascaldi-Garcia (2022).

Table of Contents

  1. Introduction
  2. Why we should be using Cascaldi-Garcia’s (2022) Pandemic Priors
  3. Implementing Cascaldi-Garcia’s (2022) Pandemic Priors in EViews
  4. Using the lbvar EViews add-in for Pandemic Priors
  5. Concluding Remarks
  6. References


Cascaldi-Garcia (2022) proposes an easy and straightforward way to deal with the extreme COVID-19 episode in Bayesian VAR (BVAR) models, which have become workhorse models in many central banks. More specifically, he illustrates how to augment the dummy observations employed in the Minnesota or Litterman prior with time dummies. These Pandemic Priors are time dummies with uninformative priors, which correctly adjust the historical relationships among the variables for the extreme values observed in specific sample periods. While designed for the COVID-19 pandemic, the approach can handle any extreme episode, recovers historical relationships and allows for the proper identification and propagation of structural shocks.

Following the notation used throughout this exercise, assume a VAR model with $ n $ variables and $ p $ lags: \begin{align} Y_{t} = c + \bm{1}_{t=a} d_{a} + \bm{1}_{t=a+1} d_{a+1} + \ldots + \bm{1}_{t=a + h} d_{a + h} + A_{1}Y_{t - 1} + \ldots + A_{p} Y_{t - p} + u_t \end{align} where $ u_{t} $ are the innovations with $ E [u_{t} u_{t}^{\top}] = \Sigma $, $ c $ is a vector of $ n $ intercepts, $ d_{a} $ through $ d_{a + h} $ are $ n $-vectors of time-dummy coefficients for a pre-defined number of $ h $ periods from $ a $ through $ a + h $, which can be the COVID-19 period, and $ \bm{1}_{t = i} $ is an indicator function that equals unity for period $ i = a, a + 1, \ldots, a + h $ and zero otherwise.
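To make the role of the time dummies concrete, here is a minimal sketch simulating a univariate version of the equation above, in which the dummy coefficients shift the intercept only during the window from $ a $ to $ a + h $. All parameter values are purely illustrative, not taken from the paper.

```python
import random

# Univariate AR(1) version of the dummy-augmented VAR above:
# y_t = c + d_k * 1{t = a + k} + a1 * y_{t-1} + u_t
random.seed(1)
c, a1 = 0.1, 0.8
a, h = 50, 5                             # dummy window: periods a, ..., a+h
d = [-3.0, -2.0, -1.0, 1.5, 1.0, 0.5]    # one intercept shift per window period
y = [0.0]
for t in range(1, 100):
    shift = d[t - a] if a <= t <= a + h else 0.0
    y.append(c + shift + a1 * y[-1] + random.gauss(0, 0.1))
print(y[a] < 0)  # True: the negative shift pulls y_t well below its pre-window level
```

Outside the window the series fluctuates around its unconditional mean; inside the window the intercept shifts reproduce the abrupt dislocation-and-rebound pattern the Pandemic Priors are designed to absorb.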

What do these dummies look like? Our starting point is the possibility that each variable in our model can potentially experience a different shift and persistence during the COVID-19 period, and that the individual time dummies are able to capture these heterogeneous responses. Jumping ahead a bit, the empirical illustration below will use six pandemic dummy variables (dum1, dum2, dum3, dum4, dum5 and dum6), which are shown below in Figure 1 for part of the sample period. We can see that dum1 takes on a value of 1 in 2020m3, i.e., March 2020, which is generally taken as the "official" start of the COVID-19 pandemic. The value of 1 then moves to April 2020, which is captured by dum2. We continue through the remaining months of 2020 Q2 and the first two months of 2020 Q3: dum3 is set to 1 in May 2020, dum4 in June 2020, dum5 in July 2020 and dum6 in August 2020.

Figure 1: Pandemic dummies in Cascaldi-Garcia’s (2022) empirical illustration

We generate the dummies in EViews as follows. First, we create a new variable consisting of zeroes called dum1 by clicking on Genr on top of the workfile window. This opens a dialog box to Generate Series by Equation. We subsequently enter dum1 = 0 into the Enter equation window.

We then access the new variable dum1 by double-clicking on it, which opens it. In the next step, we click on Edit+/–, which unlocks the spreadsheet and allows us to make changes. More specifically, we select the entry for March 2020, whose value is currently 0, manually change it to 1 and press Edit+/– again to lock the spreadsheet. This concludes the creation of a dummy for March 2020. We then repeat this operation five times to create five additional dummies for April 2020 (dum2), May 2020 (dum3), June 2020 (dum4), July 2020 (dum5) and August 2020 (dum6).
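For readers who prefer code to the GUI, the following sketch (in Python rather than EViews, purely for illustration) builds the same six dummies: each series is zero everywhere except for a single month of the March–August 2020 window. The helper name and the plain-list representation are our own.

```python
def make_pandemic_dummies(months, start="2020m3", h=6):
    """Return h dummy series; dum(k) equals 1 only in the k-th window month."""
    i0 = months.index(start)
    dummies = {}
    for k in range(h):
        d = [0] * len(months)
        d[i0 + k] = 1  # one month of the pandemic window
        dummies[f"dum{k + 1}"] = d
    return dummies

months = [f"2020m{m}" for m in range(1, 13)]  # toy monthly index for 2020
dums = make_pandemic_dummies(months)
print(dums["dum1"][months.index("2020m3")])  # 1: dum1 is 1 in March 2020
print(dums["dum6"][months.index("2020m8")])  # 1: dum6 is 1 in August 2020
```

Each dummy sums to exactly one over the sample, mirroring the spreadsheet edits described above.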

As in Litterman (1986) and BGR, Cascaldi-Garcia (2022) imposes the prior that the variables are centred around the random walk with drift, but now extending the concept to the idea that the COVID-19 pandemic is an abnormal period where the relationship between the variables may diverge from history.

In other words, the Pandemic Priors can be represented as:

\begin{align} Y_{t} = c + \bm{1}_{t=a} d_{a} + \bm{1}_{t=a+1} d_{a+1} + \ldots + \bm{1}_{t=a + h} d_{a + h} + Y_{t - 1} + u_t \end{align} which is equivalent to shrinking the coefficient matrix $ A_{1} $ to $ I_{n} $ and the matrices $ A_{2}, \ldots, A_{p} $ to zero matrices.

The moments for the prior distribution of the coefficients are set as: \begin{align} E \left[ \left( A_{l} \right)_{i,j} \right] &= \begin{cases} \rho_{i} \quad (i = j, l = 1)\\ 0 \quad \text{otherwise} \end{cases}\\ Var \left[ \left( A_{l} \right)_{i,j} \right] &= \begin{cases} \frac{\lambda_{1}^{2}}{l^{2}} \quad (i = j)\\ \lambda_{2} \frac{\lambda_{1}^{2}\sigma_{i}^{2}}{l^{2}\sigma_{j}^{2}} \quad \text{otherwise} \end{cases} \end{align} The coefficients in $ A_{1}, \ldots , A_{p} $ are assumed to be independent and normally distributed, the covariance matrix of the residuals is assumed to be diagonal, such that $ \Sigma = diag(\sigma_{1}^{2}, \ldots , \sigma_{n}^{2}) $, and the prior on the intercept is diffuse. The same diffuse prior is taken for the time dummies. The choices of $ \sigma_{i} $, the overall prior tightness $ \lambda_{1} $, the lag-decay factor $ \frac{1}{l^{2}} $ and the coefficient $ \lambda_{2} $ follow the standard practice described in BGR. These choices are flexible enough to accommodate beliefs about persistence, shrinkage toward the prior, the decrease of the variance over lags and the importance of the own lags of variable $ i $.
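A small sketch of these standard Minnesota-style moments (prior mean $ \rho_i $ on the first own lag, zero elsewhere; variance decaying with the lag at rate $ 1/l^2 $ and scaled by $ \lambda_2 \sigma_i^2/\sigma_j^2 $ on cross-lags). The function name and example values are ours, chosen only for illustration.

```python
def minnesota_moments(p, rho, sigma, lam1, lam2):
    """Prior mean and variance of (A_l)_{ij} for l = 1..p (index l-1)."""
    n = len(rho)
    mean = [[[rho[i] if (i == j and l == 1) else 0.0
              for j in range(n)] for i in range(n)] for l in range(1, p + 1)]
    var = [[[lam1 ** 2 / l ** 2 if i == j
             else lam2 * lam1 ** 2 * sigma[i] ** 2 / (l ** 2 * sigma[j] ** 2)
             for j in range(n)] for i in range(n)] for l in range(1, p + 1)]
    return mean, var

mean, var = minnesota_moments(p=2, rho=[1.0, 0.9], sigma=[1.0, 2.0],
                              lam1=0.2, lam2=1.0)
print(mean[0][0][0], mean[0][1][1])  # 1.0 0.9  (first own lags centre on rho_i)
print(round(var[1][0][0], 4))        # 0.01     (0.2^2 / 2^2 on the second own lag)
```

Note how the variance on second-lag coefficients is a quarter of that on first-lag coefficients, which is precisely the lag-decay shrinkage described above.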

By setting $ \lambda_{2} = 1 $, it is possible to impose a normal-inverse Wishart prior of the form: \begin{align} vec\left( B \right ) | \Psi &\sim N\left( vec\left( B_{0} \right), \Psi \otimes \Omega_{0} \right) \\ \Psi &\sim IW \left( S_{0}, \alpha_{0} \right) \end{align} where $ B $ is the matrix that collects the reduced-form coefficients of the $ Y = XB + U $ VAR system, $ B_{0}, \Omega_{0}, S_{0} $ and $ \alpha_{0} $ are prior parameters and $ E \left[ \Psi \right] = \Sigma $.

In practice, these priors can be easily implemented through a series of dummy observations. Cascaldi-Garcia (2022) extends the BGR approach to allow for priors for the $ h $ time dummies described above. Formally, the left- and right-hand side dummy observations ($ Y_{d} $ and $ X_{d} $, respectively) are defined as: \begin{align} Y_{d} &= \begin{bmatrix} \frac{\text{diag}\left(\rho_{1} \sigma_{1}, \ldots, \rho_{n} \sigma_{n} \right)}{\lambda} \\ \bm{0}_{n(p - 1) \times n} \\ \frac{\text{diag}\left(\rho_{1} \mu_{1}, \ldots, \rho_{n} \mu_{n} \right)}{\tau} \\ \text{diag}\left(\sigma_{1}, \ldots, \sigma_{n} \right) \\ \bm{0}_{1 \times n} \end{bmatrix} \\ X_{d} &= \begin{bmatrix} J_{p} \otimes \frac{\text{diag}\left(\rho_{1} \sigma_{1}, \ldots, \rho_{n} \sigma_{n} \right)}{\lambda} & \bm{0}_{np \times 1} & \bm{0}_{np \times h} \\ \bm{1}_{1 \times p} \otimes \frac{\text{diag}\left(\rho_{1} \mu_{1}, \ldots, \rho_{n} \mu_{n} \right)}{\tau} & \bm{0}_{n \times 1} & \bm{0}_{n \times h} \\ \bm{0}_{n \times np} & \bm{0}_{n \times 1} & \bm{0}_{n \times h} \\ \bm{0}_{1 \times np} & \epsilon & \phi \bm{1}_{1 \times h} \\ \end{bmatrix} \end{align} where $ J_{p} = \text{diag}\left(1, 2, \ldots , p\right) $, $ \mu_{i} $ denotes the sample mean of variable $ i $ (as in BGR) and $ \epsilon $ imposes an uninformative prior on the intercept. As before, the first block of dummies (at the top of the respective matrices) imposes prior beliefs on the autoregressive coefficients, the second block constrains the sum of coefficients, the third block implements the prior for the variance-covariance matrix and the fourth block (at the bottom of the respective matrices) reflects the uninformative priors for the intercept and, through $ \phi $, the time dummies.
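As a sanity check on the block structure, the following pure-Python sketch assembles $ Y_d $ and $ X_d $ from the displayed blocks. It is illustrative only – the lbvar add-in performs this construction internally – and the helper names and toy parameter values are our own.

```python
def diag(vals, scale=1.0):
    n = len(vals)
    return [[vals[i] * scale if i == j else 0.0 for j in range(n)] for i in range(n)]

def zeros(r, c):
    return [[0.0] * c for _ in range(r)]

def pandemic_dummy_obs(rho, sigma, mu, p, lam, tau, eps, phi, h):
    n = len(rho)
    k = n * p + 1 + h  # columns of X_d: np lags, intercept, h time dummies
    # Y_d blocks, top to bottom
    Yd = (diag([rho[i] * sigma[i] for i in range(n)], 1 / lam)
          + zeros(n * (p - 1), n)
          + diag([rho[i] * mu[i] for i in range(n)], 1 / tau)
          + diag(sigma)
          + zeros(1, n))
    # X_d blocks, top to bottom
    Xd = []
    for l in range(1, p + 1):            # J_p (x) diag(rho*sigma)/lam
        for i in range(n):
            row = [0.0] * k
            row[(l - 1) * n + i] = l * rho[i] * sigma[i] / lam
            Xd.append(row)
    for i in range(n):                   # sum-of-coefficients block
        row = [0.0] * k
        for l in range(p):
            row[l * n + i] = rho[i] * mu[i] / tau
        Xd.append(row)
    Xd += zeros(n, k)                    # variance-covariance block
    last = [0.0] * k                     # intercept and time-dummy priors
    last[n * p] = eps
    for j in range(h):
        last[n * p + 1 + j] = phi
    Xd.append(last)
    return Yd, Xd

Yd, Xd = pandemic_dummy_obs(rho=[1.0, 1.0], sigma=[1.0, 2.0], mu=[0.5, 0.5],
                            p=2, lam=0.2, tau=2.0, eps=1e-5, phi=0.001, h=6)
print(len(Yd), len(Xd), len(Xd[0]))  # 9 9 11: np + 2n + 1 rows; np + 1 + h columns
```

With $ n = 2 $, $ p = 2 $ and $ h = 6 $, both matrices have $ np + 2n + 1 = 9 $ rows, and $ X_d $ has the $ h $ extra columns that distinguish the Pandemic Priors from plain BGR.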

Comparing equation (4) of the first blog post with equation (6) above, the innovation in Cascaldi-Garcia (2022) is the last column block of the $ X_{d} $ matrix, which imposes priors also on the time dummies through $ \phi $, ordered last in $ X_{d} $. Following common practice, $ \sigma_{i} $ can be calibrated from the variances of the residuals of univariate AR models with $ p $ lags for each of the $ n $ variables in the information set. Setting $ \epsilon $ to a very small number makes the prior on the intercept fairly uninformative, and the same uninformative approach is followed for $ \phi $. In short, the final matrices for $ Y_{d} $ and $ X_{d} $ retain the same row blocks as in BGR, but the $ X_{d} $ matrix now has $ h $ additional columns.
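The paragraph above notes that $ \sigma_i $ is commonly calibrated from the residual variances of univariate AR($p$) regressions. A minimal, stdlib-only sketch of that calibration (OLS via the normal equations; the simulated AR(1) data are purely illustrative):

```python
import random
import statistics

def ar_residual_sd(y, p):
    """OLS-fit an AR(p) with intercept via the normal equations and return
    the residual standard deviation (an estimate of sigma_i)."""
    T = len(y)
    X = [[1.0] + [y[t - l] for l in range(1, p + 1)] for t in range(p, T)]
    z = y[p:]
    k = p + 1
    A = [[sum(x[i] * x[j] for x in X) for j in range(k)] for i in range(k)]
    c = [sum(X[t][i] * z[t] for t in range(len(X))) for i in range(k)]
    for i in range(k):                   # Gaussian elimination with pivoting
        piv = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        c[i], c[piv] = c[piv], c[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for j in range(i, k):
                A[r][j] -= f * A[i][j]
            c[r] -= f * c[i]
    b = [0.0] * k                        # back-substitution
    for i in reversed(range(k)):
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    resid = [z[t] - sum(b[j] * X[t][j] for j in range(k)) for t in range(len(X))]
    return statistics.pstdev(resid)

# Toy persistent series: an AR(1) with unit-variance shocks
random.seed(0)
y = [0.0]
for _ in range(499):
    y.append(0.9 * y[-1] + random.gauss(0, 1))
print(ar_residual_sd(y, p=2) < statistics.pstdev(y))  # True: residual sd is far below the raw sd
```

For a persistent series, the AR residual standard deviation recovers the shock scale rather than the (much larger) unconditional standard deviation, which is why it is the natural calibration for $ \sigma_i $.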

Why we should be using Cascaldi-Garcia’s (2022) Pandemic Priors

The Lenza and Primiceri (2022) methodology for estimating a VAR after March 2020 conjectures that the shocks observed at the onset of the COVID-19 pandemic translate into substantially larger volatility in the underlying macroeconomic and financial time series. More specifically, if the volatility of all shocks is scaled up by exactly the same amount, with exactly the same persistence thereafter (referred to as the commonality assumption), it is possible to establish priors and estimate these parameters. As noted by the authors themselves, the commonality assumption is an approximation that works well in a period in which all series experience the same (excessive) variation.

Statistical and empirical approaches to dealing with structural breaks in estimation have gone through two phases. In the first phase (Phase 1), econometricians as well as applied economists spent a lot of time monitoring and identifying structural breaks in the data and adjusting for them by using robust estimation methods. In most cases, it was possible – and even advisable – to model the structural break itself.

In the approaches of the second – arguably still ongoing – phase (Phase 2), we generally take the existence of a structural break in the data as given and, rather than model the break itself, we seek robust estimation/forecasting methods that are less susceptible to the effects of the structural break in the data on estimation. Which of the two approaches would be most suitable for the structural break induced by the COVID-19 pandemic?

Dealing with structural breaks is somewhat easier if the location of the structural break is known. This turns the analysis into one of modelling a structural break rather than first estimating and then modelling the break, which compounds possible errors on top of each other. We note that this refers to the first generation of structural break models.

In particular, if the structural break is of limited duration, we can try and capture the aberrant observations with the help of dummy variables. This is nothing else than the standard approach of modelling outliers in the data – equal to the pandemic period – with the help of dummy variables.

Predicting the macroeconomic impact of the COVID-19 pandemic with reduced-form time-series models is challenging because a shock of this scale was never directly observed in the available data. More specifically, the pandemic caused macroeconomic variables to display complex patterns that do not follow any historical behaviour. Moreover, the COVID-19 shock falls between two stools:

  • we know it happened (no need for break identification); but
  • it is too big to ignore

In short, the COVID-19 pandemic is a structural break that really cannot be ignored. But earlier approaches, such as break monitoring with robust modelling and data-dependent downweighting of historical data, may fall short in dealing with the effects of the COVID-19 pandemic shock, so where do we go from here? What is needed is an empirical approach that can deal with such unusual behaviour and retain historical relationships, generate reliable forecasts and provide correct interpretations of economic shocks.

The unprecedented nature of the COVID-19 pandemic and its impact on macroeconomic variables has led to several very clever new ideas and techniques for modelling and forecasting in the presence of structural breaks. Academics and researchers such as Lenza and Primiceri (2022), Primiceri and Tambalotti (2020) and Cascaldi-Garcia (2022) have returned to modelling the break process explicitly. This does not mean that the older robust approaches of Phase 2 no longer work, but we should take advantage of the fact that we know exactly when the break occurred – a luxury we do not often have. All three approaches are distinguished by their use of vector autoregressions (VARs) estimated using Bayesian techniques. Alternative approaches to dealing with the COVID-19-induced break were proposed by Schorfheide and Song (2021), who suggested discarding the extreme observations, and by Carriero et al. (2022), who model extreme observations as random shocks to the stochastic volatility of the VAR model.

As we have seen, the structural breaks induced by the COVID-19 pandemic invalidate historical relationships in the data, produce unreliable forecasts and lead to incorrect interpretations of structural (or primitive) economic shocks. The Pandemic Priors address this by treating the COVID-19 shock as intercept shifts in the macroeconomic variables over the selected periods. This is very much in line with the V-shaped recovery observed in many economies after the COVID-19 period in 2020 H1.

Cascaldi-Garcia (2022) considers eight monthly US macroeconomic and financial variables: the excess bond premium of Gilchrist and Zakrajšek (2012), Standard and Poor’s (S&P) 500 stock market index, the Wu and Xia (2016) federal funds shadow rate, real personal consumption expenditure, the personal consumption expenditure price index, total non-farm payrolls, real industrial production and the number of unemployed as a percentage of the labour force.

Implementing Cascaldi-Garcia’s (2022) Pandemic Priors in EViews

We note some differences between Cascaldi-Garcia's (2022) approach and that put forward by Lenza and Primiceri (2022). The latter estimate the VAR by modelling a common shift and persistence in the volatility of the shocks during the extreme period of the COVID-19 pandemic, assuming that the volatility of all shocks is scaled up (and decays) by exactly the same amount. This assumption allows them to formulate a prior and estimate the scale parameters.

By allowing for direct individual intercept shifts during the pandemic period rather than common volatility scale shifters and persistence, Cascaldi-Garcia's (2022) approach is much simpler. In fact, under the Pandemic Priors, the individual time dummies capture each variable's different shift and persistence.

As mentioned above, the Pandemic Priors build upon BGR by extending the dummy observation approach to encompass time dummies during the extreme period.


The data we are using are taken from Cascaldi-Garcia’s website. The EViews workfile data_pprior.wf1 contains the data that will be used in the estimation. To open the EViews workfile from within EViews, choose File, Open, EViews Workfile…, select data_pprior.wf1 from the appropriate Data folder and click on Open. Alternatively, you can double-click on the workfile icon from outside of EViews, which will open the workfile in EViews automatically.

The eight monthly US macroeconomic and financial variables are all in levels and consist of the excess bond premium (ebp) of Gilchrist and Zakrajšek (2012), the log of Standard & Poor's (S&P) 500 stock market index (sp500l), the Wu and Xia (2016) federal funds shadow rate (fedfunds), the log of real personal consumption expenditure (pcel), the log of the personal consumption expenditure price index (pcepil), the log of total non-farm payrolls (payemsl), the log of real industrial production (indprol) and the number of unemployed as a percentage of the labour force (unrate). Note that Cascaldi-Garcia (2022) takes logs of five of the eight variables (sp500, pce, pcepi, payems and indpro) – these variables are indicated by an "l" at the end of their name. In essence, these eight variables capture a modern-day monetary model in the spirit of Christiano et al. (1999). The sample period runs from January 1975 to March 2022, for a total of 567 monthly observations.

Whenever we begin working with a new data set, it is always a good idea to take some time to simply examine the data, so the first thing we will do is to plot the data to make sure that it looks fine. This will help ensure that there were no mistakes in the data itself or in the process of reading in the data. It also provides us with a chance to observe the general (time-series) behaviour of the series we will be working with. A plot of our data is shown in Figure 2.

Figure 2: Time series of the eight underlying monthly US macroeconomic and financial time series (January 1975 – March 2022)

The excess bond premium (ebp) looks pretty stationary, and the log of the PCE price index (pcepil) does not show any dislocation during the pandemic period. The same is true, to some extent, for the log of the S&P 500 index (sp500l) and the shadow rate (fedfunds). On the other hand, the log of private consumption expenditure (pcel) shows a notable dip during the onset of the COVID-19 pandemic, as does the log of industrial production (indprol). Most obvious is the sharp spike in the unemployment rate (unrate), which is mirrored by the sharp fall in the log of employment (payemsl).

In short, our (subjective) visual inspection reveals evidence of structural breaks in four of the eight variables. But when should the COVID-19 period end? These aberrant observations are only in the data for a few months (some four months of double-digit entries in the case of unrate, more or less four months below 140,000 in the case of payemsl, roughly five months in the case of pcel and about seven months below 4.57 in the case of indprol), and yet their influence on the VAR is sizeable, as we will see below in a BVAR model that does not account for the structural break.

Using the lbvar EViews add-in for Pandemic Priors

Cascaldi-Garcia (2022) produces two pieces of graphical evidence (Figures 4 and 5) that highlight the pitfalls of not accounting for the pandemic period and the benefits of using the Pandemic Priors. Our task at hand will be to replicate both figures using EViews' (updated) lbvar add-in for the estimation of (very) large Bayesian VARs as described by BGR.

In order to replicate these figures of Cascaldi-Garcia (2022), we use EViews' lbvar add-in, which can be run either interactively, via a dialog box opened from the Add-ins menu at the top of the EViews window, or through a few lines of EViews code. Personally, we have found a short EViews program that communicates the settings to the lbvar add-in more user-friendly than the dialog window, although both obviously serve the same purpose. We will have a look at both approaches in what follows.

One thing we must do before estimation is to specify the prior means of our endogenous variables, which are all set to one. With EViews and the data_pprior.wf1 workfile open, this is most easily accomplished by typing the following command into the command line:

vector irw_s = @ones(8)

This is a shortcut, but there is – as always with EViews – an equivalent way using the dialog menus. The alternative way of creating the (8 × 1) vector of ones called irw_s is to go to Object in the top bar of the workfile window, select New Object…, Matrix-Vector-Coef and give it the name irw_s.

Figure 3: New Matrix Object

As we have eight endogenous variables, we must assign a prior mean to each, which is why we create an (8 × 1) vector called irw_s. At the moment, the eight values are all zero, but we need to change them to ones. We click on Edit+/– to ‘unlock’ the spreadsheet and change all the entries from 0 to 1. We then click Edit+/– again to lock the spreadsheet and close the vector object.

We are now ready to use EViews' lbvar add-in. The documentation that comes with the add-in presents a list of possible (text) commands that can be specified. All these options are also reflected in the dialog box that we will look at shortly. The settings are almost equivalent between the BGR approach and the Pandemic Priors, except that the Pandemic Priors have an additional five options, which can be found at the bottom of Table 1. The options are included either via the boxes, drop-down menus and checkboxes in the dialog associated with the lbvar add-in or via the EViews command language, which we will get to shortly. We note that not all the possible options need to be included in every single application. Table 1 below shows all the available options, the respective commands, default settings (in parentheses) and a short description of what they do.

Option Description
lambda Prior parameter lambda (default setting is $ \lambda = 0.1 $)
sum Include sum-of-coefficients dummy observation prior (default setting is sum = 1, i.e., prior is switched on)
tau Prior parameter tau (default setting is $ \tau = 10\lambda = 1 $)
estimate Estimation:
  1. impulse response functions (IRFs, default setting) (estimate = 1)
  2. forecasting (estimate = 2)
horizon Number of horizons for IRFs (default setting is 48)
mcdraw Number of Monte Carlo draws (default setting is 100)
cband Overall percentage of confidence band (fraction less than 1; default setting is 0.68, or one standard deviation)
grid Grid search for optimal lambda and tau
fit Fit evaluation variables
tsample Training sample size (enter as tsample = “first_period last_period”)
suffix Forecast output suffix (default is _f)
fhorizon Forecast horizons (default is 12 periods)
sample Sample size (default setting is the current workfile sample size)
vd Variance decomposition (default setting is vd = 0, i.e., this option is switched off; option is switched on with vd = 1)
hd Historical decomposition (default setting is hd = 0, i.e., this option is switched off; option is switched on with hd = 1)
save Save IRFs to matrix (save = matrix_name)
ident Identification of shocks (1 = Cholesky or recursive decomposition (default), 2 = Generalised decomposition)
pand Include Pandemic Priors (pand = 1, Pandemic Priors switched on; pand = 0, Pandemic priors switched off)
covper Number of COVID-19 periods (default is zero)
dummy List of dummy variables
phi $ \phi $ for dummy observations (default setting is $ \phi = 0.001 $)
eps $ \epsilon $ prior parameter for intercept (default setting is $ \epsilon = 0.00001 $)

Note: The first 17 options and their settings apply to BGR, while the additional bottom five options need to be activated for Cascaldi-Garcia’s (2022) Pandemic Priors.

The lbvar add-in is run by selecting it from the Add-ins menu in the Command bar, after which the following dialog window will appear.

Figure 4: lbvar Dialog

The add-in has several prepopulated default settings, some of which may not be appropriate in all applications. In particular, these are the settings for:

  • the prior parameter lambda, which regulates the importance given to the priors, is set to 0.1
  • the checkbox for the sum-of-coefficients dummy observations prior, such that the prior is switched on (see Section 3.3.2 for a short discussion of this prior)
  • the prior parameter tau, which controls the shrinkage of the sum-of-coefficients prior, is set to 1
  • the Estimation option, which has been set to impulse response functions (IRFs)
  • the recursive (Cholesky) Identification of shocks (rather than the generalised option)
  • the Number of horizons for the IRFs, set to 48 months (or four years)
  • the Number of Monte Carlo (MC) draws, equal to 100
  • the percentage of the probability distribution covered by the confidence bands, which is 0.68 or 68 per cent
  • the automatic suffix applied to any variable’s forecast name, equal to _f
  • the number of forecast horizons, equal to 12 months (or one year)
  • the phi parameter for Pandemic Prior dummy observations: 0.001 (Pandemic Priors only)
  • the epsilon parameter for Pandemic Prior dummy observations: 0.00001 (Pandemic Priors only)

We note in passing that the lbvar add-in automatically imports the sample size from the current workfile. If the current settings are not appropriate, we need to change them either manually in the dialog box or by including the respectively adjusted setting in the EViews command code.

We start by replicating the unconditional twelve-month-ahead forecasts in Figure 4 of Cascaldi-Garcia (2022). More specifically, we first estimate the BVAR using the BGR approach, which does not account for the structural break induced by the COVID-19 pandemic and assumes unchanged parameter values throughout. Following Cascaldi-Garcia (2022), we include $ p = 12 $ lags and set the fixed overall tightness $ \lambda = 0.2 $ and $ \tau = 10 \lambda = 10 \times 0.2 = 2 $. The completed dialog box for this scenario should be as follows.

Figure 5: BGR options

After clicking OK, we should find eight new series in our workfile, which are the original series with an _fbgr suffix after their name, where _fbgr denotes the forecast using the BGR approach, i.e., without the Pandemic Priors. This is, unfortunately, the only output associated with forecasting that the lbvar add-in will give us. In other words, we do not get any uncertainty bands around the forecast as in Figure 4 (Cascaldi-Garcia).

Instead of the dialog box, we can convert the above commands into a few lines of EViews code to do the same thing. Taking our initial dialog box above as our starting point, we only need to change a few things. The general form of the lbvar EViews command code is:

lbvar(options) lags rw_prior impulse_variable @ endogenous_variables

where the options included in the parentheses associated with the command, i.e., lbvar(options), correspond to one or more of those listed in Table 1; lags denotes the number of lags, $ p $, in the $ \text{BVAR}(p) $ model; rw_prior is the name of the matrix (vector) holding the prior means for the endogenous variables; impulse_variable names the variable whose shock is traced out by the impulse response functions; and endogenous_variables is the list of $ n $ endogenous variables in the BVAR. More specifically, to replicate the above setting from the dialog box, the command line will be:

lbvar(estimate=2, fhorizon=12, sum=1, lambda=0.2, tau=2, sample="1975m1 2022m3", pand=0, suffix=_fbgr) 12 irw_s ebp @ ebp sp500l fedfunds pcel pcepil payemsl indprol unrate

We can see several options from Table 1 appearing in parentheses after lbvar: estimate=2 selects the forecasting rather than the IRF option; fhorizon=12 sets the forecast horizon to twelve months; sum=1 activates the sum-of-coefficients dummy observation prior; lambda=0.2 and tau=2 define those two parameters; sample="1975m1 2022m3" defines the sample size and corresponds to the Sample size window in the lbvar dialog; pand=0 switches off the Pandemic Priors; and suffix=_fbgr adds the suffix _fbgr to the forecast variables. After the parentheses, 12 defines the lag length, $ p $, of the $ \text{BVAR}(p) $ (equivalent to the Number of lags box in the dialog window); irw_s denotes the matrix (vector) holding the prior means for the endogenous variables; ebp is the impulse variable; and the entries after @ are the eight monthly endogenous variables in the BVAR. Note that we have asked EViews to store the forecasts coming from BGR as variable_name_fbgr.

Either of these two approaches will generate the forecasts from the BGR model shown in brown in Figure 6.

Figure 6: Unconditional twelve-month-ahead forecasts as of March 2022 using the BGR and Pandemic Prior approaches
Notes: The green lines denote one year of historical data for the respective endogenous variables in the BVAR. The blue line denotes the twelve-month-ahead unconditional forecasts accounting for the COVID-19 pandemic with the Pandemic Priors, while the brown line shows the twelve-month-ahead unconditional forecasts assuming that the coefficient estimates remain unchanged over the estimation sample.

We will defer a discussion of the results until we have generated all the relevant output, that is, both sets of forecasts. The next step is therefore to generate the unconditional twelve-month forecasts using the Pandemic Priors. Again, we can use either the lbvar dialog box or a few lines of EViews code. Let us start with the dialog box, as we can retain all the settings employed above for the BGR approach, including a lag length of $ p = 12 $ lags and the fixed overall tightness $ \lambda = 0.2 $ and $ \tau = 10 \lambda = 2 $. The main difference is that we now check the Include Pandemic Priors checkbox.

Figure 7: LBVAR Dialog options with pandemic priors.

Note that we have asked EViews to store the forecasts coming from Cascaldi-Garcia's (2022) model as variable_name_fpp. Selecting the option to Include Pandemic Priors activates an additional dialog box for the dummies and the two hyper-parameters ($ \phi $ and $ \epsilon $) associated with them. This box appears after you click on OK. Some of the fields are preset, such as the number of COVID periods (0) and the phi ($ \phi $) and epsilon ($ \epsilon $) priors, which are equal to 0.001 and 0.00001, respectively. For information, the equivalent settings in Cascaldi-Garcia's (2022) MATLAB code are 0.001 for both. When you use the dialog window below, please therefore change the entry for the Epsilon prior from 0.00001 to 0.001, so that both $ \epsilon $ and $ \phi $ in equation (6) are set to 0.001; see Figure 8 below.

Figure 8: LBVAR Dialog options for COVID dummies

Cascaldi-Garcia (2022, p. 6) reports that the COVID-19 pandemic period is modelled by applying the Pandemic Priors from March 2020 to August 2020, such that we include $ h = 6 $ individual dummies. We therefore enter the number of COVID periods to dummy out, equal to six, and the associated six dummy variables, which we have called dum1, dum2, dum3, dum4, dum5 and dum6. As highlighted above, these six time dummies build on the assumption that the COVID-19 shock is akin to intercept shifts for the macroeconomic variables in the selected period (from March to August 2020).

After clicking OK in the second dialog box, the workfile will include a second set of forecasts for the eight endogenous variables, all of which can be identified by the suffix _fpp. These forecasts appear as the blue lines in Figure 6 above.

As before, instead of the dialog box, we can convert the above commands into a few lines of EViews code to do the same thing. Taking our initial dialog box above as our starting point, we only need to change a few things.

lbvar(estimate=2, fhorizon=12, sum=1, lambda=0.2, tau=2, sample="1975m1 2022m3", pand=1, dummy="dum1 dum2 dum3 dum4 dum5 dum6", covper=6, eps=0.001, suffix=_fpp) 12 irw_s ebp @ ebp sp500l fedfunds pcel pcepil payemsl indprol unrate

The first few options (sum=1, lambda=0.2, tau=2, sample="1975m1 2022m3") are equivalent to the BGR case. The first difference occurs with the pand option. Remember that setting pand=0 switches the Pandemic Priors off and results in the BGR estimation, while setting pand=1 switches them on; the equivalent in the dialog window is the Include Pandemic Priors checkbox. The Pandemic Priors come with a few more options, starting with dummy="dum1 dum2 dum3 dum4 dum5 dum6". This informs the lbvar add-in of the names of the time dummy variables in the workfile, in our case dum1 through dum6. You can give the time dummies any name, as long as it is consistent between the workfile and the dummy="" option. Cascaldi-Garcia (2022, p. 6) models the COVID-19 pandemic period by applying the Pandemic Priors from March 2020 to August 2020, such that we include $ h = 6 $ individual dummies. The next option, covper=6, therefore specifies that the extraordinary period of extreme observations lasts for six months; it is equivalent to the Number of Covid periods box in the second dialog window that is specific to the Pandemic Priors. The final option, eps=0.001, sets the value of epsilon ($ \epsilon $) in equation (6) equal to that of $ \phi $.

Now that we have generated two sets of forecasts, one incorporating the Pandemic Priors and one that does not, we are in a position to inspect the results in more detail. Looking at Figure 4 of Cascaldi-Garcia (2022) in isolation, we find that for most of the variables the two forecasts are rather, if not very, similar. But there are notable deviations, namely for the two labour variables (employment and the unemployment rate) and, to a lesser degree, for PCE as well as industrial production. On the other hand, variables with unchanged autoregressive coefficients, such as EBP, the S&P 500 stock market index and the PCE price index, display very similar unconditional twelve-month-ahead forecasts across the two estimation approaches. This is apparent in both Figure 4 (Cascaldi-Garcia 2022) and Figure 3. In contrast, variables that are markedly affected by aberrant observations, such as employment and the unemployment rate, present substantially different unconditional forecasts, implying different economic interpretations. Forecasts using the BGR model in Figure 4 (Cascaldi-Garcia) and Figure 3 indicate that employment is expected to increase for a couple of months, after which it levels off for four months and starts to decrease thereafter. Similarly, the unemployment forecast using the BGR approach falls slightly for two months before increasing over the remainder of the forecast horizon. Using the Pandemic Priors results in very different forecasts: employment increases steadily for ten months before levelling off, and the unemployment rate falls for seven months before increasing again, albeit to a lower level at the end of the forecast horizon than under the BGR scenario.
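For intuition on why the two sets of forecasts differ only through the estimated coefficients, recall that the time dummies are zero outside the pandemic window, so the unconditional forecasts are produced by iterating the estimated VAR forward without them. A minimal numpy sketch of that iteration, with made-up coefficients (this is not the add-in's internal code):

```python
# Sketch: h-step unconditional forecasts by iterating Y_t = c + sum_i A_i Y_{t-i}.
# Out of sample the pandemic dummies are all zero, so they matter only
# through the estimated c and A_i. All numbers below are illustrative.
import numpy as np

def iterate_forecast(c, A, history, horizon=12):
    """Iterate the VAR(p) forward `horizon` steps from `history` (latest last)."""
    path = list(history)
    p = len(A)
    for _ in range(horizon):
        y_next = c + sum(A[i] @ path[-1 - i] for i in range(p))
        path.append(y_next)
    return np.array(path[len(history):])

# Tiny two-variable, one-lag example with made-up numbers
c = np.array([0.1, 0.0])
A = [np.array([[0.5, 0.1], [0.0, 0.8]])]
history = [np.array([1.0, 1.0])]
fcst = iterate_forecast(c, A, history, horizon=12)
print(fcst[:3])
```

Because the recursion feeds its own output back in, even modest differences in the estimated coefficients compound over the twelve-month horizon, which is exactly what the employment and unemployment panels show.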

Comparing Figure 6 with Figure 4 of Cascaldi-Garcia (2022) gives us an indication of how well the lbvar add-in replicates the forecasting results in Cascaldi-Garcia (2022). Overall, the correspondence is very satisfactory, including the kinks in the BGR forecasts for employment, industrial production and the unemployment rate. Finally, we can look at the impact and propagation of a one-off structural shock in the model. The next task therefore involves generating the IRFs for the BGR and Pandemic Prior approaches and replicating Figure 5 of Cascaldi-Garcia (2022). To that end, Figure 5 presents the impulse response functions (IRFs) of a one-off, one standard deviation EBP shock in March 2020 on the remaining variables using the BGR and Pandemic Prior approaches. Solid black lines indicate the (posterior mean) responses using the Pandemic Priors and solid red lines those calculated using the BGR approach.

For the structural decomposition and the identification of the structural EBP shocks, Cascaldi-Garcia (2022) orders the EBP variable first in the BVAR and performs a standard Cholesky (or recursive) decomposition. We should note, though, that the approach is flexible enough to accommodate other conventional as well as state-of-the-art identification procedures, such as proxy VARs, sign restrictions, external instruments or maximisation of the variance decomposition.
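Ordering EBP first and applying a Cholesky decomposition can be sketched in a few lines of numpy; the covariance matrix below is illustrative, not the estimated one:

```python
# Sketch: recursive (Cholesky) identification with EBP ordered first.
# The impact matrix B0 is the lower-triangular Cholesky factor of the
# innovation covariance Sigma, so the first (EBP) shock can move every
# variable on impact while later shocks cannot move EBP on impact.
import numpy as np

sigma = np.array([[0.30, 0.05, 0.02],     # illustrative 3-variable Sigma
                  [0.05, 0.40, 0.10],
                  [0.02, 0.10, 0.25]])
b0 = np.linalg.cholesky(sigma)            # lower triangular: B0 @ B0.T = Sigma

# Impact responses to a one standard deviation EBP shock: first column of B0
ebp_shock_impact = b0[:, 0]
print(ebp_shock_impact)
```

The recursive ordering is what makes the first column of the Cholesky factor interpretable as the on-impact response to the EBP shock.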

Again, we have the choice between doing so via the dialog box or a few lines of EViews code.

lbvar(estimate=1, horizon=12, sum=1, lambda=0.2, tau=2, sample="1975m1 2022m3", ident=1, pand=0) 12 irw_s ebp @ ebp sp500l fedfunds pcel pcepil payemsl indprol unrate

After running the code, EViews produces the impulse response functions of the one standard deviation shock in EBP (the impulse variable) on the variables in the BVAR (including EBP itself), reproduced below as Figure 9. The dark blue line is the posterior mean of the 100 draws, surrounded by confidence bands covering 68 per cent of possible outcomes. Once again, we will defer a discussion of the results until we have generated both sets of IRFs.

Figure 9: Impulse response functions to a one standard deviation EBP shock (BGR approach)

The impulse_variable comes into play when we are interested in first identifying and then estimating a VAR model subject to a structural shock and its associated output, such as impulse response functions (IRFs), forecast error variance decompositions (FEVDs, or variance decompositions in short) and historical decompositions. In Cascaldi-Garcia (2022, Figure 5), the author shows estimated impulse responses to a one standard deviation EBP shock. This makes ebp the impulse_variable, which is reflected in the above commands. The equivalent entry in the lbvar add-in dialog window is Impulse variable at the bottom of the right-hand side. If you wanted the structural shock to come from another variable, simply replace ebp with the variable of your choice. Finally, the endogenous variables of the model appear after the @ in the command line. As already mentioned, we have eight variables in our underlying model: ebp, sp500l, fedfunds, pcel, pcepil, payemsl, indprol, unrate. These variables are specified in the first box, called Endogenous variables, in the lbvar add-in dialog window.
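Once the impact vector is identified, the IRFs at longer horizons follow from the standard moving-average recursion of the VAR. A small numpy sketch with made-up VAR(1) coefficients (again, not the add-in's internal code):

```python
# Sketch: propagating the identified impact vector through the VAR to get
# the responses at horizons 0..H via the moving-average recursion
# Psi_h = sum_{i=1}^{min(h,p)} A_i Psi_{h-i}. All numbers are illustrative.
import numpy as np

def irf(A, impact, horizons=12):
    """IRFs of a VAR(p) to the on-impact vector `impact` (a column of B0)."""
    p = len(A)
    responses = [impact]
    for h in range(1, horizons + 1):
        r = sum(A[i] @ responses[h - 1 - i] for i in range(min(h, p)))
        responses.append(r)
    return np.array(responses)

A = [np.array([[0.6, 0.0], [0.2, 0.5]])]   # VAR(1) for illustration
impact = np.array([0.55, 0.0])             # hypothetical one-sd EBP impact
print(irf(A, impact, horizons=4))
```

For a VAR(1) the horizon-h response is simply A to the power h times the impact vector, which is how the shock's persistence is inherited from the autoregressive coefficients.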

The analogous exercise with Pandemic Priors should also be straightforward to set up. We only need to change the Estimation drop-down menu to Impulse response. Alternatively, we can rely on the code below.

lbvar(estimate=1, horizon=12, sum=1, lambda=0.2, tau=2, sample="1975m1 2022m3", ident=1, pand=1, dummy="dum1 dum2 dum3 dum4 dum5 dum6", covper=6, eps=0.001) 12 irw_s ebp @ ebp sp500l fedfunds pcel pcepil payemsl indprol unrate

After running the code, EViews produces the impulse response functions of the one standard deviation shock in EBP (the impulse variable) on the variables in the BVAR (including EBP itself), reproduced below as Figure 10. As in Figure 9, the dark blue line is the posterior mean of the 100 draws, surrounded by confidence bands covering 68 per cent of possible outcomes.

Figure 10: Impulse response functions to a one standard deviation EBP shock (Pandemic Prior approach)

The blue shaded areas in Figure 9 should be compared to the grey shaded areas centred around the black lines in Figure 5 of Cascaldi-Garcia. Similarly, the blue shaded areas in Figure 10 should be compared to the bands traced out by the red dotted lines around the red solid lines in Figure 5 of Cascaldi-Garcia. As before, we find that the IRFs to a one standard deviation structural EBP shock more or less coincide across the two estimation approaches for half of the variables. The notable exceptions are PCE, industrial production, employment and the unemployment rate: the four variables that display the highest degree of aberrant observations once again show the greatest divergence between the BGR approach, which keeps the coefficients constant over the pandemic period, and the Pandemic Priors, which do not. In fact, the IRFs for the latter two on the basis of the BGR methodology show notable kinks after two periods which are quite distinct from the equivalent IRFs using the Pandemic Priors.

Figures 9 and 10 show sizeable differences in both size and propagation. Using the BGR approach, we would expect both quicker and larger falls in PCE, industrial production and employment and a much quicker and larger jump in the unemployment rate. Accounting for the intercept shifts results in a much smoother and more delayed impact of the structural EBP shock. For the remaining four variables, the economic effects of an EBP shock are similar across the two different estimation approaches.

Concluding Remarks

Looking at Figures 9 and 10, we wonder whether the lbvar add-in really applies a one standard deviation shock to the impulse variable, as the EBP IRF in both figures starts at one. With a one standard deviation shock, we would have expected a starting point closer to the unconditional standard deviation of the ebp variable, which is 0.55. In fact, the corresponding impact value for the IRF in Figure 10 is closer to 0.23. This may indicate that the lbvar add-in applies a unit shock instead. This is not the end of the world, as both normalisations are widely employed in the literature. But it does mean that we can only compare the shapes of the IRFs across figures, not their magnitudes.
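The distinction between the two normalisations is easy to state in code: a one standard deviation shock uses a column of the Cholesky factor as-is, while a unit shock rescales that column so the shocked variable starts at exactly one. A numpy sketch with an illustrative covariance matrix:

```python
# Sketch: one-standard-deviation vs unit-shock normalisation of the
# impact vector. Sigma is illustrative; only the rescaling step matters.
import numpy as np

sigma = np.array([[0.30, 0.05],
                  [0.05, 0.40]])
b0 = np.linalg.cholesky(sigma)

one_sd_impact = b0[:, 0]                  # shocked variable starts at its sd
unit_impact = b0[:, 0] / b0[0, 0]         # shocked variable starts at 1

print(one_sd_impact[0], unit_impact[0])
```

The two impact vectors differ only by the scalar b0[0, 0], so the IRF shapes are identical and only the magnitudes change, which is why shapes remain comparable across figures.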


  1. Bańbura, M., Giannone, D. and Reichlin, L. (2010). Large Bayesian vector autoregressions. Journal of Applied Econometrics, 25(1): 71–92.
  2. Cascaldi-Garcia, D. (2022). Pandemic priors. Board of Governors of the Federal Reserve System, International Finance Discussion Papers, 1352.
  3. Carriero, A., Clark, T. E., Marcellino, M. and Mertens, E. (2022). Addressing COVID-19 outliers in BVARs with stochastic volatility. Review of Economics and Statistics, 1–38.
  3. Christiano, L. J., Eichenbaum, M. and Evans, C. L. (1996). The effects of monetary policy shocks: evidence from the Flow of Funds. Review of Economics and Statistics, 78(1): 16–34.
  4. Gilchrist, S. and Zakrajšek, E. (2012). Credit spreads and business cycle fluctuations. American Economic Review, 102(4): 1692–1720.
  6. Lenza, M. and Primiceri, G. E. (2022). How to estimate a VAR after March 2020. Journal of Applied Econometrics, 37(4): 688–699.
  7. Litterman, R. B. (1986). Forecasting with Bayesian vector autoregressions – five years of experience. Journal of Business & Economic Statistics, 4(1): 25–38.
  8. Primiceri, G. E. and Tambalotti, A. (2020). Macroeconomic forecasting in the time of COVID-19, mimeo.
  9. Schorfheide, F. and Song, D. (2021). Real-time forecasting with a (standard) mixed-frequency VAR during a pandemic. NBER Working Paper, 29535.
  10. Wu, J. C. and Xia, F. D. (2016). Measuring the macroeconomic impact of monetary policy at the zero lower bound. Journal of Money, Credit and Banking, 48(2-3): 253–291.
