Monday, April 3, 2017

AutoRegressive Distributed Lag (ARDL) Estimation. Part 1 - Theory

One of our favorite bloggers, Dave Giles, often writes about current trends in econometric theory and practice. One of his most popular topics is ARDL modeling, and he has a number of fantastic posts about it.

Since we have recently updated ARDL estimation in EViews 9.5, and are in the midst of adding some enhanced features to ARDL for the next version of EViews, EViews 10, we thought we would jot down our own thoughts on the theory and practice of ARDL models, particularly in regard to their use as a cointegration test.

This blog post will be in three parts. The first will discuss the theory behind ARDL models, the second will present the theory behind correct inference in the bounds test, and the third will bring everything together with an example in EViews.



Overview

ARDL models are linear time series models in which both the dependent and independent variables are related not only contemporaneously, but across historical (lagged) values as well. In particular, if $y_t$ is the dependent variable and $x_1, \ldots, x_k$ are $k$ explanatory variables, a general ARDL$(p,q_1,\ldots,q_k)$ model is given by: \begin{align} y_t = a_0 + a_1t + \sum_{i=1}^p{\psi_i y_{t-i}} + \sum_{j=1}^{k}\sum_{l_j=0}^{q_j}{\beta_{j,l_j}x_{j,t-l_j}} + \epsilon_t \label{eq.ardl.1} \end{align} where $\epsilon_t$ are the usual innovations, $a_0$ is a constant term, and $a_1, \psi_i,$ and $\beta_{j,l_j}$ are respectively the coefficients associated with a linear trend, lags of $y_t$, and lags of the $k$ regressors $x_{j,t}$ for $j=1,\ldots,k$. Alternatively, let $L$ denote the usual lag operator and define $\psi(L)$ and $\beta_j(L)$ as the lag polynomials: $$\psi(L) = 1 - \sum_{i=1}^{p}{\psi_iL^i} \quad \text{and} \quad \beta_j(L) = \sum_{l_j=0}^{q_j}{\beta_{j,l_j}L^{l_j}}$$ Then, equation (\ref{eq.ardl.1}) above can also be written as: \begin{align} \psi(L)y_t = a_0 + a_1t + \sum_{j=1}^{k}{\beta_{j}(L)x_{j,t}} + \epsilon_t \label{eq.ardl.1.lag} \end{align}

Although ARDL models have been used in econometrics for decades, they have gained popularity in recent years as a method of examining cointegrating relationships. Two seminal contributions in this regard are Pesaran and Shin (1998, PS(1998)) and Pesaran, Shin and Smith (2001, PSS(2001)). In particular, they argue that ARDL models are especially advantageous in their ability to handle cointegration with inherent robustness to misspecification of integration orders of relevant variables. In this regard, we have three cases of interest:


  • All variables are I$(d)$ for some $d \geq 0$ and are not cointegrated -- fractional orders of integration are in principle also possible. Here one can use familiar least squares techniques to estimate and interpret equation (\ref{eq.ardl.1}) in levels when $d=0$, and in appropriate differences when $d>0$.
  • All variables are I$(1)$ and are cointegrated. Here one can:
    • use least squares to estimate the cointegrating (long-run) relationship by regressing $y_t$ on $x_{j,t}$ for $j=1,\ldots,k$ in levels; and/or,
    • use least squares to estimate the speed of adjustment of short-run dynamics to the cointegrating relationship by estimating the appropriate error-correction model (ECM).
  • Some variables are I$(0)$, others are I$(1)$, and amongst the latter, some are cointegrated.

It is precisely in this last case where traditional cointegration methodologies of Engle-Granger (1987), Phillips and Ouliaris (1990) or Johansen (1995), typically fail since all variables need to have identical orders of integration, usually I$(1)$. This requires pre-testing for the presence of a unit root in each of the variables under consideration, which is clearly subject to misclassification, particularly since unit root tests are known to suffer size and power problems in many cases of interest; see Perron and Ng (1996).

Alternatively, the PSS(2001) bounds test for cointegration is not subject to such limitations and readily accommodates the nuances of the third case. The test is in fact a parameter significance test on the long-run variables in the ECM of the underlying vector autoregression (VAR) model, and works when all or some variables are I$(0)$, I$(1)$, or even mutually cointegrated. Since there exists a one-to-one correspondence between an ECM of a VAR model and an ARDL model (see Banerjee et al., 1993), and since ARDL models are estimated and interpreted using familiar least squares techniques, ARDL models are the de facto standard of estimation when one chooses to remain agnostic about the orders of integration of the underlying variables. It is precisely in this regard where the ARDL methodology shines.
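Since the general model above is estimated with ordinary least squares, its mechanics are easy to sketch in code. Below is a minimal NumPy simulation (all parameter values are made up for illustration) that generates an ARDL(1,1) process and recovers its coefficients by OLS:

```python
import numpy as np

rng = np.random.default_rng(12345)

# Simulate an ARDL(1,1): y_t = a0 + psi1*y_{t-1} + b0*x_t + b1*x_{t-1} + e_t
# (all parameter values here are made up for illustration)
T = 5000
a0, psi1, b0, b1 = 0.5, 0.6, 1.0, -0.4
x = np.zeros(T)
y = np.zeros(T)
u = rng.normal(size=T)
e = rng.normal(scale=0.1, size=T)
for s in range(1, T):
    x[s] = 0.7 * x[s - 1] + u[s]
    y[s] = a0 + psi1 * y[s - 1] + b0 * x[s] + b1 * x[s - 1] + e[s]

# OLS on the levels specification: y_t on 1, y_{t-1}, x_t, x_{t-1}
X = np.column_stack([np.ones(T - 1), y[:-1], x[1:], x[:-1]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
print(coef)  # ~ [0.5, 0.6, 1.0, -0.4]
```

With a large sample and stationary data, the OLS estimates sit close to the simulated values, which is all the "familiar least squares techniques" of the first case require.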

Specification

Although the general ARDL model is specified in (\ref{eq.ardl.1}), there exist three alternative representations. While all three can be used for parameter estimation, the first is typically used for intertemporal dynamic estimation, the second for post-estimation derivation of the long-run (equilibrium) relationship, and the third is a reduction of equation (\ref{eq.ardl.1}) to the conditional error correction (CEC) representation used in the PSS(2001) bounds test; see Banerjee et al. (1993).

All three representations require some preliminary results. Using principles underlying the famous Beveridge-Nelson decomposition, recall that $\psi(L)$ and $\beta_j(L)$ can always be decomposed as: \begin{align*} \psi(L) = \psi(1) + (1-L)\widetilde{\psi}(L) \quad \text{and} \quad \beta_j(L) = \beta_j(1) + (1-L)\widetilde{\beta}_j(L) \end{align*} where \begin{align*} \widetilde{\psi}(L) = \sum_{i=0}^{p-1}{\widetilde{\psi}_{i}L^{i}} &\quad \text{and} \quad \widetilde{\psi}_i = \sum_{r=i+1}^{p}\psi_r\\ \widetilde{\beta}_j(L) = \sum_{l_j=0}^{q_j-1}{\widetilde{\beta}_{j,l_j}L^{l_j}} &\quad \text{and} \quad \widetilde{\beta}_{j,l_j} = -\sum_{s=l_j+1}^{q_j}\beta_{j,s} \end{align*} (note the sign difference between $\widetilde{\psi}_i$ and $\widetilde{\beta}_{j,l_j}$: the lag coefficients enter $\psi(L)$ negatively but $\beta_j(L)$ positively) and $$\psi(1) = 1 - \sum_{i=1}^{p}{\psi_i} \quad \text{and} \quad \beta_j(1) = \sum_{l_j=0}^{q_j}{\beta_{j,l_j}}$$ Next, note that $\psi(L) = 1 - \psi^\star(L)$ where $\psi^\star(L) = \sum_{i=1}^{p}{\psi_iL^i}$. Furthermore, observe that: $$ \psi^\star(L) = \sum_{i=1}^{p}{\psi_iL^i} = \left(\sum_{i=1}^{p}{\psi_iL^{i-1}}\right)L = \left(\psi^\star(1) + (1-L)\widetilde{\psi^\star}(L)\right)L $$ where $\widetilde{\psi^\star}(L) = \sum_{i=1}^{p-1}{\widetilde{\psi^\star}_{i}L^{i-1}}$, $\widetilde{\psi^\star}_i = -\sum_{r=i+1}^{p}\psi_r$, and $\psi^\star(1) = \sum_{i=1}^{p}{\psi_i}$. Finally, note that for any series $z_t$ one can always write: $$z_t = z_{t-1} + \Delta z_t$$

First Representation: (Intertemporal Dynamics Regression)

The typical starting point for most ARDL applications is the estimation of intertemporal dynamics. In this form, one is interested in regressing $y_t$ on both its own lags as well as the contemporaneous and lagged values of the $k$ regressors $x_{j,t}$. This is in fact the basis of the ARDL model studied in PS(1998). In particular, we cast equation (\ref{eq.ardl.1}) into the following representation: \begin{align} y_t &= a_0 + a_1t + \sum_{i=1}^p{\psi_i y_{t-i}} + \sum_{j=1}^{k}{\beta_j(L)x_{j,t}} + \epsilon_t \notag\\ &= a_0 + a_1t + \sum_{i=1}^p{\psi_i y_{t-i}} + \sum_{j=1}^{k}{\left(\beta_j(1) + (1-L)\widetilde{\beta}_j(L)\right)x_{j,t}} + \epsilon_t \notag\\ &= a_0 + a_1t + \sum_{i=1}^p{\psi_i y_{t-i}} + \sum_{j=1}^{k}{\beta_j(1)x_{j,t}} + \sum_{j=1}^{k}{\widetilde{\beta}_j(L)\Delta x_{j,t}} + \epsilon_t \label{eq.ardl.2} \end{align} where we use the first difference notation $\Delta = (1-L)$. Since equation (\ref{eq.ardl.2}) does not explicitly solve for $y_t$, it is typically interpreted as a regression for intertemporal dynamics. Of course, the model above uses theoretical coefficients, whereas in a practical regression setting it would be represented as: \begin{align} y_t &= a_0 + a_1t + \sum_{i=1}^p{b_{0,i} y_{t-i}} + \sum_{j=1}^{k}{b_{j}x_{j,t}} + \sum_{j=1}^{k}{\sum_{l_j=0}^{q_j-1}c_{j,l_j}\Delta x_{j,t-l_j}} + \epsilon_t \label{eq.ardl.2.reg} \end{align}
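To see the reparametrization at work, the NumPy sketch below (made-up parameters, one regressor) simulates an ARDL(2,2) and regresses $y_t$ on its own lags, the level $x_t$, and the current and lagged differences of $x_t$; the level coefficient recovers $\beta(1)$ and the difference coefficients recover the $\widetilde{\beta}$'s:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate an ARDL(2,2) in levels (made-up parameters)
T = 6000
a0 = 0.2
psi = np.array([0.5, -0.2])            # psi_1, psi_2
beta = np.array([1.0, 0.5, -0.3])      # beta_0, beta_1, beta_2
x = np.zeros(T)
y = np.zeros(T)
u = rng.normal(size=T)
e = rng.normal(scale=0.1, size=T)
for s in range(2, T):
    x[s] = 0.6 * x[s - 1] + u[s]
    y[s] = (a0 + psi[0] * y[s - 1] + psi[1] * y[s - 2]
            + beta[0] * x[s] + beta[1] * x[s - 1] + beta[2] * x[s - 2] + e[s])

# Intertemporal-dynamics regression: y_t on 1, y_{t-1}, y_{t-2}, x_t, dx_t, dx_{t-1}
dx = np.diff(x, prepend=0.0)
Z = np.column_stack([np.ones(T), np.roll(y, 1), np.roll(y, 2),
                     x, dx, np.roll(dx, 1)])[2:]
coef, *_ = np.linalg.lstsq(Z, y[2:], rcond=None)

print(coef[3])  # ~ beta(1)              = 1.2
print(coef[4])  # ~ -(beta_1 + beta_2)   = -0.2
print(coef[5])  # ~ -beta_2              = 0.3
```

Because the level and difference regressors span the same space as the raw lags of $x_t$, this regression has exactly the same fit as the original levels specification; only the parametrization changes.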

Second Representation: (Post-Regression Derivation of Long-Run Dynamics)

The second representation is in essence an attempt to derive the long-run relationship between $y_t$ and the $k$ regressors. As such, the representation solves for $y_t$ in terms of $x_{j,t}$. In particular, starting from equation (\ref{eq.ardl.1}), note that: \begin{align*} \psi(L)\Delta y_t &= (1-L)\psi(L)y_t\\ &= (1-L) \left(y_t - \sum_{i=1}^p{\psi_i y_{t-i}}\right)\\ &= (1-L)\left(a_0 + a_1t + \sum_{j=1}^{k}{\beta_j(L)x_{j,t}} + \epsilon_t\right)\\ &= a_1 + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \Delta\epsilon_t \end{align*} Next, assuming $\psi(L)$ is in fact invertible, that is, the roots of the characteristic polynomial $1 - \sum_{i=1}^{p}{\psi_iz^i} = 0$ all fall outside the unit circle and a stable relationship between $y_t$ and $x_{1,t}, \ldots, x_{k,t}$ does indeed exist, it holds that: $$\Delta y_t = \psi^{-1}(L) \left(a_1 + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \Delta\epsilon_t\right)$$ Furthermore, noting that $\psi(L)y_t = \psi(1)y_t + \widetilde{\psi}(L)\Delta y_t$, rewrite equation (\ref{eq.ardl.1}) as follows: \begin{align*} \psi(1)y_t &= \psi(L)y_t - \widetilde{\psi}(L)\Delta y_t\\ \psi(1)y_t &= a_0 + a_1t + \sum_{j=1}^{k}{\beta_j(1)x_{j,t}} + \sum_{j=1}^{k}{\widetilde{\beta}_j(L)\Delta x_{j,t}} - \widetilde{\psi}(L)\Delta y_t + \epsilon_t\\ &= a_0 + a_1t + \sum_{j=1}^{k}{\beta_j(1)x_{j,t}} + \sum_{j=1}^{k}{\widetilde{\beta}_j(L)\Delta x_{j,t}} - \widetilde{\psi}(L)\psi^{-1}(L)\left( a_1 + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \Delta\epsilon_t \right) + \epsilon_t\\ &= a_0^\star + a_1t + \sum_{j=1}^{k}{\beta_j(1)x_{j,t}} + \sum_{j=1}^{k}{\widetilde{\beta}^\star_j(L)\Delta x_{j,t}} + \epsilon_t^\star \end{align*} where \begin{align*} a_0^\star &= a_0 - \widetilde{\psi}(L)\psi^{-1}(L)a_1\\ &= a_0 - \widetilde{\psi}(1)\psi^{-1}(1)a_1\\ \widetilde{\beta}^\star_j(L) &= \widetilde{\beta}_j(L) - \widetilde{\psi}(L)\psi(L)^{-1}\beta_j(L)\\ \epsilon_t^\star &= \epsilon_t - \widetilde{\psi}(L)\psi(L)^{-1}\Delta\epsilon_t \end{align*} At last, 
the second representation is formulated as: \begin{align} y_t &= \psi^{-1}(1)\left( a_0^\star + a_1t + \sum_{j=1}^{k}{\beta_j(1)x_{j,t}} + \sum_{j=1}^{k}{\widetilde{\beta}^\star_j(L)\Delta x_{j,t}} + \epsilon_t^\star \right) \notag\\ &= \alpha_0 + \alpha_1 t + \sum_{j=1}^{k}{\theta_j(1)x_{j,t}} + \sum_{j=1}^{k}{\widetilde{\theta}_j(L)\Delta x_{j,t}} + \xi_t \label{eq.ardl.3} \end{align} where \begin{align*} \alpha_0 &= \psi^{-1}(1) \left(a_0 - \widetilde{\psi}(1)\alpha_1 \right)\\ \alpha_1 &= \psi^{-1}(1) a_1\\ \theta_j(1) &= \psi^{-1}(1)\beta_j(1)\\ \widetilde{\theta}_j(L) &= \psi^{-1}(1) \left(\widetilde{\beta}_j(L) - \widetilde{\psi}(L)\psi(L)^{-1}\beta_j(L)\right)\\ \xi_t &= \psi^{-1}(1)\left(\epsilon_t - \widetilde{\psi}(L)\psi(L)^{-1}\Delta\epsilon_t\right) \end{align*} From equation (\ref{eq.ardl.3}), we are typically interested in the long-run (trend) parameters captured by $\alpha_1$ and $\theta_j(1)$, for $j=1,\ldots,k$. In fact, given the one-to-one correspondence between the parameter estimates obtained in (\ref{eq.ardl.2}) and equation (\ref{eq.ardl.3}), having estimated the regression model (\ref{eq.ardl.2.reg}), one can use the parameter formulas above to derive estimates of the long-run parameters post-estimation. In particular, if $\widehat{a}_1,\widehat{b}_{0,1},\ldots,\widehat{b}_{0,p},\widehat{b}_1,\ldots,\widehat{b}_k$ denote the relevant subset of estimated coefficients from the regression model (\ref{eq.ardl.2.reg}), then a post-regression estimate of the long-run parameters is derived as follows: $$\widehat{\alpha}_1 = \frac{\widehat{a}_1}{1 - \displaystyle\sum_{i=1}^{p}{\widehat{b}_{0,i}}} \quad \text{and} \quad \widehat{\theta}_j(1) = \frac{\widehat{b}_j}{1 - \displaystyle\sum_{i=1}^{p}{\widehat{b}_{0,i}}}$$
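In code, the post-regression long-run estimates are a one-line transformation of the least squares coefficients. The NumPy sketch below (made-up parameters, with the numerator written equivalently as the sum of the estimated level coefficients on $x_t$ and $x_{t-1}$) recovers a known long-run coefficient of one:

```python
import numpy as np

rng = np.random.default_rng(42)

# ARDL(1,1) with known long-run coefficient theta = (b0 + b1) / (1 - psi1) = 1.0
T = 8000
psi1, b0, b1 = 0.5, 0.3, 0.2
x = np.cumsum(rng.normal(size=T))      # an I(1) regressor
e = rng.normal(scale=0.2, size=T)
y = np.zeros(T)
for s in range(1, T):
    y[s] = psi1 * y[s - 1] + b0 * x[s] + b1 * x[s - 1] + e[s]

# Estimate the levels regression, then apply the long-run formula
X = np.column_stack([np.ones(T - 1), y[:-1], x[1:], x[:-1]])
bhat, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
theta_hat = (bhat[2] + bhat[3]) / (1.0 - bhat[1])
print(theta_hat)  # ~ 1.0
```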

Third Representation: (Conditional Error Correction Form and the Bounds Test)

The final representation is arguably the most interesting and one that typically receives the most attention in applied work. The objective here is to test for cointegration by reducing a typical VAR framework to its corresponding conditional error correction (CEC) form. As it happens, the CEC model of interest is in fact an ARDL model with a one-to-one correspondence with the model in (\ref{eq.ardl.2}). To see this, substitute the right hand side of equation (\ref{eq.ardl.1}) for $y_t$ in line 2 below. In particular: \begin{align} \Delta y_t &= y_t - y_{t-1} \notag\\ &= a_0 + a_1t + \psi^\star(L)y_t + \sum_{j=1}^{k}{\beta_j(L)x_{j,t}} + \epsilon_t - y_{t-1}\notag\\ &= a_0 + a_1t - y_{t-1} + \psi^\star(1)y_{t-1} + \widetilde{\psi^\star}(L)\Delta y_{t-1} + \sum_{j=1}^{k}{\beta_j(L)x_{j,t}} + \epsilon_t \notag\\ &= a_0 + a_1t - \left(1 - \psi^\star(1)\right)y_{t-1} + \widetilde{\psi^\star}(L)\Delta y_{t-1} + \sum_{j=1}^{k}{\beta_j(L)\left(x_{j,t-1} + \Delta x_{j,t}\right)} + \epsilon_t \notag\\ &= a_0 + a_1t - \psi(1)y_{t-1} + \widetilde{\psi^\star}(L)\Delta y_{t-1} + \sum_{j=1}^{k}{\left(\beta_j(1) + (1-L)\widetilde{\beta}_j(L)\right)x_{j,t-1}} + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \epsilon_t \notag\\ &= a_0 + a_1t - \psi(1)y_{t-1} + \sum_{j=1}^{k}{\beta_j(1)x_{j,t-1}} \notag\\ &+ \left(\widetilde{\psi^\star}(L)\Delta y_{t-1} + \sum_{j=1}^{k}{\widetilde{\beta}_j(L)\Delta x_{j,t-1}}\right) + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \epsilon_t \label{eq.ardl.4} \end{align} Equation (\ref{eq.ardl.4}) above is the CEC form derived from the ARDL model in equation (\ref{eq.ardl.1}).
Rewriting this equation as: \begin{align} \Delta y_t &= a_0 + a_1t - \psi(1)\left( y_{t-1} - \sum_{j=1}^{k}{\frac{\beta_j(1)}{\psi(1)}x_{j,t-1}}\right) \notag\\ &+ \left(\widetilde{\psi^\star}(L)\Delta y_{t-1} + \sum_{j=1}^{k}{\widetilde{\beta}_j(L)\Delta x_{j,t-1}}\right) + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \epsilon_t \notag\\ &= a_0 + a_1t - \psi(1)EC_{t-1} \notag\\ &+ \left(\widetilde{\psi^\star}(L)\Delta y_{t-1} + \sum_{j=1}^{k}{\widetilde{\beta}_j(L)\Delta x_{j,t-1}}\right) + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \epsilon_t \label{eq.ardl.5} \end{align} it is readily verified that the error correction term, typically denoted as $EC_t$, is also the cointegrating relationship when $y_t$ and $x_{1,t},\ldots,x_{k,t}$ are cointegrated. In fact, PSS(2001) demonstrate that equation (\ref{eq.ardl.4}) is in fact (abstracting from differing lag values) the CEC of the VAR$(p)$ model: $$\pmb{\Phi}(L)(\pmb{z}_t - \pmb{\mu} - \pmb{\gamma}t) = \pmb{\epsilon}_t$$ where $\pmb{z}_t$ is a $(k+1)$-vector $(y_t,x_{1,t},\ldots, x_{k,t})^\top$ and $\pmb{\mu}$ and $\pmb{\gamma}$ are respectively the $(k+1)$-vectors of intercept and trend coefficients, and $\pmb{\Phi}(L) = \pmb{I}_{k+1} - \sum_{i=1}^{p}\pmb{\Phi}_iL^i$ is the $(k+1)$ square matrix lag polynomial. This is particularly important as the CEC is typically used as a platform for testing for the presence of cointegration.
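Because the CEC is an exact reparametrization of the levels ARDL, both regressions produce identical residuals. The NumPy sketch below (arbitrary simulated series, an ARDL(2,1) layout with one regressor) checks this:

```python
import numpy as np

rng = np.random.default_rng(3)

# Arbitrary simulated series; any data set works for checking the algebra
T = 500
x = np.cumsum(rng.normal(size=T))
y = np.cumsum(rng.normal(size=T)) + x

t = np.arange(2, T)
dy, dx = np.diff(y), np.diff(x)

# Levels ARDL(2,1): y_t on 1, y_{t-1}, y_{t-2}, x_t, x_{t-1}
XL = np.column_stack([np.ones(len(t)), y[t - 1], y[t - 2], x[t], x[t - 1]])
rL = y[t] - XL @ np.linalg.lstsq(XL, y[t], rcond=None)[0]

# CEC form: dy_t on 1, y_{t-1}, x_{t-1}, dy_{t-1}, dx_t
XC = np.column_stack([np.ones(len(t)), y[t - 1], x[t - 1], dy[t - 2], dx[t - 1]])
rC = dy[t - 1] - XC @ np.linalg.lstsq(XC, dy[t - 1], rcond=None)[0]

print(np.allclose(rL, rC))  # True: the two forms are exact reparametrizations
```

The two design matrices span the same space, and $y_{t-1}$ appears on both sides of the CEC, so the least squares residuals coincide to machine precision.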

Traditionally, the cointegration tests of Engle and Granger (1987), Phillips and Ouliaris (1990), or Johansen (1995) require all variables in the VAR to be I$(1)$. This requires a battery of pre-testing for the presence of a unit root in each of the variables under consideration, and is clearly subject to misclassification, particularly since unit root tests are known to suffer size and power problems in many cases of interest. In contrast, PSS(2001) propose a test for cointegration that is not only robust to whether variables of interest are I$(0)$, I$(1)$, or mutually cointegrated, but is significantly easier to implement, as it only requires the estimation and inferential procedures used in familiar least squares regressions. In this regard, PSS(2001) discuss the famous bounds test for cointegration as a test of parameter significance in the cointegrating relationship of the CEC model (\ref{eq.ardl.4}). In other words, the test is a standard $F$- or Wald test for the following null and alternative hypotheses: \begin{align*} H_0 &: \quad \psi(1) = 0 \;\text{ and }\; \beta_j(1) = 0 \text{ for all } j \quad \text{(variables are not cointegrated)}\\ H_A &: \quad \psi(1) \neq 0 \;\text{ or }\; \beta_j(1) \neq 0 \text{ for some } j \quad \text{(variables are cointegrated)} \end{align*} Once the test statistic is computed, it is compared to two asymptotic critical values corresponding to the polar cases of all variables being purely I$(0)$ or purely I$(1)$. As such, these critical values lie in the lower and upper tails, respectively, of a non-standard mixture distribution involving integral functions of Brownian motions. When the test statistic is below the lower critical value, one fails to reject the null and concludes that cointegration is not possible. In contrast, when the test statistic is above the upper critical value, one rejects the null and concludes that cointegration is indeed possible.
In either of these two cases, knowledge of the cointegrating rank is not necessary. Alternatively, should the test statistic fall between the lower and upper critical values, testing is inconclusive, and knowledge of the cointegrating rank is required to proceed further.
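Computationally, the bounds statistic is just the usual F-statistic for the joint restriction on the level terms of the CEC regression; only its reference distribution is non-standard. A NumPy sketch under the unrestricted-constant case (simulated data, one regressor, made-up lag lengths):

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated cointegrated pair; in practice y_t, x_t are the observed series
T = 400
x = np.cumsum(rng.normal(size=T))
y = 0.8 * x + rng.normal(size=T)

t = np.arange(2, T)
dy, dx = np.diff(y), np.diff(x)

# CEC regression with a constant: dy_t on 1, y_{t-1}, x_{t-1}, dy_{t-1}, dx_t
Xu = np.column_stack([np.ones(len(t)), y[t - 1], x[t - 1], dy[t - 2], dx[t - 1]])
Xr = Xu[:, [0, 3, 4]]            # restricted model: level terms dropped
z = dy[t - 1]

def ssr(X, v):
    beta, *_ = np.linalg.lstsq(X, v, rcond=None)
    return np.sum((v - X @ beta) ** 2)

m = 2                             # k + 1 = 2 restricted level coefficients
F = ((ssr(Xr, z) - ssr(Xu, z)) / m) / (ssr(Xu, z) / (len(z) - Xu.shape[1]))
print(F)  # compare to the PSS(2001) lower/upper bounds, not standard F tables
```

The crucial caveat is the last line: the statistic must be compared against the tabulated lower and upper bound critical values, never against an ordinary F distribution.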

We also remark here that Narayan (2005) argues that the asymptotic critical values presented in PSS(2001) are unrealistic for most practical implementations, since they are derived for sample sizes of $T=1000$. Accordingly, Narayan (2005) presents critical values for sample sizes ranging from $T=30$ to $T=80$ in increments of 5, which, he argues, ought to improve inferential reliability in most finite-sample settings. EViews 10 will offer the user a choice between the PSS(2001) and the Narayan (2005) critical values.

Here it is also important to highlight that PSS(2001) offer five alternative interpretations of the CEC model (\ref{eq.ardl.4}), distinguished by whether the deterministic terms enter the error correction term. When deterministic terms contribute to the error correction term, they are implicitly projected onto the span of the cointegrating vector. In other words, $\pmb{\mu}$ and $\pmb{\gamma}$ of the VAR$(p)$ model are restricted to a linear combination of the elements in the cointegrating vector. This clearly implies that $a_0$ and $a_1$ in equation (\ref{eq.ardl.4}) must then be similarly restricted. Below are summaries of the theoretical (DGP) and practical regression (REG) models, respectively, for each of the five interpretations, along with the appropriate cointegrating relationship ($EC_t$) and the bounds test null hypothesis ($H_0$). In the regression forms, $b_0$ and $b_j$ are the least squares counterparts of $-\psi(1)$ and $\beta_j(1)$, respectively.

Case 1: (No Constant and No Trend): $a_0 = a_1 = 0$, that is, ($\pmb{\mu} = \pmb{\gamma} = \pmb{0}$)

DGP: \begin{align*} \Delta y_t &=-\psi(1)y_{t-1} + \sum_{j=1}^{k}{\beta_j(1)x_{j,t-1}}\\ &+ \left(\widetilde{\psi^\star}(L)\Delta y_{t-1} + \sum_{j=1}^{k}{\widetilde{\beta}_j(L)\Delta x_{j,t-1}}\right) + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \epsilon_t \\ EC_t &= y_{t} - \sum_{j=1}^{k}{\frac{\beta_j(1)}{\psi(1)}x_{j,t}} \\ H_0 &:\quad \psi(1) = \beta_j(1) = 0, \quad \forall j \end{align*} Regression: \begin{align} \Delta y_t &= b_0y_{t-1} + \sum_{j=1}^{k}{b_{j}x_{j,t-1}} \notag\\ &+ \sum_{i=1}^{p-1}{c_{0,i}\Delta y_{t-i}} + \sum_{j=1}^{k}\sum_{l_j=1}^{q_j-1}{c_{j,l_j}\Delta x_{j,t-l_j}} + \sum_{j=1}^{k}{d_{j}\Delta x_{j,t}} + \epsilon_t \label{eq.ardl.6}\\ EC_t &= y_{t} + \sum_{j=1}^{k}{\frac{b_j}{b_0}x_{j,t}} \notag\\ H_0 &: \quad b_0 = b_j = 0, \quad \forall j \notag \end{align}
Case 2: (Restricted Constant and No Trend): $a_0 = \psi(1)\mu_y - \sum_{j=1}^{k}\beta_j(1)\mu_{x_j}$ and $a_1 = 0$ so that $\pmb{\gamma} = \pmb{0}$

DGP: \begin{align*} \Delta y_t &=-\psi(1)\left(y_{t-1} - \mu_y\right) + \sum_{j=1}^{k}{\beta_j(1)\left(x_{j,t-1} - \mu_{x_j}\right)} \\ &+ \left(\widetilde{\psi^\star}(L)\Delta y_{t-1} + \sum_{j=1}^{k}{\widetilde{\beta}_j(L)\Delta x_{j,t-1}}\right) + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \epsilon_t \\ EC_t &= y_{t} - \mu_y - \sum_{j=1}^{k}{\frac{\beta_j(1)}{\psi(1)}\left(x_{j,t} - \mu_{x_j}\right)} \\ H_0 &:\quad \psi(1) = \beta_j(1) = 0, \quad \forall j \end{align*} Regression: \begin{align} \Delta y_t &= a_0 + b_0y_{t-1} + \sum_{j=1}^{k}{b_{j}x_{j,t-1}} \notag\\ &+ \sum_{i=1}^{p-1}{c_{0,i}\Delta y_{t-i}} + \sum_{j=1}^{k}\sum_{l_j=1}^{q_j-1}{c_{j,l_j}\Delta x_{j,t-l_j}} + \sum_{j=1}^{k}{d_{j}\Delta x_{j,t}} + \epsilon_t \label{eq.ardl.7}\\ EC_t &= y_{t} + \sum_{j=1}^{k}{\frac{b_j}{b_0}x_{j,t}} + \frac{a_0}{b_0} \notag\\ H_0 &: \quad a_0 = b_0 = b_j = 0, \quad \forall j \notag \end{align}
Case 3: (Unrestricted Constant and No Trend): $a_0 \neq 0$ and $a_1 = 0$ so that $\pmb{\mu} \neq \pmb{0}$ and $\pmb{\gamma} = \pmb{0}$

DGP: \begin{align*} \Delta y_t &=a_0 -\psi(1)y_{t-1} + \sum_{j=1}^{k}{\beta_j(1)x_{j,t-1}} \\ &+ \left(\widetilde{\psi^\star}(L)\Delta y_{t-1} + \sum_{j=1}^{k}{\widetilde{\beta}_j(L)\Delta x_{j,t-1}}\right) + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \epsilon_t \\ EC_t &= y_{t} - \sum_{j=1}^{k}{\frac{\beta_j(1)}{\psi(1)}x_{j,t}} \\ H_0 &:\quad \psi(1) = \beta_j(1) = 0, \quad \forall j \end{align*} Regression: \begin{align} \Delta y_t &= a_0 + b_0y_{t-1} + \sum_{j=1}^{k}{b_{j}x_{j,t-1}} \notag\\ &+ \sum_{i=1}^{p-1}{c_{0,i}\Delta y_{t-i}} + \sum_{j=1}^{k}\sum_{l_j=1}^{q_j-1}{c_{j,l_j}\Delta x_{j,t-l_j}} + \sum_{j=1}^{k}{d_{j}\Delta x_{j,t}} + \epsilon_t \label{eq.ardl.8}\\ EC_t &= y_{t} + \sum_{j=1}^{k}{\frac{b_j}{b_0}x_{j,t}} \notag\\ H_0 &: \quad b_0 = b_j = 0, \quad \forall j \notag \end{align}
Case 4: (Unrestricted Constant and Restricted Trend): $a_0 \neq 0$ so that $\pmb{\mu} \neq \pmb{0}$ and $a_1 = \psi(1)\gamma_y - \sum_{j=1}^{k}\beta_j(1)\gamma_{x_j}$

DGP: \begin{align*} \Delta y_t &=a_0 - \psi(1)\left(y_{t-1} - \gamma_y t\right) + \sum_{j=1}^{k}{\beta_j(1)\left(x_{j,t-1} - \gamma_{x_j}t\right)} \\ &+ \left(\widetilde{\psi^\star}(L)\Delta y_{t-1} + \sum_{j=1}^{k}{\widetilde{\beta}_j(L)\Delta x_{j,t-1}}\right) + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \epsilon_t \\ EC_t &= y_{t} - \gamma_yt - \sum_{j=1}^{k}{\frac{\beta_j(1)}{\psi(1)}\left(x_{j,t} - \gamma_{x_j}t\right)} \\ H_0 &:\quad \psi(1) = \beta_j(1) = 0, \quad \forall j \end{align*} Regression: \begin{align} \Delta y_t &= a_0 + a_1t + b_0y_{t-1} + \sum_{j=1}^{k}{b_{j}x_{j,t-1}} \notag\\ &+ \sum_{i=1}^{p-1}{c_{0,i}\Delta y_{t-i}} + \sum_{j=1}^{k}\sum_{l_j=1}^{q_j-1}{c_{j,l_j}\Delta x_{j,t-l_j}} + \sum_{j=1}^{k}{d_{j}\Delta x_{j,t}} + \epsilon_t \label{eq.ardl.9}\\ EC_t &= y_{t} + \sum_{j=1}^{k}{\frac{b_j}{b_0}x_{j,t}} + \frac{a_1}{b_0}t \notag\\ H_0 &:\quad a_1 = b_0 = b_j = 0, \quad \forall j \notag \end{align}
Case 5: (Unrestricted Constant and Unrestricted Trend): $a_0 \neq 0$ and $a_1 \neq 0$ so that $\pmb{\mu} \neq \pmb{0}$ and $\pmb{\gamma} \neq \pmb{0}$

DGP: \begin{align*} \Delta y_t &=a_0 + a_1t - \psi(1)y_{t-1} + \sum_{j=1}^{k}{\beta_j(1)x_{j,t-1}} \notag\\ &+ \left(\widetilde{\psi^\star}(L)\Delta y_{t-1} + \sum_{j=1}^{k}{\widetilde{\beta}_j(L)\Delta x_{j,t-1}}\right) + \sum_{j=1}^{k}{\beta_j(L)\Delta x_{j,t}} + \epsilon_t\\ EC_t &= y_{t} - \sum_{j=1}^{k}{\frac{\beta_j(1)}{\psi(1)}x_{j,t}}\\ H_0 &:\quad \psi(1) = \beta_j(1) = 0, \quad \forall j \end{align*} Regression: \begin{align} \Delta y_t &= a_0 + a_1t + b_0y_{t-1} + \sum_{j=1}^{k}{b_{j}x_{j,t-1}} \notag\\ &+ \sum_{i=1}^{p-1}{c_{0,i}\Delta y_{t-i}} + \sum_{j=1}^{k}\sum_{l_j=1}^{q_j-1}{c_{j,l_j}\Delta x_{j,t-l_j}} + \sum_{j=1}^{k}{d_{j}\Delta x_{j,t}} + \epsilon_t \label{eq.ardl.10}\\ EC_t &= y_{t} + \sum_{j=1}^{k}{\frac{b_j}{b_0}x_{j,t}} \notag\\ H_0 &:\quad b_0 = b_j = 0, \quad \forall j \notag \end{align}
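To keep the five cases straight in code, one might summarize them in a small lookup table. This is a hypothetical helper that mirrors the summaries above, recording which deterministic terms appear and which coefficients the bounds-test null restricts:

```python
# Hypothetical summary of the five PSS(2001) cases: the status of the constant
# and trend, and the coefficients the bounds-test null hypothesis sets to zero
# ("b_j" stands for the k level coefficients on the regressors).
CASES = {
    1: {"constant": "none",         "trend": "none",         "H0": ["b0", "b_j"]},
    2: {"constant": "restricted",   "trend": "none",         "H0": ["a0", "b0", "b_j"]},
    3: {"constant": "unrestricted", "trend": "none",         "H0": ["b0", "b_j"]},
    4: {"constant": "unrestricted", "trend": "restricted",   "H0": ["a1", "b0", "b_j"]},
    5: {"constant": "unrestricted", "trend": "unrestricted", "H0": ["b0", "b_j"]},
}

def n_restrictions(case: int, k: int) -> int:
    """Number of coefficients the bounds-test null restricts, given k regressors."""
    spec = CASES[case]["H0"]
    return (len(spec) - 1) + k   # expand "b_j" into k coefficients

print(n_restrictions(3, k=2))  # 3: b0 plus two b_j's
print(n_restrictions(2, k=2))  # 4: a0, b0 plus two b_j's
```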
Whilst EViews 9.5 only supports the first four cases, EViews 10 will support all five.

References:

Banerjee, A., Dolado, J. J., Galbraith, J. W., and Hendry, D. F. (1993). Co-integration, Error Correction, and the Econometric Analysis of Non-stationary Data. Oxford University Press.
Engle, R. F. and Granger, C. W. J. (1987). Co-integration and error correction: Representation, estimation, and testing. Econometrica, 55(2):251--276.
Johansen, S. (1995). Likelihood-Based Inference in Cointegrated Vector Autoregressive Models. Oxford University Press.
Narayan, P. K. (2005). The saving and investment nexus for China: Evidence from cointegration tests. Applied Economics, 37(17):1979--1990.
Perron, P. and Ng, S. (1996). Useful modifications to some unit root tests with dependent errors and their local asymptotic properties. The Review of Economic Studies, 63(3):435--463.
Pesaran, M. H. and Shin, Y. (1998). An autoregressive distributed-lag modelling approach to cointegration analysis. Econometric Society Monographs, 31:371--413.
Pesaran, M. H., Shin, Y., and Smith, R. J. (2001). Bounds testing approaches to the analysis of level relationships. Journal of Applied Econometrics, 16(3):289--326.
Phillips, P. C. B. and Ouliaris, S. (1990). Asymptotic properties of residual based tests for cointegration. Econometrica, 58(1):165--193.

31 comments:

  1. Thanks EV. Just checked it and as of now it does not look like EV provides Narayan (2005) CVs unless I missed it! (EViews offers the user a choice of whether they'd like to use the PSS (2001) or the Narayan (2005) critical values.)

    1. Thank you for pointing this out. We have updated the post to read that the feature will be available in EViews 10.

  2. Nice blog entry, thank you!

    Btw, there is now a user-contributed package for the open-source program GRETL (http://gretl.sourceforge.net/) that runs bootstrap versions of both the BDM and PSS cointegration tests.

    https://sites.google.com/site/arturtarassow/code/ardl-bootstrap-cointegration

  3. After estimating the model one can ask for "long run and short run estimates", which provides the coefficients and their p-values. I am not sure if the p-values computed in this section can be interpreted normally. For instance, in one case the bounds test shows that the variables share a long-run relation, but the long-run coefficients for all of them are highly insignificant. EViews' help menu does not offer any information regarding this; can you please confirm whether these p-values can be interpreted?

  4. I have observed the same problem and am trying to resolve it. However, I appreciate the above post.

    1. We will be releasing a second post with an explanation on ARDL inference. This will provide a detailed explanation of coefficient interpretation and statistical inferences.

  5. Not sure about this update! Why does EV have to change how the results are reported? Previously, one could read the long-run coefficients and the error correction term and its t-stat directly. Now, the EC(-1) term and its t-stat are not reported.
    How can I un-update my copy of EV9?

    1. All the relevant information is now in one unified view. Proceed to View -> Coefficient Diagnostics -> Cointegrating Form and Bounds Test. The EC(-1) term is located below the Bounds Test table. It is the line starting with EC=... Furthermore, the t-stats for the long-run coefficients are below the EC=... line. It is the table titled "Long Run Coefficients".

    2. The user above mentions the absence of CointEq(-1) in the cointegrating equation. They are right; in fact, we need to see that coefficient in order to understand how long it takes to restore the equilibrium following a shock. The new update unfortunately leaves it to be calculated by the users. Could you please bring this facility back? Thanks.

    3. We will be updating soon with an improved view of the appropriate ECM regression. The view will indeed display an estimate of the adjustment to equilibrium, in other words, an estimate of the coefficient associated with the EC(-1) term.

    4. Thanks. That was my point. The error correction term and its p- and t-values are not "directly" reported now; rather, it is the EC equation that is reported with the new patch.

  6. Thank you for the post. It does a great job of demonstrating the theory behind the ARDL bounds test and the derivation of the approach.
    The article states that one can conclude the cointegration status through the standard F or Wald test for the following null and alternative hypotheses:
    H0: beta_y(1) and beta_x(1) = 0
    H1: beta_y(1) or beta_x(1) not = 0
    (* beta_y(1) indicates the coefficient of y(-1) and beta_x(1) indicates the coefficients of x(-1) from Equation (6) above.)

    If the F or Wald test rejects the null hypothesis in favour of its alternative, then we conclude cointegration between yt and xt. I found that this is not true; it is only part of the story. There is insufficient evidence to conclude cointegration solely from the F test. As you can see, the alternative hypothesis is defined as beta_y(1) or beta_x(1) not equal to zero. This means it is possible for only beta_y(1), only beta_x(1), or both to be nonzero. Cointegration only happens when both beta_y(1) and beta_x(1) are nonzero. If only beta_y(1) or only beta_x(1) is nonzero then, as PSS discuss in PSS(2001), we fall into a degenerate case. Degenerate cases, as PSS explain, correspond to non-cointegration.

    Remember that the significance of the F-statistic from the test may be due to the high significance of beta_y(1) or beta_x(1) alone. If this happens, although the F test is significant, the variables are in fact degenerate and not cointegrated. This explains one reader's (Unknown, April 4, 2017 at 11:48 PM) concern about the F test being significant while the long-run coefficients are highly insignificant.

    Therefore, to confirm the relationship between yt and xt, we should in addition perform the t test on the coefficient of the lagged dependent variable, in particular beta_y(1). The t test was introduced in the original article, PSS(2001), but unfortunately many researchers ignore it. The t test hypotheses are defined as follows:
    H0: beta_y(1) = 0
    H1: beta_y(1) not = 0

    If the t test rejects the null in favour of its alternative, only then can we conclude that there is cointegration. These procedures are said to be the correct procedures for testing cointegration using the bounds test. In fact, they are well stated in the original article. On page 304, the last paragraph of Section 3 states the following:
    “we suggest the following procedure for ascertaining the existence of a level relationship between yt and xt: test H0 of (17) [In particular, the F test] using the bounds procedure based on the Wald or F-statistic of (21) from Corollaries 3.1 and 3.2: (a) if H0 is not rejected, proceed no further; (b) if H0 is rejected, test H0(pi_yy)=0 [in particular, beta_y(1)=0] using the bounds procedure based on the t-statistic t(pi_yy) of (24) from Corollaries 3.3 and 3.4. If H0(pi_yy)=0 is false, a large value of t(pi_yy) should result, at least asymptotically, confirming the existence of a level relationship between yt and xt, which, however, may be degenerate (if beta_x(1)=0)”.

    1. Thank you. What would you suggest doing after the F-test rejects Ho but the t-test doesn't? A simple OLS model?

    2. When the F test rejects the null but the t test fails to reject, there is no cointegration. One then goes back to the usual procedures, knowing there is no cointegration.

  7. Some may ask: although we found beta_y(1) to be significant, how can we confirm cointegration, given that the significance of the F test can also come from the significance of beta_x(1) alone? How can we exclude the possibility that only beta_x(1) is significant? Here is a remark for the ARDL bounds test approach. The PSS original article makes it clear that this approach is agnostic only about the integration order of the regressors, not of the dependent variable itself. The dependent variable still has to be I(1). This is obvious from this line in the conclusion part of the paper (page 315):
    “This paper demonstrates that the problem of testing for the existence of a level relationship between yt and xt is non-standard even if all the regressors under consideration are I(0) because, under the null hypothesis of no level relationship between yt and xt, the process describing the yt process is I(1), irrespective of whether the regressors xt are purely I(0), purely I(1) or mutually cointegrated.”
    Clearly, the dependent variable has to be I(1). Given that yt is I(1), it is impossible for beta_y(1) alone to be significant: if beta_y(1) ≠ 0 while beta_x(1) = 0, the equation reduces to a Dickey-Fuller unit root regression, which would imply that yt is an I(0) process, contradicting the assumption. Thus, given that yt is I(1), the F test and the t test on the dependent variable introduced by PSS(2001) are sufficient to establish the cointegration status in the ARDL bounds test.
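    To make the reduction argument explicit (a sketch in the notation of PSS(2001), suppressing deterministic terms and short-run dynamics), write the conditional error-correction equation underlying the bounds test as: \begin{align*} \Delta y_t = \pi_{yy}\,y_{t-1} + \pi_{yx.x}\,x_{t-1} + u_t \end{align*} If $\pi_{yx.x} = 0$ (equivalently $\beta_x(1) = 0$) while $\pi_{yy} \neq 0$, this collapses to $\Delta y_t = \pi_{yy}\,y_{t-1} + u_t$, which is exactly the Dickey-Fuller regression; a significantly negative $\pi_{yy}$ would then imply that $y_t$ is I(0), contradicting the maintained assumption that $y_t$ is I(1). Hence, given an I(1) dependent variable, a significant $t(\pi_{yy})$ cannot arise from the degenerate case alone.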
    For further details, please see our working paper, McNown et al. (2016), “Bootstrapping the Autoregressive Distributed Lag Test for Cointegration”, University of Colorado at Boulder Working Paper #16-08, available at http://www.colorado.edu/Economics/papers/WPs-16/wp16-08/wp16-08.pdf.

    1. Very interesting comment. Your assertion that yt needs to be I(1) for the bounds test to work properly would render much of the published applied research erroneous, to say the least. In that quote, if I understood it properly, Pesaran et al. (2001) refer only to the null of NO level relationship:
      "... because, under the null hypothesis of no level relationship between yt and xt, the process describing the yt process is I(1), irrespective of whether the regressors xt are purely I(0), purely I(1) or mutually cointegrated."
      Also, on page 309 of the same paper, Pesaran et al. state: "Following the methodology developed in this paper, it is possible to test for the existence of a real wage equation involving the levels of these FIVE variables irrespective of whether they are purely I(0), purely I(1), or mutually cointegrated". The five variables in the paper INCLUDE the dependent variable, yt. I look forward to your comment. Thanks.

    2. Thanks for your comment. Regarding your reading of the quote: under the null of no level relationship, yt is I(1), irrespective of whether the regressors xt are purely I(0), purely I(1) or mutually cointegrated. That means that even if you do not reject the null of no level relationship between yt and xt, your yt still has to be I(1). You also noted that on page 309 they test five variables using the ARDL bounds test. In the same paragraph on page 309, it is also written that "Also the application of unit root tests to the five variables, perhaps not surprisingly, yields mixed results with strong evidence in favour of the unit root hypothesis only in the cases of real wages and productivity." In that application, real wages is the dependent variable.

      Perhaps I can offer further evidence that the bounds test is grounded in the assumption that yt is I(1). If you look at the footnotes to the tables of critical values (for either the F test or the t test), the critical values are computed from the DGP y_t = y_{t-1} + u_t and x_t = P x_{t-1} + v_t, where P = 1 or 0. Under this DGP, yt is always generated as I(1). The lower-bound critical value corresponds to xt being purely I(0) [P = 0], and the upper bound to xt being purely I(1) [P = 1]. I hope this clears up your concerns about the bounds test. Thanks.
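      That simulation design can be sketched directly. The following is a rough illustration of how the lower and upper bound critical values arise from the two DGPs; it is not the PSS code, and the deliberately minimal ECM regression (a single regressor, one contemporaneous difference term) and function names are my own simplifications.

```python
import numpy as np

rng = np.random.default_rng(2017)

def bounds_f_stat(y, x):
    """F-statistic for H0: (coefficients on y_{t-1}, x_{t-1}) = 0 in the
    minimal unrestricted ECM  dy_t = c + a*y_{t-1} + b*x_{t-1} + g*dx_t + e_t."""
    dy, dx = np.diff(y), np.diff(x)
    n = dy.size
    X = np.column_stack([np.ones(n), y[:-1], x[:-1], dx])   # unrestricted
    ssr_u = np.sum((dy - X @ np.linalg.lstsq(X, dy, rcond=None)[0]) ** 2)
    Xr = np.column_stack([np.ones(n), dx])                  # levels excluded
    ssr_r = np.sum((dy - Xr @ np.linalg.lstsq(Xr, dy, rcond=None)[0]) ** 2)
    q = 2                                                   # two restrictions
    return ((ssr_r - ssr_u) / q) / (ssr_u / (n - X.shape[1]))

def critical_value(P, T=200, reps=2000, level=0.95):
    """95% quantile of the F statistic under the null DGP
    y_t = y_{t-1} + u_t  (always I(1)),  x_t = P*x_{t-1} + v_t."""
    stats = np.empty(reps)
    for r in range(reps):
        y = np.cumsum(rng.standard_normal(T))     # y is I(1) by construction
        v = rng.standard_normal(T)
        x = np.cumsum(v) if P == 1 else v         # I(1) if P=1, I(0) if P=0
        stats[r] = bounds_f_stat(y, x)
    return np.quantile(stats, level)

lower = critical_value(P=0)   # x purely I(0): lower-bound case
upper = critical_value(P=1)   # x purely I(1): upper-bound case
print(lower, upper)           # upper exceeds lower, as in the PSS tables
```

      Note that yt is a pure random walk in both runs; only the regressor's order of integration changes between the lower and upper bound, which is exactly the point being made above.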

    3. Dear All, I asked the developer of this model for clarification on whether it is possible to have an I(0) dependent variable. Surprisingly, the author said that there is no problem with an I(0) dependent variable. The author of the ARDL paper also added that if all the variables are I(0), then we need only compare the statistic against the I(0) critical bound to conclude cointegration, rather than the I(1) bound. In this case, I will follow the original authors. Thank you.

    4. Indeed! If you read Part II of our ARDL series, in particular the text under the section "Analysis of the Alternative Hypotheses", you will see that we have specifically stated that y_t can indeed be I(0) if the null hypothesis is rejected. These cases are captured by the alternative hypotheses H_A2 and H_A3.

  8. Hi,
    Can you please tell me when the coefficient associated with the EC(-1) term will be available in EViews? I really need it to conclude the regression section of my project.

    Thank you so much

  9. Hello all!
    The blog is nice and informative.
    I have used the ARDL model in EViews and it is user friendly.
    The modified bounds test inside EViews is a good feature, showing all the short-run, long-run and bounds test results with one click.
    But there is no error correction coefficient (ECM). I also tried to estimate the EC equation below the bounds test using OLS, but it didn't work: every time I got the wrong ECT, although the short-run coefficients seem to be correct.
    Please rectify this and display the error correction coefficient (ECM) with its standard error, t-statistic and p-value as before. It would be highly appreciated.

  10. Sorry for the delayed responses. The next iteration of our ARDL improvements is going through final testing and should be available within a few days. Apologies for the inconvenience, but we think the improvements are worth the wait.

  11. Wow, this ARDL upgrade is amazing! BTW, I really wonder when EViews 10 will be available. Many thanks!

    1. It's our pleasure! Enjoy the new upgrade.

  12. Do you plan to show some examples using EViews so enthusiasts who are not experts can understand whether two data series are cointegrated or not based upon your discussion?

    1. Absolutely. Please look out for the second part of the ARDL series, which will focus on correct interpretation of the Bounds Test. We will also extend the series with a third part focusing on actual examples in EViews.

  13. Any news on when the 2nd part will be available?

    1. We've actually decided to expand it to three parts, but both Part II and III should be available in the next few days.
