Tuesday, May 16, 2017

AutoRegressive Distributed Lag (ARDL) Estimation. Part 3 - Practice

In Part 1 and Part 2 of this series, we discussed the theory behind ARDL and the Bounds Test for cointegration. Here, we demonstrate just how easily everything can be done in EViews 9 or higher.

While our two previous posts in this series were heavily theoretically motivated, here we present a step-by-step procedure for putting the theory of Part 1 and Part 2 into practice.

  1. Get a feel for the nature of the data.

  2. Ensure all variables are integrated of order $d$, i.e. I$(d)$, with $d < 2$.

  3. Specify how deterministics enter the ARDL model. Choose DGP $i=1,\ldots,5$ from those outlined in Part 1 and Part 2.

  4. Determine the appropriate lag structure of the model selected in Step 3.

  5. Estimate the model in Step 4 using Ordinary Least Squares (OLS).

  6. Ensure residuals from Step 5 are serially uncorrelated and homoskedastic.

  7. Perform the Bounds Test.

  8. Estimate speed of adjustment, if appropriate.

The following flow chart illustrates the procedure.



Working Example

The motivation for this entry is the classical term structure of interest rates (TSIR) literature. In a nutshell, the TSIR postulates that there exists a relationship linking the yields on bonds of different maturities. Formally: $$R(k,t) = \frac{1}{k}\sum_{j=1}^{k}\pmb{\text{E}}_tR(1,t+j-1) + L(k,t)$$ where $\pmb{\text{E}}_t$ is the expectation operator conditional on the information at time $t$, $R(k,t)$ is the yield to maturity at time $t$ of a $k$ period pure discount bond, and $L(k,t)$ is a premium typically accounting for risk. To see that cointegration is indeed possible, repeated application of the identity $R(k,t) = R(k,t-1) + \Delta R(k,t)$, where $\Delta R(k,t) = R(k,t) - R(k,t-1)$, leads to the following expression: $$R(k,t) - R(1,t) = \frac{1}{k}\sum_{i=1}^{k-1}\sum_{j=1}^{i}\pmb{\text{E}}_t \Delta R(1,t+j) + L(k,t)$$ It is now evident that if the $R(k,t)$ are I$(1)$ processes, the $\Delta R(1,t+j)$ must be I$(0)$ processes, and the linear combination $R(k,t) - R(1,t)$ is therefore an I$(0)$ process provided $L(k,t)$ is as well. In other words, the $k$ period yield to maturity is always cointegrated with the one period yield to maturity, with cointegrating vector $(1,-1)^\top$. In fact, a little more work shows that the principle holds for the spread between any two arbitrary maturities $k_1$ and $k_2$. That is, \begin{align*} R(k_2,t) - R(k_1,t) &= R(k_2,t) - R(1,t) + R(1,t) - R(k_1,t)\\ &= \frac{1}{k_2}\sum_{i=1}^{k_2-1}\sum_{j=1}^{i}\pmb{\text{E}}_t \Delta R(1,t+j) + L(k_2,t) - \frac{1}{k_1}\sum_{i=1}^{k_1-1}\sum_{j=1}^{i}\pmb{\text{E}}_t \Delta R(1,t+j) - L(k_1,t)\\ &\sim \text{I}(0) \end{align*}

Now that we have established a theoretical basis for the exercise, we delve into practice with real data. We will work with Canadian maturities collected directly from the Canadian Socioeconomic Database from Statistics Canada, or CANSIM for short. In particular, we will be looking at cointegrating relationships between two types of marketable debt instruments: the yield on a Treasury Bill, which is a short-term (maturing at 1 month, 3 months, 6 months, and 1 year from date of issue) discounted security, and the yield on Benchmark Bonds, otherwise known as Treasury Notes, which are medium-term (maturing at 2 years, 5 years, 7 years, and 10 years from date of issue) securities with semi-annual interest payments. The workfile can be found here.

Data Summary

The first step in any empirical analysis is an overview of the data itself. In particular, the subsequent analysis makes use of data on Treasury Bill yields maturing in 1, 3, 6, and 12 months, appropriately named TBILL, in addition to data on Benchmark Bond yields (Treasury Notes) maturing in 2, 5, and 10 years, appropriately named BBY. Consider their graphs below:



Notice that each graph exhibits a structural change around June 2007, marking the beginning of the US housing crisis; we have indicated its presence using a vertical red line. We incorporate this information into our analysis by indicating the post-crisis period with the dummy variable dum0708, which assumes a value of 1 in each of the months following June 2007. Moreover, a little background research on the Bank of Canada reveals that starting January 2001, the Bank committed to a new set of transparency and inflation-targeting measures in the wake of the dot-com crash as well as the disinflationary period of the preceding decade. For this reason, to avoid having to analyze too many policy paradigm shifts, we focus only on data in the period after January 2001. We can achieve everything with the following set of commands:
'Set sample from Jan 2001 to end.
smpl Jan/2001 @last

'Create dummy for post 07/08 crisis
series dum0708 = @recode(@dateval("2007/06")<@date,1,0)

Testing Integration Orders

We begin our analysis by ensuring that no series under consideration is integrated of order 2 or higher. To do this, we run a unit root test on the first difference of each series. In this case, the standard ADF test will suffice. A particularly easy way of doing this is to create a group object containing all variables of interest, and then run a unit root test on the group, specifying that the test should be applied to the individual series. From the group view, proceed to Proc/Unit Root Test... and choose the appropriate options.
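If you prefer the command line, the same test can be run on a group object; the following lines are taken from the program script at the end of this post:

'Group all variables of interest, then run an ADF test on the first difference of each series
group termstructure tbill1m tbill3m tbill6m tbill1y bby2y bby5y bby10y
termstructure.uroot(dif=1, adf, lagmethod=sic)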



The following table illustrates the result.



Notice in the lower table that the column heading Prob. lists the $p$-values associated with each individual series. Since the $p$-value is essentially zero for each of the series under consideration and the null hypothesis is a unit root, we reject the null at all conventional significance levels. In particular, since the test was conducted on first differences, we conclude that there are no unit roots in the first differences, and so each of the series must be either I$(0)$ or I$(1)$. We can therefore proceed to the second step.

Deterministic Specifications

Selecting an appropriate model to fit the data is both art and science. Nevertheless, there are a few guidelines. Any model in which the series are not centered about zero will typically require a constant term, whereas any model in which the series exhibit a trend will in general fit better when a trend term is incorporated. Part 1 and Part 2 of this series discussed the possibility of selecting from five different DGP specifications, termed Case 1 through Case 5. We will consider several model specifications with various variable combinations.

  • Model 1: The model under consideration looks for a relationship between the 10 Year Benchmark Bond Yield and the 1 Month T-Bill. In particular, the model restricts the constant to enter the cointegrating relationship, corresponding to the DGP and regression model specified in Case 2 in Part 1 and Part 2.


  • Model 2: The model under consideration looks for a relationship between the 6, 3, and 1 Month T-Bills. Here, the model leaves the constant unrestricted, corresponding to the DGP and regression model specified in Case 3 in Part 1 and Part 2.


  • Model 3: The model under consideration looks for a relationship between the 2 Year Benchmark Bond Yield and the 1 Year and 1 Month T-Bills. Here, the model again leaves the constant unrestricted, corresponding to the DGP and regression model specified in Case 3 in Part 1 and Part 2.

We will see how to select these in EViews when we discuss estimation below.

Specifying ARDL Lag Structure

Selecting an appropriate number of lags for the model under consideration is, again, both science and art. Unless the number of lags is specified by economic theory, the econometrician has several tools at their disposal to select the lag length optimally. One possibility is to select the maximal number of lags for the dependent variable, say $p$, and the maximal number of lags for each of the regressor variables, say $q$, and then run a barrage of regressions with all the different possible combinations of lags that can be formed using this specification. In particular, if there are $k$ regressors, the number of combinations formed from the set $\{1, \ldots, p\}$ and $k$ additional sets $\{0, \ldots, q\}$ is $p \times (q + 1)^k$. For instance, with the EViews default values $p = q = 4$ and $k = 2$ regressors, the total number of models under consideration is $4 \times 5^2 = 100$. The optimal combination is then the one that minimizes some information criterion, say Akaike (AIC), Schwarz (BIC), Hannan-Quinn (HQ), or even the adjusted $R^2$. EViews offers the user an option on how to select from among these, and we will discuss this when we explore estimation next.
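In command form, the maximum lag orders enter as options to the ardl command. A minimal sketch with hypothetical series y, x1, and x2 (the equation name eq01 is arbitrary; deplags= and reglags= set the maximum lags for the dependent variable and the regressors, as in the script at the end of this post):

'Automatic selection over at most 4 lags of y and 4 lags each of x1 and x2
equation eq01.ardl(deplags=4, reglags=4) y x1 x2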

Estimation, Residual Diagnostics, Bounds Test, and Speed of Adjustment

ARDL models are typically estimated using standard least squares techniques. In EViews, this implies that one can estimate ARDL models manually using an equation object with the Least Squares estimation method, or resort to the built-in equation object specialized for ARDL model estimation. We will use the latter. Open the equation dialog by selecting Quick/Estimate Equation, or by selecting Object/New Object/Equation, and then select ARDL from the Method dropdown menu. Proceed by specifying each of the following:

  • List the relevant dynamic variables in the Dynamic Specification field. This is a space-delimited list in which the dependent variable is followed by the regressors that will form the long-run equation. Do NOT list variables which are part of the estimated model but not part of the long-run equation; those variables are specified in the Fixed Regressors field below.

  • Specify whether Automatic or Fixed lag selection will be used. Note that even if Automatic lag selection is preferred, maximum lag orders need to be specified for the dependent variable as well as the regressors. If you wish to specify how automatic selection is computed, click on the Options tab and select the preferred information criterion under the Model selection criteria dropdown menu. Finally, note that in EViews 9, if Fixed lag selection is preferred, all regressors will have the same number of lags; EViews 10 allows the user to fix lags specific to each regressor under consideration.

  • In the Fixed Regressors field, specify all variables, other than the constant and trend, which will enter the model for estimation but will not be a part of the long-run relationship. This list can include variables such as dummies or other exogenous variables.

  • Still within the Fixed Regressors area, specify how deterministic terms enter the long-run relationship. This is done via the Trend Specification dropdown menu, which corresponds to the 5 different DGP cases mentioned earlier and explored in Part 1 and Part 2 of this series; command-line equivalents are sketched after this list. In particular, the Trend Specification dropdown menu offers the following options:
    • None: This corresponds to Case 1 -- the no constant and trend case.

    • Rest. constant: This corresponds to Case 2 -- the restricted constant and no trend case.

    • Unrest. constant: This corresponds to Case 3 -- the unrestricted constant and no trend case.

    • Rest. linear trend: This corresponds to Case 4 -- the restricted linear trend and unrestricted constant case.

    • Unrest. constant and trend: This corresponds to Case 5 -- the unrestricted constant and unrestricted linear trend case. Note that this case will be available starting with EViews version 10.
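In command form, the Trend Specification choice is passed through the trend= option of the ardl command. The script at the end of this post uses the two cases needed for our models; a sketch (the equation names here are arbitrary):

'Case 2: restricted constant
equation eq_case2.ardl(trend=const) bby10y tbill1m @ dum0708

'Case 3: unrestricted constant
equation eq_case3.ardl(trend=uconst) tbill6m tbill3m tbill1m @ dum0708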

We now demonstrate the above for each of the three models specified earlier. In all models we use automatic lag selection and the dummy variable dum0708 for the months following the onset of the 2007-08 housing crisis.

Model 1: No Cointegrating Relationship

In this model, the dependent variable is the 10 Year Benchmark Bond Yield, while the dynamic regressor is the 1 Month T-Bill. Moreover, the DGP under consideration is a restricted constant, or Case 2, and we include the variable dum0708 as our non-dynamic regressor.
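This specification corresponds to the following command, taken from the script at the end of this post:

'Model 1: 10 Year Benchmark Bond Yield on the 1 Month T-Bill, restricted constant (Case 2)
equation ardlno.ardl(trend=const) bby10y tbill1m @ dum0708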



We have the following output.





To verify whether the residuals from the model are serially uncorrelated, in the estimation view, proceed to View/Residual Diagnostics/Serial Correlation LM Test..., and select the number of lags. In our case, we chose 2. Here's the output.
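The same test is available from the command line; the script at the end of this post uses the default lag count, but the number of lags can be supplied explicitly to match the choice above:

'Serial correlation LM test with 2 lags
ardlno.auto(2)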



Since the null hypothesis is that the residuals are serially uncorrelated, the $F$-statistic $p$-value of 0.7475 indicates that we will fail to reject this null. We therefore conclude that the residuals are serially uncorrelated.

Similarly, testing for residual homoskedasticity, in the estimation view, proceed to View/Residual Diagnostics/Heteroskedasticity Tests..., and select a type of test. In our case, we chose Breusch-Pagan-Godfrey. Here's the output.
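The command-line equivalent, from the script at the end of this post (@regs uses the regressors of the original equation as the test regressors):

'Residual heteroskedasticity test
ardlno.hettest @regs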



Since the null hypothesis is that the residuals are homoskedastic, the $F$-statistic $p$-value of 0.1198 indicates that we fail to reject this null even at a significance level of 10%. We therefore conclude that the residuals are homoskedastic at the 10% significance level.

To test for the presence of cointegration, in the estimation view, proceed to View/Coefficient Diagnostics/Long Run Form and Bounds Test. Below the table of coefficient estimates, we have two additional tables presenting the error correction $EC$ term and the $F$-Bounds test. The output is below.



The $F$-statistic value 2.279536 is evidently below the I$(0)$ critical value bound. Our analysis in Part 2 of this series indicates that we fail to reject the null hypothesis that there is no equilibrating relationship.

In fact, we can visualize the fit between the long-run equation and the dependent variable by extracting the $EC$ term and subtracting it from the dependent variable. This can be done as follows. In the estimation view, proceed to Proc/Make Cointegrating Relationship and save the series under a name, say cointno. Since the cointegrating relationship is the $EC$ term, we would like to extract just the long-run relationship. To do this, simply subtract the series cointno from the dependent variable; in other words, make a new series $\text{LRno} = \text{BBY10Y} - \text{cointno}$. Finally, form a group with the variables BBY10Y and LRno, and plot. We have the following output.
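In commands, again mirroring the script at the end of this post (the series name lrno is our choice):

'Extract the EC term and recover the long-run relationship
ardlno.makecoint cointno
series lrno = bby10y - cointno

'Plot the dependent variable against the long-run relationship
group groupno bby10y lrno
groupno.line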



Clearly, there is no use in performing a regression to study the speed of adjustment.

Model 2: Usual Cointegrating Relationship

In this model, the dependent variable is the 6 Month T-Bill, while the dynamic regressors are the 3 and 1 Month T-Bills. Moreover, the DGP under consideration specifies an unrestricted constant, or Case 3, and we include the variable dum0708 as our non-dynamic regressor. To avoid repetition, we will not present the output, but skip immediately to verifying whether the residuals from the model are serially uncorrelated and homoskedastic. We have the following outputs.





Given the $p$-values from both tests, we reject the null hypothesis in each case. Clearly, we have a problem with both serial correlation and heteroskedasticity. To solve the first problem, we will increase the number of lags for both the dependent variable and the regressors. To solve the second, we will use a HAC covariance matrix adjustment, which corrects the value of any test statistics computed after estimation. This can be done by going to the Options tab, setting the Coefficient Covariance matrix to HAC (Newey-West), and setting the details in the HAC Options. Remember that while serial correlation can lead to biased results, heteroskedasticity simply leads to inefficient estimation; removing serial correlation is therefore of primary importance. We do both these tasks next.
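Both adjustments can be made in a single re-estimation command, as in the script at the end of this post: the maximum lag orders are raised to 6, and the coefficient covariance is set to HAC (Newey-West):

'Model 2 re-estimated: more lags and a HAC covariance matrix
equation ardlnondeg.ardl(deplags=6, reglags=6, trend=uconst, cov=hac, covlag=a, covinfosel=aic) tbill6m tbill3m tbill1m @ dum0708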





We test again for the presence of serial correlation.



The $F$-statistic $p$-value of 0.3676 indicates that we no longer have a problem with serial correlation.

To test for the presence of cointegration, we proceed again to the Long Run Form and Bounds Test view. We have the following output.



The $F$-statistic value 9.660725 is evidently greater than the I$(1)$ critical value bound. Our analysis in Part 2 of this series indicates that we reject the null hypothesis that there is no equilibrating relationship. Moreover, since we have rejected the null and since we have not included a constant or trend in the cointegrating relationship, our exposition in Part 2 of this series indicates that we can use the $t$-Bounds Test critical values to determine which alternative emerges. In this particular case, the absolute value of the $t$-statistic is $|-5.043782| = 5.043782$, and it is greater than the absolute value of either the I$(0)$ or I$(1)$ $t$-bound. Recall that this indicates that we should reject the $t$-Bounds test null hypothesis, and conclude that the cointegrating relationship is either of the usual kind, or is valid but degenerate. Nevertheless, a look at the fit between the dependent variable and the equilibrating equation should lead us to believe that the relationship is indeed valid. The graph is presented below.



In this particular case, it makes sense to study the speed of adjustment equation. To view this, from the estimation output, proceed to View/Coefficient Diagnostics/Long Run Form and Bounds Test. We have the following output.



As expected, the $EC$ term, here represented as CointEq(-1), is negative with an associated coefficient estimate of $-0.544693$. This implies that about 54.47% of any movements into disequilibrium are corrected for within one period. Moreover, given the very large $t$-statistic, namely $-5.413840$, we can also conclude that the coefficient is highly significant. See Part 2 of this series for further details.

Model 3: Nonsensical Cointegrating Relationship

In this model, the dependent variable is the 2 Year Benchmark Bond Yield, while the dynamic regressors are the 1 Year and 1 Month T-Bills. Moreover, the DGP under consideration specifies an unrestricted constant, or Case 3, and we include the variable dum0708 as our non-dynamic regressor. To avoid repetition, we will only present tables where necessary to derive inference.

As usual, we first verify whether the residuals from the model are serially uncorrelated and homoskedastic. We have the following outputs.





Here it is evident that we do not have a problem with serial correlation, but our residuals are heteroskedastic. As in the previous case, we re-estimate using a HAC-corrected covariance matrix, and then proceed to the Long Run Form and Bounds Test view. We have the following output.
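The corresponding command from the script at the end of this post:

'Model 3: 2 Year Benchmark Bond Yield on the 1 Year and 1 Month T-Bills, HAC covariance
equation ardldeg.ardl(trend=uconst, cov=hac, covlag=a, covinfosel=aic) bby2y tbill1y tbill1m @ dum0708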



The $F$-statistic value 5.322963 is large enough to reject the null hypothesis at the 5% significance level, but not necessarily at lower levels. Furthermore, since we have not included a constant or trend in the cointegrating relationship, we can make use of the $t$-Bounds Test critical values to determine which alternative hypothesis emerges. Here, the absolute value of the $t$-statistic is $|-1.774930| = 1.774930$, which is less than the absolute value of either the I$(0)$ or I$(1)$ $t$-bound. Accordingly, we fail to reject the $t$-Bounds test null hypothesis and conclude that the cointegrating relationship is in fact nonsensical. The following is a graph of the fit between the dependent variable and the equilibrating equation.



EViews Program and Files

We close this series with an EViews program script that automates most of the output provided above. To use the script, you will need the EViews workfile: ARDL.EXAMPLE.WF1

'---------
'Preliminaries
'---------

'Open Workfile
'wfopen(type=txt) http://www5.statcan.gc.ca/cansim/results/cansim-1760043-eng-2216375457885538514.csv colhead=2 namepos=last names=(date,bby2y,bby5y,bby10y,tbill1m,tbill3m,tbill6m,tbill1y) skip=3
'pagecontract if @trend<244
'pagestruct @date(date)

'Load the saved workfile (replace "pathto..." with your local path to ARDL.EXAMPLE.WF1)
wfuse pathto...ardl.example.WF1

'Set sample from Jan 2001 to end.
smpl Jan/2001 @last

'Create dummy for post 07/08 crisis
series dum0708 = @recode(@dateval("2007/06")<@date,1,0)

'Create Group of all Variables
group termstructure tbill1m tbill3m tbill6m tbill1y bby2y bby5y bby10y

'Graph all series
termstructure.line(m) across(@SERIES,iscale, iscalex, nodispname, label=auto, bincount=5)

'Do UR test on each series
termstructure.uroot(dif=1, adf, lagmethod=sic)

'---------
'No Relationship
'---------

'ARDL: 10y Bond Yields and 1 Month Tbills.
equation ardlno.ardl(trend=const) bby10y tbill1m  @ dum0708

'Run Residual Serial Correlation Test (2 lags, as in the text)
ardlno.auto(2)

'Run Residual Heteroskedasticity Test
ardlno.hettest @regs

'Make EC equation.
ardlno.makecoint cointno

'Plot Dep. Var and LR Equation
group groupno bby10y (bby10y - cointno)
freeze(mode=overwrite, graphno) groupno.line
graphno.axis(l) format(suffix="%")
graphno.setelem(1) legend(BBY10Y: 10 Year Canadian Benchmark Bond Yields)
graphno.setelem(2) legend(Long run relationship (BBY10Y - COINTNO))
show graphno

'---------
'Non Degenerate Relationship
'---------

'ARDL term structure of Bond Yields. (Non-Degenerate)
equation ardlnondeg.ardl(deplags=6, reglags=6, trend=uconst, cov=hac, covlag=a, covinfosel=aic) tbill6m tbill3m tbill1m   @ dum0708

'Run Residual Serial Correlation Test
ardlnondeg.auto

'Run Residual Heteroskedasticity Test
ardlnondeg.hettest @regs

'Make EC equation.
ardlnondeg.makecoint cointnondeg

'Plot Dep. Var and LR Equation
group groupnondeg tbill6m (tbill6m - cointnondeg)

freeze(mode=overwrite, graphnondeg) groupnondeg.line
graphnondeg.axis(l) format(suffix="%")
graphnondeg.setelem(1) legend(TBILL6M: 6 Month Canadian T-Bill Yields)
graphnondeg.setelem(2) legend(Long run relationship (TBILL6M - COINTNONDEG))
show graphnondeg

'---------
'Degenerate Relationship
'---------

'ARDL term structure of Bond Yields. (Degenerate)
equation ardldeg.ardl(trend=uconst, cov=hac, covlag=a, covinfosel=aic) bby2y tbill1y tbill1m @ dum0708

'Run Residual Serial Correlation Test
ardldeg.auto

'Run Residual Heteroskedasticity Test
ardldeg.hettest @regs

'Make EC equation.
ardldeg.makecoint cointdeg

'Plot Dep. Var and LR Equation
group groupdeg bby2y (bby2y - cointdeg)
freeze(mode=overwrite, graphdeg) groupdeg.line
graphdeg.axis(l) format(suffix="%")
graphdeg.setelem(1) legend(BBY2Y: 2 Year Canadian Benchmark Bond Yields)
graphdeg.setelem(2) legend(Long run relationship (BBY2Y - COINTDEG))
show graphdeg

Comments:

  1. Hi,

    Great series of posts. You mention that the fixed regressors do not appear in the long-run equation; is this a new feature? In the ARDL estimation in EViews 9, the fixed and dynamic regressors both appear in the long-run equation. Also, it would be useful to understand why they would not enter the long-run equation if they are used to estimate the counteracting vector?

    Thanks

    Replies
    1. That should read cointegrating vector sorry :)

    2. The latest implementation of ARDL estimation is entirely consistent with theory, and we strongly urge you to update to our latest release. To answer your question, the ECM consists of short-run dynamics and the cointegrating equation. In the long run, the short-run dynamics are done away with and what remains is the cointegrating, or equilibrating, equation. Thus, some variables, such as dummies or fixed regressors, which can be used to define the short-run dynamics in the ECM estimation, become entirely irrelevant in the long run and should therefore NOT be included among the cointegrating variables.

    3. Great, thanks. So in EViews 9, which is the version I am using, this is not the case? As these fixed regressors are included in the long-run output, should this be ignored? It begs the question: why are they included in the long-run output?

    4. Should an interactive dummy be used as a fixed regressor? Or should it be placed with the other variables so as to have lags?

  2. Excellent post. Many thanks. One question: Should the graph of the fit between the dependent variable and the equilibrating equation be NORMALIZED?

    Replies
    1. We've updated the graphs to show normalized curves.

  3. Hi there,

    What part of the ARDL process should be used for forecasting purposes? EViews seems to generate a forecast from the original ARDL, which appears to be equivalent to the unconstrained ECM. Should this converge to the long run, given the model is dynamically stable?

    Replies
    1. Hi! You are right: the forecasts being produced are not based on the constrained ECM and therefore do not a priori impose the cointegrating relationship in the forecast. Pesaran and Shin (1998) demonstrate that the coefficients from the original ARDL estimation (which is the one used for EViews forecasting; the unconstrained ECM, if you will) are indeed consistent. Since the long-run equation is determined from these parameters, it stands to reason that if a long-term relationship exists, the forecasts produced by EViews should converge to the cointegrating relationship.

  4. One comment with regards to using dummy variables such as dum0708: they may sometimes necessitate the modification/simulation of the reported asymptotic critical values. The dummy here does not tend to zero with the sample size T and, more importantly, the fraction of observations where dum0708 is non-zero is "too large": almost 60% of the sample. Please see Pesaran et al. (2001).

    Replies
    1. Agreed. We're mainly trying to illustrate how to use the features. Nevertheless, your comment is an important caveat. We will modify the content in the next few days to reflect this.

    2. Thank you. I, and I believe many EViews users, appreciate your work and prompt responses.

  5. Thank you EViews Team....A great job.....

  6. Hi,
    Is there any plan to do a similar kind of post for Panel ARDL estimators like the PMG, MG & DFE estimators? It would be great to see theory and application blog posts on that.

    Replies
    1. We will be producing similar blog posts on theoretical topics in the future, but topics and schedule will be somewhat ad-hoc.

      I'll point out that there is really little relationship between the Bounds Test use of ARDL and Panel ARDL models, other than the name, so it doesn't immediately follow that panel ARDL would be discussed simply because of these posts.

    2. I agree with you. Although Panel ECM/Panel ARDL does not have the concept of a bounds test, they are an extension of it in the panel context; they estimate the long-run relationship, and the presence of cointegration can be inferred from the ECM term.

      I am looking forward to many such elaborated theoretical topics with an application (or better, REPLICATION).

      Many many thank you.

  7. Hi IHS Team,
    Thank you for such an elaborate post. I have the following questions regarding this post:

    1. In Model 1, case 2 or restricted constant is chosen. My question is which variable decides trend specification: dependent variable or the regressor?
    2. I think it is also necessary to test the stability of the ARDL model.

    skd

    Replies
    1. You're welcome! We hope you're enjoying the series.

      To answer your questions:

      1) Trend specification depends first and foremost on whether you want the deterministic terms present in the cointegrating relationship or not. In other words, if you choose to restrict the constant or time trend, what you're actually saying is that these deterministic variables will also be present in the long run. However, choosing whether to include a deterministic variable at all is generally based on the nature of the dependent variable.

      2) Stability in the context of the Pesaran and Shin (1998) ARDL model is indeed an important subject. They make the assumption that the ARDL model being studied is in fact stable. In this regard, if you are simply looking to estimate an ARDL model to see if the estimates are valid, you should be concerned about stability. Luckily, this is easily verified by testing whether the roots of the characteristic equation are outside the unit circle; in other words, whether the ARDL lag polynomial produces stationary results. Nevertheless, the Pesaran, Shin, and Smith (2001) paper is a TEST for cointegration. In other words, it must allow for the possibility that the underlying cointegrating relationship may in fact NOT be stable. In this regard, the PSS (2001) paper does not a priori impose stability of the ARDL lag polynomial. However, if cointegration does indeed exist, the ARDL model will in fact be stable!

      We hope that helps.

    2. Yes, I am enjoying the series!
      Thank you.

      skd

    3. Hello Eviews,
      Should I remove the structural break in the independent variables to make the model stable? Or leave it be, per the above explanation that an ARDL model with cointegration will be stable?
      Thanks,
      Kate

  8. What if the dummy is an interactive dummy? Should it be a fixed regressor? Or should it also involve lags?

    Replies
    1. As was pointed out earlier, having dummy variables can be a tricky situation. In general, if dummy variables are included, the non-zero components of the variable must vanish asymptotically (in the long run); otherwise, the critical values provided in the Pesaran, Shin, and Smith (2001) paper may be invalid. Nevertheless, estimation is still consistent and valid. This is because, if the dummy does not vanish asymptotically, then it will clearly be a part of the long-run equation, and new critical values must be obtained to account for this.

      To answer your specific question: if you interact a dummy with a regressor, what you're really doing is creating a new regressor which is just 0 in some parts and not in others. This is equivalent to including a new regressor with some special features. There's no harm in including such variables; however, one must again be certain whether such a variable will be present asymptotically or not. If it is present asymptotically (in the long run), then it must be a part of the cointegrating relationship. If this is the case, it is difficult to tell whether a modification to the critical values is necessary. This will probably depend on whether the dummy variable being used for interaction is present asymptotically as well. As to whether lags on this variable can be included, there's certainly no theoretical reason why they can't be. Should they? This is entirely a question of whether doing so will produce a more accurate model estimation... in other words, part science, part art.

      Hope this helps.

    2. You mean the interaction dummy should go with the dynamic regressors in the top box?

  9. Once we reject the t-Bounds test null hypothesis, is there a FORMAL or more streamlined way (especially in EViews) to test for the degenerate case besides, or in addition to, graphing the relationship between the dependent variable and the equilibrating equation? Thanks.

  10. I am confused with the automatic lag selection. Under what circumstances should the p and q lags be the same, and when can they be different? Dave Giles suggests adding more lags to the dependent variable only when serial correlation is a problem; you suggest increasing both p and q. The two approaches give different results. For example, it is a lot easier to get a good model with maximums of 4 and 2 than with 4 and 4. I would love to use different maximums for p and q if you suggest these are technically acceptable without ifs and buts. Secondly, what is the range for annual data? Some say 2 max, some go up to 5-6.

    Replies
    1. The number of lags is entirely dependent on the data and model you're analysing. There is no general rule.

  11. Will EViews have a CUSUM test for ARDL? Running the test in OLS seems to have a bug: with 65 years of data the graph only shows 2013 and 2014, or sometimes 2002 to 2017. As I increase variables and dummies, the graph starts to reduce the years shown. Is this a bug? Many others report the same on the web.

    Replies
    1. EViews does not currently offer CUSUM for ARDL. We'll add it to the list of things to consider.

  12. Good Day IHS Eviews.
    Thank you for sharing this valuable knowledge, everything becomes easier now.
    I estimated an ARDL model of 6 variables. After several attempts (using different lags) to find a better estimate, I got a selected ARDL model using AIC of (1,1,0,0,1,2), while using SIC it is ARDL (1,0,0,0,1,2). My questions are:
    1. Can I still use this model given these lags selection?
    2. Which among the AIC and SIC is more appropriate?
    ARDL (1,1,0,0,1,2) = AIC
    ARDL (1,0,0,0,1,2) = SIC
    Thank you, in anticipation for your kind acknowledgement and assistance.
    Best regards.

    Replies
    1. AIC and SIC are two very different things. AIC generally performs well when the objective is prediction. In fact, it is asymptotically equivalent to cross-validation. Moreover, AIC does not assume that your true model lies in the model space.

      On the other hand, SIC is something you would prefer if you are looking to obtain the most parsimonious representation among a group of models. In other words, SIC selects the simplest possible model to explain the data. Furthermore, SIC assumes that the true model lies in the model space.

      Thus, to answer your question: it really depends on what the objective of the exercise is, and neither method dominates the other entirely. You can also visualize the model selection graph and table by clicking on View/Model Selection Summary/{Table,Graph}. There you can see how closely the competing model selection criteria, as well as the models within them, performed.

    2. Thank you for the prompt response.
      With this kind of lag selection, ARDL(1,0,0,0,1,2), is it appropriate to continue using the model?
      Thank you for your kind assistance.

  13. Good post. I want to ask a question: what is the suitable criterion for choosing the optimum lag in ARDL estimation, AIC or SIC? I have annual data covering fewer than 60 years. Please guide me on which basis I can choose the criterion. Thanks.

  14. Very useful post. I would like to raise a doubt: can we use an ARDL model when the variables are seasonal in nature? Thanks in advance.

  15. Hello,
    Thanks for the great post series on ARDL. I would like to ask: while using the dummy variable to account for the structural break in the ARDL estimation, why don't you also use the breakpoint unit root test to test for a unit root? Shouldn't the breakpoint unit root test be used instead of the ADF, since there is a structural break? Thanks.

    Replies
    1. Good point - it may be that the breakpoint unit root test is more appropriate in this case.

  16. Hi, I'm not very conversant with EViews and I'm simply trying to find out where I can get the long-run and short-run coefficients. I have run the Long Run Form and Bounds Test as well as the Error Correction Form tests. I suspect that the coefficient of CointEq(-1) is the short-run coefficient, but I have no clue what the long-run coefficient is. I would appreciate a response.

  17. Thank you for your great posts.
    I would like to ask: in the case that we include a dummy for a break (e.g. the Great Recession) as a fixed regressor in the ARDL, do we also take the dummy variable into account when performing the bounds test with the F-statistic? That is, does the F-statistic test whether all the coefficients, including the dummy, are equal to 0, or is the dummy excluded when performing the bounds test? Thanks in advance.

  18. Hi IHS Eviews and thank you for the great posts.
    I would like to ask how you managed to include a dummy (which takes the value 1 for a few periods and 0 everywhere else) as a fixed regressor in the ARDL estimation such that the dummy does not appear in differences in the ARDL Error Correction Estimation.
    When I include a dummy as a fixed regressor, it always appears in differences like the rest of the variables (except, of course, in the cointegrating equation, where the dummy is in levels, as it should be), but in your example it does not. Is this correct, and if so, what is the meaning of having a dummy variable in differences? Thank you.

    Replies
    1. Is your copy of EViews up to date? (Help->EViews Update)

    2. Thanks, the version is now updated and the problem is fixed!
