<h2>Lasso Variable Selection</h2> <style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { border: 0px solid black; } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } .seccol { } .subseccol { color: #fa5e5e } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: ['{\\left(}'], rb: ['{\\right)}'], rbrace: ['{\\left(#1\\right)}', 1], cbrace: ['{\\left\\{#1\\right\\}}', 1], sbrace: ['{\\left[#1\\right]}', 1], bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1], series: ['{\\left\\{#1_{#2}\\right\\}_{#2=#3}^{#4}}', 4], xsum: ['{\\sum_{#1=#2}^{#3}{#4}}', 4], var: ['{\\operatorname\{var\}}'], sign: ['{\\operatorname\{sign\}}'], diag: ['{\\operatorname\{diag\}}'], med: ['{\\operatorname\{median\}}'], vec: ['{\\operatorname\{vec\}}'], tr: ['{\\operatorname\{tr\}}'], min: ['{\\operatorname\{min\}}'] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"> </script> <span style="font-family: "verdana" sans-serif"> In this blog post we will show how Lasso variable selection works in EViews by comparing it with a baseline least squares regression. We will be evaluating the prediction and variable selection properties of this technique on the same <a href="https://web.stanford.edu/~hastie/StatLearnSparsity_files/DATA/diabetes.html">dataset</a> used in the well-known paper “Least Angle Regression” by Efron, Hastie, Johnstone, and Tibshirani. The analysis will show the generally superior in-sample fit and out-of-sample forecast performance of Lasso variable selection compared with a baseline least squares model. <a name='more'></a><br /><br /> Lasso variable selection, <a href="http://eviews.com/EViews12/ev12ecest_n.html#varsel">new to EViews 12</a> and also known as the Lasso-OLS hybrid, post-Lasso OLS, the relaxed Lasso (under certain conditions), or post-estimation OLS, uses Lasso as a variable selection technique followed by ordinary least squares estimation on the selected variables.<br /><br /> <h3 class="seccol">Table of Contents</h3> <ol> <li><a href="#sec1">Background</a> <li><a href="#sec2">Dataset</a> <li><a href="#sec3">Analysis</a> </ol><br /> <br /><br /> <h3 class="seccol", id="sec1">Background</h3> In today’s data-rich environment it is useful to have methods of extracting information from complex datasets with large numbers of variables. A popular way of doing this is with dimension reduction techniques such as principal components analysis or dynamic factor models. By reducing the number of variables in a model, we can reduce overfitting, reduce the complexity of the model and make it easier to interpret, and decrease computation time.
However, dimension reduction methods have the risk of losing useful information contained in variables that are not included in the reduced set, and may potentially have poorer predictive power.<br/><br/> Lasso is useful because it is a shrinkage estimator: it shrinks the size of the coefficients of the independent variables depending on their predictive power. Some coefficients may shrink down to zero, allowing us to restrict the model to variables with nonzero coefficients.<br/><br/> Lasso is just one method out of a family of penalized least squares estimators (other members include ridge regression and elastic net). Starting with the linear regression cost function: \begin{align*} J = \frac{1}{2m}\xsum{i}{1}{m}{\rbrace{y_i - \beta_0 -\xsum{j}{1}{p}{x_{ij}\beta_j}}^2} \end{align*} where $y_i$ is the dependent variable, $x_{ij}$ are the independent variables, $\beta_j$ are the coefficients, $m$ is the number of data points, and $p$ the number of independent variables, we obtain the coefficients $\beta_j$ by minimizing $J$. If the model based on linear regression is overfit and does not make good predictions on new data, then one solution is to construct a Lasso model by adding a penalty term: \begin{align*} J = \frac{1}{2m}\xsum{i}{1}{m}{\rbrace{y_i - \beta_0 -\xsum{j}{1}{p}{x_{ij}\beta_j}}^2} + \lambda\xsum{j}{1}{p}{|\beta_j|} \end{align*} where the parameters are the same as before with the addition of the regularization parameter $\lambda$. By adding this penalty term the cost of large $\beta_j$ values is increased, so to minimize the cost function the values of $\beta_j$ have to be reduced. Smaller values of $\beta_j$ will "smooth out" the function so it fits the data less tightly, leaving it more likely to generalize well to new data. The regularization parameter $\lambda$ determines how much the cost of $\beta_j$ is increased. Lasso estimation in EViews can automatically select an appropriate value with cross-validation, which is a data-driven method of choosing $\lambda$ based on its predictive ability.<br/><br/> If we have a dataset with many independent variables, ordinary least squares models may produce estimates with large variances and therefore unstable forecasts. By applying Lasso regression to the data and removing variables that have been shrunk to zero, then applying OLS to the reduced number of variables, we may be able to improve forecasting performance. In this way we can perform dimension reduction on our data based on the predictive accuracy of our model. <br/><br/><br/><br/> <h3 class="seccol", id="sec2">Dataset</h3> In the table below we show part of the data used for this example.<br/><br/> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/lassosel/images/spreadsheet.png"><img height="auto" src="http://www.eviews.com/blog/lassosel/images/spreadsheet.png" title="Data Preview" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1: Data Preview</small><br/> <small>(Click to enlarge)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 1 :::::::::: --> The ten original variables are age, sex, body mass index (bmi), average blood pressure (bp), and six blood serum measurements for 442 patients. They have all been standardized as described in the paper. The dependent variable is a measure of disease progression one year after the other measurements were taken and has been scaled to have mean zero.
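<br /><br /> As an aside, the same Efron et al. diabetes data ships with the Python scikit-learn package, so readers working outside EViews can sketch the cross-validated choice of $\lambda$ described above in a few lines. The snippet below is only an illustration under that assumption (scikit-learn's Lasso objective matches the penalized cost function above, with <code>alpha</code> playing the role of $\lambda$); it is not the EViews implementation.<br /><br />
<pre><code>
# Illustrative sketch (not EViews): cross-validated Lasso on the diabetes data.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LassoCV

X, y = load_diabetes(return_X_y=True)   # 442 patients, 10 standardized regressors
y = y - y.mean()                         # scale the response to have mean zero

# LassoCV minimizes (1/(2m))*RSS + alpha*sum(|beta_j|), choosing alpha by cross-validation
lasso = LassoCV(cv=10, random_state=0).fit(X, y)

print("lambda chosen by cross-validation:", lasso.alpha_)
print("nonzero coefficients:", np.sum(lasso.coef_ != 0), "of", X.shape[1])
</code></pre>
<br /><br />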
We are interested in the accuracy of the fit and predictions from any model we develop of this data and in the relative importance of each regressor.<br/><br/><br/><br/> <h3 class="seccol", id="sec3">Analysis</h3> We first perform an OLS regression on the dataset to give us a baseline for comparison.<br/><br/> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/lassosel/images/ls_all.png"><img height="auto" src="http://www.eviews.com/blog/lassosel/images/ls_all.png" title="OLS Regression" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2: OLS Regression</small><br/> <small>(Click to enlarge)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 2 :::::::::: --> One thing to note in this estimation result is that the adjusted R-squared for this model is .5066, indicating that the model explains approximately 51% of the variation in the dependent variable. We see that certain variables (BMI, BP, LTG, and SEX) have both a greater impact on the progression of diabetes after one year and are the most statistically significant.<br/><br/> Next, we run a Lasso regression over the same dataset and look at the plot of the coefficients against the L1 norm of the coefficients. This gives us a sense of how each coefficient contributes to the dependent variable. We can see that as the degree of regularization decreases (the L1 norm increases) more coefficients enter the model.<br/><br/> <!-- :::::::::: FIGURE 3 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/lassosel/images/coef_evol.png"><img height="auto" src="http://www.eviews.com/blog/lassosel/images/coef_evol.png" title="Coefficient Evolution" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3: Coefficient Evolution</small><br/> <small>(Click to enlarge)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 3 :::::::::: --> Let’s take a closer look at the coefficients.<br /><br /> <!-- :::::::::: FIGURE 4 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/lassosel/images/lasso.png"><img height="auto" src="http://www.eviews.com/blog/lassosel/images/lasso.png" title="Lasso Regression" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 4: Lasso Regression</small><br/> <small>(Click to enlarge)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 4 :::::::::: --> The set of coefficients at the minimum value of lambda (.004516) are all nonzero. However, when we move to the lambda value in the next column (6.401), which is the largest value of lambda that is within one standard deviation of the minimum, we see that only four of the original ten regressors are nonzero. Compared with least squares, most of the coefficients in the first column have shrunk slightly toward zero, and more so in the next column with a larger regularization penalty (with the exception of an interesting sign change for HDL). Three of the variables retained (BMI, BP, and LTG) are the same as the variables identified by least squares as being both more influential on the outcome and statistically significant. But compared to least squares, this is a less complex model. Does reducing the number of variables in this way lead to a better fitting model? 
Evaluate a Lasso variable selection model with the same options and see.<br /><br /> <!-- :::::::::: FIGURE 5 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/lassosel/images/lasso_vs.png"><img height="auto" src="http://www.eviews.com/blog/lassosel/images/lasso_vs.png" title="Lasso Variable Selection" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5: Lasso Variable Selection</small><br/> <small>(Click to enlarge)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 5 :::::::::: --> The unimpressive result of OLS applied to the variables selected from the Lasso fit is that adjusted R-squared has increased ever-so-slightly to .5068. Another thing to note is that while Lasso generally shrinks, or biases, the coefficients toward zero, OLS applied to Lasso expands, or debiases, them away from zero. This results in a decrease in the variance of the final model, as you can see by comparing the errors for the Lasso variable selection model with the first OLS model.<br /><br /> You may have noticed that the set of nonzero coefficients here is different than that for the Lasso example earlier. That’s because Lasso variable selection uses a different measure (AIC) to select the preferred model compared to Lasso. This is the same measure used for the other variable selection methods in EViews.<br /><br /> What about out-of-sample predictive power? We have randomly labeled each of the 442 observations as either training or test datapoints (the split is 70% training, 30% test). After doing least squares and Lasso variable selection on the training data, we use Series->View->Forecast Evaluation to compare the forecasts for least squares and Lasso variable selection over the test set:<br /><br /> <!-- :::::::::: FIGURE 6 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/lassosel/images/fcomp_orig.png"><img height="auto" src="http://www.eviews.com/blog/lassosel/images/fcomp_orig.png" title="Lasso Predictive Evaluation" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 6: Lasso Predictive Evaluation</small><br/> <small>(Click to enlarge)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 6 :::::::::: --> We have achieved very slightly better predictive performance for some measures (MAE, MAPE) and very slightly worse for others (RMSE, SMAPE).<br /><br /> This is all mildly interesting. But the real power of variable selection techniques comes when you have a larger dataset and want to reduce the set of variables under consideration to a more manageable set. 
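<br /><br /> Before turning to that larger problem, here is a hedged Python sketch of the Lasso-then-OLS comparison we just walked through, using a 70/30 holdout and a couple of the same error measures. The scikit-learn calls, the split, and the random seed are illustrative assumptions, so the numbers will not reproduce the tables above.<br /><br />
<pre><code>
# Illustrative sketch (not EViews): Lasso selects the regressors, OLS re-estimates them.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LassoCV, LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# Step 1: Lasso with a cross-validated penalty picks the nonzero coefficients
sel = LassoCV(cv=10, random_state=1).fit(X_tr, y_tr)
keep = np.flatnonzero(sel.coef_)

# Step 2: ordinary least squares on the selected columns ("debiases" the coefficients)
ols = LinearRegression().fit(X_tr[:, keep], y_tr)

# Compare out-of-sample errors against OLS on all regressors
full = LinearRegression().fit(X_tr, y_tr)
for name, pred in [("OLS, all regressors", full.predict(X_te)),
                   ("Lasso variable selection", ols.predict(X_te[:, keep]))]:
    print(name,
          "MAE:", round(mean_absolute_error(y_te, pred), 2),
          "RMSE:", round(mean_squared_error(y_te, pred) ** 0.5, 2))
</code></pre>
<br /><br />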
To this end, we use the “extended” dataset provided by the authors that includes the ten original variables plus squares of nine variables and forty-five interaction terms, for a total of sixty-four variables.<br /><br /> First, we repeat the OLS regression from earlier with the new extended dataset:<br /><br /> <!-- :::::::::: FIGURE 7 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/lassosel/images/ls_extended.png"><img height="auto" src="http://www.eviews.com/blog/lassosel/images/ls_extended.png" title="Extended OLS" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 7: Extended OLS</small><br/> <small>(Click to enlarge)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 7 :::::::::: --> Adjusted R-squared is actually higher than it was for the original ten variables, at .5233, so the additional variables have added some explanatory power to the model.<br /><br /> Next, let’s go straight to Lasso variable selection on the extended dataset.<br /><br /> <!-- :::::::::: FIGURE 8 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/lassosel/images/lassovs_all.png"><img height="auto" src="http://www.eviews.com/blog/lassosel/images/lassovs_all.png" title="Extended Lasso Variable Selection" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 8: Extended Lasso Variable Selection</small><br/> <small>(Click to enlarge)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 8 :::::::::: --> Out of sixty-four original search variables, the selection procedure has kept fourteen. This is a significant reduction in complexity. The adjusted R-squared has increased from .5233 to .5308, and the standard error of the regression has decreased.<br /><br /> The in-sample R-squared and errors have moved in a modest but promising direction. What about out-of-sample prediction? We again compare the forecasts for least squares and Lasso variable selection over the test set:<br /><br /> <!-- :::::::::: FIGURE 9 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/lassosel/images/fcomp_ext.png"><img height="auto" src="http://www.eviews.com/blog/lassosel/images/fcomp_ext.png" title="Extended Lasso Predictive Evaluation" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 9: Extended Lasso Predictive Evaluation</small><br/> <small>(Click to enlarge)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 9 :::::::::: --> Now we can see a meaningful improvement in forecasting performance. All of the error measures have improved, some significantly. Applying Lasso variable selection to this larger dataset has led to reduced model complexity, a slight improvement in the in-sample fit, and improved forecasting performance over least squares.<br /><br /><br /><br /> <h3>Request a Demonstration</h3> If you would like to experience Lasso methods in EViews for yourself, you can request a demonstration copy <a href="http://www.eviews.com/demo">here</a>. 
</span><h2>Univariate GARCH Models with Skewed Student’s-t Errors</h2> <span style="font-family: "verdana" sans-serif"> <i>Author and guest post by Eren Ocakverdi</i><br /><br /> This blog piece introduces a new add-in (<b>SKEWEDUGARCH</b>) that extends EViews’ available features for the estimation of univariate GARCH models. <a name='more'></a><br /><br /> <h3 class="seccol">Table of Contents</h3> <ol> <li><a href="#sec1">Introduction</a> <li><a href="#sec2">Skewed Student’s-t Distribution </a> <li><a href="#sec3">Application to USDTRY currency </a> <li><a href="#sec4">Files</a> <li><a href="#sec5">References</a> </ol><br /> <h3 class="seccol", id="sec1">Introduction</h3> Volatility is an important concept in itself, but it has a special place in finance as it is usually associated with risk. Although investors believe that higher risk brings higher reward, it is not an easy task to exploit this trade-off. The price of an asset can change dramatically over a short period of time and in either direction, which makes it exceedingly difficult to predict. Volatility is responsible for such sharp movements, so it is important to develop a gauge to measure and identify its dynamics.<br/><br/> One of the critical observations regarding the returns of financial assets was that volatilities were not fixed over time and tended to cluster around large changes. GARCH models are specifically designed to capture this behavior and describe the movement of volatility more accurately. Details of GARCH estimation in EViews can be found <a href='http://www.eviews.com/help/helpintro.html#page/content%2Farch-ARCH_and_GARCH_Estimation.html%23'>here</a>.<br/><br/> The conditional distribution of the error terms of returns (i.e. of the mean equation) plays an important role in the estimation of GARCH-type models.
Currently, EViews offers <a href='http://www.eviews.com/help/helpintro.html#page/content%2Farch-Basic_ARCH_Specifications.html%23ww165096'>three different assumptions</a> regarding the specification of this distribution.<br/><br/><br/><br/> <h3 class="seccol", id="sec2">Skewed Student’s-t Distribution</h3> Consistent with the stylized facts of financial markets, distribution of returns has fat tails (i.e. high kurtosis) and are not symmetrical (i.e. positively skewed). Although Student’s-t and GED specifications can account for the excess kurtosis, they are symmetrical densities by design. Lambert and Laurent (2001) suggest the use of a skewed Student’s-t density within the GARCH framework. The log likelihood contributions of a standardized skewed Student’s-t are as follows:<br /><br /> \begin{align*} l_t &= -\frac{1}{2} \log \rbrace{ \frac{\pi(\nu - 2) \Gamma \rbrace{\frac{\nu}{2}}^2}{\Gamma \rbrace{\frac{\nu + 1}{2}} } } + \log \rbrace{\frac{2}{\xi + \frac{1}{\xi}}} + \log(s)\\ &-\frac{1}{2}\log(\sigma^2_t) - \frac{\nu + 1}{2} \log \rbrace{1 + \frac{\rbrace{s\rbrace{y_t - X_t^\top \theta} + m}^2}{\sigma_t^2\rbrace{\nu - 2}}\xi^{-2I_t}} \end{align*} Here, $\xi$ is the asymmetry parameter and $\nu$ is the degrees-of-freedom of the distribution. Other parameters, $m,s$ and $I_t$ are given by: \begin{align*} m &= \frac{\Gamma \rbrace{\frac{\nu - 1}{2}} \sqrt{\nu - 2}}{\sqrt{\pi}\Gamma\rbrace{\frac{\nu}{2}}}\rbrace{\xi - \frac{1}{\xi}}\\ s &= \sqrt{\rbrace{\xi^2 + \frac{1}{\xi^2} - 1} - m^2}\\ I_t &= \begin{cases} \phantom{-}1 \quad \text{if} \quad \rbrace{\frac{y_t - X_t^\top \theta}{\sigma_t}} \geq - \frac{m}{s}\\ -1 \quad \phantom{\text{if}}\text{otherwise} \end{cases} \end{align*} For a symmetrical distribution, $ξ=1$, but since the add-in estimates the logarithmic transformation of the parameter, you should consider $\log(\xi)=0$ for testing the null hypothesis of symmetry.<br /><br /> Below is the comparison of theoretical distribution of Student’s-t and its (positively) skewed version. Skewness increases the chance of observing extreme values, which has important implications in finance.<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/skewedtdist.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/skewedtdist.png" title="Skewed t-Distribution" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1: Skewed t-Distribution</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 1 :::::::::: --> <h3 class="seccol", id="sec3">Application to USDTRY currency</h3> FX markets are convenient places for studying the dynamics of volatility and Turkish Lira has recently come to the fore among emerging markets due to sudden capital outflows as well as currency shocks (<b>USDTRY.WF1</b>).<br /><br /> A simple visual inspection of squared returns shows us the magnitude of the shock that hit the markets on August 10th, 2018 (<b>SKEWEDUGARCH_EXAMPLE.PRG</b>). 
The impact was so severe that it dwarfed all other volatilities experienced during the analysis period of 2005-2020.<br /><br /> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/returnssq.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/returnssq.png" title="Squared Returns" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2: Squared Returns</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 2 :::::::::: --> In order to estimate the conditional variance of returns, we start by fitting two alternative models (i.e. GARCH(1,1) and TGARCH(1,1)) with two different distributional assumptions (i.e. Normal and Student’s-t). The mean equation is the same for all models: \begin{align*} r_t &= \bar{r} + e_t\\ e_t &= \epsilon_t \sigma_t \end{align*} \begin{align*} \textbf{Model 1}: \quad \sigma_t^2 &= \omega + \alpha_1 e_{t-1}^2 + \beta_1\sigma_{t-1}^2, \quad \text{where} \quad \epsilon_t \sim N(0,1)\\ \textbf{Model 2}: \quad \sigma_t^2 &= \omega + \alpha_1 e_{t-1}^2 + \beta_1\sigma_{t-1}^2 + \gamma_1 e_{t-1}^2(e_{t-1} < 0), \quad \text{where} \quad \epsilon_t \sim N(0,1)\\ \textbf{Model 3}: \quad \sigma_t^2 &= \omega + \alpha_1 e_{t-1}^2 + \beta_1\sigma_{t-1}^2, \quad \text{where} \quad \epsilon_t \sim \text{Student}(0,1,\nu)\\ \textbf{Model 4}: \quad \sigma_t^2 &= \omega + \alpha_1 e_{t-1}^2 + \beta_1\sigma_{t-1}^2 + \gamma_1 e_{t-1}^2(e_{t-1} < 0), \quad \text{where} \quad \epsilon_t \sim \text{Student}(0,1,\nu) \end{align*} <!-- :::::::::: FIGURES 3a and 3b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 3a :::::::::: --> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/model1.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/model1.png" title="Model 1 Results" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 3b :::::::::: --> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/model2.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/model2.png" title="Model 2 Results" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3a: Model 1 Results</small> </center> </td> <td class="nb"> <center> <small>Figure 3b: Model 2 Results</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 3a and 3b :::::::::: --> <!-- :::::::::: FIGURES 4a and 4b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 4a :::::::::: --> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/model3.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/model3.png" title="Model 3 Results" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 4b :::::::::: --> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/model4.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/model4.png" title="Model 4 Results" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 4a: Model 3 Results</small> </center> </td> <td class="nb"> <center> <small>Figure 4b: Model 4 Results</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 4a and 4b :::::::::: --> From a purely statistical point of view (that is, $p$-values and information criteria), fat tails and/or leverage effects better represent the Turkish Lira’s volatility dynamics.
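<br /><br /> For readers who would like to cross-check this kind of specification outside EViews, the third-party Python <code>arch</code> package fits similar GARCH and threshold-GARCH models with normal, Student's-t, and skewed Student's-t errors. The sketch below is only an illustration: its skewed Student parameterization differs from the Lambert and Laurent density used by the add-in, and <code>usdtry_returns.csv</code> is a placeholder for the percent return series.<br /><br />
<pre><code>
# Illustrative sketch (not the EViews add-in): GARCH-family fits with the Python "arch" package.
import pandas as pd
from arch import arch_model

# usdtry_returns: a pandas Series of daily USDTRY percent returns (placeholder input)
usdtry_returns = pd.read_csv("usdtry_returns.csv", index_col=0, parse_dates=True).squeeze()

specs = {
    "GARCH(1,1), normal":     dict(p=1, o=0, q=1, dist="normal"),
    "GARCH(1,1), Student-t":  dict(p=1, o=0, q=1, dist="t"),
    "TGARCH(1,1), Student-t": dict(p=1, o=1, q=1, dist="t"),      # o=1 adds the threshold term
    "TGARCH(1,1), skewed-t":  dict(p=1, o=1, q=1, dist="skewt"),
}

for name, kw in specs.items():
    res = arch_model(usdtry_returns, mean="Constant", vol="GARCH", **kw).fit(disp="off")
    print(name, "log-likelihood:", round(res.loglikelihood, 1), "AIC:", round(res.aic, 1))
</code></pre>
<br /><br />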
Distribution fit to standardized residuals and the analysis of news impact can be provided as supporting evidence in that respect.<br /><br /> <!-- :::::::::: FIGURES 5a and 5b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 5a :::::::::: --> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/leverage.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/leverage.png" title="Leverage" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 5b :::::::::: --> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/nic.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/nic.png" title="News Impact Curve" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5a: Leverage</small> </center> </td> <td class="nb"> <center> <small>Figure 5b: News Impact Curve</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 5a and 5b :::::::::: --> Extreme events seem to occur more often than suggested by the normal distribution and the volatility response to these shocks are more severe in the case of depreciation than that of appreciation.<br /><br /> At this point, one may also wonder if there is any long memory effect in the volatility of returns. In order to do so, we first estimate an ARFIMA model for the squared return series and a simple FIGARCH model for the variance part of regular return series: \begin{align*} &\textbf{Fractional Mean Model}: \quad \rbrace{1 - L}^d(r_t^2 - \mu) = e_t, \quad \text{where} \quad e_t \sim N(0,\bar{\sigma})\\ &\textbf{Fractional Variance Model}: \quad \sigma_t^2 = \omega + \rbrace{1 - \beta_1 - \rbrace{1 - \alpha_1}\rbrace{1 - L}^d}e_{t-1}^2 + \beta_1\sigma_{t-1}^2, \quad \text{where} \quad \epsilon_t \sim \text{Student}(0,1,\nu) \end{align*} <!-- :::::::::: FIGURES 6a and 6b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 6a :::::::::: --> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/model5.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/model5.png" title="Fractional Mean Model" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 6b :::::::::: --> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/model6.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/model6.png" title="Fractional Variance Model" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 6a: Fractional Mean Model</small> </center> </td> <td class="nb"> <center> <small>Figure 6b: Fractional Variance Model</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 6a and 6b :::::::::: --> Fractional difference parameter is significantly different from 0 and 1 in both models, but it is also significantly smaller than 0.5 in the ARFIMA model suggesting that the squared return series has long memory properties. However, modelling the variance of the return series explicitly we have successfully explained the behaviour of volatility and mitigated the impact of (and need for) long memory.<br /><br /> Since the estimation of fractional difference parameter can be sensitive to the choice of truncation limits, it may not worth the effort unless the statistical properties of results from FIGARCH models are significantly better than that of rival GARCH models. 
Here, our previous TGARCH(1,1) model with Student’s-t errors is still the frontrunner in that respect.<br /><br /> What if positive shocks (i.e. depreciation) happen less frequently but are more severe than the negative shocks (i.e. appreciation) implied by a symmetric distribution? In order to test this hypothesis, one needs to look for asymmetry towards larger positive extreme values. We can estimate our final model via the add-in assuming a skewed Student’s-t distribution and see if we can further improve the fit.<br /><br /> <!-- :::::::::: FIGURE 7 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/skewedgarch.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/skewedgarch.png" title="Skewed GARCH Estimates" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 7: Skewed GARCH Estimates</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 7 :::::::::: --> Estimated parameter values change slightly vis-à-vis our original TGARCH model, but the asymmetry parameter is positive and significant, supporting the evidence of skewness. Information criteria favor this version of the model over all other specifications above.<br /><br /> One of the main uses of GARCH models in financial institutions is the estimation of Value-at-Risk (VaR), a concept that tracks and calculates the potential loss that might occur during a trading activity of any sort. Commonly used symmetric error distributions might lead to underestimation of right tail risk (i.e. in short trading positions). The chart below compares the daily VaR estimates from commonly used distributions and depicts the effects of fat tails and skewness for a long position in TL (or a short position in USDTRY).<br /><br /> <!-- :::::::::: FIGURE 8 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/valueatrisk.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/valueatrisk.png" title="Value-at-Risk" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 8: Value-at-Risk</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 8 :::::::::: --> At its peak around the summer of 2018, the currency shock pushed the 99% VaR threshold of a TL-denominated asset or portfolio to a daily loss of 14.5%. This would have been considered an astronomical event a year earlier, since the threshold was only around 1% back then. Increasing the likelihood of extreme events and incorporating the asymmetric tail behaviour of the shocks add a further 5.1 and 3.5 percentage points, respectively, carrying this limit to 23.1%!<br /><br /><br /><br /> <hr /> <h3 class="seccol", id="sec4">Files</h3> <ul> <li><a href="http://www.eviews.com/blog/skewedugarch/workfiles/usdtry.wf1"><b class="wf">USDTRY.WF1</b></a></li> <li><a href="http://www.eviews.com/blog/skewedugarch/workfiles/skewedugarch_example.prg"><b class="wf">SKEWEDUGARCH_EXAMPLE.PRG</b></a></li> </ul><br /><br /> <hr /> <h3 class="seccol", id="sec5">References</h3> <ol class="bib2xhtml"> <li id="lambert-laurent-2001"> Lambert, P. and Laurent, S. (2001), <i>"Modelling Financial Time Series Using GARCH-Type Models and a Skewed Student Density"</i>, Mimeo, Université de Liège.
</li> </ol></span><h2>Automatic Factor Selection: Working with FRED-MD Data</h2> <span style="font-family: "verdana" sans-serif"> This is the first of two posts devoted to automatic factor selection and panel unit root tests with cross-sectional dependence. Both features were recently released with EViews 12. Here, we summarize and work with two seminal contributions to automatic factor selection by Bai and Ng (2002) and Ahn and Horenstein (2013). <a name='more'></a><br /><br /> <h3 class="seccol">Table of Contents</h3> <ol> <li><a href="#sec1">Introduction</a> <li><a href="#sec2">Overview of Automatic Factor Selection</a> <ul> <li><a href="#sec2.1">Bai and Ng (2002)</a> <li><a href="#sec2.2">Ahn and Horenstein (2013)</a> </ul> <li><a href="#sec3">Working with FRED-MD</a> <ul> <li><a href="#sec3.1">Factor Selection using Bai and Ng (2002)</a> <li><a href="#sec3.2">Factor Selection using Ahn and Horenstein (2013)</a> <li><a href="#sec3.3">Factor Model Estimation</a> <li><a href="#sec3.4">Forecasting Industrial Production</a> </ul> <li><a href="#sec4">Files</a> <li><a href="#sec5">References</a> </ol><br /> <h3 class="seccol", id="sec1">Introduction</h3> Recent trends in empirical economics (particularly those in macroeconomics) indicate increased use and demand for large dimensional datasets. Since the temporal dimension ($T$) is typically thought to be large anyway, the term <b>large dimensional</b> here refers to the number of variables ($N$), otherwise referred to as <b>factors</b> or <b>cross-sectional</b> units. This is in contrast with traditional paradigms where the number of variables is small but the temporal dimension is long. This paradigm shift is largely the result of theoretical advancements in <b>dimension-aware</b> techniques such as factor-augmented and panel models.<br /><br /> At the heart of all dimension-aware methods is <b>factor selection</b>, or the correct specification (estimation) of the number of factors.
Traditionally, this parameter was often assumed. Recently, however, several contributions have offered data driven (semi-)autonomous factor selection methods, most notably those of Bai and Ng (2002) and Ahn and Horenstein (2013).<br /><br /> These automatic factor selection techniques have come to play important roles in factor augmented (vector auto)regressions, panel unit root tests with cross sectional dependence, and data manipulation. A particularly important example of the latter is <a href='https://research.stlouisfed.org/econ/mccracken/fred-databases/'><b>FRED-MD</b></a> - a regularly updated and freely distributed macroeconomic database designed for the empirical analysis of <i>big data</i>. What is notable here is that the dataset is leveraged by collecting a vast number of important macroeconomic variables (factors) which are then optimally reduced in dimensionality using the Bai and Ng (2002) factor selection procedure.<br /><br /> In this post, we will demonstrate how to perform this dimensionality reduction using EViews' native Bai and Ng (2002) and Ahn and Horenstein (2013) factor selection procedures. The latter were introduced with the release of EViews 12. In particular, we will download the raw FRED-MD data, transform each series according to the FRED-MD instructions, and then proceed to perform dimensionality reduction. We will next estimate a traditional factor model with the optimally selected factors, and then proceed to forecast industrial production.<br /><br /> We pause briefly in the next section to provide a quick overview of the aforementioned factor selection procedures. <br /><br /><br /><br /> <h3 class="seccol", id="sec2">Overview of Automatic Factor Selection</h3> Recall that the maximum number of factors cannot exceed the number of observable variables. factor selection is often used as a <b>dimension reduction</b> technique. In other words, the goal is always to optimally select the smallest number of the most representative or <b>principal</b> variables in a set. Since dimensional principality (or importance) is typically quantified in terms of <b>eigenvalues</b>, virtually all dimension reduction techniques in this literature go through <b>principal component analysis</b> (PCA). For detailed theoretical and empirical discussions of PCA, please refer to our blog entries: <a href='http://blog.eviews.com/2018/10/principal-component-analysis-part-i.html'>Principal Component Analysis: Part I (Theory)</a> and <a href='http://blog.eviews.com/2018/11/principal-component-analysis-part-ii.html'>Principal Component Analysis: Part II (Practice)</a>.<br /><br /> Although PCA can identify which dimensions are most principal in a set, it is not designed to offer guidance on how many dimensions to retain. As a result, traditionally, this parameter was often assumed rather than driven by the data. To address this inadequacy, Bai and Ng (2002) proposed to cast the problem of factor selection as a model selection problem whereas Ahn and Horenstein (2013) achieve automatic factor selection by maximizing over ratios of two adjacent eigenvalues. In either case, optimal factor selection is data driven.<br /><br /> <h4 class="subseccol", id="sec2.1">Bai and Ng (2002)</h4> Bai and Ng (2002) handle the problem of optimal factor selection as the more familiar model selection problem. In particular, criteria are judged as a tradeoff between goodness of fit and parsimony. 
To formalize matters, consider the traditional factor augmented model: $$ Y_{i,t} = \mathbf{\lambda}_{i}^{\top} \mathbf{F}_{t} + e_{i,t} $$ where $ \mathbf{F}_{t} $ is a vector of $ r $ <b>common factors</b>, $ \mathbf{\lambda}_{i} $ denotes a vector of <b>factor loadings</b>, and $ e_{i,t} $ is the <b>idiosyncratic component</b> which is cross-sectionally independent provided $ \mathbf{F}_{t} $ accounts for all inter-cross-sectional correlations. When the $ e_{i,t} $ are not cross-sectionally independent, the factor model governing $ Y_{i,t} $ is said to be <i>approximate</i>.<br /><br /> The objective here is to identify the optimal number of factors. In particular, $ \mathbf{\lambda}_{i}$ and $ \mathbf{F}_{t} $ are estimated through the optimization problem: \begin{align} \min_{\mathbf{\Lambda}, \mathbf{F}}\frac{1}{NT} \xsum{i}{1}{N}{\xsum{t}{1}{T}{\rbrace{ Y_{i,t} - \mathbf{\lambda}_{i}^{\top}\mathbf{F}_{t} }^{2}}} \label{eq1} \end{align} subject to the normalization $ \frac{1}{T}\mathbf{F}^{\top}\mathbf{F} = \mathbf{I} $ where $ \mathbf{I} $ is the identity matrix.<br /><br /> Traditionally, the estimated factors $\widehat{\mathbf{F}}_{t}$ are proportional to the $T \times \min(N,T)$ matrix of eigenvectors associated with all eigenvalues of the $T\times T$ matrix $\mathbf{Y}\mathbf{Y}^{\top}$. This generates the full set of $ \min(N,T) $ factors. The objective then is to choose $ r < \min(N,T) $ factors that best capture the variation in $ \mathbf{Y} $.<br /><br /> Since the minimization problem in \eqref{eq1} is linear, once the factor matrix is estimated (observed), estimation of the factor loadings reduces to an ordinary least squares problem for a given set of regressors (factors). In particular, let $ \mathbf{F}^{r} $ denote the factors associated with the $ r $ largest eigenvalues of $ \mathbf{Y}\mathbf{Y}^{\top} $, and let $ \mathbf{\lambda}_{i}^{r} $ denote the associated factor loadings. Then, the problem of estimating $ \mathbf{\lambda}_{i}^{r} $ is cast as: $$ V \rbrace{ r, \widehat{\mathbf{F}}^{r} } = \min_{\mathbf{\Lambda}}\frac{1}{NT} \xsum{i}{1}{N}{\xsum{t}{1}{T}{\rbrace{ Y_{i,t} - \mathbf{\lambda}_{i}^{r^{\top}}\widehat{\mathbf{F}}_{t}^{r} }^{2}}} $$ Since a model with $ r+1 $ factors can fit no worse than a model with $ r $ factors, although efficiency is a decreasing function of the number of regressors, the problem of optimally selecting $ r $ becomes a classical problem of model selection. Furthermore, observe that $ V \rbrace{ r, \mathbf{F}^{r} } $ is the (scaled) sum of squared residuals from a regression of $ \mathbf{Y_{i}} $ on the $ r $ factors, for all $ i $. Thus, to determine $ r $ optimally, one can use a loss function $ L_{r} $ of the form $$ V \rbrace{ r, \widehat{\mathbf{F}}^{r} } + rg(N,T) $$ where $ g(N,T) $ is a penalty for overfitting. Bai and Ng (2002) propose 6 such loss functions that yield consistent estimates, labeled PCP 1 through 3 and ICP 1 through 3. The optimal number of factors is then obtained by minimizing the penalized criterion across $ r \leq r_{\text{max}} < \min(N,T) $, where $r_{\text{max}}$ is some known maximum number of factors under consideration. In other words: $$ r^{\star} \equiv \underset{1 \leq r \leq r_{\text{max}}}{\arg\min} \cbrace{ V \rbrace{ r, \widehat{\mathbf{F}}^{r} } + r g(N,T) } $$ Note that since $r_{\text{max}}$ must be specified <i>a priori</i>, its choice will play a role in optimization.
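<br /><br /> For intuition, one member of this family, the $IC_{p2}$ criterion, can be computed directly from a singular value decomposition. The numpy sketch below is only an illustration (it is not the EViews code), and assumes a $T \times N$ data matrix <code>Y</code> that has already been demeaned and standardized.<br /><br />
<pre><code>
# Illustrative numpy sketch of the Bai and Ng (2002) IC_p2 criterion (not the EViews code).
import numpy as np

def bai_ng_icp2(Y, r_max):
    """Return the number of factors minimizing IC_p2 for a T x N data matrix Y."""
    T, N = Y.shape
    # principal components via SVD: the best rank-r fit uses the leading singular triplets
    U, S, Vt = np.linalg.svd(Y, full_matrices=False)
    penalty = (N + T) / (N * T) * np.log(min(N, T))
    ic = []
    for r in range(1, r_max + 1):
        fit = U[:, :r] * S[:r] @ Vt[:r, :]       # best rank-r approximation of Y
        V_r = np.mean((Y - fit) ** 2)            # V(r, F^r): average squared residual
        ic.append(np.log(V_r) + r * penalty)     # IC_p2(r)
    return int(np.argmin(ic)) + 1

# Example with random data (so the selected r is not meaningful):
Y = np.random.standard_normal((600, 127))
print("IC_p2 selects", bai_ng_icp2(Y, r_max=8), "factors")
</code></pre>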
<br /><br /> <h4 class="subseccol", id="sec2.2">Ahn and Horenstein (2013)</h4> In contrast to Bai and Ng (2002), Ahn and Horenstein (2013) exploit the fact that the $ r $ largest eigenvalues of the relevant matrix grow unboundedly as the sample dimensions increase, whereas the other eigenvalues remain bounded. The optimization strategy is then simply the maximum of the ratio of two adjacent eigenvalues. One of the advantages of this contribution is that it is far less sensitive to the choice of $ r_{\text{max}} $ than Bai and Ng (2002). Furthermore, the procedure is significantly easier to compute, requiring only eigenvalues.<br /><br /> To further the discussion, let $ \psi_{r} $ denote the $ r^{\text{th}} $ largest eigenvalue of some positive semi-definite matrix $ \mathbf{Q} \equiv \mathbf{Y}\mathbf{Y}^{\top} $ or $ \mathbf{Q} \equiv \mathbf{Y}^{\top}\mathbf{Y} $. Furthermore, define: $$ \tilde{\mu}_{NT,\, r} \equiv \frac{1}{NT}\psi_{r} $$ Ahn and Horenstein (2013) propose the following two estimators of the number of factors. For some $ 1 \leq r_{\text{max}} < \min(N,T) $, the optimal number of factors, $ r^{\star} $, is derived as: <ul> <li><b>Eigenvalue Ratio</b> (ER) $$ r^{\star} \equiv \displaystyle \underset{1 \leq r \leq r_{\text{max}}}{\arg\max}\, ER(r), \qquad ER(r) \equiv \frac{\tilde{\mu}_{NT,\, r}}{\tilde{\mu}_{NT,\, r + 1}} $$ </li> <li><b>Growth Ratio</b> (GR) $$ r^{\star} \equiv \displaystyle \underset{1 \leq r \leq r_{\text{max}}}{\arg\max}\, GR(r), \qquad GR(r) \equiv \frac{\log \rbrace{ 1 + \widehat{\mu}_{NT,\, r} }}{\log \rbrace{ 1 + \widehat{\mu}_{NT,\, r + 1} }} $$ where $$ \widehat{\mu}_{NT,\, r} \equiv \frac{\tilde{\mu}_{NT,\, r}}{\displaystyle \xsum{k}{r+1}{\min(N,T)}{\tilde{\mu}_{NT,\, k}}} $$ </li> </ul> Lastly, we note that Ahn and Horenstein (2013) suggest demeaning the data both in the time dimension as well as the cross-section dimension. While not absolutely necessary for consistency, this step is extremely useful in case of small samples.<br /><br /><br /><br /> <h3 class="seccol", id="sec3">Working with FRED-MD Data</h3> The FRED-MD data is a large dimensional dataset updated in real time and publicly distributed by the Federal Reserve Bank of St. Louis. In its raw form, it consists of 128 time series in either quarterly or monthly frequency. Here, we will work with the monthly frequency which can be downloaded in its raw flavour from <a href='https://s3.amazonaws.com/files.fred.stlouisfed.org/fred-md/monthly/current.csv'><b>current.csv</b></a>. Furthermore, associated with the raw dataset is a set of instructions on how to process each variable in the dataset for empirical work. This can be obtained from <a href='https://s3.amazonaws.com/files.fred.stlouisfed.org/fred-md/Appendix_Tables_Update.pdf'><b>Appendix_Tables_Update.pdf</b></a>.<br /><br /> As a first step, we will write a brief EViews program to download the raw dataset and process each variable according to the aforementioned instructions.
The program is summarized below: <pre><code>
<span style="color: green;">'documentation on the data:</span>
<span style="color: green;">'https://s3.amazonaws.com/files.fred.stlouisfed.org/fred-md/Appendix_Tables_Update.pdf</span>

close @wf

<span style="color: green;">'get the latest data (monthly only):</span>
wfopen https://s3.amazonaws.com/files.fred.stlouisfed.org/fred-md/monthly/current.csv colhead=2 namepos=firstatt
pagecontract if sasdate<>na
pagestruct @date(sasdate)

<span style="color: green;">'perform transformations</span>
%serlist = @wlookup("*", "series")
<span style="color: blue;">for</span> %j {%serlist}
  %tform = {%j}.@attr("Transform:")
  <span style="color: blue;">if</span> @len(%tform) <span style="color: blue;">then</span>
    <span style="color: blue;">if</span> %tform="1" <span style="color: blue;">then</span>
      series temp = {%j} 'no transform
    <span style="color: blue;">endif</span>
    <span style="color: blue;">if</span> %tform="2" <span style="color: blue;">then</span>
      series temp = d({%j}) 'first difference
    <span style="color: blue;">endif</span>
    <span style="color: blue;">if</span> %tform="3" <span style="color: blue;">then</span>
      series temp = d({%j},2) 'second difference
    <span style="color: blue;">endif</span>
    <span style="color: blue;">if</span> %tform="4" <span style="color: blue;">then</span>
      series temp = log({%j}) 'log
    <span style="color: blue;">endif</span>
    <span style="color: blue;">if</span> %tform="5" <span style="color: blue;">then</span>
      series temp = dlog({%j}) 'log difference
    <span style="color: blue;">endif</span>
    <span style="color: blue;">if</span> %tform="6" <span style="color: blue;">then</span>
      series temp = dlog({%j},2) 'log second difference
    <span style="color: blue;">endif</span>
    <span style="color: blue;">if</span> %tform="7" <span style="color: blue;">then</span>
      series temp = d({%j}/{%j}(-1) -1) 'first difference of percent change
    <span style="color: blue;">endif</span>

    {%j} = temp
    {%j}.clearhistory
    d temp
  <span style="color: blue;">endif</span>
<span style="color: blue;">next</span>

<span style="color: green;">'drop non-data objects from the group</span>
group grp *
grp.drop resid
grp.drop sasdate

smpl 1960:03 @last
</code></pre> This program processes and collects the variables in a group which we've labeled here <b class="wfobj">GRP</b>. Additionally, we've dropped the variable <b class="wfobj">SASDATE</b> from this group since it is a date variable. In other words, <b class="wfobj">GRP</b> is a collection of 127 variables. Furthermore, as suggested by the FRED-MD paper, the sample under consideration should start from March 1960, and so the final line of the code above sets that sample.<br /><br /> A brief glance at the variables indicates that certain variables have missing values. Unfortunately, neither the Bai and Ng (2002) nor the Ahn and Horenstein (2013) procedure handles missing values particularly well.
Accordingly, as suggested in the original FRED-MD paper, missing values are initially set to the mean of non-missing observations for any given series. This is easily achieved with a quick program as follows: <pre><code><br /> <span style="color: green;">'impute missing values with mean of non-missing observations</span><br /> <span style="color: blue;">for</span> !k=1 to grp.count<br /> <span style="color: green;">'compute mean of non-missing observations</span><br /> series tmp = grp(!k)<br /> !mu = @mean(tmp)<br /> <br /> <span style="color: green;">'set missing observations to mean</span><br /> grp(!k) = @nan(grp(!k), !mu)<br /> <br /> <span style="color: green;">'clean up before next series</span><br /> smpl 1960:03 @last<br /> d tmp <br /> <span style="color: blue;">next</span><br /> </code></pre> The original FRED-MD paper next suggests a second stage updating of missing observations. Nevertheless, for sake of simplicity, we will skip this step and proceed to estimating the optimal number of factors.<br /><br /> Although we will later estimate a factor model which will handle factor selection within its scope, here we demonstrate automatic factor selection as a standalone exercise. To do so, we will proceed through the principal component dialog. In particular, we open the group <b class="wfobj">GRP</b>, and then proceed to click on <b>View/Principal Components...</b>.<br /><br /> Notice that the principal components dialog here is changed from previous versions. This is to allow for the additional selection procedures we've introduced in EViews 12. Because of these changes, we briefly pause to explain the options available to users. In particular, the method dropdown offers several factor selection procedures. The first two, <b>Bai and Ng</b> and <b>Ahn and Horenstein</b>, are automatic selection procedures. The remaining two, <b>Simple</b> and <b>User</b>, are legacy principal component methods that were available in EViews versions prior to 12.<br /><br /> Next, associated with each method is a criteria to use in selection. In case of Bai and Ng, this offers seven possibilities. One for each of the 6 criteria, and the default <b>Average of criteria</b> which provides a summary of each of the 6 criteria, as well as their average.<br /><br /> Also, associated with each method is a dropdown which determines how the maximum number of factors are determined. Here EViews offers 5 possibilities, the specifics of which can be obtained by referring to the <a href='http://www.eviews.com/help/helpintro.html#page/content/groups-Principal_Components.html'><b>EViews manual</b></a>. Recall that both the Bai and Ng (2002) as well as the Ahn and Horenstein (2013) methods require the specification of this parameter. Although EViews offers several automatic selection mechanisms, in keeping with the suggestions in the FRED-MD paper, exercises below will use a user-defined value of 8.<br /><br /> Finally, EViews offers the option of demeaning and standardizing the dataset across both time and factor dimension. In fact, since the FRED-MD paper suggests that data should be demeaned and standardized, exercises below will proceed by demeaning and standardizing each of the variables. 
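<br /><br /> As with the Bai and Ng criteria, the Ahn and Horenstein statistics are simple functions of the scaled eigenvalues of the demeaned and standardized data matrix. The numpy sketch below is again only an illustration outside EViews, assuming a $T \times N$ matrix <code>Y</code> that has already been prepared in this way.<br /><br />
<pre><code>
# Illustrative numpy sketch of the Ahn and Horenstein (2013) ER and GR statistics (not EViews code).
import numpy as np

def ahn_horenstein(Y, r_max):
    """Return (r_ER, r_GR) for a T x N data matrix Y."""
    T, N = Y.shape
    eig = np.linalg.svd(Y, compute_uv=False) ** 2 / (N * T)   # mu_tilde_{NT,r}, in descending order
    # eigenvalue ratio ER(r) = mu_r / mu_{r+1}
    er = eig[:r_max] / eig[1:r_max + 1]
    # growth ratio GR(r) uses eigenvalues rescaled by the sum of all smaller eigenvalues
    tail = np.cumsum(eig[::-1])[::-1]                          # tail[i] = sum of eig[i:]
    mu_hat = eig[:r_max + 1] / tail[1:r_max + 2]               # mu_hat_r = mu_r / sum_{k>r} mu_k
    gr = np.log(1 + mu_hat[:r_max]) / np.log(1 + mu_hat[1:r_max + 1])
    return int(np.argmax(er)) + 1, int(np.argmax(gr)) + 1

# Example with random data (so the selected values are not meaningful):
Y = np.random.standard_normal((600, 127))
print("ER and GR select:", ahn_horenstein(Y, r_max=8))
</code></pre>
<br /><br />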
We next demonstrate how to obtain the Bai and Ng (2002) estimate of the optimal number of factors.<br /><br /> <h4 class="subseccol", id="sec3.1">Factor Selection using Bai and Ng (2002)</h4> From the open principal component dialog, we proceed as follows:<br /><br /> <ol> <li>Change the <b>Method</b> dropdown to <b>Bai and Ng</b>.</li> <li>Set the <b>User maximum factors</b> to <b>8</b>.</li> <li>Check the <b>Time-demean</b> box.</li> <li>Check the <b>Time-standardize</b> box.</li> <li>Click on <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/autofactsel/images/pca_dialog.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/pca_dialog.png" title="Principal Components Dialog" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1: Principal Components Dialog</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 1 :::::::::: --> Hitting OK, Eviews produces a spool output. The first part of this output is a summary of the principal component analysis. <!-- :::::::::: FIGURES 2a and 2b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 2a :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/pca_bn1.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/pca_bn1.png" title="Bai and Ng Summary: PCA Results" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 2b :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/pca_bn2.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/pca_bn2.png" title="Bai and Ng Summary: Factor Selection Results" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2a: Bai and Ng Summary: PCA Results</small> </center> </td> <td class="nb"> <center> <small>Figure 2b: Bai and Ng Summary: Factor Selection Results</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 2a and 2b :::::::::: --> The second part of the output, <b>Component Selection Results</b>, displays the summary of the Bai and Ng factor selection procedure. In particular, we see that each of the 6 selection criteria selected 8 factors. Naturally, the average number of selected factors is also 8. This result corresponds to the findings in the original FRED-MD paper, although the latter insists on using the PCP2 criterion. Accordingly, we can repeat the exercise above and show the specifics of the PCP2 selection. 
To do so, from the open group window, we again click on <b>View/Principal Components...</b>, and proceed as follows: <ol> <li>Change the <b>Method</b> dropdown to <b>Bai and Ng</b>.</li> <li>Change the <b>Criterion</b> dropdown to <b>PCP2</b>.</li> <li>Set the <b>User maximum factors</b> to <b>8</b>.</li> <li>Check the <b>Time-demean</b> box.</li> <li>Check the <b>Time-standardize</b> box.</li> <li>Click on <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURE 3 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/autofactsel/images/pca_bn3.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/pca_bn3.png" title="Bai and Ng PCP2: Factor Selection Results" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3: Bai and Ng PCP2: Factor Selection Results</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 3 :::::::::: --> The output above is a detailed look at the selection procedure. In particular, for each number of factors from 1 to 8, EViews displays the PCP2 statistic. Clearly, the minimum is achieved with 8 factors where the statistic equals 0.904325. Again, the number of factors selected matches that obtained in the FRED-MD paper.<br /><br /> <h4 class="subseccol", id="sec3.1">Factor Selection using Ahn and Horenstein (2013)</h4> Similar steps can be undertaken to obtain the Ahn and Horenstein (2013) factor selection results. From the open principal component dialog, we proceed as follows:<br /><br /> <ol> <li>Change the <b>Method</b> dropdown to <b>Ahn and Horenstein</b>.</li> <li>Set the <b>User maximum factors</b> to <b>8</b>.</li> <li>Check the <b>Time-demean</b> box.</li> <li>Check the <b>Time-standardize</b> box.</li> <li>Check the <b>Cross-demean</b> box.</li> <li>Check the <b>Cross-standardize</b> box.</li> <li>Click on <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURES 4a and 4b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 4a :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/pca_ah1.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/pca_ah1.png" title="Ahn and Horenstein Summary: PCA Results" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 4b :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/pca_ah2.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/pca_ah2.png" title="Ahn and Horenstein: Factor Selection Results" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 4a: Ahn and Horenstein: PCA Results</small> </center> </td> <td class="nb"> <center> <small>Figure 4b: Ahn and Horenstein: Factor Selection Results</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 4a and 4b :::::::::: --> The results of the Ahn and Horenstein (2013) procedure are markedly different. Unlike the preceding Bai and Ng exercises, here we have chosen to demean the factor (cross-sectional) dimension in addition to demeaning and standardizing the time dimension. This is in keeping with the suggestion in Ahn and Horenstein (2013) who suggest that the cross-sectional dimension should be demeaned to achieve superior results. In particular, the optimal number of factors selected is 1 using both the Eigenvalue Ratio and the Growth Ratio statistics. 
Clearly, this is very different from the 8 selected factors in the previous exercises.<br /><br /> <h4 class="subseccol", id="sec3.3">Factor Model Estimation</h4> Typically, the number of factors is not of interest in isolation. Rather, factor selection is a precursor to some form of estimation, such as a factor model or second-generation panel unit root tests. Here, we estimate a factor model using the full FRED-MD dataset and specify that the number of factors should be selected with the Bai and Ng (2002) procedure.<br /><br /> We start by creating a factor object. This is easily done by issuing the following command: <pre><code><br /> factor fact<br /> </code></pre> This will create a factor object in the workfile called <b class="wfobj">FACT</b>. We double click it to open it and then proceed to click on the <b>Estimate</b> button to bring up the estimation dialog. <!-- :::::::::: FIGURES 5a and 5b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 5a :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/fact_dialog1.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/fact_dialog1.png" title="Factor Dialog: Data Tab" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 5b :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/fact_dialog2.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/fact_dialog2.png" title="Factor Dialog: Estimation Tab" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5a: Factor Dialog: Data Tab</small> </center> </td> <td class="nb"> <center> <small>Figure 5b: Factor Dialog: Estimation Tab</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 5a and 5b :::::::::: --> The rest of the steps proceed as follows: <ol> <li>Under the <b>Data</b> tab, enter <b class="wfobj">GRP</b>.</li> <li>Click on the <b>Estimation</b> tab.</li> <li>From the <b>Number of factors</b> group, set the <b>Method</b> dropdown to <b>Bai and Ng</b>.</li> <li>From the <b>Max. Factors</b> dropdown select <b>User</b>.</li> <li>In the <b>User maximum factors</b> textbox write <b>8</b>.</li> <li>Check the <b>Time-demean</b> box.</li> <li>Check the <b>Time-standardize</b> box.</li> <li>Click on <b>OK</b>.</li> </ol><br /> This tells EViews to estimate a factor model of at most 8 factors, with the number of factors chosen from the full FRED-MD set of variables using the Bai and Ng (2002) procedure. 
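For reference, the object being estimated has the familiar static factor representation (a sketch in generic notation, not specific to EViews' output): $$ x_{it} = \lambda_i' f_t + e_{it}, \qquad i = 1, \ldots, N, \quad t = 1, \ldots, T $$ where $ f_t $ is the vector of common factors, $ \lambda_i $ the loadings for series $ i $, and $ e_{it} $ an idiosyncratic component; the Bai and Ng (2002) criteria are used to choose the dimension of $ f_t $.<br /><br />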
The output is reproduced below:<br /><br /> <!-- :::::::::: FIGURES 6a and 6b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 6a :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/fact_est1.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/fact_est1.png" title="Factor Estimation: Part 1" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 6b :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/fact_est2.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/fact_est2.png" title="Factor Estimation: Part 2" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 6a: Factor Estimation: Part 1</small> </center> </td> <td class="nb"> <center> <small>Figure 6b: Factor Estimation: Part 2</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 6a and 6b :::::::::: --> <h4 class="subseccol", id="sec3.3">Forecasting Industrial Production</h4> Having estimated a factor model, we now repeat the exercise of forecasting industrial production. The exercise is considered in the original FRED-MD paper where the forecast dynamics are summarized as follows: $$ y_{t+h} = \alpha_h + \beta_h(L)\hat{f}_t + \gamma_h(L)y_t $$ In other words, this is an $h-$step-ahead AR forecast with a constant and estimated factor as exogenous variables. In particular, to maintain comparability with the original exercise, we consider an 11-month-ahead forecast where $\hat{f}_t$ is obtained from the previously estimated factor model. In other words, we'll forecast for the period of available data in 2020. This exercise is repeated for the first estimated factor, the sum of the first two estimated factors, and no estimated factors, respectively.<br /><br /> As a first step in this exercise, we must extract the estimated factors. Although the factors are unobserved, they may be estimated from the estimated factor model as scores. In particular, proceed as follows: <ol> <li>From the open factor model, click on <b>Proc</b> and then <b>Make Scores...</b>.</li> <li>Under the <b>Output specification</b> enter <b>1 2</b>.</li> <li>Click on <b>OK</b>.</li> </ol><br /> This will produce two series in the workfile: <b class="wfobj">F1</b> and <b class="wfobj">F2</b>.<br /><br /> Next, let's forecast industrial production by leveraging the EViews native autoregressive forecast engine. To do so, double click on the series <b class="wfobj">INDPRO</b> to open it. Next, click on <b>Proc/Automatic ARIMA Forecasting...</b> to open the dialog. 
We now proceed with the following steps: <ol> <li>In the <b>Estimation sample</b> textbox, enter <b>1960M03 2019M12</b>.</li> <li>Under <b>Forecast length</b> enter <b>11</b>.</li> <li>Under the <b>Regressors</b> textbox, enter <b>C F1</b>.</li> <li>Click on the <b>Options</b> tab.</li> <li>Under the <b>Output forecast name</b>, enter <b>INDPRO_F1</b>.</li> <li>Ensure the <b>Forecast comparison graph</b> is checked.</li> <li>Click on <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURES 8a and 8b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 8a :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/forecast_dialog1.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/forecast_dialog1.png" title="Forecast Dialog: Specification" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 8b :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/forecast_dialog2.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/forecast_dialog2.png" title="Forecast Dialog: Options" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 8a: Forecast Dialog: Specification</small> </center> </td> <td class="nb"> <center> <small>Figure 8b: Forecast Dialog: Options</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 8a and 8b :::::::::: --> The options above specify that we wish to forecast the last 11 months of available data. Since our available sample runs from March 1960 to November 2020, we will estimate on the sample 1960 March through December 2019, and forecast out to November 2020.<br /><br /> <!-- :::::::::: FIGURES 9a and 9b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 9a :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/forecast_11m1.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/forecast_11m1.png" title="Forecast: Actuals vs Forecast" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 9b :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/forecast_11m2.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/forecast_11m2.png" title="Forecast: Forecast Comparison Graph" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 9a: Forecast: Actuals vs Forecast</small> </center> </td> <td class="nb"> <center> <small>Figure 9b: Forecast: Forecast Comparison Graph</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 9a and 9b :::::::::: --> For comparison, the same type of forecast is produced using <b>C (F1 + F2)</b> as exogenous variables, and <b>C</b> as the only exogenous variable. All three forecasts are superimposed on top of the original curve for comparison. This is reproduced below. 
<!-- :::::::::: FIGURE 10 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/autofactsel/images/forecast_11m3.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/forecast_11m3.png" title="Forecast Comparison" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 10: Forecast Comparison</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 10 :::::::::: --> <hr /> <h3 class="seccol", id="sec4">Files</h3> <ul> <li><a href="http://www.eviews.com/blog/autofactsel/workfiles/fred-md.wf1"><b class="wf">FRED-MD.WF1</b></a></li> <li><a href="http://www.eviews.com/blog/autofactsel/workfiles/fred-md.prg"><b class="wf">FRED-MD.PRG</b></a></li> </ul><br /><br /> <hr /> <h3 class="seccol", id="sec5">References</h3> <ol class="bib2xhtml"> <li id="bai-ng-2002"> Bai J and Ng S (2002), <i>"Determining the Number of Factors in Approximate Factor Models"</i>, Econometrica, Vol. 70, pp. 191-221. Wiley Online Library. </li> <li id="ahn-horenstein-2013"> Ahn SC and Horenstein AR (2013), <i>"Eigenvalue Ratio Test for the Number of Factors"</i>, Econometrica, Vol. 81, pp. 1203-1227. Wiley Online Library. </li> <li id="mccracken-ng-2016"> McCracken MW and Ng S (2016), <i>"FRED-MD: A Monthly Database for Macroeconomic Research"</i>, Journal of Business & Economic Statistics, Vol. 34, pp. 574-589. Taylor & Francis. </li> </ol></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com0tag:blogger.com,1999:blog-6883247404678549489.post-47321381959468296912020-12-21T09:09:00.000-08:002020-12-21T09:09:28.715-08:00Using Indicator Saturation to Detect Outliers and Structural Shifts<style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { //border: 1px solid black; } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } .seccol { } .subseccol { color: #fa5e5e } .wf { } .wfobj { } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { rbrace: ['{\\left(#1\\right)}', 1], cbrace: ['{\\left\\{#1\\right\\}}', 1], sbrace: ['{\\left[#1\\right]}', 1], bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1], series: ['{\\left\\{#1_{#2}\\right\\}_{#2=#3}^{#4}}', 4], xsum: ['{\\sum_{#1=#2}^{#3}{#4}}', 4], var: ['{\\operatorname\{var\}}'], sign: ['{\\operatorname\{sign\}}'], diag: ['{\\operatorname\{diag\}}'], med: ['{\\operatorname\{median\}}'], vec: ['{\\operatorname\{vec\}}'], tr: ['{\\operatorname\{tr\}}'] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"> </script> <span style="font-family: "verdana" sans-serif"> One of the potential pitfalls when working with time series datasets is that the data may have temporary or permanent changes to its levels. These changes could be single time-period outliers, or a fundamental structural shift.<br /><br /> EViews 12 introduces a new technique to detect and model these outliers and structural changes through <a href='http://eviews.com/help/helpintro.html#page/content%2FRegress2-Indicator_Saturation.html%23'>indicator saturation</a>. 
<a name='more'></a><br /><br /> <h3 class="seccol">Table of Contents</h3> <ol> <li><a href="#sec1">Indicator Saturation</a> <li><a href="#sec2">AutoSearch/GETS</a> <li><a href="#sec3">An Application with Consumption and Income</a> </ol><br /> <h3 class="seccol", id="sec1">Indicator Saturation</h3> Identifying changes in data is essential if we are to properly estimate models based upon these data. One way to detect changes would be to include in your regression dummy or indicator variables for observations where a change potentially occurs, and then decide whether that included indicator is a valid regressor. Such variables could include: <ul> <li><b>Impulse Indicators</b> (IIS): a dummy variable equal to zero everywhere other than a single value of one at period $ t $. This indicator can be used to model single observation outliers, and is equivalent to the <b>@isperiod</b> EViews function used at the date corresponding to $ t $.</li> <li><b>Step Indicators</b> (SIS): a step function variable equal to zero until $ t $ and one thereafter. This indicator can be used to model a shift in the intercept of an equation, and is equivalent to the <b>@after</b> EViews function used at the date corresponding to $ t $.</li> <li><b>Trend Indicators</b> (TIS): a trend-break variable that is equal to zero until period $ t $ and then follows a trend afterward. This indicator can be used to model a change in the trend of an equation (or the introduction of a trend term if one didn’t previously exist), and is equivalent to the <b>@trendbr</b> function used at the date corresponding to $ t $.</li> </ul><br /> The problem with the approach of including these variables in a traditional regression setting is that unless you know the specific dates where changes occur, you can quickly run into a situation where you have more variables than observations (since you’ll be adding at least one indicator variable for each observation in your estimation sample!).<br /><br /> Fortunately, recent advancements in variable selection techniques have meant that we can now perform variable selection on models with many more variables than observations, and so can saturate our regression with complex combinations of indicator variables and let the variable selection technique choose which are the most appropriate indicators to use.<br /><br /><br /> <h3 class="seccol", id="sec2">AutoSearch/GETS</h3> One of the new technologies introduced in EViews 12 is the <a href='http://eviews.com/help/helpintro.html#page/content%2FVarsel-Background.html%23ww277256'><b>AutoSearch/GETS</b></a> algorithm for variable selection.<br /><br /> AutoSearch/GETS is a method of variable selection that follows the steps suggested by the AutoSEARCH algorithm of <a href='http://www.sucarrat.net/research/autofim.pdf'>Escribano and Sucarrat (2011)</a>, which in turn builds upon the work in <a href='http://www.sucarrat.net/research/autofim.pdf'>Hoover and Perez (1999)</a>, and is similar to the technology behind the <b>Autometrics™</b> module in <a href='https://www.doornik.com/products.html#PcGive'><b>PcGive™</b></a>.<br /><br /> Mechanically the algorithm is similar to a <a href='http://eviews.com/help/helpintro.html#page/content%2FVarsel-Background.html%23ww277180'>backwards uni-directional stepwise</a> method: <ol> <li>The model with all search variables (termed the general unrestricted model, GUM) is estimated, and checked with a set of diagnostic tests.</li> <li>A number of 
search paths are defined, one for each insignificant search variable in the GUM.</li> <li>For each path, the insignificant variable defined in 2) is removed and then a series of further variable removal steps is taken, each time removing the most insignificant variable, and each time checking whether the current model passes the set of diagnostic tests. If the diagnostic tests fail after the removal of a variable, that variable is placed back into the model and prevented from being removed again along this path. Variable removal finishes once there are no more insignificant variables, or it is impossible to remove a variable without failing the diagnostic tests.</li> <li>Once all paths have been calculated, the final models produced by the paths are compared using an information criterion. The best model is then selected.</li> </ol><br /> One of the advantages of AutoSearch/GETS is that the set of candidate variables can be split into sets, with the search performed on each set one at a time; the selected variables from each set can then be combined into a final set to be searched. This allows you to test more candidate variables than you have observations without creating singularities (as long as enough candidate variables are rejected), which means it is a perfect algorithm for indicator saturation studies.<br /><br /><br /> <h3 class="seccol", id="sec3">An Application with Consumption and Income</h3> To demonstrate this feature, we will estimate a simple personal consumption equation, using the log-difference of personal consumption as the dependent variable, regressed against a constant and log-differenced disposable income. This estimation is purely for demonstration of the saturation features in EViews 12, and should not be taken as worthy macroeconomic research!<br /><br /> Both data series were downloaded directly from the Federal Reserve Bank of St. Louis database, FRED, and contain monthly observations between 2002 and April 2020:<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/indicators/images/FRED.gif"><img height="auto" src="http://www.eviews.com/blog/indicators/images/FRED.gif" title="FRED" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1: FRED</small> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 1 :::::::::: --> We begin by estimating a simple equation without any indicators included, using the following steps: <ol> <li>Click on <b>Quick/Estimate Equation</b> to bring up the equation estimation dialog.</li> <li>Enter our dependent variable <b>DLOG(CONS)</b> followed by a constant and our regressor <b>DLOG(INCOME)</b>.</li> <li>Click <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURES 2a and 2b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 2a :::::::::: --> <center> <a href="http://www.eviews.com/blog/indicators/images/SimpleEqDiag.png"><img height="auto" src="http://www.eviews.com/blog/indicators/images/SimpleEqDiag.png" title="Simple Estimation Dialog" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 2b :::::::::: --> <center> <a href="http://www.eviews.com/blog/indicators/images/SimpleEqRes.png"><img height="auto" src="http://www.eviews.com/blog/indicators/images/SimpleEqRes.png" title="Simple Estimation Output" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2a: Simple Estimation Dialog</small> <small>(Click to expand)</small> </center> 
</td> <td class="nb"> <center> <small>Figure 2b: Simple Estimation Output</small> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 2a and 2b :::::::::: --> Note that the coefficient on log-differenced income is negative and statistically significant. Also note that we have an R-squared of 35%.<br /><br /> If we click on the <b>Resids</b> button we can view a graph of the equation residuals.<br /><br /> <!-- :::::::::: FIGURE 3 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/indicators/images/SimpleEqResid.png"><img height="auto" src="http://www.eviews.com/blog/indicators/images/SimpleEqResid.png" title="Estimation Residuals" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3: Estimation Residuals</small> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 3 :::::::::: --> A quick eyeball test suggests that something happened towards the end of 2004, again in the middle of 2008 and then in 2013. And obviously there was a huge shift at the start of the Covid-19 crisis in March/April 2020.<br /><br /> Now we’ll estimate a new equation where we will instruct EViews to detect both impulse (outlier) and step-shift (change in intercept) indicators, with the following steps: <ol> <li>Click on <b>Quick/Estimate Equation</b> to bring up the equation estimation dialog.</li> <li>Enter our dependent variable <b>DLOG(CONS)</b> followed by a constant and our regressor <b>DLOG(INCOME)</b>.</li> <li>Switch to the <b>Options Tab</b> and select <b>Auto-detection</b> under <b>Outliers/indicator saturation</b>.</li> <li>Press the <b>Options</b> button and select both <b>Impulse</b> and <b>Step-shift</b> indicators.</li> <li>Change the <b>Terminal condition p-value</b> to <b>0.01</b> (which will allow for more indicators entering the equation).</li> <li>Click <b>OK</b> twice.</li> </ol><br /> <!-- :::::::::: FIGURES 4a and 4b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 4a :::::::::: --> <center> <a href="http://www.eviews.com/blog/indicators/images/ImpulseEst.gif"><img height="auto" src="http://www.eviews.com/blog/indicators/images/ImpulseEst.gif" title="Impulse Estimation" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 4b :::::::::: --> <center> <a href="http://www.eviews.com/blog/indicators/images/ImpulseRes.png"><img height="auto" src="http://www.eviews.com/blog/indicators/images/ImpulseRes.png" title="Impulse Estimation Output" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 4a: Impulse Estimation</small> <small>(Click to expand)</small> </center> </td> <td class="nb"> <center> <small>Figure 4b: Impulse Estimation Output</small> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 4a and 4b :::::::::: --> You can see that five indicators have been added to the equation, with three single observation indicators (2018M12, 2020M03, 2020M04), and two level shift indicators (2008M05, 2013M01).<br /><br /> The impact of these variables on the log-differenced income coefficient is dramatic, as is the resulting R-squared.<br /><br /> Viewing the residual graph shows that the large outliers have been removed, and the location of detected indicators, as shown by the vertical lines, corresponds to the outliers we eyeballed in the original equation.<br /><br /> <!-- :::::::::: FIGURE 5 :::::::::: --> <center> <table> 
<tr> <td> <center> <a href="http://www.eviews.com/blog/indicators/images/ImpulseResid.png"><img height="auto" src="http://www.eviews.com/blog/indicators/images/ImpulseResid.png" title="Impulse Residuals" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5: Impulse Residuals</small> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 5 :::::::::: --> </span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com0tag:blogger.com,1999:blog-6883247404678549489.post-58039045047729172952020-12-08T07:42:00.007-08:002020-12-08T08:04:04.203-08:00Nowcasting GDP with PMI using MIDAS-GETS<style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { border: 1px solid black; } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } .seccol { } .subseccol { color: #fa5e5e } .wf { } .wfobj { } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { rbrace: ['{\\left(#1\\right)}', 1], cbrace: ['{\\left\\{#1\\right\\}}', 1], sbrace: ['{\\left[#1\\right]}', 1], bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1], series: ['{\\left\\{#1_{#2}\\right\\}_{#2=#3}^{#4}}', 4], xsum: ['{\\sum_{#1=#2}^{#3}{#4}}', 4], var: ['{\\operatorname\{var\}}'], sign: ['{\\operatorname\{sign\}}'], diag: ['{\\operatorname\{diag\}}'], med: ['{\\operatorname\{median\}}'], vec: ['{\\operatorname\{vec\}}'], tr: ['{\\operatorname\{tr\}}'] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"> </script> <span style="font-family: "verdana" sans-serif"> <b>Nowcasting</b>, <a href='https://en.wikipedia.org/wiki/Nowcasting_(economics)'>the act of predicting the current or near-future state of a macro-economic variable</a>, has become one of the more popular research topics performed in EViews over the past decade.<br /><br /> Perhaps the most important technique in nowcasting is mixed data sampling, or MIDAS. We have discussed <a href='https://en.wikipedia.org/wiki/Mixed-data_sampling'>MIDAS</a> estimation in EViews in a couple of prior guest <a href='http://blog.eviews.com/2018/12/nowcasting-gdp-on-daily-basis.html'>blog posts</a>, but with the introduction of a <a href='http://eviews.com/EViews12/ev12ecest_n.html#midas'>new MIDAS technique</a> in the recently released EViews 12, we thought we'd give another demonstration. 
<a name='more'></a><br /><br /> <h3 class="seccol">Table of Contents</h3> <ol> <li><a href="#sec1">MIDAS – A Brief Background</a> <ul> <li><a href="#sec1.1">MIDAS-GETS</a> </ul> <li><a href="#sec2">MIDAS as a Nowcasting Tool</a> <ul> <li><a href="#sec1.1">PMI as a Nowcasting Instrument</a> </ul> <li><a href="#sec3">Nowcasting Exercises</a> <ul> <li><a href="#sec3.1">MIDAS-PDL</a> <li><a href="#sec3.2">MIDAS-GETS</a> <li><a href="#sec3.3">MIDAS-GETS with Indicator Saturation</a> <li><a href="#sec3.4">Evaluating Nowcasting Models</a> </ul> </ol><br /> <h3 class="seccol", id="sec1">MIDAS – A Brief Background</h3> <b>MIxed DAta Sampling</b> (MIDAS) is a regression technique that handles the case where the dependent variable is sampled or reported at a lower frequency than that of one, or more, of the independent regressors. This is common in macroeconomics where a number of important indicators, such as GDP, are usually reported on a quarterly basis, and other indicators, such as unemployment or stock prices, are reported on a monthly or even weekly basis.<br /><br /> The traditional approach to dealing with this mixed-frequency problem is to aggregate the higher-frequency variable into the same frequency as the lower. For example, when dealing with quarterly GDP and monthly unemployment, it's common practice to use the average monthly unemployment rate over the three months in a quarter as a single quarterly observation. Whilst simple to implement, this approach loses fidelity in the higher-frequency variables. Any within-quarter movements in unemployment are lost, and the dataset is reduced by 2/3 (converting three observations into one).<br /><br /> MIDAS alleviates this issue by adding the individual components of the higher-frequency variable as independent regressors, allowing a separate coefficient for each component. For example, unemployment could have three separate regressors, one for the first month of the quarter, one for the second, and one for the third. This simple approach is called <b>U-MIDAS</b>.<br /><br /> A drawback of creating a regressor for each high-frequency component is that, in certain cases, one quickly saturates the equation with many regressors (curse of dimensionality). For instance, whereas monthly unemployment and quarterly GDP would generate 3 regressors for the one underlying variable, annual data would generate 12 regressors. If we had daily interest rates regressed with quarterly data, we would have over 90 regressors for the one underlying variable.<br /><br /> To mitigate this expansion of regressors, traditional MIDAS utilizes a selection of weighting schemes that parameterize the higher frequency variables into a smaller number of coefficients. The most common of these weighting schemes is <b>Almon/PDL</b> weighting.<br /><br /> A last note on MIDAS – although it is natural to want to include a number of high-frequency variables equal to the number of high-frequency periods per low frequency period (i.e. include three monthly variables since there are three months in a quarter), there is nothing that mathematically imposes this restriction in the MIDAS framework, and it is quite common to use many more variables than the natural number. <br /><br /> Going back to our unemployment/GDP example, you may want to utilize 9 months of unemployment data to explain GDP, and thus create 9 variables. 
In other words, you may determine that Q1 GDP is determined by unemployment in March, February, January (the three natural months), as well as the 6 months previous (December, November, October, September, August, July). <br /><br /> Of course, you can also impose a lag structure to postulate that Q1 GDP is determined by February, January, …, June (a one-month lag), or is determined by December, November, …, April (a three-month lag). These 9 variables may then be reduced to a smaller number of coefficients using MIDAS weighting schemes, or, if the sample size permits, kept at 9 separate regressors.<br /><br /> <h4 class="subseccol", id="sec1.1">MIDAS-GETS</h4> EViews 12 introduces a new MIDAS estimation method, <a href='http://eviews.com/help/helpintro.html#page/content%2Fmidas-Background.html%23ww331980'><b>MIDAS-GETS</b></a>. Rather than using a weighting scheme to reduce the number of variables, MIDAS-GETS controls the curse of dimensionality with the <a href='http://eviews.com/help/helpintro.html#page/content%2FVarsel-Background.html%23ww277256'><b>Auto-Search/GETS</b></a> variable selection algorithm to select which of the high frequency variables to include in the regression.<br /><br /> Since the Auto-Search/GETS algorithm is also used in EViews' indicator saturation detection routines, <a href='http://eviews.com/help/helpintro.html#page/content%2FRegress2-Indicator_Saturation.html%23'>indicator saturation</a> is available to MIDAS-GETS too. This means that the estimation can automatically include indicator variables that allow for outliers and structural changes in the model, which can dramatically enhance the forecasting performance of a model.<br /><br /><br /> <h3 class="seccol", id="sec2">MIDAS as a Nowcasting Tool</h3> Although MIDAS was not necessarily introduced as a tool for nowcasting, its applicability to nowcasting is obvious; whilst traditional macroeconomic variables are typically sampled at low frequencies and with a reporting delay, high frequency data are often available in a timely fashion and can be used to estimate the current state of a low frequency variable.<br /><br /> More concretely, take Eurozone GDP. This important macro variable is released by <a href='https://ec.europa.eu/eurostat/news/release-calendar'>Eurostat</a> on a quarterly basis, usually 3 months after the quarter has ended. Thus, if you are at the end of July and want to know what the current GDP is, you must wait until December to receive the official statistics.<br /><br /> However, there may be monthly, or even daily, variables available without a delay. Unlike their delayed, lower-frequency counterparts, these can be used to estimate the current value of GDP immediately.<br /><br /> <h4 class="subseccol", id="sec2.1">PMI as a Nowcasting Instrument</h4> Among the more popular variables used in nowcasting exercises are economic surveys. Surveys can be released at a high frequency with little delay and are often highly correlated with more traditional macroeconomic variables. Here at EViews we're fans of the <a href='https://www.markiteconomics.com/'><b>Purchasing Managers' Index</b></a> (PMI). The latter is derived from surveys of senior executives at private sector companies, is released monthly, and reflects the current state of the economy (i.e., has little delay between the survey and the release). 
In particular, we like the Eurozone composite measure which consistently shows a high correlation with growth in Eurozone GDP:<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/midas_gets/images/correlation.png"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/correlation.png" title="Eurozone PMI" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1: Eurozone PMI</small><br /> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 1 :::::::::: --> <h3 class="seccol", id="sec3">Nowcasting Exercises</h3> As a simple demonstration of nowcasting with various MIDAS approaches, we're going to run a little exercise that uses monthly Eurozone composite PMI to nowcast quarterly Eurozone GDP growth.<br /><br /> Specifically, we have an EViews workfile with two pages: the first contains quarterly data from 1998q3 to 2020q3 with Eurozone GDP Growth (<b class="wfobj">GDP_GR</b>), whereas the second contains monthly data over the same period with Eurozone Composite PMI (<b class="wfobj">PMICMPEMU</b>).<br /><br /> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/midas_gets/images/workfile.gif"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/workfile.gif" title="Workfile" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2: Workfile</small><br /> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 2 :::::::::: --> <h4 class="subseccol", id="sec3.1">MIDAS-PDL</h4> To begin, we'll pretend we are currently at the start of March 2019 and wish to nowcast the current (2019Q1) value of Eurozone GDP growth. We have our February PMI data handy (and all previous months). We'll estimate a standard MIDAS equation in EViews, using data until Q4 2018 to estimate our model, then use the February PMI with that equation to nowcast Q1 2019. We'll assume that GDP growth is explained by 12 months of PMI data and by the previous quarterly value of GDP growth. The steps we perform are: <ol> <li>Ensure we have the Quarterly page selected.</li> <li>Quick->Estimate Equation</li> <li>Select <b>MIDAS</b> as the <b>Method</b>.</li> <li>Enter <b>GDP_GR C GDP_GR(-1)</b> as the dependent variable and quarterly regressors (a constant and the lagged value of GDP growth).</li> <li>Enter <b>Monthly\PMICMPEMU(-1)</b> as the high frequency regressor. 
The (-1) here indicates that we wish to use data up until the second month of the quarter (the default is the third/last month of the quarter, so by lagging it one month, we use data until the second month).</li> <li>Set the <b>Sample</b> to end in 2018q4.</li> </ol><br /> <!-- :::::::::: FIGURE 3 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/midas_gets/images/MIDASPDL.gif"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/MIDASPDL.gif" title="MIDAS PDL Estimation Dialog" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3: MIDAS PDL Estimation Dialog</small><br /> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 3 :::::::::: --> The default MIDAS weighting method in EViews is PDL/Almon weighting with a polynomial degree of 3, which is what we'll use if we just click <b>OK</b>:<br /><br /> <!-- :::::::::: FIGURE 4 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/midas_gets/images/MIDASPDL.png"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/MIDASPDL.png" title="MIDAS PDL Estimation Output" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 4: MIDAS PDL Estimation Output</small><br /> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 4 :::::::::: --> Since this is a forecasting/nowcasting exercise, we won't delve into interpretation of these results, other than to note that all three MIDAS PDL terms are statistically significant.<br /><br /> Now, to perform the nowcast, we can simply use EViews' built in forecast engine and forecast for the “current” quarter (2019Q1). This is done with the following steps: <ol> <li>Click the <b>Forecast</b> button to bring up the forecast dialog.</li> <li>Change the <b>Forecast sample</b> to <b>2019Q1 2019Q1</b> (just a single period).</li> <li>Click <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURE 5 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/midas_gets/images/forecastdlg.png"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/forecastdlg.png" title="Forecast Dialog" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5: Forecast Dialog</small><br /> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 5 :::::::::: --> The forecast will produce a new series in the workfile, <b class="wfobj">GDP_GRF</b> containing actual values for all observations other than 2019Q1, where it will contain the forecasted value. We can open this series together with the actual series in a group, and then graph it to see how close the single forecasted value is to the historical actual:<br /><br /> <!-- :::::::::: FIGURE 6 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/midas_gets/images/forecast.png"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/forecast.png" title="MIDAS Forecast" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 6: MIDAS Forecast</small><br /> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 6 :::::::::: --> The results seem a little underwhelming despite being just a single observation. 
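As an aside, before moving on, the same estimate-and-nowcast step can also be run by command using the equation object's <b>midas</b> and <b>forecast</b> procs, the same syntax used in the evaluation program later in this post. This is just a sketch; the equation name <b>EQ_PDL</b> and the output series <b>GDP_PDL_F</b> are our own choices: <pre><code><br /> <span style="color: green;">'estimate the MIDAS-PDL equation on the quarterly page, ending the sample in 2018Q4</span><br /> smpl @first 2018q4<br /> equation eq_pdl.midas(fixedlag=12) gdp_gr c gdp_gr(-1) @ monthly\pmicmpemu(-1)<br /> <br /> <span style="color: green;">'nowcast the current quarter (2019Q1)</span><br /> smpl 2019q1 2019q1<br /> eq_pdl.forecast gdp_pdl_f<br /> smpl @all<br /> </code></pre> <br />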
Let's see if we can improve this forecast with the new MIDAS-GETS weighting method.<br /><br /> <h4 class="subseccol", id="sec3.2">MIDAS-GETS</h4> To perform the new estimation, we undertake the same steps as before, but additionally change the weighting method: <ol> <li>Quick->Estimate Equation</li> <li>Select <b>MIDAS</b> as the <b>Method</b>.</li> <li>Enter <b>GDP_GR C GDP_GR(-1)</b> as the dependent variable and quarterly regressors.</li> <li>Enter <b>Monthly\PMICMPEMU(-1)</b> as the high frequency regressor.</li> <li>Enter <b>12</b> as the <b>Fixed Lags</b> parameter to indicate each quarter is explained by 12 months of data.</li> <li>Set the <b>Sample</b> to end in 2018q4.</li> <li>Switch the <b>Options Tab</b>.</li> <li>Change <b>MIDAS weights</b> to <b>Auto/GETS</b>.</li> </ol><br /> <!-- :::::::::: FIGURES 7a and 7b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 7a :::::::::: --> <center> <a href="http://www.eviews.com/blog/midas_gets/images/MIDASGETS.gif"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/MIDASGETS.gif" title="MIDAS-GETS Estimation Dialog" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 7b :::::::::: --> <center> <a href="http://www.eviews.com/blog/midas_gets/images/MIDASGETS.png"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/MIDASGETS.png" title="MIDAS-GETS Estimation Output" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 7a: MIDAS-GETS Estimation Dialog</small><br /> <small>(Click to expand)</small> </center> </td> <td class="nb"> <center> <small>Figure 7b: MIDAS-GETS Estimation Output</small><br /> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 7a and 7b :::::::::: --> Again, we won't delve into interpretation of these results, other than to mention that out of the 12 months of possible PMI data that could be used to explain each quarter, the equation chose to use only the two most recent months (denoted lags). We'll follow the exact same steps as previously to produce a forecast from this equation:<br /><br /> <!-- :::::::::: FIGURE 8 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/midas_gets/images/forecast2.png"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/forecast2.png" title="MIDAS-GETS Forecast" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 8: MIDAS-GETS Forecast</small><br /> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 8 :::::::::: --> The nowcast looks better than the previous model's, although again it is only a single data point.<br /><br /> <h4 class="subseccol", id="sec3.3">MIDAS-GETS with Indicator Saturation</h4> Finally, we'll estimate a MIDAS-GETS model that includes indicator saturation. This will automatically model outliers and structural changes in our equation. We follow the same steps as before but use the Auto/GETS options button to include searching for indicator variables. 
We will, in this case, search for outliers by only selecting impulse indicators.<br /><br /> <!-- :::::::::: FIGURES 9a and 9b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 9a :::::::::: --> <center> <a href="http://www.eviews.com/blog/midas_gets/images/MIDASGETSIS.gif"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/MIDASGETSIS.gif" title="MIDAS-GETS (Indicator Saturation) Estimation Dialog" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 7b :::::::::: --> <center> <a href="http://www.eviews.com/blog/midas_gets/images/MIDASGETSIS.png"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/MIDASGETSIS.png" title="MIDAS-GETS (Indicator Saturation) Estimation Output" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 9a: MIDAS-GETS (Indicator Saturation) Estimation Dialog</small><br /> <small>(Click to expand)</small> </center> </td> <td class="nb"> <center> <small>Figure 9b: MIDAS-GETS (Indicator Saturation) Estimation Output</small><br /> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 9a and 9b :::::::::: --> The results are worth a quick mention. The GETS routine selected eight periods with outliers. In particular, it included dummy variables for 8 quarters (2001Q1, 2005Q3, 2008Q2, 2008Q3, 2009Q1, 2010Q2, 2011Q2, 2013Q2), <b>and</b> chose to include more months of PMI data: namely, the first and second months of the current quarter, as well as 6, 9 and 12 months prior. In concrete terms, this means, for example, in 2018Q1, the equation chose to use February 2018, January 2018, September 2017, June 2017 and March 2017 as regressors.<br /><br /> Forecasting is performed in the same way, and produces a similar looking forecast to the previous MIDAS-GETS model:<br /><br /> <!-- :::::::::: FIGURE 10 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/midas_gets/images/forecast3.png"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/forecast3.png" title="MIDAS-GETS (Indicator Saturation) Forecast" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 10: MIDAS-GETS (Indicator Saturation) Forecast</small><br /> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 10 :::::::::: --> <h4 class="subseccol", id="sec3.4">Evaluating Nowcasting Models</h4> The previous examples all performed a single point nowcast of GDP growth and a quick eyeball-test showed that MIDAS-GETS performed well. Here we'll demonstrate a formal nowcast evaluation exercise. In particular, we'll estimate a handful of different models on a rolling basis. The first estimation will again assume we are in February 2018, estimating on data from 1999Q3 through 2017Q4, and will then nowcast 2018Q1. We'll then move a quarter and assume we're in May 2018, estimate through 2018Q1 and nowcast 2018Q2. 
Next, we'll move another quarter and so on until 2019Q4, meaning we have eight rolling nowcasts.<br /><br /> We'll estimate and nowcast from six different equation specifications: <ol> <li>A simple AR(1) model with no PMI (GDP growth regressed against a lag and a constant).</li> <li>Simple AR(1) model with aggregated PMI (average of the available monthly PMI data).</li> <li>PDL/Almon MIDAS with 12 monthly lags of PMI and lagged GDP growth.</li> <li>U-MIDAS with 12 monthly lags of PMI and lagged GDP growth.</li> <li>MIDAS-GETS with 12 monthly lags of PMI and lagged GDP growth and no indicators.</li> <li>MIDAS-GETS with 12 monthly lags of PMI and lagged GDP growth with impulse indicators.</li> </ol><br /> Models 3, 5 and 6 are identical to those we estimated in the early examples. We've written a quick EViews program that will perform these nowcasts: <pre><code><br /> <span style="color: green;">'create gdp growth series</span><br /> series gdp_gr = @pca(eur_gdp)<br /> <br /> <span style="color: green;">'keep a list of equation names for easier referencing later</span><br /> %eqlist = "eq_umid eq_agg eq_pdl eq_simple eq_getsis eq_gets"<br /> <br /> <span style="color: green;">'create empty forecast series for each equation</span><br /> group forcs gdp_gr<br /> <span style="color: blue;">for</span> %j {%eqlist}<br /> series gdp_{%j}<br /> forcs.add gdp_{%j}<br /> <span style="color: blue;">next</span><br /> <br /> <span style="color: green;">'estimate/nowcast loop</span><br /> <span style="color: blue;">for</span> !i=0 <span style="color: blue;">to</span> 7<br /> <span style="color: green;">'estimate</span><br /> smpl @first 2017q4+!i <br /> equation eq_simple.ls gdp_gr c gdp_gr(-1)<br /> equation eq_agg.ls gdp_gr c gdp_gr(-1) agg_pmi<br /> equation eq_pdl.midas(fixedlag=12) gdp_gr c gdp_gr(-1) @ monthly\pmicmpemu(-1)<br /> equation eq_umid.midas(midwgt=umidas, fixedlag=12) gdp_gr c gdp_gr(-1) @ monthly\pmicmpemu(-1)<br /> equation eq_gets.midas(fixedlag=12, midwgt=autogets) gdp_gr c gdp_gr(-1) @ monthly\pmicmpemu(-1)<br /> equation eq_getsis.midas(fixedlag=12, midwgt=autogets, iis) gdp_gr c gdp_gr(-1) @ monthly\pmicmpemu(-1)<br /> <br /> <span style="color: green;">'nowcast</span><br /> smpl 2018q1+!i 2018q1+!i<br /> <span style="color: blue;">for</span> %j {%eqlist}<br /> {%j}.forecast temp<br /> gdp_{%j} = temp<br /> d temp<br /> <span style="color: blue;">next</span><br /> <span style="color: blue;">next</span><br /> </code></pre> Once we have the six nowcast series of eight periods each, we can use EViews' built in forecast evaluation engine to compare the nowcasts, by opening up the series containing the true value (GDP_GR) and clicking on View->Forecast Evaluation, and then giving the names of the nowcast series. The results of the is evaluation are:<br /><br /> <!-- :::::::::: FIGURE 11 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/midas_gets/images/evaluation.png"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/evaluation.png" title="MIDAS Evaluation" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 11: MIDAS Evaluation</small><br /> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 11 :::::::::: --> From the evaluation statistics, we see that the MIDAS-GETS nowcast, <b class="wfobj">GDP_EQ_GETSIS</b> performs very well, with the indicator saturation version giving the lowest RMSE, MAE and SMAPE. 
The non-indicator version, <b class="wfobj">GDP_EQ_GETS</b>, also performs better than the other traditional MIDAS methods.<br /><br /><br /> </span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com0tag:blogger.com,1999:blog-6883247404678549489.post-46714369548145411542020-12-02T08:06:00.001-08:002020-12-09T13:18:38.709-08:00Wavelet Analysis: Part II (Applications in EViews)<style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { border: 1px solid black; } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } .seccol { } .subseccol { color: #fa5e5e } .wf { } .wfobj { } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { rbrace: ['{\\left(#1\\right)}', 1], cbrace: ['{\\left\\{#1\\right\\}}', 1], sbrace: ['{\\left[#1\\right]}', 1], bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1], series: ['{\\left\\{#1_{#2}\\right\\}_{#2=#3}^{#4}}', 4], xsum: ['{\\sum_{#1=#2}^{#3}{#4}}', 4], var: ['{\\operatorname\{var\}}'], sign: ['{\\operatorname\{sign\}}'], diag: ['{\\operatorname\{diag\}}'], med: ['{\\operatorname\{median\}}'], vec: ['{\\operatorname\{vec\}}'], tr: ['{\\operatorname\{tr\}}'] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"> </script> <span style="font-family: "verdana" sans-serif"> This is the second of two entries devoted to wavelets. <a href='http://blog.eviews.com/2020/11/wavelet-analysis-part-i-theoretical.html'>Part I</a> was devoted to theoretical underpinnings. Here, we demonstrate the use and application of these principles to empirical exercises using the wavelet engine released with EViews 12. <a name='more'></a><br /><br /> <h3 class="seccol">Table of Contents</h3> <ol> <li><a href="#sec1">Introduction</a> <li><a href="#sec2">Wavelet Transforms</a> <ul> <li><a href="#sec2.1">Example 1: Wavelet Transforms as Informal Tests for (Non-)Stationarity</a> <li><a href="#sec2.2">Example 2: MRA as Seasonal Adjustment</a> <li><a href="#sec2.3">Example 3: DWT vs. MODWT</a> </ul> <li><a href="#sec3">Variance Decomposition</a> <ul> <li><a href="#sec3.1">Example: MODWT Unbiased Variance Decomposition</a> </ul> <li><a href="#sec4">Wavelet Thresholding</a> <ul> <li><a href="#sec4.1">Example: Thresholding as Signal Extraction</a> </ul> <li><a href="#sec5">Outlier Detection</a> <ul> <li><a href="#sec5.1">Example: Bilen and Huzurbazar (2002) Outlier Detection</a> </ul> <li><a href="#sec6">Conclusion</a> <li><a href="#sec7">Files</a> <li><a href="#sec8">References</a> </ol><br /> <h3 class="seccol", id="sec1">Introduction to Wavelets</h3> The new EViews 12 release has introduced several new statistical and econometric procedures. Among them is an engine for wavelet analysis. This is a complement to the existing battery of techniques in EViews used to analyze and isolate features which characterize a time series. While there are undoubtedly numerous applications to wavelets such as regression, unit root testing, fractional integration order estimation, and bootstrapping (wavestrapping), here we highlight the new EViews wavelet engine. 
In particular, we focus on four of the most popular and widely used areas of wavelet analysis: <ul> <li>Transforms</li> <li>Variance decomposition</li> <li>Thresholding</li> <li>Outlier detection</li> </ul><br /><br /> <h3 class="seccol", id="sec2">Wavelet Transforms</h3> The first step in wavelet analysis is usually a wavelet transform of a time series of interest. This is similar in spirit to a Fourier transform. The time series is decomposed into its constituent spectral (frequency) features on a scale-by-scale basis. Recall that the idea of scale in wavelet analysis is akin to frequency in Fourier analysis. This is nothing more than a re-expression of the time series from its behaviour in the time domain to its behaviour in the frequency domain. This allows us to see which scales (frequencies) dominate in terms of activity.<br /><br /> <h4 class="subseccol", id="sec2.1">Example 1: Wavelet Transforms as Informal Tests for (Non-)Stationarity</h4> Many important and routine tasks in time series analysis require classifying data as stationary or non-stationary. The unit root tests available in EViews are designed to formally address such classifications. Nevertheless, wavelet transforms such as the discrete wavelet transform (DWT) or the maximum overlap discrete wavelet transform (MODWT) can also be used for a similar purpose. While formal wavelet-based unit root tests are available in the literature, here we focus on demonstrating how wavelets can be used as an exploratory tool for stationarity determination <i>in lieu</i> of a formal test.<br /><br /> Recall from the theoretical discussion of Mallat's algorithm in <a href='http://blog.eviews.com/2020/11/wavelet-analysis-part-i-theoretical.html'>Part I</a> that discrete wavelet transforms partition the frequency range into finer and finer blocks. For instance, at the first scale, the frequency range is split into two equal parts. The first, lower frequency part, is captured by the scaling coefficients and corresponds to the traditional (Fourier) frequency range $ \sbrace{0,\, \pi} $. The second, higher frequency part, is captured by the wavelet coefficients and corresponds to the traditional frequency range $ \sbrace{\pi,\, 2\pi} $. At the second stage, the lower frequency portion from the previous scale, namely the frequency region roughly corresponding to $ \sbrace{0,\, \pi} $ in the traditional Fourier context, is again split into two equal portions. Accordingly, the wavelet coefficients at scale 2 would roughly correspond to the traditional frequency region $ \sbrace{\frac{\pi}{2},\, \pi} $, whereas the scaling coefficients would roughly correspond to the traditional frequency region $ \sbrace{0,\, \frac{\pi}{2}} $, and so on.<br /><br /> This decomposition affords the ability to identify which features of the original time series data are dominant at which scale. In particular, if the spectra (read wavelet/scaling coefficient magnitudes) at a given scale are high, this would indicate that those coefficients are registering behaviours in the underlying data which dominate at said scale and frequency region. For instance, in the traditional Fourier context, if a series has very pronounced spectra near the frequency zero, this indicates that observations of that time series are very persistent (die off slowly). Naturally, one would classify such a series as non-stationary, possibly exhibiting a unit root. 
Alternatively, if a series has very pronounced spectra at higher frequencies, this indicates that the time series is driven by dynamics that frequently appear and disappear. In other words, the time series is driven by transient features and one would classify the time series as stationary. The analogue of this analysis in the context of wavelet analysis would proceed as follows.<br /><br /> At the first scale, if wavelet spectra dominate scaling spectra, the underlying series is dominated by higher frequency (transitory) forces and the series is most likely stationary. At scale two, if the scaling spectra dominate the wavelet spectra from the first and second scales, this indicates that lower frequency forces dominate higher frequency dynamics, providing evidence of non-stationarity. Naturally, this scale-based analysis carries on until the final decomposition scale.<br /><br /> To demonstrate the dynamics outlined above, we'll consider Canadian real exchange rate data extracted from the dataset in Pesaran (2007). This is a quarterly time series running from 1973Q1 to 1998Q4. The data can be found in <a href="http://www.eviews.com/blog/wavelets/workfiles/wavelets.wf1"'><b class="wf">WAVELETS.WF1</b></a>. The series we're interested in is <b class="wfobj">CANADA_RER</b>. We'll demonstrate with a discrete wavelet transform (DWT) and the Haar wavelet filter. To facilitate the discussion to follow, we will consider the transformation only up to the first scale.<br /><br /> To perform the transform, proceed in the following steps: <ol> <li>Double click on <b class="wfobj">CANADA_RER</b> to open the series window.</li> <li>Click on <b>View/Wavelet Analysis/Transforms...</b></li> <li>From the <b>Max scale</b> dropdown, select <b>1</b>.</li> <li>Click on <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURES 2a and 2b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 2a :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex1_1.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex1_1.png" title="Canadian RER: Discrete Wavelet Transform Part 1" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 2b :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex1_2.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex1_2.png" title="Canadian RER: Discrete Wavelet Transform Part 2" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2a: Canadian RER: Discrete Wavelet Transform Part 1</small> </center> </td> <td class="nb"> <center> <small>Figure 2b: Canadian RER: Discrete Wavelet Transform Part 2</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 2a and 2b :::::::::: --> The output is a spool object with the spool tree listing the summary, original series, as well as wavelet and scaling coefficients for each scale (in this case just 1). The first of these is a summary of the wavelet transformation performed. Note here that since the number of available observations is 104 a dyadic adjustment using the series mean was applied to achieve dyadic length.<br /><br /> The first plot in the output is a plot of the original series, in addition to the padded values in case a dyadic adjustment was applied. The last two plots are respectively the wavelet and scaling coefficients. 
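Since the Haar filter is used here, the scale-one coefficients have a particularly simple interpretation (a sketch, up to sign and indexing conventions): each wavelet coefficient is a normalized difference of adjacent observations, and each scaling coefficient a normalized average, $$ W_{1,t} = \frac{y_{2t} - y_{2t-1}}{\sqrt{2}}, \qquad V_{1,t} = \frac{y_{2t} + y_{2t-1}}{\sqrt{2}} $$ so that the wavelet coefficients act as a high-pass (differencing) filter and the scaling coefficients as a low-pass (averaging) filter.<br /><br />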
Recall that at the first scale, the wavelet decomposition effectively splits the frequency spectrum into two equal portions: the low and high frequency portions, respectively. Recall further that the low frequency portion is associated with the scaling coefficients $ \mathbf{V} $ whereas the high frequency portion is associated with the wavelet coefficients $ \mathbf{W} $.<br /><br /> Evidently, the spectra characterizing the wavelet coefficients are significantly less pronounced than those characterizing the scaling coefficients. This is an indication that the Canadian real exchange rate series is possibly non-stationary. Furthermore, observe that the wavelet plot has two dashed red lines. These represent the $ \pm 1 $ standard deviation bounds of the coefficients at that scale. This is particularly useful in visualizing which wavelet coefficients should be shrunk to zero (are insignificant) in wavelet shrinkage applications. (We will return to this later when we discuss wavelet thresholding outright.) Recall that coefficients exceeding some threshold bound (in this case the standard deviation) ought to be retained, while the remaining coefficients are shrunk to zero. From this we see that the majority of wavelet coefficients at scale 1 can be discarded. This is further evidence that high frequency forces in the <b class="wfobj">CANADA_RER</b> series are not very pronounced.<br /><br /> To justify the intuition, we can perform a quick ADF unit root test on <b class="wfobj">CANADA_RER</b>. To do so, from the open <b class="wfobj">CANADA_RER</b> series window, proceed as follows: <ol> <li>Click on <b>View/Unit Root Tests/Standard Unit Root Test...</b></li> <li>Click on <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURE 3 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/wavelets/images/canada_rer_ur.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/canada_rer_ur.png" title="Canadian RER: Unit Root Test" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3: Canadian RER Unit Root Test</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 3 :::::::::: --> Our intuition is indeed correct. From the unit root test output it is clear that the p-value associated with the ADF unit root test is 0.7643 -- too high to reject the null hypothesis of a unit root at any meaningful significance level.<br /><br /> While the wavelet decomposition is not a formal test, it is certainly a great way of identifying which scales (read frequencies) dominate the underlying series behaviour. Naturally, this analysis is not limited to the first scale. To see this, we will repeat the exercise above using the maximum overlap discrete wavelet transform (MODWT) with the Daubechies (daublet) filter of length 6. We will also perform the transform up to the maximum scale possible, and also indicate which and how many wavelet coefficients are affected by the boundary. 
(See <a href='http://blog.eviews.com/2020/11/wavelet-analysis-part-i-theoretical.html'>Part I</a> for a discussion of boundary conditions.)<br /><br /> From the open <b class="wfobj">CANADA_RER</b> series window, we proceed in the following steps: <ol> <li>Click on <b>View/Wavelet Analysis/Transforms...</b></li> <li>Change the <b>Decomposition</b> dropdown to <b>Overlap transform - MODWT</b>.</li> <li>Change the <b>Class</b> dropdown to <b>Daubechies</b>.</li> <li>From the <b>Length</b> dropdown select <b>6</b>.</li> <li>Click on <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURES 4a, 4b, and 4c :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 4a :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex2_1.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex2_1.png" title="Canadian RER: MODWT Part 1" width="240" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 4b :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex2_2.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex2_2.png" title="Canadian RER: MODWT Part 2" width="240" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 4c :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex2_3.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex2_3.png" title="Canadian RER: MODWT Part 3" width="240" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 4a: Canadian RER: MODWT Part 1</small> </center> </td> <td class="nb"> <center> <small>Figure 4b: Canadian RER: MODWT Part 2</small> </center> </td> <td class="nb"> <center> <small>Figure 4c: Canadian RER: MODWT Part 3</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 4a, 4b, and 4c :::::::::: --> As before, the output is a spool object with wavelet and scaling coefficients as individual spool elements. Since the MODWT is not an orthonormal transform and since it uses all of the available observations, wavelet and scaling coefficients are of input series length and do not require length adjustments. Notice the significantly more pronounced ''wave'' behaviour across wavelet coefficients and scales. This is a consequence of the fact that the MODWT is not an orthonormal transform and is significantly more redundant than its DWT counterpart. In other words, patterns retain their momentum as they evolve.<br /><br /> Analogous to the DWT, the MODWT partitions the frequency range into finer and finer blocks. At the first scale, we see that only a few wavelet coefficients exhibit significant spikes (i.e. exceed the threshold bounds). At scales two and three, it is evident that transient features persist, but beyond that they don't seem to contribute much. In contrast, the scaling coefficients at the final scale (scale 6) are roughly twice as large (0.20) as the largest wavelet spectrum (0.10) which manifests at scales 1 and 2. These are all indications that lower frequency forces dominate those at higher frequencies and that the underlying series is most likely non-stationary.<br /><br /> Finally, notice that for each scale, those coefficients affected by the boundary are displayed in red, and their count reported in the legends. A vertical dashed black line shows the region up to which the boundary conditions persist. Boundary coefficients are an important consequence of longer filters and higher scales. 
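<br /><br /> To get a sense of how quickly boundary coefficients accumulate, the short sketch below evaluates the boundary-coefficient counts given in <a href='http://blog.eviews.com/2020/11/wavelet-analysis-part-i-theoretical.html'>Part I</a> (the standard Percival and Walden (2000) expressions) for a filter of length 6 and a sample of 104 observations, matching the example above. The helper names are ours and the snippet is illustrative only.
<pre><code>
import math

def dwt_boundary_count(L, j):
    # Number of DWT boundary coefficients at scale j for a filter of length L.
    return math.ceil((L - 2) * (1.0 - 2.0 ** (-j)))

def modwt_boundary_count(L, j, T):
    # Number of MODWT boundary coefficients at scale j, capped at the sample size T.
    return min((2 ** j - 1) * (L - 1) + 1, T)

for j in range(1, 7):
    print(j, dwt_boundary_count(6, j), modwt_boundary_count(6, j, 104))
</code></pre>
By scale 5 the MODWT count already hits the sample size of 104, so every coefficient at that scale and beyond is a boundary coefficient.<br /><br />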
Evidently, as the scale is increased, boundary coefficients eventually consume the entire set of coefficients. Moreover, since the MODWT is a redundant transform, the number of boundary coefficients will always be greater than those in the orthonormal DWT. As before, the $ \pm 1 $ standard deviation bounds are available for reference.<br /><br /> <h4 class="subseccol", id="sec2.2">Example 2: MRA as Seasonal Adjustment</h4> It's worth noting that multiresolution analysis (MRA) is often used as an intermediate step toward some final inferential procedure. For instance, if the objective is to run a unit root test on some series, we may wish to do so on the true signal, having discarded the noise, in order to get a more reliable test. Similarly, we may wish to run regressions on series which have been <i>smoothed</i>. Discarding noise from regressors may prevent clouding of inferential conclusions. This is the idea behind most existing smoothing techniques in the literature.<br /><br /> In fact, wavelets are very well adapted to isolating many different kinds of trends and patterns, whether seasonal, non-stationary, non-linear, etc. Here we demonstrate their potential using an artificial dataset with a quarterly seasonality. In particular, we generate 128 random normal variates and excite every first quarter with a shock. These modified normal variates are then fed as innovations into a stationary autoregressive (AR) process. This is achieved with a few commands in the command window or an EViews program as follows: <pre><code><br /> rndseed 128 <span style="color: green;">'set the random seed</span><br /> wfcreate q 1989 2020 <span style="color: green;">'make quarterly workfile with 128 quarters</span><br /><br /> series eps = 8*(@quarter=1) + @rnorm <span style="color: green;">'create random normal innovations with each first quarter having mean 8</span><br /> series x <span style="color: green;">'create a series x</span><br /> x(1) = @rnorm <span style="color: green;">'set the first observation to a random normal value</span><br /><br /> smpl 1989q2 @last <span style="color: green;">'start the sample at the 2nd quarter</span><br /> x = 0.75*x(-1) + eps <span style="color: green;">'generate an AR process using eps as innovations</span><br /><br /> smpl @all <span style="color: green;">'reset the sample to the full workfile range</span><br /> </code></pre> To truly appreciate the idea behind MRA, one ought to set the maximum decomposition level to a lower value. This is because the smooth series extracts the ''signal'' from the original series for all scales beyond the maximum decomposition level, whereas the ''noise'' portion of the original series is decomposed on a scale-by-scale basis for all scales up to the maximum decomposition level. We now perform a MODWT MRA on the <b class="wfobj">X</b> series using a Daubechies filter of length 4 and a maximum decomposition level of 2, as follows: <ol> <li>Double click on <b class="wfobj">X</b> to open the series.</li> <li>Click on <b>View/Wavelet Analysis/Transforms...</b></li> <li>Change the <b>Decomposition</b> dropdown to <b>Overlap multires. 
- MODWT MRA</b>.</li> <li>Set the <b>Max scale</b> textbox to <b>2</b>.</li> <li>Change the <b>Class</b> dropdown to <b>Daubechies</b>.</li> <li>Click on <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURES 6a and 6b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 6a :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex4_1.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex4_1.png" title="Quarterly Seasonality: MODWT MRA Part 1" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 6b :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex4_2.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex4_2.png" title="Quarterly Seasonality: MODWT MRA Part 2" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 6a: Quarterly Seasonality: MODWT MRA Part 1</small> </center> </td> <td class="nb"> <center> <small>Figure 6b: Quarterly Seasonality: MODWT MRA Part 2</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 6a and 6b :::::::::: --> The output is again a spool object with smooth and detail series as individual spool elements. The first plot is that of the smooth series at the maximum decomposition level overlaying the original series for context. Any observations affected by boundary coefficients will be reported in red and their number reported in the legend. Furthermore, since observations affected by the boundary will be split between the beginning and end of original series observations, two dashed vertical lines are provided at each decomposition scale. These isolate the areas which partition the total set of observations into those affected by the boundary, and those which are not.<br /><br /> It is clear from the smooth series that seasonal patterns have been dropped from the underlying trend approximation of the original data. This is precisely what we want, and it is the idea behind other well-known seasonal adjustment techniques such as TRAMO/SEATS, X-12, X-13, STL Decompositions, etc., all of which can also be performed in EViews for comparison. In fact, the figure below plots our MRA smooth series against the STL decomposition trend series performed on the same data.<br /><br /> <!-- :::::::::: FIGURE 7 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex4_3.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex4_3.png" title="MODWT MRA Smooth vs STL Trend" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 7: MODWT MRA Smooth vs. STL Trend</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 7 :::::::::: --> The two series are undoubtedly very similar, as they should be!<br /><br /> The figure above also suggests that the STL seasonal series should be very similar to the details from our MODWT MRA decomposition. Before demonstrating this, we remind readers that whereas the STL decomposition produces a single series estimate of the seasonal pattern, wavelet MRA procedures decompose noise (in this case seasonal patterns) on a scale-by-scale basis. Accordingly, at scale 1, the MRA detail series captures movements on a scale of 1 to 2 quarters. At scale 2, the MRA detail series captures movements on a scale of 2 to 4 quarters, and so on. 
In general, for each scale $ j $, the detail series capture patterns on a scale $ 2^{j-1} $ to $ 2^{j} $ units, whereas the smooth series captures patterns on scales of $ 2^{j} $ units and longer.<br /><br /> Finally, turning to the comparison of seasonal variation estimates between the MRA and STL, we need to sum all detail series to compound their effect and produce a single series estimate of noise. We can then compare this with the single series estimate of seasonality from the STL decomposition.<br /><br /> <!-- :::::::::: FIGURE 8 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex4_4.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex4_4.png" title="MODWT MRA Details vs STL Seasonality" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 8: MODWT MRA Details vs. STL Seasonality</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 8 :::::::::: --> As expected, the series are nearly identical.<br /><br /> To demonstrate this in the context of non-artificial data, we'll run a MODWT MRA on the Canadian real exchange rate data using a Least Asymmetric filter of length 12 and a maximum decomposition scale of 3.<br /><br /> <!-- :::::::::: FIGURES 5a and 5b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 5a :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex3_1.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex3_1.png" title="Canadian RER: MODWT Multiresolution Analysis Part 1" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 5b :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex3_2.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex3_2.png" title="Canadian RER: MODWT Multiresolution Analysis Part 2" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5a: Canadian RER: MODWT Multiresolution Analysis Part 1</small> </center> </td> <td class="nb"> <center> <small>Figure 5b: Canadian RER: MODWT Multiresolution Analysis Part 2</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 5a and 5b :::::::::: --> Recall that the main use for MRA is the separation of the true ''signal'' of the underlying series from its noise, at a given decomposition level. Here, the ''Smooths 3'' series is the signal approximation and from the plot seems to follow the contours of the original data. The remaining three series - ''Details 3'', ''Details 2'', and ''Details 1'' - approximate the noise at their scales. Clearly, at the first scale, noise is rather negligible. This is an indication that the majority of the signal is in the lower frequency range. As we move to the second scale, the noise becomes more prominent, but still relatively negligible. Again, this confirms that the true signal is in a frequency range lower still, and so on. More importantly, this indicates that the dynamics driving the noise are not particularly transitory. Accordingly, this would rule out traditional seasonality as a force driving the noise, but would not necessarily preclude the existence of non-stationary seasonality such as seasonal unit roots.<br /><br /> <h4 class="subseccol", id="sec2.3">Example 3: DWT vs. MODWT</h4> We have already mentioned that the primary difference between the DWT and MODWT is redundancy. 
The DWT is an orthonormal decomposition whereas the MODWT is not. This is certainly an advantage of the DWT over its MODWT counterpart since it guarantees that at each scale, the decomposition captures only those features which characterize that scale, and that scale alone. Nevertheless, the DWT requires input series to be of dyadic length, whereas the MODWT does not. This is an advantage of the MODWT since information is never dropped or added to derive the transform. Furthermore, the MODWT has an additional advantage over the DWT, and it has to do with spectral-time alignment: any pronounced observations in the time domain register as spikes in the wavelet domain at the same time spot. This is unlike the DWT where this alignment fails to hold. Formally, it is said that the MODWT is associated with a <b>zero-phase</b> filter, whereas the DWT is not. In practice, this means that outlying characteristics (spikes) in the DWT MRA will not align with outlying features of the original time series, whereas they will in the case of the MODWT MRA.<br /><br /> To demonstrate this difference, we will generate a time series of length 128 and fill it with random normal observations. We will then introduce a large outlying observation at observation 64. Finally, we will perform a DWT MRA and a MODWT MRA decomposition of the same data using a Daubechies filter of length 4 and study the differences. We will also only consider the first scale since the remaining scales do little to further the intuition.<br /><br /> We can begin by creating our artificial data by typing in the following set of commands in the command window: <pre><code><br /> wfcreate u 128<br /> series x = @rnorm<br /> x(64) = 40<br /> </code></pre> These commands create a workfile of length 128, and a series <b class="wfobj">X</b> filled with random normal variates. The 64th observation is then set to 40, well over ten times as large as observations in the top 1% of the Gaussian distribution.<br /><br /> We then generate a DWT MRA and a MODWT MRA transform of the same series. The output is summarized in the plots below.<br /><br /> <!-- :::::::::: FIGURES 9a and 9b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 9a :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex5_1.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex5_1.png" title="Outlying Observation: DWT MRA" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 9b :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex5_2.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex5_2.png" title="Outlying Observation: MODWT MRA" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 9a: Outlying Observation: DWT MRA</small> </center> </td> <td class="nb"> <center> <small>Figure 9b: Outlying Observation: MODWT MRA</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 9a and 9b :::::::::: --> Evidently the peak of the ''shark fin'' pattern in the DWT MRA smooth series does not align with the outlying observation that generated it in the original data. In other words, whereas the outlying observation is at time $ t = 64 $, the peak of the smooth series occurs at time $ t = 63 $. 
This is in contrast to the MODWT MRA smooth series, which clearly aligns its peak with the outlying observation in the original data.<br /><br /><br /> <h3 class="seccol", id="sec3">Variance Decomposition</h3> Another traditional application of wavelets is variance decomposition. Just as wavelet transforms can decompose a series signal across scales, they can also decompose a series variance across scales. In particular, this is a decomposition of the amount of original variation attributed to a given scale. Naturally, the conclusions derived above on transience would hold here as well. For instance, if the contribution to overall variation is largest at scale 1, this would indicate that it is transitory forces which contribute most to overall variation. The opposite is true if higher scales are associated with larger contributions to overall variation.<br /><br /> <h4 class="subseccol", id="sec3.1">Example: MODWT Unbiased Variance Decomposition</h4> To demonstrate the procedure, we will use Japanese real exchange rate data from 1973Q1 to 1988Q4, again extracted from the Pesaran (2007) dataset. The series of interest is called <b class="wfobj">JAPAN_RER</b>. We will produce a scale-by-scale decomposition of variance contributions using the MODWT with a Daubechies filter of length 4. Furthermore, we'll produce 95% confidence intervals using the asymptotic Chi-squared distribution with a band-pass estimate for the EDOF. The band-pass EDOF is preferred here since the sample size is less than 128 and the asymptotic approximation to the EDOF requires a sample size of at least 128 observations for decent results.<br /><br /> From the open series window, proceed in the following steps: <ol> <li>Click on <b>View/Wavelet Analysis/Variance Decomposition...</b></li> <li>Change the <b>CI type</b> dropdown to <b>Asymp. Band-Limited</b>.</li> <li>From the <b>Decomposition</b> dropdown select <b>Overlap transform - MODWT</b>.</li> <li>Set the <b>Class</b> dropdown to <b>Daubechies</b>.</li> <li>Click on <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURES 11a and 11b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 11a :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/vardecomp_ex1_1.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/vardecomp_ex1_1.png" title="Japanese RER: MODWT Variance Decomp. Part 1" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 11b :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/vardecomp_ex1_2.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/vardecomp_ex1_2.png" title="Japanese RER: MODWT Variance Decomp. Part 2" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 11a: Japanese RER: MODWT Variance Decomp. Part 1</small> </center> </td> <td class="nb"> <center> <small>Figure 11b: Japanese RER: MODWT Variance Decomp. Part 2</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 11a and 11b :::::::::: --> The output is a spool object with the spool tree listing the summary, spectrum table, variance distribution across scales, confidence intervals (CIs) across scales, and the cumulative variance and CIs. The spectrum table lists the contribution to overall variance by wavelet coefficients at each scale. In particular, the column titled <b>Variance</b> shows the variance contributed to the total at a given scale. Columns titled <b>Rel. Proport.</b> and <b>Cum. 
Proport.</b> display, respectively, the proportion of overall variance contributed at a given scale and its cumulative total. Lastly, in case CIs are produced, the last two columns display, respectively, the lower and upper confidence interval values at a given scale.<br /><br /> The first plot is a histogram of variances at each given scale. It is clear that the majority of variation in the <b class="wfobj">JAPAN_RER</b> series comes from higher scales, or lower frequencies. This is indicative of persistent behaviour in the original data, and possibly evidence of a unit root. A quick unit root test on the series will confirm this intuition. The plot below summarizes the output of a unit root test on <b class="wfobj">JAPAN_RER</b>.<br /><br /> <!-- :::::::::: FIGURE 12 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/wavelets/images/japan_rer_ur.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/japan_rer_ur.png" title="Japanese RER: Unit Root Test" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 12: Japanese RER Unit Root Test</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 12 :::::::::: --> Returning to the wavelet variance decomposition output, following the distribution plot is a plot of the variance values along with their 95% confidence intervals at each scale. Finally, the last plot displays variances and CIs accumulated across scales.<br /><br /><br /> <h3 class="seccol", id="sec4">Wavelet Thresholding</h3> A particularly important aspect of empirical work is discerning useful data from noise. In other words, if an observed time series is obscured by the presence of unwanted noise, it is critical to obtain an estimate of this noise and filter it from the observed data in order to retain the useful information, or the signal. Traditionally, this filtering and signal extraction was achieved using Fourier transforms or a number of previously mentioned routines such as the STL decomposition. While the former is typically better suited to stationary data, the latter can accommodate non-stationarities, non-linearities, and seasonalities of arbitrary type. This makes STL an attractive tool in this space and similar (but ultimately different) in function to wavelet thresholding. The following example explores these nuances.<br /><br /> <h4 class="subseccol", id="sec4.1">Example: Thresholding as Signal Extraction</h4> Given a series of observed data, recall that STL decomposition produces three curves: <ul> <li>Trend</li> <li>Seasonality</li> <li>Remainder</li> </ul><br /> The last of these is obtained by subtracting from the original data the first two curves. As an additional byproduct, STL also produces a seasonally adjusted version of the original data which is derived by subtracting the seasonality curve from the original data.<br /><br /> In contrast, recall from the theoretical discussion in <a href='http://blog.eviews.com/2020/11/wavelet-analysis-part-i-theoretical.html'>Part I</a> of this series that the principle governing wavelet-based signal extraction, otherwise known as <b>wavelet thresholding</b> or <b>wavelet shrinkage</b>, is to <i>shrink</i> any wavelet coefficients not exceeding some <b>threshold</b> to zero and then exploit the MRA to synthesize the signal of interest using the modified wavelet coefficients. 
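<br /><br /> As a rough numerical illustration of this principle (and not of the EViews implementation), the sketch below hard-thresholds first-scale Haar wavelet coefficients at the so-called universal threshold, $ \hat{\sigma}\sqrt{2\log T} $, with $ \hat{\sigma} $ estimated from the median absolute deviation of the coefficients, and then synthesizes the signal from the modified coefficients. The function names and the toy sine-plus-noise series are our own assumptions.
<pre><code>
import numpy as np

def haar_dwt_level1(y):
    # First-scale Haar DWT: wavelet (difference) and scaling (average) coefficients.
    odd, even = y[1::2], y[0::2]
    return (odd - even) / np.sqrt(2.0), (odd + even) / np.sqrt(2.0)

def haar_idwt_level1(W1, V1):
    # Exact inverse of the first-scale Haar DWT.
    even = (V1 - W1) / np.sqrt(2.0)
    odd = (V1 + W1) / np.sqrt(2.0)
    y = np.empty(2 * W1.size)
    y[0::2], y[1::2] = even, odd
    return y

def hard_threshold_denoise(y):
    # Shrink wavelet coefficients not exceeding the universal threshold to zero,
    # then synthesize the signal from the modified coefficients.
    y = np.asarray(y, dtype=float)
    W1, V1 = haar_dwt_level1(y)
    sigma = np.median(np.abs(W1)) / 0.6745        # robust scale estimate (MAD)
    eta = sigma * np.sqrt(2.0 * np.log(y.size))   # universal threshold
    W1 = np.where(np.abs(W1) > eta, W1, 0.0)      # hard thresholding rule
    signal = haar_idwt_level1(W1, V1)
    return signal, y - signal                     # signal and residual (noise)

rng = np.random.default_rng(1)
truth = np.sin(np.linspace(0.0, 4.0 * np.pi, 128))
signal, noise = hard_threshold_denoise(truth + 0.3 * rng.standard_normal(128))
</code></pre>
<br />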
The thresholding procedure produces two curves: <ul> <li>Signal</li> <li>Residual</li> </ul> where the latter is just the original data minus the signal estimate.<br /><br /> Because wavelet thresholding treats any insignificant transient features as noise, it is very likely that any lingering cyclicality would be treated as noise and driven to zero. In this regard, the extracted signal, while perhaps free of cyclical dynamics, would really be so only by technicality, and not by intention. This is in contrast to STL which derives an explicit estimate of seasonal features, and then removes those from the original data to derive the seasonally adjusted curve. Nevertheless, in many instances, the STL seasonally adjusted curve may behave quite similarly to the signal extracted via wavelet thresholding. To demonstrate this, we'll use French real exchange rate data from 1973Q1 to 1988Q4 extracted from the Pesaran (2007) dataset. The series of interest is called <b class="wfobj">FRANCE_RER</b>. We'll start by performing MODWT thresholding using a Least Asymmetric filter of length 12 and a maximum decomposition level of 1.<br /><br /> Double click on the <b class="wfobj">FRANCE_RER</b> series to open its window and proceed as follows: <ol> <li>Click on <b>View/Wavelet Analysis/Thresholding (Denoising)...</b></li> <li>Change the <b>Decomposition</b> dropdown to <b>Overlap transform - MODWT</b>.</li> <li>Set the <b>Max scale</b> to <b>1</b>.</li> <li>Change the <b>Class</b> dropdown to <b>Least Asymmetric</b>.</li> <li>Set the <b>Length</b> dropdown to <b>12</b>.</li> <li>Click on <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURE 14 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/wavelets/images/threshold_ex1_1.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/threshold_ex1_1.png" title="French RER: MODWT Thresholding" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 14: French RER: MODWT Thresholding</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 14 :::::::::: --> The output is a spool object with the spool tree listing the summary, denoised function, and noise. The table is a summary of the thresholding procedure performed. The first plot is the de-noised function (signal) superimposed over the original series for context. The second plot is the noise process extracted from the original series.<br /><br /> Next, let's derive the STL decomposition of the same data. The plots below superimpose the wavelet signal estimate on top of the STL seasonally adjusted curve, as well as the wavelet thresholded noise on top of the STL remainder series.<br /><br /> <!-- :::::::::: FIGURES 15a and 15b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 15a :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/threshold_ex1_2.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/threshold_ex1_2.png" title="French RER: STL Seas. Adj. vs. Wavelet Thresh. Signal" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 15b :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/threshold_ex1_3.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/threshold_ex1_3.png" title="French RER: STL Remainder vs. Wavelet Thresh. Noise" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 15a: French RER: STL Seas. Adj. vs. Wavelet Thresh. 
Signal</small> </center> </td> <td class="nb"> <center> <small>Figure 15b: French RER: STL Remainder vs. Wavelet Thresh. Noise</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 15a and 15b :::::::::: --> Clearly the STL seasonally adjusted series is very similar to the wavelet signal curve. However, this is really only because the cyclical components in the underlying data are negligible. This can be confirmed by looking at the magnitude of the STL seasonality curve. Nevertheless, a close inspection of the STL remainder and wavelet threshold noise series reveals noticeable differences. It is these differences that drive any differences in the STL seasonal adjustment and wavelet threshold signal curves.<br /><br /><br /> <h3 class="seccol", id="sec5">Outlier Detection</h3> A particularly important and useful application of wavelets is <b>outlier detection</b>. While the subject matter has received some attention over the years starting with Greenblatt (1996), we focus here on a rather simple and appealing contribution by Bilen and Huzurbazar (2002). The appeal of their approach is that it doesn't require model estimation, is not restricted to processes generated via ARIMA, and works in the presence of both additive and innovational outliers. The approach does assume that wavelet coefficients are approximately independent and identically normal variates. This is a rather weak assumption since the independence requirement (the more difficult of the two to satisfy) is typically satisfied when using the DWT. While EViews offers the ability to perform this procedure using the MODWT, it is generally better suited to the orthonormal transform.<br /><br /> Bilen and Huzurbazar (2002) also suggest that Haar is the preferred filter here. This is because the Haar filter yields coefficients that are large in magnitude in the presence of jumps or outliers. They also suggest that the transformation be carried out only at the first scale. Nevertheless, EViews does offer the ability to depart from these suggestions.<br /><br /> The overall procedure works on the principle of thresholding and the authors suggest the use of the universal threshold. The idea here is that extreme (outlying) values will register as noticeable spikes in the spectrum. As such, those values would be candidates for outlying observations. In particular, if $ m_{j} $ denotes the number of wavelet coefficients at scale $ \lambda_{j} $, the entire algorithm is summarized (and generalized) as follows: <ol> <li>Apply a wavelet transform to the original data up to some scale $ J \leq M $.</li><br /> <li>Specify a threshold value $ \eta $.</li><br /> <li>For each $ j = 1, \ldots, J $:</li><br /> <ol> <li>Find the set of indices $ S = \cbrace{s_{1}, s_{2}, \ldots} $ of those coefficients for which $ |W_{i, j}| > \eta $, $ i = 1, \ldots, m_{j} $.</li><br /> <li>Find the exact location of the outlier among original observations. For instance, if $ s_{i} $ is an index associated with an outlier:</li><br /> <ul> <li> If the wavelet transform is the DWT, the original observation associated with that outlier is either $ 2^{j}s_{i} $ or $ (2^{j}s_{i} - 1) $. To discern between the two, let $ \tilde{\mu} $ denote the mean of the original observations excluding those located at $ 2^{j}s_{i} $ and $ (2^{j}s_{i} - 1) $. 
That is: $$ \tilde{\mu} = \frac{1}{T-2}\sum_{t \neq 2^{j}s_{i}\, ,\, (2^{j}s_{i} - 1)}{y_{t}} $$ If $ |y_{2^{j}s_{i}} - \tilde{\mu}| > |y_{2^{j}s_{i} - 1} - \tilde{\mu}| $, the location of the outlier is $ 2^{j}s_{i} $, otherwise, the location of the outlier is $ (2^{j}s_{i} - 1) $. </li><br /> <li>If the wavelet transform is the MODWT, the outlier is associated with observation $ i $.</li> </ul> </ol> </ol><br /> A small numerical sketch of this location-mapping step is provided at the end of the following example.<br /><br /> <h4 class="subseccol", id="sec5.1">Example: Bilen and Huzurbazar (2002) Outlier Detection</h4> To demonstrate outlier detection, data is obtained from the <b>US Geological Survey</b> website <a href='https://www.usgs.gov/'>https://www.usgs.gov/</a>. As discussed in Bilen and Huzurbazar (2002), data collected in this database comes from many different sources and is generally notorious for input errors. Here we focus on a monthly dataset, collected at irregular intervals from May 19876 to June 2020, measuring water conductance at the Green River near Greendale, UT. The dataset is identified by site number 09234500.<br /><br /> A quick summary of the series indicates that there is a large drop from typical values (500 to 800 units) in September 1999. The value recorded at this date is roughly 7.4 units. This is an unusually large drop and is almost certainly an outlying observation.<br /><br /> In an attempt to identify this outlier, and perhaps uncover others, we use the wavelet outlier detection method just described. We stick with the defaults suggested in the paper and use a DWT with a Haar filter, the universal threshold, a mean median absolute deviation estimator for the wavelet coefficient variance, and a maximum decomposition scale of one.<br /><br /> To proceed, either download the data from the source, or open the tab <b>Outliers</b> in the workfile provided. The series we're interested in is <b class="wfobj">WATER_CONDUCTANCE</b>. Next, open the series window and proceed as follows: <ol> <li>Click on <b>View/Wavelet Analysis/Outlier Detection...</b></li> <li>Set the <b>Max scale</b> dropdown to <b>1</b>.</li> <li>Under the <b>Threshold</b> group, set the <b>Method</b> dropdown to <b>Hard</b>.</li> <li>Under the <b>Wavelet coefficient variance</b> group, set the <b>Method</b> dropdown to <b>Mean Med. Abs. Dev.</b>.</li> <li>Click on <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURE 16 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/wavelets/images/outliers_ex1_1.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/outliers_ex1_1.png" title="Water Conductance: Outlier Detection" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 16: Water Conductance: Outlier Detection</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 16 :::::::::: --> The output is a spool object with the spool tree listing the summary, outlier table, and outlier graphs for each scale (in this case just one). The first of these is a summary of the outlier detection procedure performed. Next is a table listing the exact location of each detected outlier along with its value and its absolute deviations from the series mean and median, respectively. The plot that follows is that of the original series with red dots identifying outlying observations, along with dotted vertical lines at said locations for easier identification.<br /><br /> Evidently, the large outlying observation in September 1999 is accurately identified. 
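<br /><br /> As promised above, here is a minimal numpy sketch of the first-scale version of the Bilen and Huzurbazar (2002) procedure (Haar DWT, universal threshold, median absolute deviation estimate of the coefficient variance). It is a pedagogical illustration rather than the EViews implementation; note that with 0-based indexing the candidate locations $ 2^{j}s_{i} $ and $ (2^{j}s_{i} - 1) $ become $ 2s + 1 $ and $ 2s $.
<pre><code>
import numpy as np

def detect_outliers_haar(y):
    # Flag first-scale Haar DWT coefficients exceeding the universal threshold
    # and map each flagged coefficient back to an original observation.
    y = np.asarray(y, dtype=float)
    odd, even = y[1::2], y[0::2]
    W1 = (odd - even) / np.sqrt(2.0)              # scale-1 Haar wavelet coefficients
    sigma = np.median(np.abs(W1)) / 0.6745        # median absolute deviation estimate
    eta = sigma * np.sqrt(2.0 * np.log(y.size))   # universal threshold
    outliers = []
    for s in np.flatnonzero(np.abs(W1) > eta):    # s indexes a flagged coefficient
        a, b = 2 * s, 2 * s + 1                   # the two candidate observations
        mu = np.delete(y, [a, b]).mean()          # mean excluding both candidates
        outliers.append(a if abs(y[a] - mu) > abs(y[b] - mu) else b)
    return sorted(set(outliers))

# The artificial spike series from Example 3: observation 64 (index 63) is set to 40.
rng = np.random.default_rng(2)
x = rng.standard_normal(128)
x[63] = 40.0
print(detect_outliers_haar(x))                    # should include index 63
</code></pre>
<br />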
In addition, three other possible outlying observations are identified, in September 1988, January 1992, and June 2020.<br /><br /><br /> <h3 class="seccol", id="sec6">Conclusion</h3> In <a href='http://blog.eviews.com/2020/11/wavelet-analysis-part-i-theoretical.html'>Part I</a> of our series on wavelets, we provided a theoretical overview of the most important aspects of wavelet analysis. In this second entry, we demonstrated how those principles are applied to real and artificial data using the new EViews 12 wavelet engine.<br /><br /><br /> <hr /> <h3 class="seccol", id="sec7">Files</h3> <ul> <li><a href="http://www.eviews.com/blog/wavelets/workfiles/wavelets.wf1"><b class="wf">WAVELETS.WF1</b></a></li> <li><a href="http://www.eviews.com/blog/wavelets/workfiles/wavelets.prg"><b class="wf">WAVELETS.PRG</b></a></li> </ul><br /><br /> <hr /> <h3 class="seccol", id="sec8">References</h3> <ol class="bib2xhtml"> <li id="bilen-2002" class="entry"> Bilen C and Huzurbazar S (2002), <i>"Wavelet-based detection of outliers in time series"</i>, Journal of Computational and Graphical Statistics. Vol. 11(2), pp. 311-327. Taylor & Francis. </li> <li id="greenblatt-1996" class="entry"> Greenblatt SA (1996), <i>"Wavelets in econometrics"</i>, In Computational Economic Systems, pp. 139-160. Springer. </li> <li id="pesaran-2007" class="entry"> Pesaran MH (2007), <i>"A simple panel unit root test in the presence of cross-section dependence"</i>, Journal of Applied Econometrics. Vol. 22(2), pp. 265-312. Wiley Online Library. </li> </ol></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com0tag:blogger.com,1999:blog-6883247404678549489.post-41385365046285526582020-11-30T09:06:00.006-08:002020-12-02T08:09:49.844-08:00Wavelet Analysis: Part I (Theoretical Background)<style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { border: 1px solid black; } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } .seccol { } .subseccol { color: #fa5e5e } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { rbrace: ['{\\left(#1\\right)}', 1], cbrace: ['{\\left\\{#1\\right\\}}', 1], sbrace: ['{\\left[#1\\right]}', 1], bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1], series: ['{\\left\\{#1_{#2}\\right\\}_{#2=#3}^{#4}}', 4], xsum: ['{\\sum_{#1=#2}^{#3}{#4}}', 4], var: ['{\\operatorname\{var\}}'], sign: ['{\\operatorname\{sign\}}'], diag: ['{\\operatorname\{diag\}}'], med: ['{\\operatorname\{median\}}'], vec: ['{\\operatorname\{vec\}}'], tr: ['{\\operatorname\{tr\}}'] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"> </script> <span style="font-family: "verdana" sans-serif"> This is the first of two entries devoted to wavelets. Here, we summarize the most important theoretical principles underlying wavelet analysis. This entry should serve as a detailed background reference when using the new wavelet features released in EViews 12. In <a href='http://blog.eviews.com/2020/12/wavelet-analysis-part-ii-applications.html'>Part II</a> we will apply these principles and demonstrate how they are used with the new EViews 12 wavelet engine. 
<a name='more'></a><br /><br /> <h3 class="seccol">Table of Contents</h3> <ol> <li><a href="#sec1">Introduction to Wavelets</a> <li><a href="#sec2">Wavelet Transforms</a> <ul> <li><a href="#sec2.1">Discrete Wavelet Filters</a> <li><a href="#sec2.2">Mallat's Pyramid Algorithm</a> <li><a href="#sec2.3">Boundary Conditions</a> <li><a href="#sec2.4">Variance Decomposition</a> <li><a href="#sec2.5">Multiresolution Analysis</a> </ul> <li><a href="#sec3">Practical Considerations</a> <ul> <li><a href="#sec3.1">Choice of Wavelet Filter</a> <li><a href="#sec3.2">Handling Boundary Conditions</a> <li><a href="#sec3.3">Adjusting Non-Dyadic Time Series Lengths</a> </ul> <li><a href="#sec4">Wavelet Thresholding</a> <ul> <li><a href="#sec4.1">Thresholding Rule</a> <li><a href="#sec4.2">Optimal Threshold</a> <li><a href="#sec4.3">Wavelet Coefficient Variance</a> <li><a href="#sec4.4">Thresholding Implementation</a> </ul> <li><a href="#sec5">Conclusion</a> <li><a href="#sec6">References</a> </ol><br /> <h3 class="seccol", id="sec1">Introduction to Wavelets</h3> What characterizes most economic time series are time-varying features such as non-stationarity, volatility, seasonality, and structural discontinuities. Wavelet analysis is a natural framework for analyzing these phenomena without imposing any simplifying assumptions such as stationarity. In particular, wavelet filters can decompose and reconstruct a time series (as well as its correlation structure) across timescales so that constituent elements at one scale are uncorrelated with those at another. This is clearly useful in isolating features which materialize only at certain timescales.<br /><br /> Wavelet analysis is also, in many respects, like Fourier spectral analysis. Both methods can represent a time series signal in a different space by re-expressing a signal as a linear combination of basis functions. In the context of Fourier analysis, these basis functions are sines and cosines. While these basis functions approximate global variation well, they are poorly adapted to capturing local variation, otherwise known as time-variation in time series analysis. To see this, observe that trigonometric basis functions are sinusoids of the form: $$ R\cos\left(2\pi(\omega t + \phi)\right) $$ where $ R $ is the <b>amplitude</b>, $ \omega $ is the <b>frequency</b> (in cycles per unit time) or <b>period</b> $ \frac{1}{\omega} $ (in units of time), and $ \phi $ is the <b>phase</b>. Accordingly, if the time variable $ t $ is shifted and scaled to $ u = \frac{t - a}{b} $, the associated sinusoid becomes: $$ R\cos\left(2\pi(\omega^{\star} u + \phi^{\star})\right) $$ where $ \omega^{\star} = \omega b $ and $ \phi^{\star} = \phi + \omega a $.<br /><br /> Evidently, the amplitude $ R $ is invariant to shifts in location and scale. Furthermore, notice that if $ b > 1 $, the frequency $ \omega^{\star} $ increases, but time $ u $ decreases, and vice versa. Accordingly, frequency information is gained when time information is lost, and vice versa.<br /><br /> Ultimately, trigonometric functions are ideally adapted to stationary processes characterized by impulses which wane with time, but are otherwise poorly adapted to discontinuous, non-linear, and non-stationary processes whose impulses persist and evolve with time. To surmount this fixed time-frequency relationship, a new set of basis functions are needed.<br /><br /> In contrast to Fourier transforms, wavelet transforms rely on a reference basis function called the <b>mother wavelet</b>. 
The latter is stretched (scaled) and shifted across time to capture time-dependent features. Thus, the wavelet basis functions are localized both in scale and time. In this sense, the wavelet basis function scale is the analogue of frequency in Fourier transforms. The fact that the wavelet basis function is also shifted (translated) across time, implies that wavelet basis functions are similar in spirit to performing a Fourier transform on a moving and overlapping window of subsets of the entire time series signal.<br /><br /> In particular, the mother wavelet function $ \psi(t) $ is any function satisfying: $$ \int_{-\infty}^{\infty} \psi(x) dx = 0 \qquad\qquad \int_{-\infty}^{\infty} \psi(x)^{2} dx = 1 $$ In other words, wavelets are functions that have mean zero and unit energy. Here, the term <i>energy</i> originates from the signal processing literature and is formalized as $ \int_{-\infty}^{\infty} |f(t)^{2}| dt$ for some function $ f(t) $. In fact, the concept is interchangeable with the idea of <b>variance</b> for non-complex functions.<br /><br /> From the mother wavelet, the wavelet basis functions are now derived as: $$ \psi_{a,b}(t) = \frac{1}{\sqrt{b}}\psi\left(\frac{t - a}{b}\right) $$ where $ a $ is the <b>location constant</b>, whereas $ b $ is the <b>scaling factor</b> which corresponds to the notion of frequency in Fourier analysis. Observe further that the analogue of the amplitude $ R $ in Fourier analysis, here captured by the term $ \frac{1}{\sqrt{b}} $, is in fact a function of the scale $ b $. Accordingly, wavelet basis functions will adapt to scale-dependent phenomena much better than their trigonometric counterparts.<br /><br /> Since wavelet basis functions are <i>de facto</i> location and scale transformations of a single function, they are also an ideal tool for <b>multiresolution analysis</b> (MRA) - the ability to analyze a signal at different frequencies with varying resolutions. In fact, MRA is in some sense the inverse of the wavelet transform. It can derive representations of the original time-series data, using only those features which are characteristic at a given timescale. For instance, a highly noisy but persistent time series, can be decomposed into a portion which represents only the noise (features captured at high frequency), and a portion which represents only the persistent signal (features captured at low frequencies). Thus, moving along the time domain, MRA allows one to zoom to a desired level of detail such that high (low) frequencies yield good (poor) time resolutions and poor (good) frequency resolutions. Since economic time series often exhibit multiscale features, wavelet techniques can effectively decompose these series into constituent processes associated with different timescales.<br /><br /><br /><br /> <h3 class="seccol", id="sec2">Wavelet Transforms</h3> In the context of continuous functions, the <b>continuous wavelet transform</b> (CWT) of a time series $ y(t) $ is defined as: $$ W(a, b) = \int_{-\infty}^{\infty} y(t)\psi_{a,b}(t) \,dt $$ Moreover, the inverse transformation to reconstruct the original process is given as: $$ y(t) = \int_{-\infty}^{\infty} \int_{0}^{\infty} W(a,b)\psi_{a,b}(t) \,da \,db $$ See Percival and Walden (2000) for a detailed discussion.<br /><br /> Since continuous functions are rarely observed, the CWT is empirically rarely exploited and a discretized analogue known as the <b>discrete wavelet transform</b> (DWT) is used. 
In its most basic form, the series length, $ T = 2^{M} $ for $ M \geq 0 $, is assumed <b>dyadic</b> (a power of 2), and the DWT manifests as a collection of CWT <i>slices</i> at nodes $ (a, b) \equiv (a_{k}, b_{j}) $ such that $ a_{k} = 2^{j}k $ and $ b_{j} = 2^{j} $ where $ j = 1, \ldots, M $. In other words, the discrete wavelet basis functions assume the form: $$ \psi_{k,j}(t) = 2^{-j/2}\psi\left( 2^{-j}t - k \right) $$ Unlike the CWT which is highly redundant in both location and scale, the DWT can be designed as an orthonormal transformation. If the location discretization is restricted to the index $ k = 1, \ldots, 2^{-j}T $, at each scale $ \lambda_{j} = 2^{j - 1} $, half the available observations are lost in exchange for <b>orthonormality</b>. This is the classical DWT framework. Alternatively, if the location index is restricted to the full set of available observations with $ k = 1, \ldots, T $, the discretized transform is no longer orthonormal, but does not suffer from observation loss. The latter framework is typically referred to as the <b>maximal overlap discrete wavelet transform</b> (MODWT), and sometimes as the <b>non-decimated</b> DWT. Since the DWT is formally characterized by wavelet filters, we devote some time to those next.<br /><br /> <h4 class="subseccol", id="sec2.1">Discrete Wavelet Filters</h4> Formally, the DWT is characterized via $h = \rbrace{h_{0}, \ldots, h_{L-1}}$ and $g = \rbrace{ g_{0}, \ldots, g_{L-1} }$ -- the wavelet (high pass) and scaling (low pass) filters of length $L$, respectively, for some $ L \geq 1 $. Recall that the low and high pass filters are defined in the context of <b>frequency response functions</b>, otherwise known as <b>transfer functions</b>. The latter are Fourier transforms of impulse response functions. Since the impulse response function describes, in the time domain, the evolution (response) of a time series signal to a given stimulus (impulse), the transfer function describes, in the frequency domain, the response of a time series signal to a given impulse in the frequency domain. In this regard, when the magnitude of the transfer function, otherwise known as the <b>gain function</b>, is large at low frequencies and small at high frequencies, the filter associated with that transfer function is said to be a <b>low-pass filter</b>. Otherwise, when the gain function is small at low frequencies but high at high frequencies, the transfer function is associated with a <b>high-pass</b> filter.<br /><br /> Like traditional time series filters which are used to extract features (eg. trends, seasonalities, business cycles, noise, etc.), wavelets filters perform a similar role. They are designed to capture low and high frequencies, and have a particular length. This length governs how much of the original series information is used to extract low and high frequency phenomena. This is very similar to the role of the autoregressive (AR) order in traditional time series models where higher AR orders imply more historical observations influence the present.<br /><br /> The simplest and shortest wavelet filter is of length $ L = 2 $ and is called the <b>Haar</b> wavelet. Formally, it is characterized by its high-pass filter definition: \begin{align*} h_{l} = \begin{cases} \frac{1}{\sqrt{2}} \quad \text{if} \quad l = 0\\ \frac{-1}{\sqrt{2}} \quad \text{if} \quad l = 1 \end{cases} \end{align*} This is a sequence of rescaled rectangular functions and is therefore ideally suited to analyzing signals with sudden and discontinuous changes. 
In this regard, it is ideally suited for outlier detection. Unfortunately, this filter is typically too simple for most other applications.<br /><br /> To help mitigate the limitations of the Haar filter, Daubechies (1992) introduced a family of filters (known as <b>daublets</b>) of even length that are indexed by the polynomial degree they are able to capture -- rather the number of vanishing moments. Thus, the Haar filter, which is of length 2, can only capture constants and linear functions. The Daubechies wavelet filter of length 4 can capture everything from a constant to a cubic function, and so on. Accordingly, higher filter lengths are associated with higher smoothness. Unlike the Haar filter which has a closed form solution in the time domain, the Daubechies family of wavelet filters have a closed form solution only in the frequency domain.<br /><br /> Unfortunately, Daubechies filters are typically not symmetric. If a more symmetric version of the daublet filters is required, then the class known as <b>least asymmetric</b>, or <b>symmlets</b>, is used. The latter define a family of wavelet filters which are as close to symmetric as possible.<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/wavelets/images/wavelet_haar.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/wavelet_haar.png" title="Haar Wavelet" width="540" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1: Haar Wavelet</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 1 :::::::::: --> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/wavelets/images/wavelet_d8.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/wavelet_d8.png" title="Daublet (L=8) Wavelet" width="540" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2: Daubechies - Daublet (L=8) Wavelet</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 2 :::::::::: --> <!-- :::::::::: FIGURE 3 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/wavelets/images/wavelet_la8.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/wavelet_la8.png" title="Symmlet (L=8) Wavelet" width="540" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3: Least Asymmetric - Symmlet (L=8) Wavelet</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 3 :::::::::: --> <h4 class="subseccol", id="sec2.2">Mallat's Pyramid Algorithm</h4> In practice, DWT coefficients are derived through the <b>pyramid algorithm</b> of Mallat (1989). In case of the classical DWT with $T=2^{M}$, let $\mathbf{y} = \series{y}{t}{1}{T}$ and define $\mathbf{W} = \sbrace{\mathbf{W}_{1}, \ldots, \mathbf{W}_{M}, \mathbf{V}_{M}}^{\top}$ as the matrix of DWT coefficients. Here, $\mathbf{W}_{j}$ is a vector of wavelet coefficients of length $T/2^{j}$ and is associated with changes on a scale of length $\lambda_{j} = 2^{j-1}$. Moreover, $\mathbf{V}_{M}$ is a vector of scaling coefficients of length $T/2^{j}$ and is associated with averages on a scale of length $\lambda_{M} = 2^{M-1}$. $\mathbf{W}$ now follows from $\mathbf{W} = \mathcal{W}\mathbf{y}$ where $\mathcal{W}$ is some $T\times T$ orthonormal matrix generating the DWT coefficients. 
The algorithm can now be formalized as follows.<br /><br /> If $\mathbf{W}_{j} = \rbrace{W_{1,j} \ldots W_{T/2^{j},j}}^{\top}$ and $\mathbf{V}_{j} = \rbrace{V_{1,j} \ldots V_{T/2^{j},j}}^{\top}$, the $j^{th}$ iteration of the algorithm convolves an input signal with filters $h$ and $g$ respectively to derive the $j^{th}$ level DWT matrix $\sbrace{\mathbf{W}_{1}, \ldots \mathbf{W}_{j}, \mathbf{V}_{j}}^{\top}$. Explicitly, the convolution is formalized as: \begin{align*} W_{t,1} &= \xsum{l}{0}{L-1}{h_{l}y_{2t-l\hspace{-5pt}\mod T}} && V_{t,1} = \xsum{l}{0}{L-1}{g_{l} y_{2t-l\hspace{-5pt}\mod T}} && j=1\\ W_{t,j} &= \xsum{l}{0}{L-1}{h_{l} V_{2t-l\hspace{-5pt}\mod T,j-1}} && V_{t,j} = \xsum{l}{0}{L-1}{g_{l} V_{2t-l\hspace{-5pt}\mod T,j-1}} && j=2,\ldots,M \end{align*} where $t=1,\ldots,T/2^{j}$. In particular, each iteration convolves the scaling coefficients from the preceding iteration, namely $V_{t,j-1}$, with both the high and low pass filters; in the first iteration the input signal is $y_{t}$ itself. The entire algorithm continues until the $M^{th}$ iteration although it can be stopped earlier.<br /><br /> In effect, at each scale, the DWT algorithm partitions the frequency spectrum into equal subsets -- the low and high frequencies. At the first scale, low-frequency phenomena of the original signal $ \mathbf{y} $ are captured by $ \mathbf{V}_{1} $, whereas high frequency phenomena are captured by $ \mathbf{W}_{1} $. At scale 2, the same procedure is performed not on the original time series signal, but on the low-frequency components $ \mathbf{V}_{1} $. This in turn generates $ \mathbf{V}_{2} $, which is in a sense those phenomena that would be captured in the first quarter of the frequency spectrum, as well as $ \mathbf{W}_{2} $ -- the high-frequency components at scale 2, or those phenomena that would be captured in the second quarter of the frequency range. This continues at finer and finer levels as we increase scale. In this regard, increasing scale can isolate increasingly more persistent (lower frequency) features of the original time-series signal, with the wavelet coefficients $ \mathbf{W}_{j} $ capturing the remaining, cumulated, ``noisy'' features.<br /><br /> <h4 class="subseccol", id="sec2.3">Boundary Conditions</h4> It's important to note that both the DWT and the MODWT make use of <b>circular filtering</b>. When a filtering operation reaches the beginning or end of an input series, otherwise known as the <b>boundaries</b>, the filter treats the input time series as periodic with period $ T $. In other words, we assume that $ y_{T-1}, y_{T-2}, \ldots $ are useful surrogates for unobserved values $ y_{-1}, y_{-2}, \ldots $. Those wavelet coefficients which are affected are also known as <b>boundary coefficients</b>. Note that the number of boundary coefficients only depends on the filter length $ L $ and is independent of the input series length $ T $. Furthermore, the number of boundary coefficients increases with filter length $ L $. In particular, the formulas for the number of boundary coefficients for the DWT and MODWT, respectively, are given by: \begin{align*} \kappa_{\text{DWT}, j} &\equiv L_{j}^{\prime}\\ \kappa_{\text{MODWT}, j} &\equiv \min \cbrace{L_{j}, T} \end{align*} where $ L_{j}^{\prime} = \left\lceil (L - 2)\rbrace{1 - \frac{1}{2^{j}}} \right\rceil $ and $ L_{j} = (2^{j} - 1)(L - 1) + 1 $.<br /><br /> Furthermore, both DWT and MODWT boundary coefficients will appear at the beginning of $ \mathbf{W}_{j} $ and $ \mathbf{V}_{j} $. 
For further details on boundary coefficients, refer to Percival and Walden (2000).<br /><br /> <h4 class="subseccol", id="sec2.3">Variance Decomposition</h4> The orthonormality of the DWT generating matrix $\mathcal{W}$ has important implications. First, $\mathcal{W}^{\top}\mathcal{W} = I_{T}$, the identity matrix of dimension $T$. More importantly, $\norm{\mathbf{y}}^{2} = \norm{\mathbf{W}}^{2}$. To see this, recall that $\mathbf{y} = \mathcal{W}^{\top}\mathbf{W}$, so that $\norm{\mathbf{y}}^{2} = \mathbf{y}^{\top}\mathbf{y} = \mathbf{W}^{\top}\mathcal{W}\mathcal{W}^{\top}\mathbf{W} = \mathbf{W}^{\top}\mathbf{W} = \norm{\mathbf{W}}^{2}$. The DWT is therefore an energy (variance) preserving transformation. Coupled with this preservation of energy is also the decomposition of energy on a scale-by-scale basis. The latter formalizes as: \begin{align} \norm{\mathbf{y}}^{2} = \xsum{j}{1}{M}{\norm{\mathbf{W}_{j}}^{2}} + \norm{\mathbf{V}_{M}}^{2} \label{eq2.5.1} \end{align} where $\norm{\mathbf{W}_{j}}^{2} = \xsum{t}{1}{T/2^{j}}{W^{2}_{t,j}}$ and $\norm{\mathbf{V}_{M}}^{2} = \xsum{t}{1}{T/2^{M}}{V^{2}_{t,M}}$. Thus, $\norm{\mathbf{W}_{j}}^{2}$ quantifies the energy of $ y_{t} $ accounted for at scale $\lambda_{j}$. This decomposition is known as the <b>wavelet power spectrum</b> (WPS) and is arguably the most insightful of the properties of the DWT.<br /><br /> The WPS bears resemblance to the <b>spectral density function</b> (SDF) used in Fourier analysis. Whereas the SDF decomposes the variance of an input series across frequencies, in wavelet analysis the variance of an input series is decomposed across scales $ \lambda_{j} $. One of the advantages of the WPS over the SDF is that the latter requires an estimate of the input series mean, whereas the former does not. In particular, note that the total variance in $ \mathbf{y} $ can be decomposed as: $$ \xsum{j}{1}{\infty}{\nu^{2}(\lambda_{j})} = \var(\mathbf{y}) $$ where $ \nu^{2}(\lambda_{j}) $ is the contribution to $ \var(\mathbf{y}) $ due to scale $ \lambda_{j} $ and is estimated as: $$ \hat{\nu}^{2}(\lambda_{j}) \equiv \frac{1}{T} \xsum{t}{1}{T}{W_{t,j}^{2}} $$ Note that $ \hat{\nu}^{2}(\lambda_{j}) $ is the energy of $ y_{t} $ at scale $ \lambda_{j} $ divided by the number of observations. Unfortunately, this estimator is biased due to the presence of boundary coefficients. To derive an unbiased estimate, boundary coefficients should be dropped from consideration. Accordingly, an unbiased estimate of the variance contributed at scale $ \lambda_{j} $ is given by: $$ \tilde{\nu}^{2}(\lambda_{j}) \equiv \frac{1}{M_{j}} \xsum{t}{\kappa_{j} + 1}{T}{W_{t,j}^{2}}$$ where $ M_{j} = T - \kappa_{j}$ and $ \kappa_{j} \equiv L_{j}^{\prime} $ when the wavelet coefficients are derived using the DWT, whereas $ \kappa_{j} \equiv L_{j} $ when they derive from the MODWT.<br /><br /> It is also possible to derive confidence intervals for the contribution to the overall variance at each scale. In particular, working with the unbiased estimator $ \tilde{\nu}^{2}(\lambda_{j}) $ and a level of significance $ \alpha \in (0,1) $, a confidence interval for $ \nu^{2}(\lambda_{j}) $ with coverage $ 1 - 2\alpha $ is given by: \begin{align*} \sbrace{\tilde{\nu}^{2}(\lambda_{j}) - \Phi^{-1}(1 - \alpha) \rbrace{\frac{2A_{j}}{M_{j}}}^{1/2} \quad ,\quad \tilde{\nu}^{2}(\lambda_{j}) + \Phi^{-1}(1 - \alpha) \rbrace{\frac{2A_{j}}{M_{j}}}^{1/2}} \end{align*} Above, $ A_{j} $ is the integral of the squared spectral density function of the wavelet coefficients $ \mathbf{W}_{j} $, excluding any boundary coefficients.
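As a quick illustration of the unbiased estimator and its Gaussian interval (again plain NumPy/SciPy rather than EViews code; names and numbers are ours, and an estimate of $ A_{j} $ is simply taken as given, with its estimation discussed next):<br /><br />
<pre style="overflow:auto">
import numpy as np
from scipy.stats import norm

def wavelet_variance_ci(w_j, kappa_j, a_j, alpha=0.025):
    # Unbiased wavelet variance at one scale and its Gaussian confidence interval.
    # w_j     : wavelet coefficients at scale lambda_j (boundary coefficients first)
    # kappa_j : number of boundary coefficients, dropped from the sum
    # a_j     : estimate of A_j, supplied by the caller
    interior = w_j[kappa_j:]                   # keep only non-boundary coefficients
    m_j = interior.size                        # M_j = T - kappa_j
    nu2 = np.sum(interior ** 2) / m_j          # unbiased estimate of nu^2(lambda_j)
    half_width = norm.ppf(1 - alpha) * np.sqrt(2.0 * a_j / m_j)
    return nu2, nu2 - half_width, nu2 + half_width

w1 = np.random.randn(512)                      # stand-in for level-1 MODWT coefficients
print(wavelet_variance_ci(w1, kappa_j=1, a_j=0.5))   # a_j value is illustrative only
</pre>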
As shown in Percival and Walden (2000), $ A_{j} $ can be estimated using the sum of squared sample autocovariances of $ \mathbf{W}_{j} $, excluding any boundary coefficients, where the sample autocovariance at lag $ \tau $ is: $$ \hat{s}_{j,\tau} = \frac{1}{M_{j}}\xsum{t}{\kappa_{j}+1}{T - |\tau|}{W_{t,j}W_{t+|\tau|,j}} \, \quad 0 \leq |\tau| \leq M_{j} - 1 $$ Unfortunately, as argued in Priestley (1981), there is no condition that prevents the lower bound of the confidence interval above from becoming negative. Accordingly, Percival and Walden (2000) suggest the approximation: $$ \frac{\eta \tilde{\nu}^{2}(\lambda_{j})}{\nu^{2}(\lambda_{j})} \stackrel{d}{=} \chi^{2}_{\eta} $$ where $ \eta $ is known as the <b>equivalent degrees of freedom</b> (EDOF) and is formalized as: $$ \eta = \frac{2 \rbrace{E\sbrace{\tilde{\nu}^{2}(\lambda_{j})}}^{2}}{\var \rbrace{\tilde{\nu}^{2}(\lambda_{j})}} $$ The confidence interval of interest with coverage $ 1 - 2\alpha $ can now be stated as: \begin{align*} \sbrace{\frac{\eta \tilde{\nu}^{2}(\lambda_{j})}{Q_{\eta}(1 - \alpha)} \,,\, \frac{\eta \tilde{\nu}^{2}(\lambda_{j})}{Q_{\eta}(\alpha)}} \end{align*} where $ Q_{\eta}(\alpha) $ denotes the $ \alpha $-quantile of the $ \chi^{2}_{\eta} $ distribution.<br /><br /> Remaining is the issue of EDOF estimation. Two suggestions appear in Percival and Walden (2000): \begin{align*} \eta_{1} &\equiv \frac{M_{j}\tilde{\nu}^{4}(\lambda_{j})}{\hat{A}_{j}}\\ \eta_{2} &\equiv \max \cbrace{2^{-j}M_{j} \, , \, 1} \end{align*} The first estimate relies on large-sample theory and in practice requires a sample of at least $ T = 128 $ to yield a decent approximation. The second assumes that the SDF of the wavelet coefficients at scale $ \lambda_{j} $ is that of a band-pass process. See Percival and Walden (2000) for details.<br /><br /> <h4 class="subseccol", id="sec2.4">Multiresolution Analysis</h4> Similar to Fourier, spline, and linear approximations, a principal feature of the DWT is the ability to approximate an input series as a function of wavelet basis functions. In wavelet theory this is known as <b>multiresolution analysis</b> (MRA) and refers to the approximation of an input series at each scale (and up to all scales) $ \lambda_{j} $.<br /><br /> To formalize matters, recall that $ \mathbf{W} = \mathcal{W}\mathbf{y} $ and partition the rows of $ \mathcal{W} $ commensurate with the row partition of $ \mathbf{W} $ into $ \mathbf{W}_{1}, \ldots, \mathbf{W}_{M} $ and $ \mathbf{V}_{M} $. In other words, let $ \mathcal{W} = \sbrace{\mathcal{W}_{1}, \ldots, \mathcal{W}_{M}, \mathcal{V}_{M}}^{\top} $, where $ \mathcal{W}_{j} $ has dimension $ 2^{-j}T \times T $ and $ \mathcal{V}_{M} $ has dimension $ 2^{-M}T \times T $. Then, note that for any $ m \in \cbrace{1, \ldots, M} $: \begin{align*} \mathbf{y} &= \mathcal{W}^{\top}\mathbf{W}\\ &= \xsum{j}{1}{m}{\mathcal{W}_{j}^{\top}\mathbf{W}_{j}} + \mathcal{V}_{m}^{\top}\mathbf{V}_{m}\\ &= \xsum{j}{1}{m}{\mathcal{D}_{j}} + \mathcal{S}_{m} \end{align*} where $ \mathcal{D}_{j} = \mathcal{W}^{\top}_{j} \mathbf{W}_{j} $ and $ \mathcal{S}_{m} = \mathcal{V}^{\top}_{m} \mathbf{V}_{m} $ are $ T- $ dimensional vectors, respectively called the $ j^{\text{th}} $ level <b>detail</b> and $ m^{\text{th}} $ level <b>smooth</b> series.
Furthermore, since the high-pass (wavelet) coefficients are associated with changes at scale $ \lambda_{j} $ and the low-pass (scaling) coefficients with averages, the detail and smooth series are associated with changes and averages at scale $ \lambda_{j} $, respectively, in the input series $ \mathbf{y} $.<br /><br /> The MRA is typically used to derive approximations for the original series using its lower and upper frequency components. Since upper frequency components are associated with transient features and are captured by the wavelet coefficients, the detail series will in fact extract those features of the original series which are typically associated with "noise". Alternatively, since lower frequency components are associated with persistent features and are captured by the scaling coefficients, the smooth series will in fact extract those features of the original series which are typically associated with the "signal".<br /><br /> It's worth noting that because wavelet filtering can result in boundary coefficients, the detail and smooth series will likewise have observations affected by them. The affected observations are: \begin{align*} \text{DWT} &\quad t = \begin{cases} 1, \ldots, 2^{j}L_{j}^{\prime} &\quad \text{lower portion}\\ T - \rbrace{L_{j} + 1 - 2^{j}} + 1, \ldots, T &\quad \text{upper portion} \end{cases}\\ \\ \text{MODWT} &\quad t = \begin{cases} 1, \ldots, L_{j} &\quad \text{lower portion}\\ T - L_{j} + 1, \ldots, T &\quad \text{upper portion} \end{cases} \end{align*} <br /><br /> <h3 class="seccol", id="sec3">Practical Considerations</h3> The exposition above introduces the basic theory underlying wavelet analysis. Nevertheless, there are several practical (empirical) considerations which should be addressed. We focus here on three in particular: <ul> <li>Wavelet filter selection</li> <li>Handling boundary conditions</li> <li>Non-dyadic series length adjustments</li> </ul><br /> <h4 class="subseccol", id="sec3.1">Choice of Wavelet Filter</h4> The type of wavelet filter is typically chosen to mimic the data to which it is applied. Shorter filters do not approximate the ideal band-pass filter well, whereas longer ones do. If the data derive from piecewise constant functions, however, the Haar wavelet or other short filters may be more appropriate; if the underlying data are smooth, longer filters are preferable. In this regard, it's important to note that longer filters expose more coefficients to boundary condition effects than shorter ones. Accordingly, the rule of thumb is to use the shortest filter that gives reasonable results. Furthermore, since the MODWT is not orthogonal and its wavelet coefficients are correlated, the choice of wavelet filter is not as vital as in the case of the orthogonal DWT. Nevertheless, if alignment in time is important (i.e., zero-phase filters), the least asymmetric family of filters may be a good choice.<br /><br /> <h4 class="subseccol", id="sec3.2">Handling Boundary Conditions</h4> As previously mentioned, wavelet filters exhibit boundary effects due to the circular recycling of observations. Although this may be an appropriate assumption for some series, such as those naturally exhibiting cyclical effects, it is not appropriate in all circumstances. In this regard, another popular approach is to reflect the original series to generate a series of length $ 2T $. In other words, wavelet filtering proceeds on the observations $ y_{1}, \ldots, y_{T}, y_{T}, y_{T-1}, \ldots, y_{1} $.
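As a tiny illustration (our own notation in plain NumPy, not an EViews option name), the reflected series is simply the original followed by its reverse:<br /><br />
<pre style="overflow:auto">
import numpy as np

y = np.array([1.0, 2.0, 3.0, 4.0])             # y_1, ..., y_T with T = 4
y_reflected = np.concatenate([y, y[::-1]])     # y_1,...,y_T, y_T,...,y_1 (length 2T)
print(y_reflected)                             # [1. 2. 3. 4. 4. 3. 2. 1.]
</pre>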
In either case, any proper wavelet analysis ought, at the very least, to quantify how many wavelet coefficients are affected by boundary conditions.<br /><br /> <h4 class="subseccol", id="sec3.3">Adjusting Non-dyadic Length Time Series</h4> Recall that the DWT requires an input series of dyadic length. Naturally, this condition is rarely satisfied in practice. In this regard, there are two broad strategies: either shorten the input series to dyadic length at the expense of losing observations, or "pad" the input series with additional values to achieve dyadic length. In the context of the latter strategy, although the choice of padding values is ultimately arbitrary, there are three popular choices, none of which has proven superior: <ul> <li>Pad with zeros</li> <li>Pad with the series mean</li> <li>Pad with the series median</li> </ul><br /><br /> <h3 class="seccol", id="sec4">Wavelet Thresholding</h3> A key objective in any empirical work is to discriminate noise from useful information. In this regard, suppose that the observed time series is $ y_{t} = x_{t} + \epsilon_{t} $, where $ x_{t} $ is an unknown signal of interest obscured by the presence of unwanted noise $ \epsilon_{t} $. Traditionally, signal extraction was achieved using discrete Fourier transforms. Naturally, this assumes that any signal is an infinite superposition of sinusoidal functions; a strong assumption in empirical econometrics, where most data exhibit unit roots, jumps, kinks, and various other non-linearities.<br /><br /> The principle behind wavelet-based signal extraction, otherwise known as <b>wavelet shrinkage</b>, is to <i>shrink</i> any wavelet coefficients not exceeding some <b>threshold</b> to zero and then exploit the MRA to synthesize the signal of interest using the modified wavelet coefficients. In other words, only those wavelet coefficients associated with very pronounced spectra are retained, with the additional benefit of deriving a very sparse wavelet coefficient matrix.<br /><br /> To formalize the idea, let $ \mathbf{x} = \series{x}{t}{1}{T} $ and $ \mathbf{\epsilon} = \series{\epsilon}{t}{1}{T} $. Next, recall that the DWT can be represented as a $ T\times T $ orthonormal matrix $ \mathcal{W} $, yielding: $$ \mathbf{z} \equiv \mathcal{W}\mathbf{y} = \mathcal{W}\mathbf{x} + \mathcal{W}\mathbf{\epsilon} $$ where, for Gaussian white noise $ \epsilon_{t} $, $ \mathcal{W}\mathbf{\epsilon} \sim N(0, \sigma^{2}_{\epsilon}I_{T}) $. The idea now is to shrink any coefficients not surpassing a threshold to zero.<br /><br /> <h4 class="subseccol", id="sec4.1">Thresholding Rule</h4> While there are several thresholding rules, the two most popular by far are: <ul> <li><b>Hard Thresholding Rule</b> (the "kill/keep" strategy), formalized as: $$ \delta_{\eta}^{H}(x) = \begin{cases} x \quad \text{if } |x| > \eta\\ 0 \quad \text{otherwise} \end{cases} $$ </li> <li> <b>Soft Thresholding Rule</b>, formalized as: $$ \delta_{\eta}^{S}(x) = \sign(x)\max\cbrace{0 \,,\, |x| - \eta} $$ </li> </ul> where $ \eta $ is the threshold limit.<br /><br /> <h4 class="subseccol", id="sec4.2">Optimal Threshold</h4> The threshold value $ \eta $ is key to wavelet shrinkage. In particular, the optimal threshold is proportional to $ \sigma_{\epsilon} $, the standard deviation of the noise process $ \mathbf{\epsilon} $, which is unknown in practice. In this regard, several threshold selection strategies have emerged over the years; before turning to them, the short sketch below illustrates how the two thresholding rules act on a given set of coefficients.
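The following minimal sketch (plain NumPy rather than EViews code; the function names and the threshold value are ours, chosen purely for illustration) applies the two rules to a small vector of coefficients:<br /><br />
<pre style="overflow:auto">
import numpy as np

def hard_threshold(w, eta):
    # "kill/keep": zero out coefficients whose magnitude does not exceed eta
    return np.where(np.abs(w) > eta, w, 0.0)

def soft_threshold(w, eta):
    # shrink every surviving coefficient toward zero by eta
    return np.sign(w) * np.maximum(np.abs(w) - eta, 0.0)

w = np.array([-2.3, 0.4, 1.1, -0.2, 3.0])
eta = 1.0                                     # arbitrary illustrative threshold
print(hard_threshold(w, eta))                 # approximately [-2.3, 0, 1.1, 0, 3.0]
print(soft_threshold(w, eta))                 # approximately [-1.3, 0, 0.1, 0, 2.0]
</pre>
With the rules in hand, the main strategies for choosing $ \eta $ are the following.<br /><br />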
<ul> <li> <b>Universal Threshold</b>, proposed in Donoho and Johnstone (1994), and formalized as: $$ \eta^{\text{U}} = \hat{\sigma}_{\epsilon} \sqrt{2\log(T)} $$ where $ \hat{\sigma}_{\epsilon} $ is estimated using wavelet coefficients only at scale $ \lambda_{1} $, regardless of which scale is under consideration. When this threshold rule is coupled with soft thresholding, the combination is commonly referred to as <b>VisuShrink</b>.<br /><br /> </li> <li> <b>Adaptive Universal Threshold</b> is identical to the universal threshold above, but estimates $ \hat{\sigma}_{\epsilon} $ using those wavelet coefficients associated with the scale under consideration. In other words: $$ \eta^{\text{AU}} = \hat{\sigma}_{\epsilon, j} \sqrt{2\log(T)} $$ where $ \sigma_{\epsilon, j} $ is the standard deviation of the wavelet coefficients at scale $ \lambda_{j} $.<br /><br /> </li> <li> <b>Minimax Estimation</b>, proposed in Donoho and Johnstone (1994), is formalized as the solution to: $$ \inf_{\hat{\mathbf{x}}}\sup_{\mathbf{x}} R(\hat{\mathbf{x}}, \mathbf{x}) $$ where $ R(\cdot, \cdot) $ denotes the risk (expected loss) of the estimate $ \hat{\mathbf{x}} $. Unfortunately, a closed-form solution is not available, although tabulated values exist. Furthermore, when this threshold is coupled with soft thresholding, the combination is commonly referred to as <b>RiskShrink</b>.<br /><br /> </li> <li> <b>Stein's Unbiased Risk Estimate</b> (SURE), formalized as the minimization of an unbiased estimate of the risk: $$ \norm{\mathbf{\mu} - \hat{\mathbf{\mu}}}^{2} $$ where $ \mathbf{\mu} = (\mu_{1}, \ldots, \mu_{s})^{\top} $ and $ \mu_{k} $ is the mean of some variable of interest $ q_{k} \sim N(\mu_{k}, 1) $, for $ k = 1, \ldots, s $. In the framework of wavelet coefficients, $ q_{k} $ would represent the standardized wavelet coefficients at a given scale.<br /><br /> Furthermore, while the optimal threshold $ \eta $ based on this rule depends on the thresholding rule used, the solution may not be unique, and so the SURE threshold value is the minimum such $ \eta $. In the case of the soft thresholding rule, the solution was proposed in Donoho and Johnstone (1995). Alternatively, for the hard thresholding rule, the solution was proposed in Jansen (2010).<br /><br /> </li> <li> <b>False Discovery Rate</b> (FDR), proposed in Abramovich and Benjamini (1995), determines the threshold value through a multiple hypotheses testing problem. The procedure is summarized in the following algorithm:<br /><br /> <ol> <li> For each $ W_{t,j} \in \mathbf{W}_{j} $ consider the hypothesis $ H_{t,j}: W_{t,j} = 0 $ and its associated two-sided $ p- $value: $$ p_{t,j} = 2\rbrace{1 - \Phi\rbrace{\frac{|W_{t,j}|}{\sigma_{\epsilon, j}}}} $$ where, as before, $ \sigma_{\epsilon, j} $ is the standard deviation of the wavelet coefficients at scale $ \lambda_{j} $ and $ \Phi(\cdot) $ is the standard Gaussian CDF.<br /><br /> </li> <li> Sort the $ p_{t,j} $ in ascending order so that: $$ p_{(1)} \leq p_{(2)} \leq \ldots \leq p_{(m_{j})} $$ where $ m_{j} $ denotes the cardinality (number of elements) of $ \mathbf{W}_{j} $. For instance, when $ \mathbf{W}_{j} $ is derived from a DWT, $ m_{j} = T/2^{j} $.<br /><br /> </li> <li> Let $ \alpha $ define the significance level of the hypothesis tests and let $ i^{\star} $ denote the largest $ i \in \cbrace{1, \ldots, m_{j}} $ such that $ p_{(i)} \leq (\frac{i}{m_{j}})\alpha $.
For this $ i^{\star} $, the quantity: $$ \eta^{\text{FDR}}_{j} = \sigma_{\epsilon, j}\Phi^{-1}\rbrace{1 - \frac{p_{(i^{\star})}}{2}} $$ is the optimal threshold for wavelet coefficients at scale $ \lambda_{j} $.<br /> </li> </ol> </li> </ul> For further details, see Donoho, Johnstone, et al. (1998), Gençay, Selçuk, and Whitcher (2001), and Percival and Walden (2000).<br /><br /> <h4 class="subseccol", id="sec4.3">Wavelet Coefficient Variance</h4> Before summarizing the entire thresholding procedure, there remains the issue of how to estimate the variance of the wavelet coefficients, $ \sigma^{2}_{\epsilon} $. Since the observed data $ \mathbf{y} $ are assumed to be obscured by the noise process $ \mathbf{\epsilon} $, the usual sample variance estimator will exhibit extreme sensitivity to noisy observations, so robust estimators are typically used instead. Accordingly, let $ \mu_{j} $ and $ \zeta_{j} $ denote the mean and median, respectively, of the wavelet coefficients $ \mathbf{W}_{j} $ at scale $ \lambda_{j} $, and let $ m_{j} $ denote its cardinality (total number of coefficients at said scale). Then, several common estimators have been proposed in the literature: <ul> <li> <b>Mean Absolute Deviation</b>, formalized as: $$ \hat{\sigma}_{\epsilon, j} = \frac{1}{m_{j}}\xsum{i}{1}{m_{j}}{|W_{i, j} -\mu_{j}|} $$<br /><br /> </li> <li> <b>Median Absolute Deviation</b>, formalized as: $$ \hat{\sigma}_{\epsilon, j} = \med\rbrace{|W_{1, j} -\zeta_{j}|, \ldots, |W_{m_{j}, j} -\zeta_{j}|} $$<br /><br /> </li> <li> <b>Mean Median Absolute Deviation</b>, formalized as: $$ \hat{\sigma}_{\epsilon, j} = \frac{1}{m_{j}}\xsum{i}{1}{m_{j}}{|W_{i, j} -\zeta_{j}|} $$<br /><br /> </li> <li> <b>Median (Gaussian)</b>, formalized as: $$ \hat{\sigma}_{\epsilon, j} = \frac{\med\rbrace{|W_{1, j}|, \ldots, |W_{m_{j}, j}|}}{0.6745} $$<br /><br /> </li> </ul> <h4 class="subseccol", id="sec4.4">Thresholding Implementation</h4> The previous sections were devoted to describing thresholding rules and optimal threshold values. Here the focus is on summarizing how thresholding is implemented.<br /><br /> Effectively all wavelet thresholding procedures follow the algorithm below: <ol> <li> Compute a wavelet transformation of the original data up to some scale $ J^{\star} \leq M $. In other words, derive a (possibly partial) wavelet transform, obtaining the wavelet and scaling coefficients $ \mathbf{W}_{1}, \ldots, \mathbf{W}_{J^{\star}}, \mathbf{V}_{J^{\star}} $.<br /><br /> </li> <li> Select an optimal threshold $ \eta $ using one of the methods discussed earlier.<br /><br /> </li> <li> Threshold the coefficients at each scale $ \lambda_{j} $ for $ j \in \cbrace{1, \ldots, J^{\star}} $ using the threshold value selected in Step 2 and some thresholding rule (hard or soft). This will generate a set of modified (thresholded) wavelet coefficients $ \mathbf{W}^{\text{(T)}}_{1}, \ldots, \mathbf{W}^{\text{(T)}}_{J^{\star}} $. Observe that the scaling coefficients $ \mathbf{V}_{J^{\star}} $ are <b>not</b> thresholded.<br /><br /> </li> <li> Use the MRA with the thresholded coefficients to reconstruct an estimate of the signal as follows: \begin{align*} \hat{\mathbf{y}} &= \xsum{j}{1}{J^{\star}}{\mathcal{W}_{j}^{\top}\mathbf{W}^{\text{(T)}}_{j}} + \mathcal{V}_{J^{\star}}^{\top}\mathbf{V}_{J^{\star}}\\ &= \xsum{j}{1}{J^{\star}}{\mathcal{D}^{\text{(T)}}_{j}} + \mathcal{S}_{J^{\star}} \end{align*}<br /><br /> </li> </ol> <h3 class="seccol", id="sec5">Conclusion</h3> In this first entry of our series on wavelets, we provided a theoretical overview of the most important aspects of wavelet analysis.
In <a href='http://blog.eviews.com/2020/12/wavelet-analysis-part-ii-applications.html'>Part II</a>, we will see how to apply these concepts by using the new wavelet features released with EViews 12.<br /><br /><br /> <hr /> <h3 class="seccol", id="sec6">References</h3> <ol class="bib2xhtml"> <li id="abramovich-1995"> Abramovich F and Benjamini Y (1995), <i>"Thresholding of wavelet coefficients as multiple hypotheses testing procedure"</i>, In Wavelets and Statistics, pp. 5-14. Springer. </li> <li id="daubechies-1992"> Daubechies I (1992), <i>"Ten lectures on wavelets"</i>, CBMS-NSF Conference Series in Applied Mathematics. SIAM. </li> <li id="donoho-1994"> Donoho DL and Johnstone IM (1994), <i>"Ideal spatial adaptation by wavelet shrinkage"</i>, Biometrika. Vol. 81(3), pp. 425-455. Oxford University Press. </li> <li id="donoho-1995"> Donoho DL and Johnstone IM (1995), <i>"Adapting to unknown smoothness via wavelet shrinkage"</i>, Journal of the American Statistical Association. Vol. 90(432), pp. 1200-1224. Taylor & Francis Group. </li> <li id="donoho-1998"> Donoho DL, Johnstone IM, et al. (1998), <i>"Minimax estimation via wavelet shrinkage"</i>, The Annals of Statistics. Vol. 26(3), pp. 879-921. Institute of Mathematical Statistics. </li> <li id="gencay-2001"> Gençay R, Selçuk F and Whitcher BJ (2001), <i>"An introduction to wavelets and other filtering methods in finance and economics"</i>, Academic Press. </li> <li id="jansen-2010"> Jansen M (2010), <i>"Minimum risk methods in the estimation of unknown sparsity"</i>, Technical report. </li> <li id="mallat-1989"> Mallat S (1989), <i>"A theory for multiresolution signal decomposition: The wavelet representation"</i>, IEEE Transactions on Pattern Analysis and Machine Intelligence. Vol. 11(7), pp. 674-693. </li> <li id="percival-2000"> Percival D and Walden A (2000), <i>"Wavelet methods for time series analysis"</i>, Vol. 4, Cambridge University Press. </li> <li id="priestley-1981"> Priestley MB (1981), <i>"Spectral analysis and time series: probability and mathematical statistics"</i> </li> </ol></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com1tag:blogger.com,1999:blog-6883247404678549489.post-6385533183570778612020-07-16T09:48:00.003-07:002020-07-17T07:32:26.402-07:00Time Series Methods for Modelling the Spread of Epidemics<style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { border: 1px solid black; } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"> </script> <span style="font-family: "verdana" sans-serif"> <i>Author and guest post by Eren Ocakverdi</i><br /><br /> This blog piece intends to introduce two new add-ins (i.e.
<a href='http://www.eviews.com/Addins/seirmodel.aipz'>SEIRMODEL</a> and <a href='http://www.eviews.com/Addins/tsepigrowth.aipz'>TSEPIGROWTH</a>) to EViews users’ toolbox and help close the gap between epidemiological models and time series methods from a practitioner’s point of view. <a name='more'></a><br /><br /> <h3>Table of Contents</h3> <ol> <li><a href="#sec1">Introduction</a> <li><a href="#sec2">Susceptible-Exposed-Infected-Recovered (SEIR) model</a> <li><a href="#sec3">Observational Models</a> <li><a href="#sec4">Application to COVID-19 Data from Turkey</a> <li><a href="#sec5">Files</a> <li><a href="#sec6">References</a> </ol><br /> <h3 id="sec1">Introduction</h3> In mathematical epidemiology, the spread of infectious diseases is usually described through compartmental models rather than observational time series models, since the analytical derivation of their dynamics is quite straightforward. These are merely structural models that divide the population into several states and then define the equations that govern the transition behavior from one state to another. In other words, they are <i>state space</i> models.<br /><br /> <h3 id="sec2">Susceptible-Exposed-Infected-Recovered (SEIR) model</h3> I have written an add-in (<a href='http://www.eviews.com/Addins/seirmodel.aipz'>SEIRMODEL</a>) for interested EViews users who want to carry out their own analyses and gain basic insights into the systemic nature of an epidemic. The add-in implements a deterministic version of the SEIR model, which does not take into account vital dynamics like birth and death. Still, it offers a simplified framework for those who are not familiar with these concepts.<br /><br /> In order to run simulations, users need to provide the required inputs (e.g. population size, calibration parameters, initial conditions, etc.), details of which can be found in the documentation file that comes with the add-in:<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/tsepigrowth/seir_dialog.png"><img height="auto" src="http://www.eviews.com/blog/tsepigrowth/seir_dialog.png" title="SEIR Add-In Dialog" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1: SEIR Add-In Dialog</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 1 :::::::::: --> The default output is a chart showing the evolution of compartments/states during the spread of the epidemic. You can also save these series for further analysis.<br/><br/> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/tsepigrowth/seir_output.png"><img height="auto" src="http://www.eviews.com/blog/tsepigrowth/seir_output.png" title="SEIR Add-In: Output" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2: SEIR Add-In Output</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 2 :::::::::: --> <h3 id="sec3">Observational Models</h3> Structural modelling of epidemics becomes increasingly complex when the heterogeneity in the population, mobility issues, interactions, etc. are considered in the computations. Functions fitted to observed data for calibration purposes are mostly nonlinear, which can further complicate the estimation process. Harvey and Kattuman (2020) recently proposed useful observational time series methods, particularly for generalized logistic and Gompertz growth curves.
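For reference, in one common parameterisation (ours, chosen for illustration; the exact functional forms used in the paper and the add-ins may differ), the cumulative count $\mu(t)$ for these two curves can be written as $$ \mu(t) = \alpha \, e^{-\beta e^{-\gamma t}} \quad \text{(Gompertz)}, \qquad \mu(t) = \frac{\alpha}{\left(1 + \nu e^{-\gamma (t - \tau)}\right)^{1/\nu}} \quad \text{(generalized logistic)}, $$ where $\alpha$ is the saturation level and $\gamma$ governs the growth rate; the generalized logistic curve reduces to the ordinary logistic curve when $\nu = 1$.<br /><br />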
I have written an add-in (<a href='http://www.eviews.com/Addins/tsepigrowth.aipz'>TSEPIGROWTH</a>) that implements the methods outlined in the paper.<br/><br/> Suppose we wanted to fit these nonlinear curves to the number of infected individuals from the simulation of our earlier SEIR model:<br /><br /> <!-- :::::::::: FIGURES 3a and 3b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 3a :::::::::: --> <center> <a href="http://www.eviews.com/blog/tsepigrowth/seir_logistic.png"><img height="auto" src="http://www.eviews.com/blog/tsepigrowth/seir_logistic.png" title="SEIR: Generalized Logistic Fit" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 3b :::::::::: --> <center> <a href="http://www.eviews.com/blog/tsepigrowth/seir_gompertz.png"><img height="auto" src="http://www.eviews.com/blog/tsepigrowth/seir_gompertz.png" title="SEIR: Gompertz Growth Curve Fit" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3a: SEIR: Generalized Logistic Fit</small> </center> </td> <td class="nb"> <center> <small>Figure 3b: SEIR: Gompertz Growth Curve Fit</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 3a and 3b :::::::::: --> Above, c(4) denotes the growth rate parameter. At this point I would also suggest that EViews users try the <a href="http://www.eviews.com/Addins/GBASS.aipz">GBASS</a> add-in, which incorporates the generalized BASS model developed for modelling how new products (or new viruses for that matter!) get adopted into a population.<br /><br /> If we wanted to take the other avenue offered by Harvey and Kattuman (2020) and estimate these parameters via observational methods, then we could simply run the add-in:<br /><br /> <!-- :::::::::: FIGURE 4 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/tsepigrowth/tsepigrowth_dialog.png"><img height="auto" src="http://www.eviews.com/blog/tsepigrowth/tsepigrowth_dialog.png" title="TSEPIGROWTH Add-In: Dialog" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 4: TSEPIGROWTH Add-In Dialog</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 4 :::::::::: --> Output from the state space specifications of these models is as follows:<br /><br /> <!-- :::::::::: FIGURES 5a and 5b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 5a :::::::::: --> <center> <a href="http://www.eviews.com/blog/tsepigrowth/tsepigrowth_logistic_ss.png"><img height="auto" src="http://www.eviews.com/blog/tsepigrowth/tsepigrowth_logistic_ss.png" title="TSEPIGROWTH: Generalized Logistic SS Model" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 5b :::::::::: --> <center> <a href="http://www.eviews.com/blog/tsepigrowth/tsepigrowth_gompertz_ss.png"><img height="auto" src="http://www.eviews.com/blog/tsepigrowth/tsepigrowth_gompertz_ss.png" title="TSEPIGROWTH: Gompertz Growth Curve SS Model" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5a: TSEPIGROWTH: Generalized Logistic SS Model</small> </center> </td> <td class="nb"> <center> <small>Figure 5b: TSEPIGROWTH: Gompertz Growth Curve SS Model</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 5a and 5b :::::::::: --> Here, the final value of the state variable <i>CHANGE</i> corresponds to the growth rate parameter and is reasonably close to that of the fitted nonlinear curves.<br/><br/> <h3 id="sec4">Application to COVID-19
Data From Turkey</h3> Examples above may be important or useful from a pedagogical point of view, but we need to try these models on actual data to gain more insight from a practical perspective. Naturally, COVID-19 data would be the most recent and most appropriate place to start. Users can visit the <a href='http://blog.eviews.com/2020/03/mapping-covid-19.html'>previous blog post</a> to learn how to fetch COVID-19 data from various sources. Here, I’ll use another data source provided by the WHO.<br /><br /> First, we fit a Gompertz curve to the level and make forecasts until the end of year. Next, we do the same exercise with the observational counterparts of the Gompertz model that focus on estimation of the growth rate.<br /><br /> The chart below visually compares the fitted values of growth:<br /><br /> <!-- :::::::::: FIGURE 6 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/tsepigrowth/grfit.png"><img height="auto" src="http://www.eviews.com/blog/tsepigrowth/grfit.png" title="Gompertz Fit Curves" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 6: Gompertz Fit Curves</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 6 :::::::::: --> The next plot displays the forecasted values for the level: <!-- :::::::::: FIGURE 7 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/tsepigrowth/grfcast.png"><img height="auto" src="http://www.eviews.com/blog/tsepigrowth/grfcast.png" title="Gompertz Forecast Curves" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 7: Gompertz Forecast Curves</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 7 :::::::::: --> These forecasts indicate different saturation levels, of which the nonlinear curve is the lowest. This is mainly because the inflection point of the fitted nonlinear curve implies levelling off at an earlier date. The first observational model has a deterministic trend, but performs better since it focuses on the growth rate. There is an obvious change in trend at the beginning of June as Turkey then announced the first phase of COVID-19 restriction easing and marked the start of the normalization process. Observational models allow us to model this change explicitly as a slope intervention: <!-- :::::::::: FIGURE 8 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/tsepigrowth/policyss.png"><img height="auto" src="http://www.eviews.com/blog/tsepigrowth/policyss.png" title="Policy Intervention SS Model" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 8: Policy Intervention SS Model</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 8 :::::::::: --> The coefficient <i>C(3)</i> verifies that the growth rate has risen significantly as of June. Dynamic versions of the observational model of Gompertz fits a flexible trend to data so it adapts to changes in growth rates without any need for explicit modelling of the intervention. It also allows the analysis of the impact of policy/intervention from a counterfactual perspective. The plot below compares the out-of-sample forecasts of the dynamic model before and after the normalization period. The shift in the forecasted level of total cases is obvious! 
<!-- :::::::::: FIGURE 9 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/tsepigrowth/policygrfcast.png"><img height="auto" src="http://www.eviews.com/blog/tsepigrowth/policygrfcast.png" title="Policy Intervention Out of Sample Forecast" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 9: Policy Intervention Out of Sample Forecast</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 9 :::::::::: --> <h3 id="sec5">Files</h3> <ul> <li><a href="http://www.eviews.com/blog/tsepigrowth/tsepigrowth_blog.prg">tsepigrowth_blog.prg</a> </ul> <br /><br /> <hr /> <h3 id="sec6">References</h3> <ol class="bib2xhtml"> <li><a name="harvey-2020"></a>Harvey, A. C. and Kattuman, P.: Time Series Models Based on Growth Curves with Applications to Forecasting Coronavirus <cite>Covid Economics: Vetted and Real-Time Papers</cite>, 24(1) 126–157, 2020. </li> </ol></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com3tag:blogger.com,1999:blog-6883247404678549489.post-33436183732368941762020-04-01T06:44:00.000-07:002020-04-01T06:44:12.331-07:00Mapping COVID-19: Follow-up<style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { border: 1px solid black; } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"> </script> <span style="font-family: "verdana" sans-serif"> As a follow up to our <a href="http://blog.eviews.com/2020/03/mapping-covid-19.html">previous blog entry</a> describing how to import Covid-19 data into EViews and produce some maps/graphs of the data, this post will produce a couple more graphs similar to ones we've seen become popular across social media in recent days. <a name='more'></a><br /><br /> <h3>Table of Contents</h3> <ol> <li><a href="#sec1">Deaths Since First Death</a> <li><a href="#sec2">One Week Difference</a> </ol><br /> <h3 id="sec1">Deaths Since First Death</h3> The first is a graph showing the 3 day moving average of the number of deaths per day since the first death was recorded in a country, for countries with a current number of deaths greater than 160:<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/3dma.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/3dma.png" title="3-Day moving average" width="480" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1: 3-Day moving average</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 1 :::::::::: --> The graph shows that for most countries the growth rate of deaths (approximated by using log-scaling) is increasing, but at a slower rate. 
The code to produce this graph, including importing the death data from Johns Hopkins is:<br /><br /> <pre style="overflow:auto"><br /> <font color="green">'import the death data from Johns Hopkins</font><br /> %url = "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv"<br /> <br /> <font color="green">'load up the url as a new page</font><br /> pageload(page=temp) {%url}<br /> <br /> <font color="green">'stack the page into a 2d panel</font><br /> pagestack(page=stack) _? @ *? * <br /> <br /> <font color="green">'do some renaming and make the date series</font><br /> rename country_region country <br /> rename province_state province<br /> rename _ deaths<br /> series date = @dateval(var01, "MM_DD_YYYY")<br /> <br /> <font color="green">'structure the page </font><br /> pagestruct province country @date(date)<br /> <br /> <font color="green">'delete the original page</font><br /> pagedelete temp<br /> <br /> <font color="green">'create the panel page</font><br /> pagecreate(id, page=panel) country @date @srcpage stack<br /> <br /> <font color="green">'copy the deaths series to the panel page</font><br /> copy(c=sum) stack\deaths * @src @date country @dest @date country<br /> pagedelete stack<br /> <br /> <font color="green">'contract the page to only include countries with greater than 160 deaths</font><br /> pagecontract if @maxsby(deaths,country)>160<br /> <br /> <font color="green">'create a series containing the number of days since the first death was recorded in each country. This series is equal to 0 if the number of deaths on a date is equal to the minimum number of deaths for that country (nearly always 0, but for China, the data starts after the first recorded death), and then counts up by one for dates after the minimum.</font><br /> series days = @recode(deaths=@minsby(deaths,country), 0, days(-1)+1)<br /> <br /> <font color="green">'contract the page so that days before the second recorded death in each country are removed</font><br /> pagecontract if days>0<br /> <br /> <font color="green">'restructure the page to be based on this day count rather than actual dates</font><br /> pagestruct(freq=u) @date(days) country<br /> <br /> <font color="green">'set sample to be first 45 days</font><br /> smpl 1 45<br /> <br /> <font color="green">'make a graph of the 3 day moving average of deaths</font><br /> freeze(d_graph) @movav(log(deaths),3).line(m, panel=c)<br /> d_graph.addtext(t, just(c)) Deaths Since First Death\n(3 day moving average, log scale)<br /> d_graph.addtext(br) Days<br /> d_graph.addtext(l) log(deaths)<br /> d_graph.legend columns(5)<br /> d_graph.legend position(-0.6,3.72)<br /> show d_graph<br /> </pre> <h3 id="sec2">One Week Difference</h3> The second graph is an interesting approach plotting the one-week difference in the number of new confirmed cases of COVID-19 against the total number of confirmed cases for each country, with both shown using log-scales. 
We have only included countries with more than 140 deaths, and have highlighted just three countries – China, South Korea and the US.<br /><br /> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/weekdiff.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/weekdiff.png" title="One week difference" width="480" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2: One week difference</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 2 :::::::::: --> The code to generate this graph is:<br /><br /> <pre style="overflow:auto"><br /> <font color="green">'names of the three topics/files</font><br /> %topics = "confirmed deaths recovered"<br /><br /> <font color="green">'loop through the topics</font><br /> for %topic {%topics}<br /> <br /> <font color="green">'build the url by taking the base url and then adding the topic in the middle</font><br /> %url = "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_" + %topic + "_global.csv"<br /> <br /> <font color="green">'load up the url as a new page</font><br /> pageload(page=temp) {%url}<br /> <br /> <font color="green">'stack the page into a 2d panel</font><br /> pagestack(page=stack_{%topic}) _? @ *? *<br /> <br /> <font color="green">'do some renaming and make the date series</font><br /> rename country_region country <br /> rename province_state province<br /> rename _ {%topic}<br /> series date = @dateval(var01, "MM_DD_YYYY")<br /> <br /> <font color="green">'structure the page</font><br /> pagestruct province country @date(date)<br /> <br /> <font color="green">'delete the original page</font><br /> pagedelete temp<br /> next<br /> <br /> <font color="green">'create the panel page</font><br /> pagecreate(id, page=panel) country @date @srcpage stack_{%topic}<br /> <br /> <font color="green">'loop through the topics copying each from the 2D panel</font><br /> for %topic {%topics}<br /> copy(c=sum) stack_{%topic}\{%topic} * @src @date country @dest @date country<br /> pagedelete stack_{%topic}<br /> next<br /> <br /> <font color="green">'contract the page to only include countries with more than 140 deaths</font><br /> pagecontract if @maxsby(deaths, country)>140<br /> <br /> <font color="green">'make a group, called DATA, containing confirmed cases and the one week difference in confirmed cases</font><br /> group data confirmed confirmed-confirmed(-7)<br /> <br /> <font color="green">'set the sample to remove periods with fewer than 50 cases</font><br /> smpl if confirmed > 50<br /> <br /> <font color="green">'produce a panel plot of confirmed against 7 day difference in confirmed</font><br /> freeze(c_graph) data.xyline(panel=c)<br /> <br /> <font color="green">' Add titles</font><br /> c_graph.addtext(t) "COVID-19: New vs. 
Total Cases\n(Countries with >140 deaths)"<br /> c_graph.addtext(bc, just(c)) "Total Confirmed Cases\n(log scale)"<br /> c_graph.addtext(l, just(c))"New Confirmed Cases (in the past week)\n(log scale)"<br /> c_graph.setelem(1) legend("")<br /> <br /> <font color="green">' Adjust axis to use logs</font><br /> c_graph.axis(b) log<br /> c_graph.axis(l) log<br /> <br /> <font color="green">' Adjust lines - remove lines after this if you want to show all countries</font><br /> c_graph.legend -display<br /> for !i = 1 to @rows(@uniquevals(country))<br /> c_graph.setelem(!i) linewidth(.75) linecolor(@rgb(192,192,192))<br /> next<br /><br /> c_graph.setelem(8) linecolor(@rgb(128,64,0))<br /> c_graph.setelem(3) linecolor(@rgb(0,64,128))<br /> c_graph.setelem(15) linecolor(@rgb(0,128,0))<br /> <br /> <font color="green">'add some text</font><br /> c_graph.addtext(3.29, 1.92, font(Calibri,10)) "S. Korea"<br /> c_graph.addtext(4.87, 2.35, font(Calibri,10)) "China"<br /> c_graph.addtext(5.31, 0.23, font(Calibri,10)) "United States"<br /> <br /> show c_graph<br /> </pre></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com8tag:blogger.com,1999:blog-6883247404678549489.post-11407133675307775742020-03-30T17:28:00.001-07:002020-04-01T07:55:06.046-07:00Mapping COVID-19<style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { border: 1px solid black; } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"> </script> <span style="font-family: "verdana" sans-serif"> With the world currently experiencing the Covid-19 crisis, many of our users are working remotely (aside: for details on how to use EViews at home, visit our <a href="http://www.eviews.com/covid">Covid licensing page</a>) anxious to follow data on how the virus is spreading across parts of the world. There are many sources of information on Covid-19, and we thought we’d demonstrate how to fetch some of these sources directly into EViews, and then display some graphics of the data. (Please visit our <a href="http://blog.eviews.com/2020/04/mapping-covid-19-follow-up.html">follow up post</a> for a few more graph examples). <a name='more'></a><br /><br /> <h3>Table of Contents</h3> <ol> <li><a href="#sec1">Johns Hopkins Data</a> <li><a href="#sec2">European Centre for Disease Prevention and Control Data</a> <li><a href="#sec3">New York Times US County Data</a> <li><a href="#sec4">Sneak Peaks</a> </ol><br /> <h3 id="sec1">Johns Hopkins Data</h3> To begin we'll retrieve data from the Covid-19 Time Series collection from <a href="https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data/csse_covid_19_time_series">Johns Hopkins Whiting School of Engineering Center for Systems Science and Engineering</a>. 
These data are organized into three csv files, one containing confirmed cases, on containing deaths, and one recoveries at both country and state/province levels. Each file is organized such that the first column contains state/province name (where applicable), the second column the country name, the third and fourth contain average latitude and longitude, and then the remaining columns containing daily values.<br /><br /> There are a number of different approaches that could be used to import these data into an EViews workfile. We’ll demonstrate an approach that will stack the data into a single panel workfile. We’ll start with importing the confirmed cases data. EViews is able to directly open CSV files over the web using the <b>File->Open->Foreign Data as Workfile</b> menu item:<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/jhopenpath.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/jhopenpath.png" title="JH open path" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1: JH open path</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 1 :::::::::: --> Which results in the following workfile:<br /><br /> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/jhwf.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/jhwf.png" title="JH workfile" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2: JH workfile</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 2 :::::::::: --> Each day of data has been imported into its own series, with the name of the series being the date. There are also series containing the country/region name and the province/state name, as well as latitude and longitude.<br /><br /> To create a panel, we’ll want to stack these date series into a single series, which we can do simply with the <b>Proc->Reshape Current Page->Stack in New Page…</b><br /><br /> <!-- :::::::::: FIGURE 3 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/jhstackdialog.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/jhstackdialog.png" title="JH stack data dialog" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3: JH stack data dialog</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 3 :::::::::: --> Since all of the series we wish to stack have a similar naming structure – they all start with an “_” we can instruct EViews to stack using “_?” as the identifier, where ? is a wildcard. This results in the following stacked workfile page:<br /><br /> <!-- :::::::::: FIGURE 4 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/jhstackwf.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/jhstackwf.png" title="JH stack data workfile" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 4: JH stack data workfile</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 4 :::::::::: --> Which is close to what we want, we simply need to tidy up some of the variable names, and instruct EViews to structure the page as a true panel. 
The date information has been imported into the alpha series VAR01, which we can convert into a true date series with:<br /><br /> <pre style="overflow:auto"><br /> series date = @dateval(var01, "MM_DD_YYYY")<br /> </pre> The actual cases data is stored in the series currently named "_", which we can rename to something more meaningful with:<br /><br /> <pre style="overflow:auto"><br /> rename _ cases<br /> </pre> And then finally we can structure the page as a panel by clicking on <b>Proc->Structure/Resize</b> current page, selecting Dated Panel as the structure type and filling in the date and filling in the cross-section and date information:<br /><br /> <!-- :::::::::: FIGURE 5 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/jhstructuredialog.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/jhstructuredialog.png" title="JH workfile restructure" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5: JH workfile restructure</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 5 :::::::::: --> When asked if we wish to remove blank values, we select no. We now have a 2-dimensional panel, with two sets of cross-sectional identifiers – one for province/state and the other for country:<br /><br /> <!-- :::::::::: FIGURE 6 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/jh3dpanel.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/jh3dpanel.png" title="JH 2D Panel" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 6: JH 2D Panel</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 6 :::::::::: --> If we want to sum up the state level data to create a traditional panel with just country and time, we can do so by creating a new panel page based upon the indices of this page. Click on the <b>New Page</b> tab at the bottom of the workfile and select <b>Specify by Identifier Series</b>. In the resulting dialog we enter the country series as the cross-section identifier we wish to keep:<br /><br /> <!-- :::::::::: FIGURE 6 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/jhpagebyid.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/jhpagebyid.png" title="JH page by ID" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 6: JH page by ID</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 6 :::::::::: --> Which results in a panel. 
We can then copy the cases series from our 2D panel page to the new panel page with standard copy and paste, but ensuring to change the Contraction method to Sum in the Paste Special dialog:<br /><br /> <!-- :::::::::: FIGURE 7 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/jhpastedialog.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/jhpastedialog.png" title="JH paste dialog" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 7: JH paste dialog</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 7 :::::::::: --> <!-- :::::::::: FIGURE 8 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/jhpanelwf.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/jhpanelwf.png" title="JH panel workfile" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 8: JH panel workfile</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 8 :::::::::: --> With the data in a standard panel workfile, all of the standard EViews tools are now available. We can view a graph of the cases by country by opening the cases series, clicking on <b>View->Graph</b>, and then selecting <b>Individual cross sections</b> as the <b>Panel option</b>.<br /><br /> <!-- :::::::::: FIGURE 9 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/jhallcxgraph.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/jhallcxgraph.png" title="JH graph of all cross-sections" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 9: JH graph of all cross-sections</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 9 :::::::::: --> This graph may be a little unwieldy, so we can reduce the number of cross-sections down to, say, only countries that have, thus far, experienced more than 10,000 cases by using the smpl command:<br /><br /> <pre style="overflow:auto"><br /> smpl if @maxsby(cases, country_region)>10000<br /> </pre> <!-- :::::::::: FIGURE 9 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/jhmaxsbygraph.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/jhmaxsbygraph.png" title="JH cross-sections with more than 10000 cases" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 9: JH cross-sections with more than 10000 cases</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 9 :::::::::: --> Of course, all of this could have been done in an EViews program, and it could be automated to combine all three data files, ending up with a panel containing cases, deaths and recoveries. 
The following EViews code produces such a panel:<br /><br /> <pre style="overflow:auto"><br /> <font color="green">'close all existing workfiles</font><br /> close @wf<br /> <br /> <font color="green">'names of the three topics/files</font><br /> %topics = "confirmed deaths recovered"<br /> <br /> <font color="green">'loop through the topics</font><br /> for %topic {%topics}<br /> <font color="green">'build the url by taking the base url and then adding the topic in the middle</font><br /> %url = "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_" + %topic + "_global.csv"<br /> <br /> <font color="green">'load up the url as a new page</font><br /> pageload(page=temp) {%url}<br /> <br /> <font color="green">'stack the page into a 3d panel</font><br /> pagestack(page=stack_{%topic}) _? @ *? *<br /> <br /> <font color="green">'do some renaming and make the date series</font><br /> rename country_region country <br /> rename province_state province<br /> rename _ {%topic}<br /><br /> series date = @dateval(var01, "MM_DD_YYYY")<br /><br /> <font color="green">'structure the page</font><br /> pagestruct province country @date(date)<br /> <br /> <font color="green">'delete the original page</font><br /> pagedelete temp<br /><br /> <font color="green">'create the 2D panel page</font><br /> pagecreate(id, page=panel) country @date @srcpage stack_{%topic}<br /> next<br /> <br /> <font color="green">'loop through the topics copying each from the 3D panel into the 2D panel</font><br /> for %topic {%topics}<br /> copy(c=sum) stack_{%topic}\{%topic} * @src @date country @dest @date country<br /> pagedelete stack_{%topic}<br /> next<br /> </pre> <h3 id="sec2">European Centre for Disease Prevention and Control Data</h3> The second repository we'll use is data provided by the <a href="https://www.ecdc.europa.eu/en/publications-data/download-todays-data-geographic-distribution-covid-19-cases-worldwide">ECDC's Covid-19 Data site</a>. They provide an extremely easy to use data for each country, along with population data. 
Importing these data into EViews is trivial – you can open the XLSX file directly using the <b>File->Open->Foreign Data as Workfile</b> dialog and entering the URL to the XLSX file in the <b>File name</b> box:<br /><br /> <!-- :::::::::: FIGURE 10 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/ecdcopenpath.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/ecdcopenpath.png" title="ECDC open path" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 10: ECDC open path</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 10 :::::::::: --> The resulting workfile will look like this:<br /><br /> <!-- :::::::::: FIGURE 11 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/ecdcwf.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/ecdcwf.png" title="ECDC workfile" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 11: ECDC workfile</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 11 :::::::::: --> All we need to do is structure it as a panel, which we can do by clicking on <b>Proc->Structure/Resize Current Page</b> and then entering the cross-section and date identifiers (we also choose to keep an unbalanced panel by unchecking the <b>Balance between starts & ends</b> box).<br /><br /> <!-- :::::::::: FIGURE 12 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/ecdcstructuredialog.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/ecdcstructuredialog.png" title="ECDC structure WF dialog" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 12: ECDC structure WF dialog</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 12 :::::::::: --> The result is an EViews panel workfile:<br /><br /> <!-- :::::::::: FIGURE 13 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/ecdcseries.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/ecdcseries.png" title="ECDC series" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 13: ECDC series</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 13 :::::::::: --> The data provided by the ECDC contain the number of new cases and deaths each day. Most presentations of Covid-19 data, however, report the total number of cases and deaths per country. We can create these totals with the <b>@cumsum</b> function, which produces the cumulative sum, resetting to zero at the start of each cross-section.<br /><br /> <pre style="overflow:auto"><br /> series ccases = @cumsum(cases)<br /> series cdeaths = @cumsum(deaths)<br /> </pre> With this panel we can perform standard panel data analysis, or produce graphs (see the Johns Hopkins examples above). However, since the ECDC have included standard <a href="https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes">ISO country codes</a> for the countries, we can also tie the data to a geomap.<br /><br /> We found a simple <a href="http://thematicmapping.org/downloads/world_borders.php">shapefile</a> of the world <a href="http://thematicmapping.org/downloads/world_borders.php">online</a>, and downloaded it to our computer. 
In EViews we then click on <b>Object->New Object->GeoMap</b> to create a new geomap, and then drag the <b>.prj</b> file we downloaded onto the geomap.<br /><br /> In the properties box that appears, we tie the countries defined in the shapefile to the identifiers in the workfile. Since the shapefile uses ISO codes, and we have those in the <b>countryterritorycode</b> series, we can use those to map the workfile to the shapefile:<br /><br /> <!-- :::::::::: FIGURE 14 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/geomapprops.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/geomapprops.png" title="Geomap properties" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 14: Geomap properties</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 14 :::::::::: --> This results in the following global geomap:<br /><br /> <!-- :::::::::: FIGURE 15 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/geomapglobal.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/geomapglobal.png" title="Global geomap" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 15: Global geomap</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 15 :::::::::: --> We can use the <b>Label:</b> dropdown to remove the country labels to give a clearer view of the map (note that this feature is a recent addition; you may need to update your copy of EViews to see the <b>None</b> option).<br /><br /> To add some color information to the map we click on <b>Properties</b> and then the <b>Color</b> tab. We'll add two custom color settings – a gradient fill to show differences in the number of cases, and a single solid color for countries with a large number of cases:<br /><br /> <!-- :::::::::: FIGURES 16a and 16b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 16a :::::::::: --> <center> <a href="http://www.eviews.com/blog/covid19/images/ecdcgeomaprange.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/ecdcgeomaprange.png" title="ECDC geomap color range" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 16b :::::::::: --> <center> <a href="http://www.eviews.com/blog/covid19/images/ecdcgeomapthresh.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/ecdcgeomapthresh.png" title="ECDC geomap color threshold" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 16a: ECDC geomap color range</small> </center> </td> <td class="nb"> <center> <small>Figure 16b: ECDC geomap color threshold</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 16a and 16b :::::::::: --> We then enter <b>ccases</b> as the coloring series. 
This results in a map:<br /><br /> <!-- :::::::::: FIGURE 17 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/ecdcgeomap.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/ecdcgeomap.png" title="ECDC geomap" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 17: ECDC geomap</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 17 :::::::::: --> Again, this could all be done programmatically with the following program (note the ranges for coloring will need to be changed as the virus becomes more wide spread):<br /><br /> <pre style="overflow:auto"><br /> <font color="green">'download data</font><br /> wfopen https://www.ecdc.europa.eu/sites/default/files/documents/COVID-19-geographic-disbtribution-worldwide.xlsx<br /> rename countryterritorycode iso3<br /> pagecontract if iso3<>""<br /> pagestruct(bal=m) iso3 @date(daterep)<br /> <br /> <font color="green">'make cumulative data</font><br /> series ccases = @cumsum(cases)<br /> series cdeaths = @cumsum(deaths)<br /> <br /> <font color="green">'make geomap for cases</font><br /> geomap cases_map<br /> cases_map.load ".\World Map\TM_WORLD_BORDERS_SIMPL-0.3.prj"<br /> cases_map.link iso3 iso3<br /> cases_map.options -legend<br /> cases_map.setlabel none<br /> cases_map.setfillcolor(t=custom) mapser(ccases) naclr(@RGB(255,255,255)) range(lim(0,12000,cboth), rangeclr(@grad(@RGB(255,255,255),@RGB(0,0,255))), outclr(@trans,@trans), name("Range")) thresh(12000, below(@trans), above(@RGB(0,0,255)), name("Threshold"))<br /> <br /> <font color="green">'make geomaps for deaths</font><br /> geomap deaths_map<br /> deaths_map.load ".\World Map\TM_WORLD_BORDERS_SIMPL-0.3.prj"<br /> deaths_map.link iso3 iso3<br /> deaths_map.options -legend<br /> deaths_map.setlabel none<br /> deaths_map.setfillcolor(t=custom) mapser(cdeaths) naclr(@RGB(255,255,255)) range(lim(1,500,cboth), rangeclr(@grad(@RGB(255,128,128),@RGB(128,64,64))), outclr(@trans,@trans), name("Range")) thresh(500,cleft,below(@trans),above(@RGB(128,0,0)),name("Threshold")) <br /> </pre> <h3 id="sec3">New York Times US County Data</h3> The final data repository we will look at is the <a href="https://github.com/nytimes/covid-19-data/blob/master/us-counties.csv">New York Times</a> data for the United States at county level. These data are also trivial to import into EViews, you can again just enter the URL for the CSV file to open it. Rather than walking through the UI steps, we'll simply post the two lines of code required to import and structure as a panel:<br /><br /> <pre style="overflow:auto"><br /> <font color="green">'retrieve data from NY Times github</font><br /> wfopen(page=covid) https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-counties.csv<br /> <br /> <font color="green">'structure as a panel based on date and FIPS ID</font><br /> pagestruct(dropna) fips @date(date)<br /> </pre> Note that the New York Times have conveniently provided the <a href="https://en.wikipedia.org/wiki/FIPS_county_code">FIPS code</a> for each county, which means we can also produce some geomaps. 
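<br /><br /> Before turning to the map, note that, unlike the ECDC file, the New York Times series are running totals. If daily counts are needed, they can be recovered by differencing within the panel (once the page is structured, lags do not cross county boundaries); a small sketch:<br /><br /> <pre style="overflow:auto"><br /> <font color="green">'daily changes from the cumulative NYT counts</font><br /> series new_cases = d(cases)<br /> series new_deaths = d(deaths)<br /> </pre>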
We've downloaded a US county map from the <a href="https://dataverse.tdl.org/dataset.xhtml?persistentId=doi:10.18738/T8/CPTP8C">Texas Data Repository</a>, and then linked the <b>FIPS</b> series in the workfile with the <b>FIPS_BEA</b> attribute of the map:<br /><br /> <!-- :::::::::: FIGURE 18 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/geomapfipsprops.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/geomapfipsprops.png" title="Geomap FIPS properties" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 18: Geomap FIPS properties</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 18 :::::::::: --> The full code to produce such a map is:<br /><br /> <pre style="overflow:auto"><br /> <font color="green">'retrieve data from NY Times github</font><br /> wfopen(page=covid) https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-counties.csv<br /> <br /> <font color="green">'structure as a panel based on date and FIPS ID</font><br /> pagestruct(dropna) fips @date(date)<br /> <br /> <font color="green">'set displaynames for use in geomaps</font><br /> cases.displayname Confirmed Cases<br /> deaths.displayname Deaths<br /> <br /> <font color="green">'make geomap</font><br /> geomap cases_map<br /> cases_map.load ".\Us County Map\CountiesBEA.prj"<br /> cases_map.link fips_bea fips<br /> cases_map.options -legend<br /> cases_map.setlabel none<br /> cases_map.setfillcolor(t=custom) mapser(cases) naclr(@RGB(255,255,255)) range(lim(1,200,cboth), rangeclr(@grad(@RGB(204,204,255),@RGB(0,0,255))), outclr(@trans,@trans), name("Range")) thresh(200, below(@trans), above(@RGB(0,0,255)), name("Threshold")) <br /> </pre> <h3 id="sec4">Sneak Peeks</h3> One of the features our engineering team have been working on for the next major release of EViews is the ability to produce animated graphs and geomaps (the keen-eyed amongst you may have noticed the <b>Animate</b> button on a few of our screenshots). 
Whilst this feature is a little far away from release, the Covid-19 data does give an interesting set of testing procedures, and we thought we'd share some of the results.<br /><br /> <!-- :::::::::: ANIMATION 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/animations/cases_map.gif"><img height="auto" src="http://www.eviews.com/blog/covid19/animations/cases_map.gif" title="US counties cases evolution (wait for it...)" width="680" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Animation 1: US counties cases evolution</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: ANIMATION 1 :::::::::: --> <!-- :::::::::: ANIMATION 2 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/animations/cases_map.gif"> <video width="680" controls> <source src= "http://www.eviews.com/blog/covid19/animations/graph01.mp4" type="video/mp4" title="Confirmed cases"> </video> </a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Animation 2: Confirmed cases</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: ANIMATION 1 :::::::::: --></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com9tag:blogger.com,1999:blog-6883247404678549489.post-86065470278518103992020-02-25T07:58:00.001-08:002020-03-04T09:27:00.165-08:00Beveridge-Nelson Filter<style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { border: 1px solid black; } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"> </script> <span style="font-family: "verdana" sans-serif"> <i>Authors and guest post by Benjamin Wong (Monash University) and Davaajargal Luvsannyam (The Bank of Mongolia)</i><br /><br /> Analysis of macroeconomic time series often involves decomposing a series into a trend and cycle components. In this blog post, we describe the Kamber, Morley, and Wong (2018) Beveridge-Nelson (BN) filter and the associated EViews add-in. <a name='more'></a><br /><br /> <h3>Table of Contents</h3> <ol> <li><a href="#sec1">Introduction</a> <li><a href="#sec2">The BN Decomposition</a> <li><a href="#sec3">The BN Filter</a> <li><a href="#sec4">Why Use the BN Filter</a> <li><a href="#sec5">BN Filter Implementation</a> <li><a href="#sec6">Conclusion</a> <li><a href="#sec7">Files</a> <li><a href="#sec8">References</a> </ol><br /> <h3 id="sec1">Introduction</h3> In this blog entry, we will discuss the Beveridge-Nelson (BN) filter - the Kamber, Morley, and Wong (2018) modification of the well-known Beveridge and Nelson (1981) decomposition. 
In particular, we will discuss the application of both procedures to estimating the <i>output gap</i>, which the US Bureau of Economic Analysis (BEA) and the Congressional Budget Office (CBO) define as the proportional deviation of the real actual <i>gross domestic product</i> (GDP) from the real potential GDP.<br /><br /> The analysis to follow will use quarterly data from the post-World War II period, 1947Q1 to 2019Q3, downloaded from the FRED database. In this regard, we begin by creating a new quarterly workfile as follows: <ol> <li>From the main EViews window, click on <b>File/New/Workfile...</b>. <li>Under <b>Frequency</b> select <b>Quarterly</b>. <li>Set the <b>Start date</b> to <i>1947Q1</i> and set the <b>End date</b> to <i>2019Q3</i>. <li>Hit <b>OK</b>. </ol> Next, we fetch the GDP data as follows: <ol> <li>From the main EViews window, click on <b>File/Open/Database...</b>. <li>From the <b>Database/File Type</b> dropdown, select <b>FRED Database</b>. <li>Hit <b>OK</b>. <li>From the FRED database window, click on the <b>Browse</b> button. <li>Next, click on <b>All Series Search</b> and in the <b>Search for</b> box, type <i>GDPC1</i>. (This is the real actual seasonally adjusted GDP) <li>Drag the series over to the workfile to make it available for analysis. <li>Again, in the <b>Search for</b> box, type <i>GDPPOT</i>. (This is the real potential seasonally unadjusted GDP estimated by the CBO) <li>Drag the series over to the workfile to make it available for analysis. <li>Close the FRED windows as they are no longer needed. </ol> <!-- :::::::::: FIGURES 1a and 1b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 1a :::::::::: --> <center> <a href="http://www.eviews.com/blog/bnfilter/fredbrowse.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/fredbrowse.png" title="FRED Browse" width="180" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 1b :::::::::: --> <center> <a href="http://www.eviews.com/blog/bnfilter/fredsearch.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/fredsearch.png" title="FRED Search" width="180" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1a: FRED Browse </small> </center> </td> <td class="nb"> <center> <small>Figure 1b: FRED Search</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 1a and 1b :::::::::: --> Next, rename the series <b>GDPC1</b> to <b>GDP</b> by issuing the following command: <pre><br /> rename gdpc1 gdp<br /> </pre> For perspective, we now show how to obtain the output gap implied by the CBO estimates. In particular, the CBO implied estimate of the output gap is defined using the formula: $$ CBOGAP = 100\left(\frac{GDP - GDPPOT}{GDPPOT}\right) $$ For reference, we will create this series in EViews and call it <b>CBOGAP</b>. 
This is done by issuing the following command: <pre><br /> series cbogap = 100*(gdp-gdppot)/gdppot<br /> </pre> We also plot <b>CBOGAP</b> below: <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/bnfilter/gap.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/gap.png" title=" CBO implied estimate of the output gap" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2: CBO implied estimate of the output gap</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 2 :::::::::: --> <h3 id="sec2">BN Decomposition</h3> Recall here that for any time series $ y_{t} $, the BN decomposition determines a trend process $ \tau_{t} $ and a cycle process $ c_{t} $, such that $ y_{t} = \tau_{t} + c_{t} $. In this regard, the trend component $ \tau_{t} $ is the deviation of the long-horizon conditional forecast of $ y_{t} $ from its deterministic drift $ \mu $. In other words: $$ \tau_{t} = \lim_{h\rightarrow \infty} E_{t}\left(y_{t+h} - h\mu\right) \quad \text{where} \quad \mu = E(\Delta y_{t}) $$ On the other hand, the cyclical component is the deviation of the underlying process from its long-horizon forecast. Intuitively, when $ y_{t} $ represents the GDP of some economy, the cycle process $ c_{t} = y_{t} - \tau_{t}$ is interpreted as the <i>output gap</i>.<br /><br /> In practice, in order to capture the autocovariance structure of $ \Delta y_{t} $, the BN decomposition starts by first fitting an autoregressive moving-average (ARMA) model to $ \Delta(y) $ and then proceeds to derive $ \tau_{t} $ and $ c_{t} $. For instance, when the model of choice is AR(1), the BN decomposition derives from the following steps:<br /><br /> <ol class="step"> <li>Fit an AR(1) model to $ \Delta y_{t} $: $$ \Delta y_{t} = \widehat{\alpha} + \widehat{\phi}\Delta y_{t-1} + \widehat{\epsilon}_{t} $$ <li> Estimate the deterministic drift as the unconditional mean process: $$ \widehat{\mu} = \frac{\widehat{\alpha}}{1 - \widehat{\phi}} $$ <li> Estimate the BN trend process: $$ \widehat{\tau}_{t} = \left(y_{t} + \left(\frac{\widehat{\phi}}{1 - \widehat{\phi}}\right) \Delta y_{t}\right) - \left(\frac{\widehat{\phi}}{1 - \widehat{\phi}}\right) \widehat{\mu}$$ <li> Estimate the BN cycle component: $$ \widehat{c}_{t} = y_{t} - \widehat{\tau}_{t} $$ </ol><br /> As an illustrative example, consider the BN decomposition of US quarterly real GDP. To conform with the Kamber, Morley, and Wong (2018) paper, we will also transform the raw US real GDP as 100 times its logarithm. 
In this regard, we generate a new EViews series object <b>LOGGDP</b> by issuing the following command: <pre><br /> series loggdp = 100 * log(gdp)<br /> </pre> Finally, following the four steps outlined earlier, we derive the BN decomposition in EViews as follows: <pre><br /> series dy = d(loggdp)<br /> equation ar1.ls dy c dy(-1) 'Step 1<br /> scalar mu = c(1)/(1-c(2)) 'Step 2<br /> series bntrend = loggdp + (dy - mu)*c(2)/(1 - c(2)) 'Step 3<br /> series bncycle = loggdp - bntrend 'Step 4<br /> </pre> The BN trend and cycle series are displayed in Figures 3a and 3b below.<br /><br /> <!-- :::::::::: FIGURES 3a and 3b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 3a :::::::::: --> <center> <a href="http://www.eviews.com/blog/bnfilter/bntrend.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/bntrend.png" title="BN Trend" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 3b :::::::::: --> <center> <a href="http://www.eviews.com/blog/bnfilter/bncycle.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/bncycle.png" title="BN Cycle" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3a: BN Trend</small> </center> </td> <td class="nb"> <center> <small>Figure 3b: BN Cycle</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 3a and 3b :::::::::: --> To see how the BN decomposition estimate of the output gap compares to the CBO implied estimate of the output gap, we plot both series on the same graph.<br /><br /> <!-- :::::::::: FIGURE 4 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/bnfilter/bncvsgap.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/bncvsgap.png" title="BN Cycle vs CBO implied output gap estimate" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 4: BN Cycle vs CBO implied output gap estimate</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 4 :::::::::: --> Evidently, the BN cycle series lacks persistence (very noisy), lacks amplitude (low variance), and in general, does not exhibit the characteristics found in the CBO implied estimate of the output gap, <b>CBOGAP</b>.<br /><br /> <h3 id="sec3">The BN Filter</h3> First, to explain why the BN estimate of the output gap lacks the persistence of its true counterpart, recall the formula for the BN cycle component for an AR(1) model: $$ c_{t} = y_{t} - \tau_{t} = -\frac{\phi}{1-\phi}(\Delta y_{t} - \mu)$$ Clearly, when $ \phi $ is small, $ \Delta y_{t} $ is not very persistent. Since $ c_{t} $ is only as persistent as $ \Delta y_{t} $, the cycle component itself lacks the persistence one expects of the true output gap series.<br /><br /> Next, to explain why $ c_{t} $ lacks the expected amplitude, define the signal-to-noise ratio $ \delta $ for any time series as the ratio of the variance of trend shocks relative to the overall forecast error variance. In other words: $$ \delta \equiv \frac{\sigma^{2}_{\Delta \tau}}{\sigma^{2}_{\epsilon}} = \psi(1)^{2} $$ which follows since $ \Delta\tau_{t} = \psi(1)\epsilon_{t} $ and $ \psi(1) = \lim_{h\rightarrow \infty} \frac{\partial y_{t+h}}{\partial \epsilon_{t}} $. Intuitively, $ \psi(1) $ is the <i>long-run multiplier</i> that captures the permanent effect of the forecast error on the long-horizon conditional expectation of $ y_{t} $. 
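For the AR(1) forecasting model estimated above, for example, the moving-average weights of $ \Delta y_{t} $ are $ \psi_{k} = \phi^{k} $, so that $$ \psi(1) = \sum_{k=0}^{\infty}\phi^{k} = \frac{1}{1-\phi} \quad \text{and hence} \quad \delta = \psi(1)^{2} = \frac{1}{(1-\phi)^{2}} $$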
Quite generally, as demonstrated in Kamber, Morley, and Wong (2018), for any AR(p) model: \begin{align} \Delta y_{t} = c + \sum_{k=1}^{p}\phi_{k}\Delta y_{t-k} + \epsilon_{t} \label{eq1} \end{align} the signal-to-noise ratio is given by the relation \begin{align} \delta = \frac{1}{(1-\phi(1))^{2}} \quad \text{where} \quad \phi(1) = \phi_{1} + \ldots + \phi_{p}\label{eq2} \end{align} In particular, when the forecasting model is AR(1), as was the case in the BN decomposition above, the signal-to-noise ratio is simply $ \delta = \frac{1}{(1-\phi)^{2}} $ and in the case of the US GDP growth process, it is $ \delta = \frac{1}{(1-0.36)^{2}} = 2.44$. In other words, the BN trend shocks exhibit higher volatility than quarter-to-quarter forecast errors and the signal-to-noise ratio is therefore relatively high. In fact, in the case of a freely estimated AR$ (p) $ model of output growth, $ \phi(1) < 1 $, which implies that $ \delta > 1 $. In other words,the trend will be more volatile than the cycle, and at odds if one expects the cycle shocks (the output gap amplitude) to explain the majority of the systematic forecast variance.<br /><br /> To correct for the aforementioned shortcomings of the BN decomposition, Kamber, Morley, and Wong (2018) exploit the relationship between the signal-to-noise ratio and the AR coefficients in equation \eqref{eq2}. In particular, they note that equation \eqref{eq2} implies that: \begin{align} \phi(1) = 1 - \frac{1}{\sqrt{\delta}} \end{align} In this regard, the idea underlying the BN filter is to fix a specific value to the signal-to-noise ratio, say $ \delta = \bar{\delta} $. Subsequently, the BN decomposition is derived from an AR model, the AR coefficients of which are forced to sum to $ \bar{\phi}(1) \equiv 1 - \frac{1}{\sqrt{\bar{\delta}}} $. In other words, the BN decomposition is derived while imposing a particular signal-to-noise ratio.<br /><br /> It is important to note here that estimation of the BN decomposition under a particular signal-to-noise ratio restriction is in fact straightforward and does not require complicated non-linear routines. To see this, observe that equation \eqref{eq1} can be rewritten as: \begin{align} \Delta y_{t} = c + \rho \Delta y_{t-1} + \sum_{k=1}^{p-1}\phi^{\star}_{k}\Delta^{2} y_{t-k} + \epsilon_{t} \label{eq3} \end{align} where $ \rho = \phi_{1} + \ldots + \phi_{p} $ and $ \phi^{\star}_{k} = -\left(\phi_{k+1} + \ldots + \phi_{p}\right) $. Then, imposing the restriction $ \rho = \bar{\rho} \equiv \bar{\phi}(1) $ reduces the regresion in \eqref{eq3} to: \begin{align} \Delta y_{t} - \bar{\rho} \Delta y_{t-1} = c + \sum_{k=1}^{p-1}\phi^{\star}_{k}\Delta^{2} y_{t-k} + \epsilon_{t} \label{eq4} \end{align} In other words, $ \bar{\rho}\Delta y_{t-1} $ is brought to the left hand side and the regressand in the regression \eqref{eq4} becomes $ \Delta \bar{y}_{t} \equiv \Delta y_{t} - \bar{\rho} \Delta y_{t-1} $.<br /><br /> <h3 id="sec4">Why Use the BN Filter?</h3> Before we demonstrate the BN Filter add-in, we quickly outline two reasons why the BN filter might be a reasonable approach, particularly when estimating the output gap. <ol> <li>When analyzing GDP growth, standard ARMA model selection often favours low order AR variants, which, as discussed earlier, produce high signal-to-noise ratios. 
<li>Unlike alternative low signal-to-noise ratio procedures such as deterministic quadratic detrending, the Hodrick-Prescott (HP) filter, and the bandpass (BP) filter, which often require a large number of estimation revisions (as new data comes in) and are typically unreliable in out-of-sample forecasts (see Orphanides and Van Norden (2002)), Kamber, Morley and Wong (2018) argue that the BN filter exhibits better out-of-sample performance and generally requires fewer estimation revisions to match observable data characteristics.<br /><br /> </ol> To further drive home this latter point, we demonstrate the impact of ex-post estimation of the output gap using the HP filter. In particular, we will first estimate the output gap (the cycle component) of the <b>LOGGDP</b> series for the period 1947Q1 to 2008Q3 and call it <b>HPCYCLE</b>, and then again for the period 1947Q1 to 2019Q3 and call it <b>HPCYCLE_EXPOST</b>.<br /><br /> To estimate the HP filter cycle component for the period 1947Q1 to 2008Q3, we first set the sample accordingly by issuing the command: <pre><br /> smpl @first 2008Q3<br /> </pre> Next, we estimate the HP filter cycle series as follows: <ol> <li>From the workfile, double click on the series <b>LOGGDP</b> to open the series. <li>In the series window, click on <b>Proc/Hodrick-Prescott Filter...</b> <li>In the <b>Cycle series</b> text box, type <i>hpcycle</i>. <li>Hit <b>OK</b>. </ol> <!-- :::::::::: FIGURE 5 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/bnfilter/hpfilter.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/hpfilter.png" title="HP Filter" width="180" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5: HP Filter</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 5 :::::::::: --> The steps are then repeated for the sample period 1947Q1 to 2019Q3, this time saving the cycle series as <i>hpcycle_expost</i>. A plot of both cycle series on the same graph is presented below.<br /><br /> <!-- :::::::::: FIGURE 6 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/bnfilter/hpcycleexpost.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/hpcycleexpost.png" title="HP Cycle vs HP Cycle Ex Post" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 6: HP Cycle vs HP Cycle Ex Post</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 6 :::::::::: --> Evidently, the ex-post HP filter estimation of the output gap diverges from its shorter-period counterpart starting from 2006Q1. It is precisely this drawback that we will see is not nearly as pronounced in BN filter estimates.<br /><br /> <h3 id="sec5">BN Filter Implementation</h3> To implement the BN Filter, we need to download and install the add-in from the EViews website. The latter can be found at <a href="https://www.eviews.com/Addins/BNFilter.aipz">https://www.eviews.com/Addins/BNFilter.aipz</a>. We can also do this from inside EViews itself: <ol> <li>From the main EViews window, click on <b>Add-ins/Download Add-ins...</b> <li>Click on the BNFilter add-in. <li>Click on <b>Install</b>. 
</ol> <!-- :::::::::: FIGURE 5 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/bnfilter/addin.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/addin.png" title="Install Add-in" width="180" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5: Install Add-in</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 5 :::::::::: --> Finally, we demonstrate how to apply the BN Filter add-in using an AR(12) model. To do so, proceed as follows: <ol> <li>From the workfile window, double click on <b>LOGGDP</b> to open the spreadsheet view of the series. <li>To access the BN filter dialog, click on <b>Proc/Add-ins/BN Filter</b> <li>Stick with the defaults and hit <b>OK</b>. </ol><br /> <!-- :::::::::: FIGURE 6 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/bnfilter/bnfilter.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/bnfilter.png" title="BN Filter Dialog" width="180" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 6: BN Filter Dialog</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 6 :::::::::: --> The signal-to-noise ratio, while not specified above, is chosen using the Kamber, Morley, and Wong (2018) automatic selection procedure which balances the trade-off between fit and amplitude. Typically, the signal-to-noise ratio for the US using such a procedure is about 0.25, which implies a quarter of the shocks to US GDP are permanent. Below, we show the BN Filter cycle series both alone and in comparison to the CBO implied estimate of the output gap, <b>CBOGAP</b>.<br /><br /> <!-- :::::::::: FIGURES 7a and 7b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 7a :::::::::: --> <center> <a href="http://www.eviews.com/blog/bnfilter/bnfcycle.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/bnfcycle.png" title="BN Filter Cycle" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 7b :::::::::: --> <center> <a href="http://www.eviews.com/blog/bnfilter/bnfcvsgap.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/bnfcvsgap.png" title="BN Filter Cycle vs. CBO implied output gap estimate" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 7a: BN Filter Cycle</small> </center> </td> <td class="nb"> <center> <small>Figure 7b: BN Filter Cycle vs CBO implied output gap estimate</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 7a and 7b :::::::::: --> We also plot a comparison of the BN Filter cycle series with the HP filtered cycle.<br /><br /> <!-- :::::::::: FIGURE 8 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/bnfilter/bnfcvshpc.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/bnfcvshpc.png" title="BN Filter Cycle vs HP Filter Cycle" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 8: BN Filter Cycle vs HP Filter Cycle</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 8 :::::::::: --> As we can see, the BN filter estimate of the US output gap using an AR(12) model resembles what we would get for an output gap that has a low signal-to-noise ratio. The amplitude is reasonably large, we see business cycles, and the troughs line up with the recessions dated by the NBER. 
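<br /><br /> As an aside, the mechanics of the restriction in equation \eqref{eq4} are easy to reproduce with core EViews commands. The following is a minimal sketch only: it assumes an AR(2) forecasting model and a user-chosen value of $ \bar{\delta} $, and it stops at the restricted regression (the add-in itself selects the lag order, chooses $ \delta $ when left unspecified, and constructs the trend and cycle series):<br /><br /> <pre><br /> scalar delta_bar = 0.25 'signal-to-noise ratio to impose<br /> scalar rho_bar = 1 - 1/@sqrt(delta_bar) 'implied sum of AR coefficients<br /> series dybar = dy - rho_bar*dy(-1) 'restricted regressand from equation (4)<br /> series d2y = d(dy) 'second difference of loggdp<br /> equation eq_restr.ls dybar c d2y(-1) 'remaining coefficient is unrestricted<br /> </pre>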
The amplitude of the output gap estimated using the BN Filter is comparable to that of the cycle obtained by the HP filter, as well as the implied estimate of the CBO, which is unlike what we see in Figure 4.<br /><br /> The BN filter add-in also allows users to incorporate knowledge about structural breaks. In particular, we will use 2006Q1 as a structural break date, which is consistent with the date found by a Bai and Perron (2003) test used by Kamber, Morley and Wong (2018), and with independent work by Eo and Morley (2019). The following steps demonstrate the outcome: <ol> <li>From the workfile window, double click on <b>LOGGDP</b> to open the spreadsheet view of the series. <li>To access the BN filter dialog, click on <b>Proc/Add-ins/BN Filter</b> <li>Select the <b>Structural Break</b> box. <li>In the <b>Date of structural break</b> text box, enter <i>2006Q1</i>. <li>Hit <b>OK</b>. </ol><br /> <!-- :::::::::: FIGURE 9 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/bnfilter/bnfcyclesb.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/bnfcyclesb.png" title="BN Filter Cycle (Structural Break)" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 9: BN Filter Cycle (Structural Break)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 9 :::::::::: --> Now we see a more positive output gap post-2006 as the structural break accounts for the fact that the average GDP growth rate has fallen.<br /><br /> Suppose, however, that we were ignorant of the actual date of the break. This might be the case in practice as it could take a decade or more before one could empirically identify a structural break date. In this case, a possible option is to use a rolling window for the average growth rate. In this example, we use a backward window of 40 quarters to compute the average growth rate. The idea is that if there were breaks, they would be reflected in this window. To do this, we proceed as follows: <ol> <li>From the workfile window, double click on <b>LOGGDP</b> to open the spreadsheet view of the series. <li>To access the BN filter dialog, click on <b>Proc/Add-ins/BN Filter</b> <li>Select the <b>Dynamic mean adjustment</b> box. <li>Hit <b>OK</b>. </ol><br /> <!-- :::::::::: FIGURES 10a and 10b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 10a :::::::::: --> <center> <a href="http://www.eviews.com/blog/bnfilter/bnfcycledma.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/bnfcycledma.png" title="BN Filter Cycle (Dynamic Mean Adjustment)" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 10b :::::::::: --> <center> <a href="http://www.eviews.com/blog/bnfilter/bnfcsbvsedma.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/bnfcsbvsedma.png" title="BN Filter Cycle (Known vs Unknown Structural Break)" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 10a: BN Filter Cycle (Dynamic Mean Adjustment)</small> </center> </td> <td class="nb"> <center> <small>Figure 10b: BN Filter Cycle (Known vs Unknown Structural Break)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 10a and 10b :::::::::: --> Evidently, the estimated output gap looks similar to the one estimated with an explicit structural break in 2006Q1. 
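<br /><br /> For intuition, the backward mean that the dynamic adjustment works with can be inspected directly. A quick sketch, using the <b>dy</b> growth series defined earlier and the built-in <b>@movav</b> function:<br /><br /> <pre><br /> series mu_roll = @movav(dy, 40) 'backward 40-quarter mean of quarterly growth<br /> scalar mu_full = @mean(dy) 'full-sample mean for comparison<br /> </pre>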
In general, this suggests that using a backward window to adjust for the mean growth rate might be a useful real-time strategy for dealing with breaks.<br /><br /> Users are not constrained to the automatic option, which balances the trade-off between fit and amplitude. The BN filter add-in also allows users to specify a desired signal-to-noise ratio. For instance, the following example compares setting the signal-to-noise ratio $ \delta $ to 0.05 (which implies 5% of the forecast error variance is permanent) against the default of 0.25, which we obtained earlier by leaving $ \delta $ unspecified so that the automatic procedure balancing the trade-off between fit and amplitude is used. To do so, we proceed as follows: <ol> <li>From the workfile window, double click on <b>LOGGDP</b> to open the spreadsheet view of the series. <li>To access the BN filter dialog, click on <b>Proc/Add-ins/BN Filter</b> <li>Instead of leaving the signal-to-noise ratio to be selected automatically, set it to <i>0.05</i> in the dialog. <li>Hit <b>OK</b>. </ol><br /> The plot below summarizes the exercise.<br /><br /> <!-- :::::::::: FIGURE 11 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/bnfilter/bnfc25vs5.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/bnfc25vs5.png" title="BN Filter Cycle (delta = 0.25 vs delta = 0.05)" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 11: BN Filter Cycle (delta = 0.25 vs delta = 0.05)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 11 :::::::::: --> Unsurprisingly, specifying $ \delta = 0.05 $ results in an output gap with a larger amplitude than the default: the new specification attributes a smaller proportion of the forecast error variance to the trend, and hence a larger proportion to the cycle.<br /><br /> Finally, we come back to the issue of revision. As we mentioned earlier, the BN filter should produce output gaps that are less revised as long as the AR forecasting model is stable, especially when compared to the heavily revised HP Filter. Here, we show the output gap estimated using the BN filter with data up to 2008Q3, and one estimated ex post with data up to 2019Q3. Clearly, the output gap is hardly revised, which addresses a key critique of Orphanides and Van Norden (2002).<br /><br /> <!-- :::::::::: FIGURE 12 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/bnfilter/bnfcexpost.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/bnfcexpost.png" title="BN Filter Cycle (Ex-Post)" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 12: BN Filter Cycle (Ex-Post)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 12 :::::::::: --> <h3 id="sec6">Conclusion</h3> In this blog post we have outlined the BN filter add-in associated with the work of Kamber, Morley and Wong (2018). In general, we hope the ease of using the add-in, together with some of the useful properties of the BN Filter, will encourage practitioners to explore the procedure in their work.<br /><br /> <h3 id="sec7">Files</h3> <ul> <li><a href="http://www.eviews.com/blog/bnfilter/bnfilter_blog.prg">bnfilter_blog.prg</a> </ul> <br /><br /> <hr /> <h3 id="sec8">References</h3> <ol class="bib2xhtml"> <li><a name="bai-2003"></a>Bai J. 
and Perron P.: Computation and analysis of multiple structural change models <cite>Journal of Applied Econometrics</cite>, 18(1) 1–22, 2003. </li> <li><a name="beveridge-1981"></a>Beveridge S. and Nelson C. R.: A new approach to decomposition of economic time series into permanent and transitory components with particular attention to measurement of the business cycle <cite>Journal of Monetary Economics</cite>, 7(2) 151–174, 1981. </li> <li><a name="eo-2019"></a>Eo Y. and Morley J.: Why has the US economy stagnated since the Great Recession? <cite>University of Sydney Working Papers 2017-14</cite>, 2019. </li> <li><a name="kamber-2018"></a>Kamber G., Morley J., and Wong B.: Intuitive and reliable estimates of the output gap from a Beveridge-Nelson filter <cite>The Review of Economics and Statistics</cite>, 100(3) 550–566, 2018. </li> <li><a name="orphanides-2002"></a>Orphanides A. and Van Norden S.: The unreliability of output-gap estimates in real time <cite>The Review of Economics and Statistics</cite>, 84(4) 569–583, 2002. </li> <li><a name="watson-1986"></a>Watson M.: Univariate detrending methods with stochastic trends <cite>Journal of Monetary Economics</cite>, 18(1) 49–75, 1986. </li> </ol></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com20tag:blogger.com,1999:blog-6883247404678549489.post-68080614447683089002019-12-04T09:39:00.000-08:002019-12-04T09:39:38.953-08:00Sign and Zero Restricted VAR Add-In<style> table, th, td { border: 1px solid black; border-collapse: collapse; } th { padding: 5px; text-align: center; } td { padding: 5px; text-align: left; } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"></script> <span style="font-family: "verdana" sans-serif"><i>Authors and guest post by Davaajargal Luvsannyam and Ulziikhutag Munkhtsetseg</i><br /><br /> In our previous <a href="http://blog.eviews.com/2019/10/sign-restricted-var-add-in.html">blog entry</a>, we discussed the sign restricted VAR (SRVAR) add-in for EViews. Here, we will discuss imposing further zero restrictions on the impact period of the impulse response function (IRF) using the ARW and SRVAR add-ins in tandem.<a name='more'></a><br /><br /> <h3>Table of Contents</h3><ol> <li><a href="#sec1">Introduction</a> <li><a href="#sec2">Orthogonal Reduced-Form Parameterization</a> <li><a href="#sec3">ARW Algorithms</a> <li><a href="#sec4">ARW EViews Add-in</a> <li><a href="#sec5">Conclusion</a> <li><a href="#sec6">References</a></ol><br /> <h3 id="sec1">Introduction</h3> Note that it is certainly possible to impose both sign and exclusion restrictions. For example, Mountford and Uhlig (2009) are motivated by the idea that fiscal policy shocks are identified as orthogonal to both monetary policy and business cycle shocks, and use a penalty function approach (PFA) to impose zero restrictions. (For details on the PFA, please see our <a href="http://blog.eviews.com/2019/10/sign-restricted-var-add-in.html">SRVAR blog entry</a>.) 
They also considered anticipated government revenue shocks in which government revenue is restricted to rise one year following the impulse. Furthermore, Beaudry, Nam, and Wang (2011) estimate a structural VAR model including total factor productivity, stock prices, real consumption, the real federal funds rate, and hours worked. They use the PFA to show that a positive optimism shock causes an increase in both consumption and hours worked. Recently, Arias, Rubio-Ramirez, and Waggoner (2018), henceforth ARW, developed algorithms to independently draw from a family of conjugate posterior distributions over the structural parameterization when sign and zero restrictions are used to identify structural VARs (SVARs). They showed the dangers of using the PFA when implementing sign and zero restrictions together to identify SVARs.<br /><br /> <h3 id="sec2">Orthogonal Reduced-Form Parameterization</h3> ARW focus on two SVAR parameterizations. In addition to the classical structural parameterization, they show that SVARs can also be written as a product of reduced-form parameters and a set of orthogonal matrices. This is called the <i>orthogonal reduced-form parameterization</i>, henceforth ORF. The algorithms ARW propose draw from a conjugate posterior distribution over the ORF and then transform said draws into a structural parameterization. In particular, they use the normal-inverse-Wishart distribution as the prior conjugate distribution, and develop a change of variable theory that characterizes the induced family of densities over the structural parameterization. This theory shows that a uniform-normal-inverse-Wishart density over the ORF parameterization induces a normal-generalized-normal density over the structural parameterization.<br /><br /> To motivate their contribution, ARW first use the change of variable theory to show that existing algorithms for SVARs identified only by sign restrictions operate, conditional on the sign restrictions, on independent draws from the normal-generalized-normal distribution over the structural parameterization. These algorithms independently draw from the uniform-normal-inverse-Wishart distribution over the ORF parameterization and only accept draws that satisfy the sign restrictions.<br /><br /> Next, ARW generalize these algorithms to also consider zero restrictions. The key to this generalization is that, conditional on the reduced-form parameters, the class of zero restrictions on the structural parameters maps to linear restrictions on the orthogonal matrices. The resulting generalization independently draws from the normal-inverse-Wishart distribution over the reduced-form parameters and from the set of orthogonal matrices such that the zero restrictions hold. In this regard, conditional on the zero restrictions, they show that this generalization does not induce a distribution over the structural parameterization from the family of normal-generalized-normal distributions. 
Furthermore, they derive the induced distribution and write an importance sampler that, conditional on the sign and zero restrictions, independently draws from normal-generalized-normal distributions over the structural parameterization.<br /><br /> To formalize these ideas, consider the SVAR with the general form: \begin{align} Y_t^{\prime} A_{0} = \sum_{i=1}^{p} Y_{t-i}^{\prime}A_{i} + c + \epsilon_t^{\prime}, \quad t=1, \ldots, T \label{eq1} \end{align} where $ Y_t $ is an $ n\times 1 $ vector of endogenous variables, $ A_i $ are parameter matrices of size of $ n\times n $ with $ A_{0} $ invertible, $ c $ is a $ 1\times n $ vector of parameters, $ \epsilon_t $ is an $ n\times 1 $ vector of exogenous structural shocks, $ p $ is the lag length, and $ T $ is the sample size.<br /><br /> We can also summarize equation \eqref{eq1} as follows: \begin{align} Y_{t}^{\prime}A_{0} = X_{t}^{\prime}A_{+} + \epsilon_{t}^{\prime} \label{eq2} \end{align} where $ A_{+}^{\prime} = \left[A_{1}^{\prime}, \ldots, A_{p}^{\prime}, c^{\prime}\right]$ and $ X_{t}^{\prime} = \left[Y_{t-1}^{\prime}, \ldots, Y_{t-p}^{\prime}, 1\right] $.<br /><br /> The reduced form can now be written as: \begin{align} Y_{t}^{\prime} = X_{t}^{\prime}B + u_{t}^{\prime} \label{eq3} \end{align} where $ B = A_{+}A_{0}^{-1}, u_{t}^{\prime} = \epsilon_{t}^{\prime}A_{0}^{-1} $, and $ E(u_{t}u_{t}^{\prime}) = \Sigma = \left(A_{0}A_{0}^{\prime}\right)^{-1} $. Naturally, $ B $ and $ \Sigma $ are the reduced form parameters.<br /><br /> We can further write equation \eqref{eq3} as the orthogonal reduced-form parameterization \begin{align} Y_{t}^{\prime} = X_{t}^{\prime}B + \epsilon_{t}^{\prime}Q^{\prime}h(\Sigma) \label{eq4} \end{align} where the $ n\times n $ matrix $ h(\Sigma) $ is the Cholesky decomposition of covariance matrix $ \Sigma $.<br /><br /> Given equations \eqref{eq2} and \eqref{eq4}, in addition to the Cholesky decomposition $ h $, we can define a mapping between $ \left(A_{0}, A_{+}\right) $ and $ (B, \Sigma, Q) $ by: \begin{align} f_{h}\left(A_{0}, A_{+}\right) = \left(A_{+}A_{0}^{-1}, \left(A_{0}A_{0}^{\prime}\right)^{-1}, h\left(\left(A_{0}A_{0}^{\prime}\right)^{-1}\right)A_{0}\right) \label{eq5} \end{align} where the first element of the triad on the right corresponds to $ B $, the second to $ \Sigma $, and the third to $ Q $.<br /><br /> Note further that the function $ f_{h} $ is invertible with inverse defined by: \begin{align} f_{h}^{-1} (B,\Sigma, Q) = \left(h(\Sigma)^{-1}Q, Bh(\Sigma)^{-1}Q\right) \label{eq6} \end{align} where the first term on the right corresponds to $ A_{0} $ and the second to $ A_{+} $.<br /><br /> Thus, the ORF parameterization makes clear how the structural parameters depend on the reduced form parameters and orthogonal matrices.<br /><br /> <h3 id="sec3">ARW Algorithms</h3> Although ARW propose three different algorithms, the most important is in fact the third. The latter draws from a distribution over the ORF parameterization conditional on the sign and zero restriction and then transforms the draws into the structural parameterization. Since Algorithm 3 also depends on Algorithm 2, we present the latter here and recommend readers to refer to the supplementary materials of ARW (2018) if they require further details.<br /><br /> <h4>Algorithm 2</h4> Let $ Z_j $ define the zero restriction matrix on the $ j^{\text{th}} $ structural shock, and let $ z_{j} $ denote the number of zero restrictions associated with the $ j^{\text{th}} $ structural shock. 
Then: <ol> <li>Draw $ (B, \Sigma) $ independently from the normal-inverse-Wishart distribution. <li>For $ j \in \{1, \ldots, n\} $ draw $ X_{j} \in \mathbf{R}^{n+1-j-z_{j}} $ independently from a standard normal distribution and set $ W_{j} = X_{j} / ||X_{j}||$. <li>Define $ Q = [q_{1}, \ldots, q_{n}] $ recursively as $ q_{j} = K_{j}W_{j} $ for any matrix $ K_{j} $ whose columns form an orthonormal basis for the null space of the $ (j-1+z_{j})\times n $ matrix \begin{align} M_{j} = \left[q_{1}, \ldots, q_{j-1},\left(Z_{j}F\left(f_{h}^{-1}(B, \Sigma, I_{n})\right)\right)\right] \end{align} <li>Set $ (A_{0},A_{+}) = f_{h}^{-1}(B,\Sigma,Q) $.<br /><br /></ol> <h4>Algorithm 3</h4> Let $ \mathcal{Z} $ denote the set of all structural parameters that satisfy the zero restrictions, and define $ v_{(g^{\circ}f_{h})|\mathcal{Z}} $ as the volume element. Then: <ol> <li>Use Algorithm 2 to independently draw $ (A_{0}, A_{+}) $. <li>If $ (A_{0}, A_{+}) $ satisfies the sign restrictions, set its importance weight to $$ \frac{|\det(A_{0})|^{-(2n+m+1)}}{v_{(g^{\circ}f_{h})|\mathcal{Z}}(A_{0}, A_{+})} $$ otherwise, set its importance weight to zero. <li>Return to Step 1 until the required number of draws has been obtained. <li>Re-sample with replacement using the importance weights.<br /><br /></ol> <h3 id="sec4">ARW EViews Add-in</h3> Now we turn to the implementation of the ARW add-in. First, we need to download and install the add-in from the EViews website. The latter can be found at <a href="https://www.eviews.com/Addins/arw.aipz">https://www.eviews.com/Addins/arw.aipz</a>. We can also do this from inside EViews itself. In particular, after opening EViews, click on <b>Add-ins</b> from the main menu, and click on <b>Download Add-ins...</b>. From here, locate the <i>ARW</i> add-in and click on <b>Install</b>.<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <a href="http://www.eviews.com/blog/arw/addin_download.png"><img height="auto" src="http://www.eviews.com/blog/arw/addin_download.png" title="Add-ins Download" width="360" /></a><br /> <small>Figure 1: Add-in installation</small><br /><br /> </center><!-- :::::::::: FIGURE 1 :::::::::: --> After installing, we open the data file named <i>data.WF1</i>, which can be found in the installation folder, typically located in <b>[Windows User Folder]/Documents/EViews Addins/ARW</b>.<br /><br /> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <a href="http://www.eviews.com/blog/arw/workfile.png"><img height="auto" src="http://www.eviews.com/blog/arw/workfile.png" title="ARW (2018) Data" width="360" /></a><br /> <small>Figure 2: ARW (2018) Data</small><br /><br /> </center><!-- :::::::::: FIGURE 2 :::::::::: --> We now replicate Figure 1 and Table 3 from ARW. We can do this in EViews as follows.<br /><br /> <ol> <li>Click on the <b>Add-ins</b> menu item in the main EViews menu, and click on <b>Sign restricted VAR</b>. <li>Under <b>Endogenous variables</b> enter <i>tfp stock cons ffr hour</i>. <li>Check the <b>Include constant</b> option. <li>Under <b>Number of lags</b>, enter <i>4</i>. <li>In the <b>Sign restriction vector</b> textbox enter <i>+2</i>. <li>Under <b>Sign restriction method</b> check <i>Penalty</i>. <li>In the <b>Number of horizons</b> textbox enter <i>40</i>. <li>In the <b>Zero restriction</b> textbox enter <i>tfp</i>. <li>Check the <b>Variance decomposition</b> box. 
<li>Hit <b>OK</b>.<br /><br /> </ol> <!-- :::::::::: FIGURE 3 :::::::::: --> <center> <a href="http://www.eviews.com/blog/arw/pfa.png"><img height="auto" src="http://www.eviews.com/blog/arw/pfa.png" title="SRVAR Add-in (PFA)" width="360" /></a><br /> <small>Figure 3: SRVAR Add-in (PFA)</small><br /><br /> </center><!-- :::::::::: FIGURE 3 :::::::::: --> The steps above produce the following output (Panel A of Figure 1 of ARW):<br /><br /> <!-- :::::::::: FIGURE 4 :::::::::: --><center> <a href="http://www.eviews.com/blog/arw/panela.png"><img height="auto" src="http://www.eviews.com/blog/arw/panela.png" title="PFA Output" width="360" /></a><br /> <small>Figure 4: PFA Output</small><br /><br /> </center><!-- :::::::::: FIGURE 4 :::::::::: --> Next, we invoke the ARW add-in and proceed with the ARW Algorithm 3.<br /><br /> <ol> <li>Click on the <b>Add-ins</b> menu item in the main EViews menu, and click on <b>Sign and zero restricted VAR</b>. <li>Under <b>Endogenous variables</b> enter <i>tfp stock cons ffr hour</i>. <li>Check the <b>Include constant</b> option. <li>Under <b>Number of lags</b>, enter <i>4</i>. <li>In the <b>Sign restriction vector</b> textbox enter <i>+stock</i>. <li>In the <b>Zero restrictions</b> textbox enter <i>tfp</i>. <li>Under <b>Number of steps</b> enter <i>40</i>. <li>Check the <b>Variance decomposition</b> box. <li>Hit <b>OK</b>.<br /><br /> </ol> <!-- :::::::::: FIGURE 5 :::::::::: --> <center> <a href="http://www.eviews.com/blog/arw/isampler.png"><img height="auto" src="http://www.eviews.com/blog/arw/isampler.png" title="ARW Add-in (Importance Sampler)" width="360" /></a><br /> <small>Figure 5: ARW Add-in (Importance Sampler)</small><br /><br /> </center><!-- :::::::::: FIGURE 5 :::::::::: --> The steps above produce the following output (Panel B of Figure 1 of ARW):<br /><br /> <!-- :::::::::: FIGURE 6 :::::::::: --> <center> <a href="http://www.eviews.com/blog/arw/panelb.png"><img height="auto" src="http://www.eviews.com/blog/arw/panelb.png" title="Importance Sampler Output" width="360" /></a><br /> <small>Figure 6: Importance Sampler Output</small><br /><br /> </center><!-- :::::::::: FIGURE 6 :::::::::: --> Figures 4 and 6 above illustrate the IRFs obtained using the PFA and importance sampler methods, respectively. In the case of the former, we can see the IRFs with probability bands for adjusted TFP, stock prices, consumption, the real interest rate, and hours worked under the PFA. Examining the confidence bands around the IRFs allows us to conclude that optimism shocks boost consumption and hours worked, as the corresponding IRFs do not contain zero for at least 20 quarters.<br /><br /> In contrast, the IRFs of the same variables obtained using the importance sampler yield a different result. For consumption and hours worked, the confidence bands are wider and contain zero. Furthermore, the corresponding point-wise median IRFs are closer to zero compared to those obtained using the PFA. This shows that the PFA exaggerates the effects of optimism shocks on stock prices, consumption, and hours worked, by generating much narrower confidence bands and larger point-wise median IRFs. 
In this regard, as pointed out by Uhlig (2005), we can see that the PFA includes additional identification restrictions when implementing sign and zero restrictions.<br /><br /> To further summarize the results, we present the table below, which reports the forecast error variance decompositions underlying the output above (the lower and upper bounds of the 68 percent equal-tailed probability intervals, with the medians in bold).<br /><br /> <center> <table style="width:100%"> <tr> <th></th> <th colspan="3">Penalty Function Approach</th> <th colspan="3">Importance Sampler</th> </tr> <tr> <th></th> <th>16%</th> <th>Median</th> <th>84%</th> <th>16%</th> <th>Median</th> <th>84%</th> </tr> <tr> <td>Adjusted TFP</td> <td>0.07</td> <td><b>0.17</b></td> <td>0.29</td> <td>0.03</td> <td><b>0.11</b></td> <td>0.23</td> </tr> <tr> <td>Stock Prices</td> <td>0.54</td> <td><b>0.72</b></td> <td>0.84</td> <td>0.05</td> <td><b>0.29</b></td> <td>0.57</td> </tr> <tr> <td>Consumption</td> <td>0.13</td> <td><b>0.27</b></td> <td>0.43</td> <td>0.03</td> <td><b>0.17</b></td> <td>0.50</td> </tr> <tr> <td>Real Interest Rate</td> <td>0.07</td> <td><b>0.14</b></td> <td>0.23</td> <td>0.08</td> <td><b>0.20</b></td> <td>0.39</td> </tr> <tr> <td>Hours Worked</td> <td>0.20</td> <td><b>0.31</b></td> <td>0.45</td> <td>0.04</td> <td><b>0.18</b></td> <td>0.56</td> </tr> </table> <small>Table I: Forecast Error Variance Decomposition (FEVD)</small><br /><br /></center> Table I shows the contribution of shocks to the Forecast Error Variance Decomposition (FEVD) using the PFA and the importance sampler for the chosen horizon of 40 periods and 68 percent equal-tailed probability intervals. Under the PFA, the share of the FEVD of consumption and hours worked attributable to optimism shocks is 27 and 31 percent, respectively. However, the contribution of optimism shocks to the FEVD of stock prices is 72 percent under the PFA in contrast to 29 percent using the importance sampler. It should be noted that for most variables, when using the importance sampler, optimism shocks contribute less to the FEVD, and probability intervals for the FEVD are broader than those obtained under the PFA.<br /><br /> <h3 id="sec5">Conclusion</h3> In this blog entry we presented the ARW add-in for EViews. The add-in is based on the work of ARW (2018) and generates impulse response curves based on the importance sampler which accommodates both sign and zero restrictions in the VAR model.<br /><br /> <hr /><h3 id="sec6">References</h3> <ol class="bib2xhtml"> <li><a name="arias-2018"></a>Arias, J., Rubio-Ramirez, J., and Waggoner, D.: Inference Based on SVARs Identified with Sign and Zero Restrictions: Theory and Applications. <cite>Econometrica</cite>, 86:685–720, 2018. </li> <li><a name="beaudry-2011"></a>Beaudry, P., Nam, D., and Wang, J.: Do mood swings drive business cycles and is it rational? <cite>NBER Working Paper 17651</cite>, 2011. </li> <li><a name="mountford-2009"></a>Mountford, A. and Uhlig, H.: What are the effects of fiscal policy shocks? <cite>Journal of Applied Econometrics</cite>, 24:960–992, 2009. </li> <li><a name="uhlig-2005"></a>Uhlig, H.: What are the effects of monetary policy on output? Results from an agnostic identification procedure. <cite>Journal of Monetary Economics</cite>, 52(2):381–419, 2005. 
</li> </ol></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com3tag:blogger.com,1999:blog-6883247404678549489.post-44432315972472118622019-11-06T10:23:00.000-08:002019-11-06T13:02:01.360-08:00Dealing with the log of zero in regression models<script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"></script> <span style="font-family: "verdana" sans-serif"><i>Author and guest post by Eren Ocakverdi</i><br /><br /> The title of this blog piece is a verbatim excerpt from the Bellego and Pape (2019) paper suggested by Professor David E. Giles in his <a href="https://davegiles.blogspot.com/2019/10/october-reading.html">October reading list</a>. (Editor's note: Professor Giles has recently announced the end of his blog - it is a fantastic resource and will be missed!). The topic is immediately familiar to practitioners who occasionally encounter the difficulty in applied work. In this regard, it is reassuring that the frustration is being addressed and that there is indeed an ongoing quest for the <i>silver bullet</i>.<a name='more'></a><br /><br /> <h3>Table of Contents</h3><ol> <li><a href="#sec1">Introduction</a> <li><a href="#sec2">A Novel Approach</a> <li><a href="#sec3">Files</a> <li><a href="#sec4">References</a></ol><br /> <h3 id="sec1">Introduction</h3> Consider the following data generating process where the dependent variable may contain zeros: $$ \log(y_i) = \alpha + x_i^\prime \beta + \epsilon_i \quad \text{with} \quad E(\epsilon_i)=0 $$ The most common remedy to the <i>logarithm of zero value</i> problem among practitioners is to add a common (observation independent) positive constant to the problematic observations. In other words, to work with the model: $$ \log(y_i + \Delta) = \alpha + x_i^\prime \beta + \omega_i $$ where $ \Delta $ is the corrective constant.<br /><br /> In the aforementioned paper, the authors use Monte Carlo simulations to demonstrate that the bias incurred by this correction is not necessarily negligible for small values of $ \Delta $, and in fact, may be substantial.<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --><center> <a href="http://www.eviews.com/blog/log_of_zero/bias.png"><img height="auto" src="http://www.eviews.com/blog/log_of_zero/bias.png" title="Add-ins Download" width="360" /></a><br /> <small>Figure 1: Estimation bias as a function of $ \Delta $ </small><br /><br /></center><!-- :::::::::: FIGURE 1 :::::::::: --> In order to handle the zeros in model variables, the paper offers a new (complementary) solution that: <ol> <li>Does not generate computational bias by arbitrary normalization. </li> <li>Does not generate correlation between the error term and regressors. 
</li> <li>Does not require the deletion of observations.</li> <li>Does not require the estimation of a supplementary parameter.</li> <li>Does not require the addition of a discretionary constant.</li><br /><br /></ol> <h3 id="sec2">A Novel Approach</h3> Bellego and Pape (2019) suggest that instead of adding a common positive constant $ \Delta $, one ought to add some optimal, observation-dependent positive value $ \Delta_{i} $. The novel strategy results in the following model and is estimated via GMM: $$ \log(y_i + \Delta_{i}) = \alpha + x_i^\prime \beta + \eta_{i} $$ where $ \Delta_i = \exp(x_i^\prime \beta) $ and $ \eta_i = \log(1 + \exp(\alpha + \epsilon_i)) $.<br /><br /> Since the details can be found in the original paper, here I’d like to replicate the simulation exercise in which the authors illustrate their method and make a comparison with other approaches. (The tables below can be replicated in EViews by running the program file <i>loglinear.prg</i>.)<br /><br /> <!-- :::::::::: FIGURE 2 :::::::::: --><center> <a href="http://www.eviews.com/blog/log_of_zero/table1.png"><img height="auto" src="http://www.eviews.com/blog/log_of_zero/table1.png" title="Add-ins Download" width="360" /></a><br /> <small>Figure 2: Output of OLS estimation (with $ \Delta = 1 $)</small><br /><br /></center><!-- :::::::::: FIGURE 2 :::::::::: --> <!-- :::::::::: FIGURE 3 :::::::::: --><center> <a href="http://www.eviews.com/blog/log_of_zero/table2.png"><img height="auto" src="http://www.eviews.com/blog/log_of_zero/table2.png" title="Add-ins Download" width="360" /></a><br /> <small>Figure 3: Output of Pseudo Poisson Maximum Likelihood (PPML) estimation</small><br /><br /></center><!-- :::::::::: FIGURE 3 :::::::::: --> <!-- :::::::::: FIGURE 4 :::::::::: --><center> <a href="http://www.eviews.com/blog/log_of_zero/table3.png"><img height="auto" src="http://www.eviews.com/blog/log_of_zero/table3.png" title="Add-ins Download" width="360" /></a><br /> <small>Figure 4: Output of proposed solution (GMM estimation)</small><br /><br /></center><!-- :::::::::: FIGURE 4 :::::::::: --> Simulation results show that both the PPML and the GMM solutions provide correct estimates (i.e., $ \alpha = 0 $, $ \beta_{1} = \beta_{2} = 1 $), whereas OLS results are biased due to adding a common constant to all data points. Although $ \alpha $ is not identified in the proposed solution, the authors suggest OLS estimation to obtain the coefficient:<br /><br /> <!-- :::::::::: FIGURE 5 :::::::::: --><center> <a href="http://www.eviews.com/blog/log_of_zero/table4.png"><img height="auto" src="http://www.eviews.com/blog/log_of_zero/table4.png" title="Add-ins Download" width="360" /></a><br /> <small>Figure 5: OLS estimation of alpha parameter: $ \log(\exp(\eta_i)-1)=\alpha+\epsilon_i $</small><br /><br /></center><!-- :::::::::: FIGURE 5 :::::::::: --> When zeros are observed in both the dependent and independent variables, the authors suggest a functional coefficient model of the form: $$ \log(y_i) = \alpha + \mathbb{1}_{x_i > 0}\times\log(x_i)\beta_{x_i>0}+\mathbb{1}_{x_i=0}\times\beta_{x_i=0}+\epsilon_i $$ Again, a simulation exercise is carried out to compare the estimated coefficients with different methods. 
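To give a feel for how this flexible specification can be coded in EViews, here is a minimal, hypothetical sketch (it is not the <i>loglog.prg</i> program referred to below; the series names <i>y</i> and <i>x</i> are illustrative, $ y $ is assumed strictly positive in this snippet, and plain OLS is used for simplicity):<br /><br /> <PRE><br />' hypothetical illustration of the flexible log-log specification<br />' assumes a workfile containing series y (strictly positive) and x (with some zeros)<br />series lx = @log(x + (x=0))     ' equals log(x) when x>0 and log(1)=0 when x=0<br />series dx0 = (x=0)              ' indicator for observations with x = 0<br />equation eq_flex.ls log(y) c lx dx0<br /></PRE> Here <i>dx0</i> plays the role of $ \mathbb{1}_{x_i=0} $, while <i>lx</i> carries the $ \log(x_i) $ term for the positive observations.<br /><br /> 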
(The tables below can be reproduced in EViews by running the program <i>loglog.prg</i>.)<br /><br /> <!-- :::::::::: FIGURE 6 :::::::::: --><center> <a href="http://www.eviews.com/blog/log_of_zero/table5.png"><img height="auto" src="http://www.eviews.com/blog/log_of_zero/table5.png" title="Add-ins Download" width="360" /></a><br /> <small>Figure 6: OLS estimation</small><br /><br /></center><!-- :::::::::: FIGURE 6 :::::::::: --> <!-- :::::::::: FIGURE 7 :::::::::: --><center> <a href="http://www.eviews.com/blog/log_of_zero/table6.png"><img height="auto" src="http://www.eviews.com/blog/log_of_zero/table6.png" title="Add-ins Download" width="360" /></a><br /> <small>Figure 7: PPML estimation</small><br /><br /></center><!-- :::::::::: FIGURE 7 :::::::::: --> <!-- :::::::::: FIGURE 8 :::::::::: --><center> <a href="http://www.eviews.com/blog/log_of_zero/table7.png"><img height="auto" src="http://www.eviews.com/blog/log_of_zero/table7.png" title="Add-ins Download" width="360" /></a><br /> <small>Figure 8: GMM estimation</small><br /><br /></center><!-- :::::::::: FIGURE 8 :::::::::: --> Simulation results show that the suggested (flexible) formulation of the $ \beta $ coefficients works well for all estimation methods ($ \alpha=0 $ and $ \beta = 1.5 $).<br /><br /> <hr /><h3 id="sec3">Files</h3> <ol> <li><a href="http://www.eviews.com/blog/log_of_zero/deltasimul.prg">deltasimul.prg</a> <li><a href="http://www.eviews.com/blog/log_of_zero/loglinear.prg">loglinear.prg</a> <li><a href="http://www.eviews.com/blog/log_of_zero/loglog.prg">loglog.prg</a></ol><br /> <hr /><h3 id="sec4">References</h3> <ol class="bib2xhtml"> <!-- Authors: Bellego and Paper (2019) --><li><a name="bellego_pape-2019"></a>Bellego, C. and L-D. Pape. Dealing with the log of zero in regression models. <cite>CREST: Working Paper</cite>, No:2019-13, 2019.</li></ol></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com0tag:blogger.com,1999:blog-6883247404678549489.post-48467357268436160422019-10-14T13:50:00.001-07:002019-12-03T12:39:35.078-08:00Sign Restricted VAR Add-In<script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"></script> <span style="font-family: "verdana" sans-serif"><i>Authors and guest post by Davaajargal Luvsannyam and Ulziikhutag Munkhtsetseg</i><br /><br /> Nowadays, sign restricted VARs (SRVARs) are becoming popular and can be considered as an indispensable tool for macroeconomic analysis. They have been used for macroeconomic policy analysis when investigating the sources of business cycle fluctuations and providing a benchmark against which modern dynamic macroeconomic theories are evaluated. Traditional structural VARs are identified with the exclusion restriction which is sometimes difficult to justify by economic theory. 
In contrast, SRVARs can easily identify structural shocks since, in many cases, economic theory only offers guidance on the sign of structural impulse responses on impact.<a name='more'></a><br /><br /> <h3>Table of Contents</h3><ol> <li><a href="#sec1">Introduction</a> <li><a href="#sec2">Bayesian Inference of SRVARs</a> <li><a href="#sec3">Recovering Structural Shocks from an SRVAR</a> <li><a href="#sec4">SRVAR EViews Add-in</a> <li><a href="#sec5">Conclusion</a> <li><a href="#sec6">References</a></ol><br /> <h3 id="sec1">Introduction</h3> Following the seminal work of Uhlig (2005), the uniform-normal-inverse-Wishart posterior over the orthogonal reduced-form parameterization has been dominant for SRVARs. Recently, Arias, Rubio-Ramirez and Waggoner (2018), henceforth ARW, developed algorithms to independently draw from a family of conjugate posterior distributions over the structural parameterization when sign and zero restrictions are used to identify SRVARs. In particular, they show the dangers of using penalty function approaches (PFA) when implementing sign and zero restrictions to identify structural VARs (SVARs). In this blog, we describe the SRVAR add-in based on Uhlig (2005).<br /><br /> The main difference between a classic VAR and a sign restricted VAR is interpretation. For traditional structural VARs (SVARs), there is a unique point estimate of the structural impulse response function. Because sign restrictions represent inequality restrictions, sign restricted VARs are only set identified. In other words, the data are potentially consistent with a wide range of structural models that are all admissible in that they satisfy the identifying restrictions.<br /><br /> There have been both frequentist and Bayesian approaches to summarizing estimates of the admissible set of sign-identified structural VAR models. However, the most common approach for sign restricted VARs is based on Bayesian methods of inference. For example, Uhlig (2005) used a Bayesian approach which is computationally simple and a clean way of drawing error bands for impulse responses.<br /><br /> <h3 id="sec2">Bayesian Inference of SRVARs</h3> A typical VAR model is summarized by \begin{align} Y_t = B_1 Y_{t-1} + B_2 Y_{t-2} + \cdots + B_l Y_{t-l} + u_t, \quad t=1, \ldots, T \label{eq1} \end{align} where $ Y_t $ is an $ m\times 1 $ vector of data, $ B_i $ are coefficient matrices of size $ m\times m $, and $ u_t $ is the one-step ahead prediction error with variance-covariance matrix $ \mathbf{\Sigma} $. An intercept and a time trend are also sometimes added to \eqref{eq1}.<br /><br /> Next, stack the system in \eqref{eq1} as follows: \begin{align} \mathbf{Y} = \mathbf{XB} + \mathbf{u} \label{eq2} \end{align} where $ \mathbf{Y} = [Y_{1}, \ldots, Y_{T}]^{\prime} $, $ \mathbf{X} = [X_{1}, \ldots, X_{T}]^{\prime} $ and $ X_{t} = [Y_{t-1}^{\prime}, \ldots, Y_{t-l}^{\prime}] $, $ \mathbf{u} = [u_{1}, \ldots, u_{T}]^{\prime} $, and $ \mathbf{B} = [B_{1}, \ldots, B_{l}]^{\prime} $. It is also assumed that the $ u_{t} $'s are independent and normally distributed with covariance matrix $ \mathbf{\Sigma} $.<br /><br /> Model \eqref{eq2} is typically estimated using maximum likelihood (ML) estimation. 
In particular, the ML estimates of $ \left(\mathbf{B}, \mathbf{\Sigma}\right) $ are given by: \begin{align} \widehat{\mathbf{B}} &= \left(\mathbf{X}^{\prime}\mathbf{X}\right)^{-1}\mathbf{X}^{\prime}\mathbf{Y} \label{eq3} \\ \widehat{\mathbf{\Sigma}} &= \frac{1}{T}\left(\mathbf{Y} - \mathbf{X}\widehat{\mathbf{B}}\right)^{\prime}\left(\mathbf{Y} - \mathbf{X}\widehat{\mathbf{B}}\right) \label{eq4} \end{align} Next, note that a proper Normal-Wishart distribution for $ \left(\mathbf{B}, \mathbf{\Sigma}\right) $ centered around $ \left(\bar{\mathbf{B}}, \mathbf{S}\right) $ is characterized by the mean coefficient matrix $ \bar{\mathbf{B}} $, a positive definite mean covariance matrix $ \mathbf{S} $, along with an additional positive definite matrix $ \mathbf{N} $ of size $ ml \times ml $, and a degrees-of-freedom parameter $ v \geq 0 $. In this regard, Uhlig (2005) considers priors and posteriors for $ \left(\mathbf{B}, \mathbf{\Sigma}\right) $ belonging to the Normal-Wishart family, in which $ \mathbf{\Sigma}^{-1} $ follows the Wishart distribution $ W\left(\mathbf{S}^{-1} / v, v\right) $ with $ E\left(\mathbf{\Sigma}^{-1}\right) = \mathbf{S}^{-1} $, while the columnwise vectorized form of the coefficient matrix, $ vec\left(\mathbf{B}\right) $, conditional on $ \mathbf{\Sigma} $, is assumed to follow the normal distribution $ \mathcal{N}\left(vec\left(\bar{\mathbf{B}}\right), \mathbf{\Sigma} \bigotimes \mathbf{N}^{-1}\right) $.<br /><br /> Furthermore, Proposition A.1 in Uhlig (1994) shows that if the prior is characterized by the set of parameters $ \left(\bar{\mathbf{B}}_{0}, \mathbf{S}_{0}, \mathbf{N}_{0}, v_{0}\right) $, the posterior is then parameterized by the set $ \left(\bar{\mathbf{B}}_{T}, \mathbf{S}_{T}, \mathbf{N}_{T}, v_{T}\right) $ where: \begin{align} v_{T} &= T + v_{0} \label{eq5} \\ \mathbf{N}_{T} &= \mathbf{N}_{0} + \mathbf{X}^{\prime}\mathbf{X} \label{eq6} \\ \bar{\mathbf{B}}_{T} &= \mathbf{N}_{T}^{-1} \left(\mathbf{N}_{0}\bar{\mathbf{B}}_{0} + \mathbf{X}^{\prime}\mathbf{X}\widehat{\mathbf{B}}\right) \label{eq7} \\ \mathbf{S}_{T} &= \frac{v_{0}}{v_{T}}\mathbf{S}_{0} + \frac{T}{v_{T}}\widehat{\mathbf{\Sigma}} + \frac{1}{v_{T}}\left(\widehat{\mathbf{B}} - \bar{\mathbf{B}}_{0}\right)^{\prime}\mathbf{N}_{0}\mathbf{N}_{T}^{-1}\left(\widehat{\mathbf{B}} - \bar{\mathbf{B}}_{0}\right) \label{eq8} \end{align} For instance, in the case of a flat prior with $ \bar{\mathbf{B}}_{0} $ and $ \mathbf{S}_{0} $ arbitrary and $ \mathbf{N}_{0} = v_{0} = 0 $, Uhlig (2005) shows that $ \bar{\mathbf{B}}_{T} = \widehat{\mathbf{B}}, \mathbf{S}_{T} = \widehat{\mathbf{\Sigma}}, \mathbf{N}_{T} = \mathbf{X}^{\prime}\mathbf{X}, $ and $ v_{T} = T $.<br /><br /> <h3 id="sec3">Recovering Structural Shocks from an SRVAR</h3> Here we consider two approaches to recovering the structural shocks from an SRVAR. The first is based on what's known as the <b>rejection method</b>, which consists of the following algorithmic steps: <ol> <li>Run an unrestricted VAR in order to get $ \widehat{\mathbf{B}} $ and $ \widehat{\mathbf{\Sigma}} $. </li> <li>Randomly draw $ \bar{\mathbf{B}}_{T} $ and $ \mathbf{S}_{T} $ from the posterior distributions. </li> <li>Extract the orthogonal innovations from the model using a Cholesky decomposition.</li> <li>Calculate the resulting impulse responses from Step 3.</li> <li>Randomly draw an orthogonal impulse vector $ \mathbf{\alpha} $.</li> <li>Multiply the responses from Step 4 by $ \mathbf{\alpha} $ and check if they match the imposed signs.</li> <li>If yes, keep the response. 
If not, drop the draw.</li></ol> Note here that a draw $ \mathbf{\alpha} $ from an $ m $-dimensional unit sphere is easily obtained by drawing $ \widetilde{\mathbf{\alpha}} $ from an $ m $-dimensional standard normal distribution and then normalizing its length to unity. In other words, $ \mathbf{\alpha} = \widetilde{\mathbf{\alpha}} / ||\widetilde{\mathbf{\alpha}}||$.<br /><br /> The second approach, proposed in Uhlig (2005), is called the <b>penalty function method</b>. In particular, the latter proposes the minimization of a penalty function given by: \begin{align} b(x) = \begin{cases} x &\quad \text{if } x \leq 0\\ 100 x &\quad \text{if } x > 0 \end{cases} \end{align} which penalizes positive responses in linear proportion, and rewards negative responses in linear proportion, albeit at a slope 100 times smaller than that on the positive side.<br /><br /> The steps involved in this algorithm can be summarized as follows: <ol> <li>Run an unrestricted VAR in order to get $ \widehat{\mathbf{B}} $ and $ \widehat{\mathbf{\Sigma}} $. </li> <li>Randomly draw $ \bar{\mathbf{B}}_{T} $ and $ \mathbf{S}_{T} $ from the posterior distributions. </li> <li>Extract the orthogonal innovations from the model using a Cholesky decomposition.</li> <li>Calculate the resulting impulse responses from Step 3.</li> <li>Minimize the penalty function with respect to an orthogonal impulse vector $ \mathbf{\alpha} $.</li> <li>Multiply the responses from Step 4 by $ \mathbf{\alpha} $.</li> </ol> Now, let $ r_{(j, \mathbf{\alpha})}(k) $ denote the response of variable $ j $ at step $ k $ to the impulse vector $ \mathbf{\alpha} $. Then the underlying minimization problem can be written as follows: \begin{align} \min_{\mathbf{\alpha}} \mathbf{\Psi}(\mathbf{\alpha}) = \sum_{j \in J}\sum_{k \in K}b\left(l_{j}\frac{r_{(j, \mathbf{\alpha})}(k)}{\sigma_{j}}\right) \end{align} To treat the signs equally, let $ l_j=-1 $ if the sign of the restriction is positive and $ l_j=1 $ if the sign of the restriction is negative. Scaling the variables is done by taking the standard errors, $ \sigma_{j} $, of the first differences of the variables. We parameterize the impulse vector $ \mathbf{\alpha} $ on the unit sphere in $ n $-space by randomly drawing an $ (n-1) $-vector from a standard normal distribution and mapping the draw onto the $ n $-dimensional unit sphere using a stereographic projection.<br /><br /> <h3 id="sec4">SRVAR EViews Add-in</h3> Now we turn to the implementation of the SRVAR add-in. First, we need to download and install the add-in from the EViews website. The add-in can be found at <a href="https://www.eviews.com/Addins/srvar.aipz">https://www.eviews.com/Addins/srvar.aipz</a>. We can also do this from inside EViews itself. In particular, after opening EViews, click on <b>Add-ins</b> from the main menu, and click on <b>Download Add-ins...</b>. 
From here, locate the <i>srvar</i> add-in and click on <b>Install</b>.<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <a href="http://www.eviews.com/blog/srvar/addin_download.png"><img height="auto" src="http://www.eviews.com/blog/srvar/addin_download.png" title="Add-ins Download" width="360" /></a><br /> <small>Figure 1: Add-in installation</small><br /><br /> </center><!-- :::::::::: FIGURE 1 :::::::::: --> After installing, we import the data file <i>uhligdata1.xls</i>, which can be found in the installation folder, typically located in <b>[Windows User Folder]/Documents/EViews Addins/srvar</b>.<br /><br /> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <a href="http://www.eviews.com/blog/srvar/workfile.png"><img height="auto" src="http://www.eviews.com/blog/srvar/workfile.png" title="Uhlig (2005) Data" width="360" /></a><br /> <small>Figure 2: Uhlig (2005) Data</small><br /><br /> </center><!-- :::::::::: FIGURE 2 :::::::::: --> Next, we take 100 times the logarithm of the series <b>gdpc1</b> (real GDP), <b>gdpdef</b> (GDP price deflator), <b>cprindex</b> (commodity price index), <b>totresns</b> (total reserves), and <b>bognonbr</b> (non-borrowed reserves). To do this, we can issue the following EViews commands:<br /><br /> <PRE><br />series gdpc1 = @log(gdpc1)*100.0<br />series gdpdef = @log(gdpdef)*100.0<br />series cprindex = @log(cprindex)*100.0<br />series totresns = @log(totresns)*100.0<br />series bognonbr = @log(bognonbr)*100.0<br /></PRE> We now replicate Figures 5, 6, and 14 from Uhlig (2005). In particular, using the aforementioned variables, Uhlig (2005) first estimates a VAR with 12 lags, without a constant or trend. We can of course do this in EViews as follows:<br /><br /> <ol> <li>Click on <b>Quick/Estimate VAR...</b> to open the VAR estimation window.</li> <li>In the VAR estimation window, under <b>Endogenous variables</b>, enter <i>gdpc1 gdpdef cprindex fedfunds bognonbr totresns</i>.</li> <li>Under <b>Lag Intervals for Endogenous</b> enter <i>1 12</i>.</li> <li>Under <b>Exogenous variables</b>, remove the <i>c</i> to drop the constant.</li> <li>Hit <b>OK</b>.</li> </ol> <!-- :::::::::: FIGURE 3 :::::::::: --> <center> <a href="http://www.eviews.com/blog/srvar/basic_var.png"><img height="auto" src="http://www.eviews.com/blog/srvar/basic_var.png" title="VAR Estimation Window" width="360" /></a><br /> <small>Figure 3: VAR Estimation Window</small><br /><br /> </center><!-- :::::::::: FIGURE 3 :::::::::: --> <!-- :::::::::: FIGURE 4 :::::::::: --><center> <a href="http://www.eviews.com/blog/srvar/basic_var_results.png"><img height="auto" src="http://www.eviews.com/blog/srvar/basic_var_results.png" title="VAR Estimation Results" width="360" /></a><br /> <small>Figure 4: VAR Estimation Results</small><br /><br /> </center><!-- :::::::::: FIGURE 4 :::::::::: --> Next, we obtain the 60-period-ahead impulse response function using asymptotic standard error bands and <b>fedfunds</b> as the impulse. 
We can do this as follows:<br /><br /> <ol> <li>From the VAR estimation window, click on <b>View/Impulse Response...</b> to open the impulse response estimation window.</li> <li>Under <b>Display Format</b>, click <b>Multiple Graphs</b>.</li> <li>Under <b>Response Standard Errors</b>, click on <b>Analytic (asymptotic)</b>.</li> <li>Under <b>Impulses</b>, enter <i>fedfunds</i>.</li> <li>Under <b>Responses</b> enter <i>gdpc1 gdpdef cprindex bognonbr totresns</i>.</li> <li>Under <b>Periods</b>, enter <i>60</i>.</li> <li>Hit <b>OK</b>.</li> </ol> <!-- :::::::::: FIGURE 5 :::::::::: --> <center> <a href="http://www.eviews.com/blog/srvar/basic_irf.png"><img height="auto" src="http://www.eviews.com/blog/srvar/basic_irf.png" title="IRF Estimation Window" width="360" /></a><br /> <small>Figure 5: IRF Estimation Window</small><br /><br /> </center><!-- :::::::::: FIGURE 5 :::::::::: --> Finally, Figure 5 of Uhlig (2005) is replicated below: <!-- :::::::::: FIGURE 6 :::::::::: --> <center> <a href="http://www.eviews.com/blog/srvar/basic_irf_graphs.png"><img height="auto" src="http://www.eviews.com/blog/srvar/basic_irf_graphs.png" title="IRF Graphs" width="360" /></a><br /> <small>Figure 6: IRF Graphs</small><br /><br /> </center><!-- :::::::::: FIGURE 6 :::::::::: --> The price puzzle pointed out by Sims (1992) is clearly visible in the graphs above. In particular, the GDP deflator increases after a contractionary monetary policy shock. By contrast, the sign restricted identification approach (shown in Figures 7 and 8 below) avoids the price puzzle by construction.<br /><br /> To demonstrate how sign restricted VARs avoid the price puzzle, we now make use of the SRVAR add-in. In this regard, we first create the sign restriction vector. In particular, Uhlig (2005) suggests that the impulse responses be positive on the 4th variable <b>fedfunds</b>, and negative on the 2nd variable <b>gdpdef</b>, the 3rd variable <b>cprindex</b>, and the 5th variable <b>bognonbr</b>. Thus, we create the sign restriction vector by issuing the following command: <PRE><br />vector rest = @fill(+4, -2, -3, -5)<br /></PRE> Finally, we invoke the SRVAR add-in and proceed with the rejection method as the SRVAR impulse response algorithm. We do this by clicking on the <b>Add-ins</b> menu in the main EViews menu, and then clicking on <b>Sign restricted VAR</b>. This opens the SRVAR add-in window. There, we enter the following details:<br /><br /> <ol> <li>Under <b>Endogenous variables</b> enter <i>gdpc1 gdpdef cprindex fedfunds bognonbr totresns</i>.</li> <li>Click on <b>Include constant</b> to remove the checkmark.</li> <li>Under <b>Number of lags</b>, enter <i>12</i>.</li> <li>In the <b>Sign restriction vector</b> textbox enter <i>+4, -2, -3, -5</i>.</li> <li>In the <b>Number of horizons</b> textbox enter <i>60</i>.</li> <li>For the <b>Maximum number of restrictions</b> enter <i>6</i>.</li> <li>Hit <b>OK</b>.</li> </ol> The steps above produce a graph of sign restricted VAR impulse responses which corresponds to Figure 6 in Uhlig (2005). <!-- :::::::::: FIGURE 7 :::::::::: --><center> <a href="http://www.eviews.com/blog/srvar/srvar_irf_graphs.png"><img height="auto" src="http://www.eviews.com/blog/srvar/srvar_irf_graphs.png" title="SRVAR Impulse Responses (Rejection Method)" width="360" /></a><br /> <small>Figure 7: SRVAR Impulse Responses (Rejection Method)</small><br /><br /></center><!-- :::::::::: FIGURE 7 :::::::::: --> From the SRVAR impulse response graph, it is readily seen that there is no price puzzle by construction. 
However, the impulse response of real GDP is within a ±0.2% interval around zero. Alternatively, using the SRVAR penalty function algorithm, the analogous figure is presented below: <!-- :::::::::: FIGURE 8 :::::::::: --><center> <a href="http://www.eviews.com/blog/srvar/srvar_irf_graphs_penalty.png"><img height="auto" src="http://www.eviews.com/blog/srvar/srvar_irf_graphs_penalty.png" title="SRVAR Impulse Responses (Penalty Function Method)" width="360" /></a><br /> <small>Figure 8: SRVAR Impulse Responses (Penalty Function Method)</small><br /><br /></center><!-- :::::::::: FIGURE 8 :::::::::: --> <h3 id="sec5">Conclusion</h3> In this blog entry we presented the sign restricted VAR add-in for EViews. The add-in is based on the work of Uhlig (2005) and generates impulse response curves based on Bayesian inference which accommodate sign restrictions in the VAR model. In the next blog, we will describe the implementation of the ARW add-in which will show how to impose zero restrictions on the impact period of the impulse response function.<br /><br /> <hr /><h3 id="sec6">References</h3> <ol class="bib2xhtml"> <!-- Authors: Uhlig Harald --><li><a name="uhlig-1994"></a>Uhlig, Harald. What macroeconomists should know about unit roots: a Bayesian perspective. <cite>Econometric Theory</cite>, 10:645–671, 1994.</li> <!-- Authors: Uhlig Harald --><li><a name="uhlig-2005"></a>Uhlig, Harald. What are the effects of monetary policy on output? Results from an agnostic identification procedure. <cite>Journal of Monetary Economics</cite>, 52(2):381–419, 2005.</li> </ol></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com16tag:blogger.com,1999:blog-6883247404678549489.post-50858909833227118062019-07-17T13:20:00.001-07:002019-07-17T13:20:09.515-07:00Pyeviews update: now compatible with Python 3<span style="font-family: "verdana" , sans-serif;">If you’re a user of both EViews and Python, then you may already be aware of pyeviews (if not, take a look at our original blog post <a href="http://blog.eviews.com/2016/03/pyeviews-python-eviews.html" target="_blank">here</a> or our whitepaper <a href="http://www.eviews.com/download/whitepapers/pyeviews.pdf" target="_blank">here</a>). </span><br /><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;">Pyeviews has been updated and is now compatible with Python 3. We’ve also added support for numpy structured arrays and several additional time series frequencies. 
</span><br /><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;">You can get these updates through pip:</span><br /><br /><span style="font-family: "courier new" , "courier" , monospace;">pip install pyeviews</span><br /><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;">Through the conda-forge channel in Anaconda:</span><br /><br /><span style="font-family: "courier new" , "courier" , monospace;">conda install pyeviews -c conda-forge</span><br /><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;">Or by typing:</span><br /><br /><span style="font-family: "courier new" , "courier" , monospace;">python setup.py install</span><br /><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;">in your installation directory.</span><br /><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;"><br /></span><br />IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com0tag:blogger.com,1999:blog-6883247404678549489.post-22104609406423388342019-06-26T13:04:00.000-07:002019-06-27T09:54:36.913-07:00Bayesian VAR Prior Comparison<script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], ubar: ['{\\mkern 0.5mu\\underline{\\mkern-0.5mu#1\\mkern-0.5mu}\\mkern 0.5mu}', 1], undrln: ['{\\rlap{{\\hspace{-1pt}}\\underline{\\hphantom{H}}}{#1^{#4}\\vphantom{\\beta}}_{\\hspace{#3}\\vphantom{\\underline{}}_{#2}}}', 4], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"></script> <span style="font-family: "verdana" sans-serif"> EViews 11 introduces a completely new Bayesian VAR engine that replaces the one from previous versions of EViews. The new engine offers two major new priors, the Independent Normal-Wishart and the Giannone, Lenza and Primiceri, that complement the previously implemented Minnesota/Litterman, Normal-Flat, Normal-Wishart and Sims-Zha priors. The new priors were enhanced with new options for forming the underlying covariance matrices that make up essential components of the prior.<a name='more'></a><br /><br /> The covariance matrices that form the prior specification are generally formed by specifying a matrix alongside a number of hyper-parameters which define any non-zero elements of the matrix. The hyper-parameters themselves are either selected by the researcher, or taken from an initial error covariance estimate. Sensitivity of the posterior distribution to the choice of hyper-parameter is a well-researched topic, with practitioners often selecting many different hyper-parameter values to check that their analysis does not change based solely on (an often arbitrary) choice of parameter. 
However, this sensitivity analysis is restricted to the parameters selected by the researcher, with often only passing thought given to those estimated by an initial covariance estimate.<br /><br /> Since EViews 11 offers a number of choices for estimating the initial covariance, we thought it would be interesting to perform a comparison of forecast accuracy both across prior types, and across choices of initial covariance estimate.<br /><br /> <h3>Table of Contents</h3><ol> <li><a href="#sec1">Prior Technical Details</a> <li><a href="#sec2">Estimating a Bayesian VAR in EViews</a> <li><a href="#sec3">Data and Models</a> <li><a href="#sec4">Results</a> <li><a href="#sec5">Conclusions</a></ol><br /> <h3 id="sec1">Prior Technical Details</h3> We will not provide in-depth details of each prior type here, leaving such details to the <a href="http://www.eviews.com/help/helpintro.html#page/content%2FbVAR-Bayesian_VAR_Models.html%23">EViews documentation</a> and its <a href="http://www.eviews.com/help/content/bVAR-References.html#">references</a>. However we will provide a summary with enough details to demonstrate how an initial covariance matrix influences each prior type. We will also, for sake of notational convenience, ignore exogenous variables and the constant from our discussion.<br /><br /> First we write the VAR as: $$y_t = \sum_{j=1}^p\Pi_jy_{t-j}+\epsilon_t$$ where <ul> <li><h4></h4>$y_t = (y_{1t},y_{2t}, ..., y_{Mt})'$ is an M vector of endogenous variables <li><h4></h4>$\Pi_j$ are $M\times M$ matrices of lag coefficients <li><h4></h4>$\epsilon_t$ is an $M$ vector of errors where we assume $\epsilon_t\sim N(0,\Sigma)$<br /><br /></ul> If we define $x_t=(y_{t-1}', ..., y_{t-p})$ stack variables to form, for example, $Y = (y_1, ...., y_T)'$, and let $y=vec(Y')$, the multivariate normal assumption on $\epsilon_t$ gives us: $$(y\mid \beta)\sim N((X\otimes I_M)\beta, I_T\otimes \Sigma)$$ Bayesian estimation of VAR models then centers around the derivation of posterior distributions of $\beta$ and $\Sigma$ based upon the above multivariate distribution, and prior distributional assumptions on $\beta$ and $\Sigma$.<br /><br /> To demonstrate how each prior relies on an initial estimate of $\Sigma$, for the priors other than Litterman, we only need to consider the component of each prior relating to the distribution $\beta$, and in particular its covariance. <ol> <li><h4><i>Litterman/Minnesota Prior</i></h4> $$\beta \sim N\left(\undrln{\beta}{Mn}{2.25pt}{}, \undrln{V}{Mn}{2.25pt}{}\right)$$ $\undrln{V}{Mn}{2.25pt}{}$ is assumed to be a diagonal matrix. The diagonal elements corresponding to endogenous variables, $i,j$ at lag $l$ are specified by: $$\undrln{V}{Mn, i,j}{-4.5pt}{l} = \begin{cases} \left(\frac{\lambda_1}{l^{\lambda_3}}\right)^2 &\text{for } i = j\\ \left(\frac{\lambda_1 \lambda_2 \sigma_i}{l^{\lambda_3} \sigma_j}\right)^2 &\text{for } i \neq j \end{cases} $$ where $\lambda_1$, $\lambda_2$ and $\lambda_3$ are hyper-parameters chosen by the researcher, and $\sigma_i$ is the square root of the corresponding $(i,i)^{\text{th}}$ element of an initial estimate of $\Sigma$.<br /><br /> The Litterman/Minnesota prior also assumes that $\Sigma$ is fixed, forming no prior on $\Sigma$, just using the initial estimate as given.<br /><br /> <li><h4><i>Normal-Flat and Normal-Wishart</i></h4> $$\beta\mid\Sigma\sim N\left(\undrln{\beta}{N}{2.25pt}{}, \undrln{H}{N}{0pt}{}\otimes\Sigma\right)$$ where $\undrln{H}{N}{0pt}{} = c_3I_M$ and $c_3$ is a chosen hyper-parameter. 
As such, the Normal-Flat and Normal-Wishart priors do not rely on an initial estimate of the error covariance at all.<br /><br /> <li><h4><i>Independent Normal-Wishart</i></h4> $$\beta\sim N\left(\undrln{\beta}{INW}{2.25pt}{}, \undrln{H}{INW}{0pt}{}\otimes\Sigma\right)$$ where, again, $\undrln{H}{INW}{0pt}{} = c_3I_M$ and $c_3$ is a chosen hyper-parameter. Thus, like the Normal-Flat and Normal-Wishart priors, the prior matrices do not depend upon an initial $\Sigma$ estimate. However, the Independent Normal-Wishart requires an MCMC chain to derive the posterior distributions, and the MCMC chain does require an initial estimate for $\Sigma$ to start the chain (although, hopefully, the impact of this starting estimate should be minimal).<br /><br /> <li><h4><i>Sims-Zha</i></h4> $$\beta\mid\beta_0\sim N\left(\undrln{\beta}{SZ}{2.25pt}{}, \undrln{H}{SZ}{0pt}{}\otimes\Sigma\right)$$ $\undrln{H}{SZ}{0pt}{}$ is assumed to be a diagonal matrix. The diagonal elements corresponding to endogenous variables, $i,j$ at lag $l$ are specified by: $$\undrln{H}{SZ, i, j}{-4.5pt}{l} = \left(\frac{\lambda_0\lambda_1}{\sigma_j l^{\lambda_3}}\right)^2 \text{for } i = j$$ where $\lambda_0$, $\lambda_1$ and $\lambda_3$ are hyper-parameters chosen by the researcher, and $\sigma_j$ is the square root of the corresponding $(j,j)^{\text{th}}$ element of an initial estimate of $\Sigma$.<br /><br /> <li><h4><i>Giannone, Lenza and Primiceri</i></h4> $$\beta\mid\beta_0\sim N(\undrln{\beta}{GLP}{2.25pt}{}, \undrln{H}{GLP}{0pt}{}\otimes\Sigma)$$ $\undrln{H}{GLP}{0pt}{}$ is assumed to be a diagonal matrix. The diagonal elements corresponding to endogenous variables, $i,j$ at lag $l$ are specified by: $$\undrln{H}{GLP,i,j}{-4.5pt}{l} = \left(\frac{\lambda_1}{\phi_j l^{\lambda_3}}\right)^2 \text{for } i = j$$ where $\lambda_1$, $\lambda_3$ and $\phi_j$ are hyper-parameters of the prior.<br /><br /> GLP's method revolves around using optimization techniques to select the optimal hyper-parameter values. However, it is possible to optimize only a subset of the hyper-parameters and select others. $\phi_j$ is often set, rather than optimized, as $\phi_j = \sigma_j$, the square root of the corresponding $(j,j)^{\text{th}}$ element of an initial estimate of $\Sigma$. Even when $\phi_j$ is optimized rather than set, an initial estimate is used as the starting point of the optimizer.<br /><br /></ol> Of these priors, only the Normal-Flat and Normal-Wishart priors do not rely on an initial estimate of $\Sigma$ at all. Consequently, for the remaining priors, the method used for that initial estimate might have a large impact on the final results.<br /><br /> Different implementations of Bayesian VAR estimation use different methods to calculate the initial $\Sigma$. Some of these methods are:<br /><br /> <ul> <li><h4></h4>A classical VAR model. <li><h4></h4>A classical VAR model with the off-diagonal elements replaced with zero. <li><h4></h4>A univariate AR(p) model for each endogenous variable (forcing $\Sigma$ to be diagonal). 
<li><h4></h4>A univariate AR(1) model for each endogenous variable (forcing $\Sigma$ to be diagonal).<br /><br /></ul> With each of these methods, there is also the decision as to whether to degree-of-freedom adjust the final estimate (and if so, by what factor), and whether to include any exogenous variables from the Bayesian VAR in the calculation of the classical VAR or univariate AR models.<br /><br /> Bayesian VAR priors can be complemented with the addition of dummy-observation priors to increase the predictive power of the model. There are two specific priors: the sum-of-coefficients prior, which adds additional observations to the start of the data to account for any unit root issues, and the dummy-initial-observation prior, which adds additional observations to account for cointegration.<br /><br /> With the addition of extra observations to the data used in the Bayesian prior, there is also a choice to be made as to whether those additional observations are also included in any initial covariance estimation.<br /><br /> <h3 id="sec2">Estimating a Bayesian VAR in EViews</h3> Estimating VARs in EViews is straightforward: you simply select the variables you want in your VAR, right-click, select <i>Open As VAR</i> and then fill in the details of the VAR, including the estimation sample and the number of lags. For Bayesian VARs, the only additional steps that need to be taken are changing the VAR type to Bayesian, and then filling in the details of the prior you want to use and any hyper-parameter specification.<br /><br /> For full details on how to estimate a Bayesian VAR in EViews, refer to the <a href="http://www.eviews.com/help/content/bVAR-Estimating_a_Bayesian_VAR_in_EViews.html#">documentation</a> and <a href="http://www.eviews.com/help/content/bVAR-Examples.html#">examples</a>.<br /><br /> However, we’ve also provided a simple video demonstration of both importing the data used in this blog post, and estimating and forecasting with the normal-Wishart prior.<br /><br /> <center><iframe width="640" height="540" src="http://www.eviews.com/blog/bvar/video/video_player.html?embedIFrameId=embeddedSmartPlayerInstance" webkitallowfullscreen=""></iframe><br /><br /></center> <h3 id="sec3">Data and Models</h3> To evaluate the forecasting performance of the priors under different initial covariance estimation methods, we'll perform an experiment closely following that performed in Giannone, Lenza and Primiceri (GLP). Notably, we use the Stock and Watson (2008) data set, which includes data on 149 quarterly US macroeconomic variables between 1959Q1 and 2008Q4. <br /><br /> Following GLP, we produce forecasts from the BVARs recursively for two forecast lengths (1 quarter and 1 year), starting with data from 1959 to 1974, then increasing the estimation sample by one quarter at a time, to give 128 different estimations.<br /><br /> We perform two sets of experiments, each representing a different-sized VAR:<br /><br /> <ul> <li><h4></h4>SMALL containing just three variables - GDP, the GDP deflator and the federal funds rate. 
<li><h4></h4>MEDIUM containing seven variables - adding consumption, investment, hours and wages.<br /><br /></ul> Each of these VARs is estimated with five lags using a classical VAR and 39 different combinations of prior and initial covariance options:<br /><br /> <!-- :::::::::: TABLE 0 :::::::::: --><center> <a href="http://www.eviews.com/blog/bvar/table0.png"><img height="auto" src="http://www.eviews.com/blog/bvar/table0.png" title="Models Overview" width="600" /></a><br /><br /></center><!-- :::::::::: TABLE 0 :::::::::: --> After each BVAR estimation, Bayesian sampling of the forecast period is performed - drawing from the full posterior distributions for the Litterman, Normal-flat, Normal-Wishart and Sims-Zha priors, and running MCMC draws for the Independent normal-Wishart and GLP priors. The mean of the draws is used as a point estimate, and the root mean square error (RMSE) is calculated. Each forecast draw uses 100,000 iterations. With 39*128=4,992 forecasts and two sizes of VARs, that is a total of roughly 1 billion draws!<br /><br /> <h3 id="sec4">Results</h3> The following tables show the average root mean square error of each of the four sets of forecasts. Click on a table to enlarge the image.<br /><br /> <!-- :::::::::: TABLE 1 :::::::::: --><center> <a href="http://www.eviews.com/blog/bvar/table1.png"><img height="auto" src="http://www.eviews.com/blog/bvar/table1.png" title="Three variable VAR one quarter GDP forecast RMSE" width="720" /></a><br /><br /></center><!-- :::::::::: TABLE 1 :::::::::: --> <!-- :::::::::: TABLE 2 :::::::::: --><center> <a href="http://www.eviews.com/blog/bvar/table2.png"><img height="auto" src="http://www.eviews.com/blog/bvar/table2.png" title="Three variable VAR one year GDP forecast RMSE" width="720" /></a><br /><br /></center><!-- :::::::::: TABLE 2 :::::::::: --> <!-- :::::::::: TABLE 3 :::::::::: --><center> <a href="http://www.eviews.com/blog/bvar/table3.png"><img height="auto" src="http://www.eviews.com/blog/bvar/table3.png" title="Five variable VAR one quarter GDP forecast RMSE" width="720" /></a><br /><br /></center><!-- :::::::::: TABLE 3 :::::::::: --> <!-- :::::::::: TABLE 4 :::::::::: --><center> <a href="http://www.eviews.com/blog/bvar/table4.png"><img height="auto" src="http://www.eviews.com/blog/bvar/table4.png" title="Five variable VAR one year GDP forecast RMSE" width="720" /></a><br /><br /></center><!-- :::::::::: TABLE 4 :::::::::: --> <h3 id="sec5">Conclusions</h3>For the three variable one-quarter ahead experiment, it is clear that the GLP prior is more effective than the other prior types, although the Litterman prior is relatively close in accuracy. In terms of which covariance method performs best, there is no clear winner, with the differences between covariance choice only having a large impact on the Litterman and GLP priors.<br /><br /> The choice of whether to include dummy observation priors, and if so whether to include them in the covariance calculation, appears to only impact the GLP prior severely.<br /><br /> The overall winner, at least in terms of RMSE, was the GLP prior with a diagonal VAR used for initial covariance choice without dummy observations.<br /><br /> A similar story is told for the three variable one-year ahead experiment; however, this time the Litterman prior is the clear winner. Again there is not much difference between covariance choices and dummy observation choices. 
Notably, although Litterman does best across the options, the overall most accurate was the Normal-flat.<br /><br /> Expanding to the five variable VARs, the one-quarter ahead experiment is not as clear-cut as the three variable equivalent. Across covariance options, it is a toss-up between Litterman and GLP. The choice of covariance has a bigger impact, with the univariate AR(5) option looking best.<br /><br /> For the first time, optimizing $\phi$ in the GLP prior has a positive impact, with the version including dummy observations being the overall most accurate option combination.<br /><br /> The final experiment is similar: there is no clear-cut winner in terms of prior choice, although Litterman might just edge GLP. The choice of covariance again has an impact, with a univariate AR(5) again looking best.<br /><br /> Across all the experiments, it is difficult to declare an overall winner. The original Litterman and GLP priors are ahead of the others, but knowing which covariance choice to select or whether to include dummy observations is more ambiguous.<br /><br /> One absolutely clear result is, however, that no matter which combination of prior and options is selected, the Bayesian VAR will vastly outperform a classical VAR.<br /><br /> Finally, it is worth mentioning that these results are, with the obvious exception of the GLP prior, for a fixed set of hyper-parameters, and the conclusions may differ if attention is given to simultaneously finding the best set of hyper-parameters and covariance choice. </span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com1tag:blogger.com,1999:blog-6883247404678549489.post-24918477283176795732019-05-13T09:34:00.000-07:002019-05-14T11:00:17.335-07:00Functional Coefficient Estimation: Part I (Nonparametric Estimation)<script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"></script> <span style="font-family: "verdana" sans-serif"> Recently, EViews 11 introduced several new nonparametric techniques. One of those features is the ability to estimate functional coefficient models. To help familiarize users with this important technique, we're launching a multi-part blog series on nonparametric estimation, with a particular focus on the theoretical and practical aspects of functional coefficient estimation. 
Before delving into the subject matter, however, in this Part I of the series, we give a brief and gentle introduction to some of the most important principles underlying nonparametric estimation, and illustrate them using EViews programs.<a name='more'></a><br /><br /> <h3>Table of Contents</h3><ol> <li><a href="#sec1">Nonparametric Estimation</a> <li><a href="#sec2">Global Methods</a> <ol type="i"> <li><a href="#sec2.1">Optimal Sieve Length</a> <li><a href="#sec2.2">Critiques</a> </ol> <li><a href="#sec3">Local Methods</a> <ol type="i"> <li><a href="#sec3.1">Localized Kernel Regression</a> <li><a href="#sec3.2">Bandwidth Selection</a> </ol> <li><a href="#sec4">Conclusion</a> <li><a href="#sec5">Files</a> <li><a href="#sec6">References</a></ol><br /> <h3 id="sec1">Nonparametric Estimation</h3> Traditional least squares regression is parametric in nature. It confines relationships between the dependent variable $ Y_{t} $ and independent variables (regressors) $ X_{1,t}, X_{2,t}, \ldots $ to be, in expectation, linear in the parameters. For instance, if the true data generating process (DGP) for $ Y_{t} $ derives from $ p $ regressors, the least squares regression model postulates that: $$ m(x_{1}, \ldots, x_{p}) \equiv E(Y_t | X_{1,t} = x_{1}, \ldots, X_{p,t} = x_{p}) = \beta_0 + \sum_{k=1}^{p}{\beta_k x_{k}} $$ Since this relationship holds only in expectation, a statistically equivalent form of this statement is: \begin{align} Y_t &= m\left(X_{1,t}, \ldots, X_{p,t}\right) + \epsilon_{t} \nonumber \\ &=\beta_0 + \sum_{k=1}^{p}{\beta_k X_{k,t}} + \epsilon_t \label{eq.1.1} \end{align} where the error term $ \epsilon_{t} $ has mean zero, and parameter estimates are solutions to the minimization problem: $$ \arg\!\min_{\hspace{-1em}\beta_{0}, \ldots, \beta_{p}} E\left(Y_{t} - \beta_0 - \sum_{k=1}^{p}{\beta_k X_{k,t}}\right)^{2} $$ Nevertheless, while this framework is appealing and intuitive and is typically sufficient for most applications, when the true but unknown DGP is in fact non-linear, inference is rendered unreliable.<br /><br /> On the other hand, nonparametric modelling prefers to remain agnostic about functional forms. Relationships are, in expectation, simply functions $ m(\cdot) $, and if the true DGP for $ Y_{t} $ is a function of $ p $ regressors, then: $$ Y_t = m\left(X_{1,t}, \ldots, X_{p,t}\right) + \epsilon_{t} $$ Here, estimators of $ m(\cdot) $ can generally be cast as minimization problems of the form: \begin{align} \arg\!\min_{\hspace{-1em} m\in \mathcal{M}} E\left(Y_{t} - m\left(X_{1,t}, \ldots, X_{p,t}\right)\right)^{2} \label{eq.1.2} \end{align} where $ \mathcal{M} $ is now a function space. In this regard, a nonparametric estimator can be thought of as a solution to a search problem over functions as opposed to parameters.<br /><br /> The problem in \eqref{eq.1.2}, however, is infeasible. It turns out that the function space is effectively uncountable. In fact, even if it were not, solutions would be unidentified since different functions in $ \mathcal{M} $ can map to the same range. Accordingly, general practice is to reduce $ \mathcal{M} $ to a lower-dimensional countable space and optimize over it. 
This typically implies a reduction of the problem to a parametric framework so that the problem in \eqref{eq.1.2} is cast into: \begin{align} \arg\!\min_{\hspace{-1em} h\in \mathcal{H}} E\left(Y_{t} - h\left(X_{1,t}, \ldots, X_{p,t}; \mathbf{\Theta} \right)\right)^{2} \label{eq.1.3} \end{align} where $ h(\cdot; \mathbf{\Theta}) \in \mathcal{H} $ is a function with associated parameters $ \mathbf{\Theta} \in \mathbf{R}^{q} $ and $ \mathcal{H} $ is a function space which is <i>dense</i> in $ \mathcal{M} $; formally, $ h^{\star} \in \mathcal{H} \rightarrow m^{\star} \in \mathcal{M} $ where $ \rightarrow $ denotes asymptotic convergence. In other words, any feasible estimate $ h^{\star} $ must become arbitrarily close to the infeasible estimate $ m^{\star} $ as the space $ \mathcal{H} $ grows to asymptotic equivalence with $ \mathcal{M} $. In this regard, nonparametric estimators are typically classified into either <i>global</i> or <i>local</i> kinds.<br /><br /> <h3 id="sec2">Global Methods</h3> Global estimators, generally synonymous with the class of <i>sieve</i> estimators introduced by Grenander (1981), approximate arbitrary functions by simpler functions which are uniformly dense in the target space $ \mathcal{M} $. A particularly important class of such estimators is <i>linear sieves</i>, which are constructed as linear combinations of popular basis functions. The latter include <i>Bernstein polynomials</i>, <i>Chebychev polynomials</i>, <i>Hermite polynomials</i>, <i>Fourier series</i>, <i>polynomial splines</i>, <i>B-splines</i>, and <i>wavelets</i>. Formally, when the function $ m(\cdot) $ is univariate, linear sieves assume the following general structure: \begin{align} \mathcal{H}_{J} = \left\{h \in \mathcal{M}: h(x; \mathbf{\Theta}) = \sum_{j=1}^{J}\theta_{j}f_{j}(x)\right\} \label{eq.1.4} \end{align} where $ \theta_{j} \in \mathbf{\Theta} $, $ f_{j}(\cdot) $ is one of the aforementioned basis functions, and $ J \rightarrow \infty$.<br /><br /> For instance, if the sieve exploits the <i>Stone-Weierstrass approximation theorem</i>, which states that any continuous function on a compact interval can be uniformly approximated on that interval by a polynomial to any desired degree of accuracy, then $ f_{j}(x) = x^{j-1} $. In particular, if the unknown function of interest is $ m(x) $, then choosing to approximate the latter with a polynomial of degree $ J = J^{\star} < \infty $ (some integer), reduces the problem in \eqref{eq.1.3} to: $$ \arg\!\min_{\hspace{-1em}\theta_{0}, \ldots, \theta_{J^{\star}}} E\left(Y_{t} - \theta_{0} - \sum_{j=1}^{J^{\star}}\theta_{j}X_{t}^{j} \right)^{2} $$ where $ Y_{t} $ are the values we observe from the theoretical function $ m(x) $, and $ X_{t} $ is the regressor we're using to estimate it. Usual least squares now yields $ \widehat{\theta}_{j} $ for $ j=0, 1,\ldots, J^{\star} $. 
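As a concrete illustration, this least squares step takes only a couple of commands in EViews. The snippet below is a minimal, hypothetical sketch for a degree-5 polynomial sieve; it assumes a workfile already containing series <i>y</i> and <i>x</i>, such as the simulated data described below (the object names and the fixed degree are illustrative):<br /><br /> <PRE><br />' hypothetical sketch: degree-5 polynomial sieve fitted by ordinary least squares<br />equation eq_poly.ls y c x x^2 x^3 x^4 x^5<br />' static fitted values, which approximate m(x) at the observed values of x<br />eq_poly.fit m_hat<br /></PRE> Higher (or lower) sieve lengths are handled simply by adding or dropping power terms in the equation specification.<br /><br /> 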
Furthermore, $ m(x) $ can be approximated as $$ m(x) \approx \widehat{\theta}_{0} + \sum_{j=1}^{J^{\star}}\widehat{\theta}_{j}x^{j} $$ where $ x $ is evaluated either on a grid over $ [a,b] $ of arbitrary fineness, or on the original regressor values so that $ x \equiv X_{t} $.<br /><br /> To demonstrate the procedure, define the true but unknown function $ m(x) $ as: \begin{align} m(x) = \sin(x)\cos(\frac{1}{x}) + \log\left(x + \sqrt{x^2+1}\right) \quad x \in [-6,6]\label{eq.1.5} \end{align} Furthermore, generate observable data from $ m(x) $ as $ Y_{t} = m(x) + 0.5\epsilon_{t} $ and generate the regressor data as $ X_{t} = x - 0.5 + \eta_{t} $ where $ \epsilon_{t} $ and $ \eta_{t} $ are mutually independent standard normal and standard uniform random variables, respectively. Estimation is now summarized for polynomial degrees 1, 5, and 15, respectively.<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --><center> <a href="http://www.eviews.com/blog/funcoef/polysieveplot.jpeg"><img height="auto" src="http://www.eviews.com/blog/funcoef/polysieveplot.jpeg" title="Polynomial Sieve Estimation" width="360" /></a><br /> <small>Figure 1: Polynomial Sieve Estimation</small><br /><br /></center><!-- :::::::::: FIGURE 1 :::::::::: --> Alternatively, if the sieve exploits Hermite polynomials, one can construct the <i>Gaussian sieve</i>, which reduces the problem in \eqref{eq.1.3} to: $$ \arg\!\min_{\hspace{-1em}\theta_{0}, \ldots, \theta_{J^{\star}}} E\left(Y_{t} - \theta_{0} - \sum_{j=1}^{J^{\star}}\theta_{j}\phi(X_{t})H_{j}(X_{t}) \right)^{2} $$ where $ \phi(\cdot) $ is the standard normal density and $ H_{j}(\cdot) $ are Hermite polynomials of degree $ j $. The figure below demonstrates the procedure using sieve lengths 1, 3, and 10, respectively.<br /><br /> <!-- :::::::::: FIGURE 2 :::::::::: --><center> <a href="http://www.eviews.com/blog/funcoef/gausssieveplot.jpeg"><img height="auto" src="http://www.eviews.com/blog/funcoef/gausssieveplot.jpeg" title="Gaussian Sieve Estimation" width="360" /></a><br /> <small>Figure 2: Gaussian Sieve Estimation</small><br /><br /></center><!-- :::::::::: FIGURE 2 :::::::::: --> Clearly, both sieve estimators are very similar. So how does one select an <i>optimal</i> sieve? There really isn't a prescription for such optimization. Each sieve has its advantages and disadvantages, but the general rule of thumb is to choose a sieve that most closely resembles the function of interest $ m(\cdot) $. For instance, if the function is polynomial, then using a polynomial sieve is probably best. Alternatively, if the function is expected to be smooth and concentrated around its mean, a Gaussian sieve will work well. On the other hand, the question of optimal sieve length lends itself to more concrete advice.<br /><br /> <h4 id="sec2.1">Optimal Sieve Length</h4> Given the examples explored above, it is evident that sieve length plays a major role in fitting accuracy. For instance, estimation with a low sieve length resulted in severe underfitting, while a higher sieve length resulted in better fit. The question, of course, is whether an optimal length can be determined.<br /><br /> Li et al. (1987) studied three well-known procedures, all of which are based on the mean squared forecast error of the estimated function over a search grid $ \mathcal{J} \equiv \left\{J_{min},\ldots, J_{max}\right\} $, and all of which are asymptotically equivalent. 
In particular, let $ J^{\star} $ denote the optimal sieve length and consider:<br /><br /> <ol> <li>$ C_{p} $ method due to Mallows (1973): $$ J^{\star} = \arg\!\min_{J \in \mathcal{J}} \frac{1}{T}\sum_{t=1}^{T}\left(Y_{t} - \widehat{m}(X_{t})\right)^{2} + 2\widehat{\sigma}^{2}\frac{J}{T} $$ where $ \widehat{\sigma}^{2} = \frac{1}{T}\sum_{t=1}^{T}\left(Y_{t} - \widehat{m}(X_{t})\right)^{2}$ <li>Generalized cross-validation method due to Craven and Wahba (1979): $$ J^{\star} = \arg\!\min_{J \in \mathcal{J}} \frac{1}{(1 - (J/T))^{2}T}\sum_{t=1}^{T}\left(Y_{t} - \widehat{m}(X_{t})\right)^{2} $$ <li>Leave-one-out cross validation method due to Stone (1974): $$ J^{\star} = \arg\!\min_{J \in \mathcal{J}} \frac{1}{T}\sum_{t=1}^{T}\left(Y_{t} - \widehat{m}_{\setminus t}(X_{t})\right)^{2} $$ where the subscript notation $ \setminus t $ indicates estimation after dropping observation $ t $. </ol> Here we discuss the algorithm for the last of the three procedures. In particular, with the search grid $ \mathcal{J} $ defined as before, iterate the following steps over $ J \in \mathcal{J} $: <ol> <li>For each observation $ t^{\star} \in \left\{1, \ldots, T \right\} $: <ol type="i"> <li>Solve the optimization problem in \eqref{eq.1.3} using data from the pair $ (Y_{t}, X_{t})_{t \neq t^{\star}} $, and derive the estimated model as follows: $$ \widehat{m}_{J,\setminus t^{\star}}(x) \equiv \widehat{\theta}_{_{J,\setminus t^{\star}}0} + \sum_{j=1}^{J}\widehat{\theta}_{_{J,\setminus t^{\star}}j}f_{j}(x)$$ where the subscript $ J,\setminus t^{\star} $ indicates that parameters are estimated using sieve length $ J $, after dropping observation $ t^{\star} $. <li>Derive the forecast error for the dropped observation as follows: $$ e_{_{J}t^{\star}} \equiv Y_{t^{\star}} - \widehat{m}_{J,\setminus t^{\star}}(X_{t^{\star}}) $$ </ol> <li>Derive the cross-validation mean squared error for sieve length $ J $ as follows: $$ MSE_{J} = \frac{1}{T}\sum_{t=1}^{T} e_{_{J}t}^{2} $$ <li>Determine the optimal sieve length $ J^{\star} $ as the sieve length which minimizes $ MSE_{J} $ across $ \mathcal{J} $. In other words $$ J^{\star} = \arg\!\min_{J\in\mathcal{J}} MSE_{J} $$ </ol> In words, the algorithm moves across the sieve search grid $ \mathcal{J} $ and computes an out-of-sample forecast error for each observation. The optimal sieve length is that which minimizes the average mean squared error across the search grid. We demonstrate the selection criteria and accompanying estimation when using a grid search from 1 to 15.<br /><br /> <!-- :::::::::: FIGURE 3 :::::::::: --><center> <a href="http://www.eviews.com/blog/funcoef/optest.jpeg"><img height="auto" src="http://www.eviews.com/blog/funcoef/optest.jpeg" title="Sieve Regression with Optimized Sieve Length Selection" width="720" /></a><br /> <small>Figure 3: Sieve Regression with Optimized Sieve Length Selection</small><br /><br /></center><!-- :::::::::: FIGURE 3 :::::::::: --> Evidently, both the polynomial and Gaussian sieve models ought to use a sieve length of 15.<br /><br /> <h4 id="sec2.2">Critiques</h4> While global nonparametric estimators are easy to work with, they exhibit several well-recognized drawbacks. First, they leave little room for fine-tuning estimation. For instance, in the case of polynomial sieves, the polynomial degree is not continuous. 
In other words, if estimation underfits when sieve length is $ J $, but overfits when sieve length is $ J+1 $, then there is no polynomial degree $ J < J^{\star} < J+1 $.<br /><br /> Second, global estimators are often subject to infeasibility since regressor values may not be sufficiently small. This is because increased sieve lengths can cause the entries of the regressor covariance matrix to become extremely large. In turn, this can render the covariance matrix nearly singular, so that its inverse cannot be computed reliably, and by extension, render estimation infeasible. In other words, at some point, increasing the polynomial degree further does not lead to improved estimates.<br /><br /> Lastly, it is worth pointing out that global estimators fit curves by smoothing (averaging) over the entire domain. As such, they can have difficulties handling observations with strong influences such as outliers and regime switches. This is due to the fact that outlying observations will be averaged with the rest of the data, resulting in a curve that significantly under- or over-fits these observations. To illustrate this point, consider a modification of equation \eqref{eq.1.5} with outliers when $ -1 < x \leq 1 $ : \begin{align} m(x) = \begin{cases} \sin(x)\cos(\frac{1}{x}) + \log\left(x + \sqrt{x^2+1}\right) & \text{if } x\in [-6,-1]\\ \sin(x)\cos(\frac{1}{x}) + \log\left(x + \sqrt{x^2+1}\right) + 4 & \text{if } x \in (-1,1]\\ \sin(x)\cos(\frac{1}{x}) + \log\left(x + \sqrt{x^2+1}\right) - 2 & \text{if } x \in (1,6] \end{cases}\label{eq.1.6} \end{align} We generate $ Y_{t} $ and $ X_{t} $ as before, and estimate this model using both polynomial and Gaussian sieves based on cross-validated sieve length selection.<br /><br /> <!-- :::::::::: FIGURE 4 :::::::::: --><center> <a href="http://www.eviews.com/blog/funcoef/optestoutliers.jpeg"><img height="auto" src="http://www.eviews.com/blog/funcoef/optestoutliers.jpeg" title="Sieve Regression with Optimized Sieve Length Selection and Outliers" width="720" /></a><br /> <small>Figure 4: Sieve Regression with Optimized Sieve Length Selection and Outliers</small><br /><br /></center><!-- :::::::::: FIGURE 4 :::::::::: --> Clearly, both procedures have a difficult time handling jumps in the domain region $ -1 < x \leq 1 $. Nevertheless, it is evident that the Gaussian sieve does significantly better than polynomial regression. This is further corroborated by the leave-one-out cross-validation MSE values, which indicate that the Gaussian sieve minimum MSE is roughly one-sixth of the polynomial sieve minimum MSE.<br /><br /> It turns out that a number of these shortcomings can be mitigated by averaging locally instead of globally. In this regard, we turn to the idea of <i>local estimation</i> next.<br /><br /> <h3 id="sec3">Local Methods</h3> The general idea behind local nonparametric estimators is <i>local averaging</i>. The procedure partitions the functional variable $ x $ into <i>bins</i> of a particular size, and estimates $ m(x) $ as a linear interpolation of the average values of the dependent variable at the middle of each bin. We demonstrate the procedure when $ m(x) $ is the function in \eqref{eq.1.5}.<br /><br /> In particular, define $ Y_{t} $ as before, but let $ X_{t} = x $. In other words, we consider deterministic regressors. We will relax the latter assumption later, but this is momentarily more instructive as it leads to contiguous partitions of the explanatory variable $ X_{t} $. Finally, define the bins as quantiles of $ x $ and consider the procedure with bin partitions equal to 2, 5, 15, and 30, respectively. 
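<br /><br /> A minimal sketch of this local-averaging scheme is given below (again in Python, with simulated data and variable names of our own choosing; it is separate from the EViews programs provided at the end of this post): <pre>
import numpy as np

rng = np.random.default_rng(12345)

# Simulated data with a deterministic regressor, as described above
x = np.linspace(-6.0, 6.0, 400)
m = np.sin(x) * np.cos(1.0 / x) + np.log(x + np.sqrt(x**2 + 1.0))   # cf. equation (1.5)
Y = m + 0.5 * rng.standard_normal(x.size)

def local_average(x, Y, n_bins):
    """Average Y within quantile bins of x; return bin midpoints and bin means."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    means = np.array([Y[idx == b].mean() for b in range(n_bins)])
    return mids, means

# The estimate of m(x) is the linear interpolation of the bin means
mids, means = local_average(x, Y, n_bins=15)
m_hat = np.interp(x, mids, means)
</pre>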
<br /><br /> <!-- :::::::::: FIGURE 5 :::::::::: --><center> <a href="http://www.eviews.com/blog/funcoef/quantplot.jpeg"><img height="auto" src="http://www.eviews.com/blog/funcoef/quantplot.jpeg" title="Local Averaging with Quantiles" width="720" /></a><br /> <small>Figure 5: Local Averaging with Quantiles</small><br /><br /></center><!-- :::::::::: FIGURE 5 :::::::::: --> Clearly, when the number of bins is 2, the estimate is a straight line and severely underfits the objective function. Nevertheless, as the number of bins increases, so does the accuracy of the estimate. Indeed, local estimation here is shown to be significantly more accurate than global estimation used earlier on the same function $ m(x) $. This is of course a consequence of local averaging, which performs piecemeal smoothing on only those observations restricted to each bin. Naturally, high leverage observations and outliers are better accommodated as they are averaged only with those observations in the immediate vicinity which also fall in the same bin. In fact, we can demonstrate this using the function $ m(x) $ in \eqref{eq.1.6}.<br /><br /> <!-- :::::::::: FIGURE 6 :::::::::: --><center> <a href="http://www.eviews.com/blog/funcoef/quantplotoutliers.jpeg"><img height="auto" src="http://www.eviews.com/blog/funcoef/quantplotoutliers.jpeg" title="Local Averaging with Quantiles and Outliers" width="720" /></a><br /> <small>Figure 6: Local Averaging with Quantiles and Outliers</small><br /><br /></center><!-- :::::::::: FIGURE 6 :::::::::: --> Evidently, increasing the number of bins leads to increasingly better adaptation to the presence of outlying observations.<br /><br /> It is worth pointing out here that, unlike sieve estimation, which can suffer from infeasibility with increased sieve length, local estimation places in principle no limit on how finely we may define the bin width. Nevertheless, as is evident from the visuals, while increasing the number of bins will reduce bias, it will also introduce variance. In other words, smoothness is sacrificed in exchange for accuracy. This is of course the <i>bias-variance tradeoff</i> and is precisely the mechanism by which fine-tuning the estimator is possible.<br /><br /> <h4 id="sec3.1">Localized Kernel Regression</h4> The idea of local averaging can be extended to accommodate various bin types and sizes. The most popular approaches leverage information about the points at which estimates of $ m(x) $ are desired. For instance, if estimates of $ m(x) $ are desired at a set of points $ \left(x_{1}, \ldots, x_{J} \right) $, then the estimate $ \widehat{m}(x_{j}) $ can be the average of $ Y_{t} $ over those points $ X_{t} $ in some <i>neighborhood</i> of $ x_{j} $ for $ j=1,\ldots, J $. In other words, bins are defined as neighborhoods centered around the points $ x_{j} $, with the size of the neighborhood determined by some distance metric. Then, to gain control over the bias-variance tradeoff, the neighborhood size can be combined with a penalization scheme. In particular, penalization introduces a weight function which disadvantages those $ X_{t} $ that are too far from $ x_{j} $ in any direction. 
In other words, those $ X_{t} $ close to $ x_{j} $ (in the neighborhood) are assigned larger weights, whereas those $ X_{t} $ far from $ x_{j} $ (outside the neighborhood) are weighed down.<br /><br /> Formally, when the function $ m(\cdot) $ is univariate, local kernel estimators solve optimization problems of the form: \begin{align} \arg\!\min_{\hspace{-1em} \beta_{0}} E\left(Y_{t} - \beta_{0}\right)^{2}K_{h}\left(X_{t} - x_{j}\right) \quad \forall j \in \left\{1, \ldots, J\right\}\label{eq.1.7} \end{align} Here we use the traditional notation $ K_{h}(X_{t} - x_{j}) \equiv K\left(\frac{|X_{t} - x_{j}|}{h}\right) $ where $ K(\cdot) $ is a distributional weight function, otherwise known as a <i>kernel</i>, $ |\cdot| $ denotes a distance metric (typically Euclidean), $ h $ denotes the size of the local neighbourhood (bin), otherwise known as a <i>bandwidth</i>, and $ \beta_{0} \equiv \beta_{0}(x_{j}) $ due to its dependence on the evaluation point $ x_{j} $.<br /><br /> To gain further insight, it is easiest to think of $ K(\cdot) $ as a probability density function with support on $ [-1,1] $. For instance, consider the famous <i>Epanechnikov</i> kernel: $$ K(u) = \frac{3}{4}\left(1 - u^{2}\right) \quad \text{for} \quad |u| \leq 1 $$ or the <i>cosine</i> kernel specified by: $$ K(u) = \frac{\pi}{4}\cos(\frac{\pi}{2}u) \quad \text{for} \quad |u| \leq 1 $$ <table> <tbody> <tr> <td> <!-- :::::::::: FIGURE 7A :::::::::: --> <center> <a href="http://www.eviews.com/blog/funcoef/epankern.jpeg"><img height="auto" src="http://www.eviews.com/blog/funcoef/epankern.jpeg" title="Epanechnikov Kernel" width="360" /></a><br /> </center> <!-- :::::::::: FIGURE 7A :::::::::: --> </td> <td> <!-- :::::::::: FIGURE 7B :::::::::: --> <center> <a href="http://www.eviews.com/blog/funcoef/coskern.jpeg"><img height="auto" src="http://www.eviews.com/blog/funcoef/coskern.jpeg" title="Cosine Kernel" width="360" /></a><br /> </center> <!-- :::::::::: FIGURE 7B :::::::::: --> </td> </tr> <tr> <td> <center> <small>Figure 7A: Epanechnikov Kernel</small><br /><br /> </center> </td> <td> <center> <small>Figure 7B: Cosine Kernel</small><br /><br /> </center> </td> </tr> </tbody></table> Now, if $ |X_{t} - x| > h $, it is clear that $ K(\cdot) = 0 $. In other words, if the distance between $ X_{t} $ and $ x $ is larger than the bandwidth (neighborhood size), then $ X_{t} $ lies outside the neighborhood and its importance will be weighed down to zero. Alternatively, if $ |X_{t} - x| = 0 $, then $ X_{t} = x $ and $ X_{t} $ will be assigned the highest weight, which in the case of the Epanechnikov and cosine kernels is 0.75 and $ \pi/4 \approx 0.79 $, respectively.<br /><br /> To demonstrate the mechanics, consider a kernel estimator based on $ k- $nearest neighbouring points, or the weighted $ k-NN $ estimator. In particular, this estimator defines the neighbourhood as all points $ X_{t} $ whose distance to an evaluation point $ x_{j} $ is no greater than the distance of the $ k^{\text{th}} $ nearest point $ X_{t} $ to that same evaluation point $ x_{j} $. When used in the optimization problem \eqref{eq.1.7}, the resulting estimator is also sometimes referred to as <i>LOWESS</i> - LOcally WEighted Scatterplot Smoothing.<br /><br /> The algorithm used in the demonstration is relatively simple. First, define $ k^{\star} $ as the number of neighbouring points to be considered and define a grid $ \mathcal{X} \equiv \{x_{1}, \ldots, x_{J}\} $ of points at which an estimate of $ m(\cdot) $ is desired. 
Next, define a kernel function $ K(\cdot) $. Finally, for each $ j \in \{1, \ldots, J\} $, execute the following: <ol> <li>For each $ t \in \{1,\ldots, T\} $, compute $ d_{t} = |X_{t} - x_{j}| $ -- the Euclidean distance between $ X_{t} $ and $ x_{j} $. <li>Order the $ d_{t} $ in ascending order to form the ordered set $ \{d_{(1)} \leq d_{(2)} \leq \ldots \leq d_{(T)}\} $. <li>Set the bandwidth as $ h = d_{(k^{\star})} $. <li>For each $ t \in \{1,\ldots, T\} $, compute a weight $ w_{t} \equiv K_{h}(X_{t} - x_{j}) $. <li>Solve the optimization problem: $$ \arg\!\min_{\hspace{-1em} \beta_{0}} E\left(Y_{t} - \beta_{0}\right)^{2}w_{t} $$ to derive the parameter estimate: $$ \widehat{m}(x_{j}) \equiv \widehat{\beta}_{0}(x_{j}) = \frac{\sum_{t=1}^{T}w_{t}Y_{t}}{\sum_{t=1}^{T}w_{t}} $$ </ol> An estimate of $ m(x) $ along the domain $ \mathcal{X} $ is now the linear interpolation of the points $ \{\widehat{\beta}_{0}(x_{1}), \ldots, \widehat{\beta}_{0}(x_{J})\} $.<br /><br /> For instance, suppose $ m(x) $ is the curve defined in \eqref{eq.1.6}, the evaluation grid $ \mathcal{X} $ consists of points in the interval $ [-6,6] $, and $ K(\cdot) $ is the Epanechnikov kernel. Furthermore, suppose $ Y_{t} = m(x) + 0.5\epsilon_{t} $ and $ X_{t} = x - 0.5 + \eta_{t} $. Notice that we're back to treating the regressor as a stochastic variable. Then, the $ k-NN $ estimator of $ m(\cdot) $ with 15, 40, 100, and 200 nearest neighbour points, respectively, is illustrated below.<br /><br /> <!-- :::::::::: FIGURE 8 :::::::::: --><center> <a href="http://www.eviews.com/blog/funcoef/knnreg.jpeg"><img height="auto" src="http://www.eviews.com/blog/funcoef/knnreg.jpeg" title="k-NN Regression" width="720" /></a><br /> <small>Figure 8: k-NN Regression</small><br /><br /></center><!-- :::::::::: FIGURE 8 :::::::::: --> Clearly, the estimator can be very adaptive to the nuances of outlying points but can suffer from both underfitting and overfitting. In this regard, observe that the number of neighbouring points is directly related to the neighbourhood (bandwidth) size. In other words, as the number of neighbouring points increases, the bandwidth increases. This is evidenced by a very volatile estimator when the number of neighbouring points is 15, and a significantly smoother estimator when the number of neighbouring points is 200. Therefore, there must be some optimal middle ground between undersmoothing and oversmoothing. In general, notice that while the bandwidth is bounded below by zero, it is not bounded above. Thus, there is an extensive range of bandwidth possibilities. So how does one define what constitutes an optimal bandwidth?<br /><br /> <h4 id="sec3.2">Bandwidth Selection</h4> While we will cover optimal bandwidth selection in greater detail in Part II of this series, it is not difficult to draw similarities between the role of bandwidth size in local estimation and sieve length in global methods. In fact, similar methods for optimal bandwidth selection exist in the context of local kernel regression and, analogous to sieve methods, they are also typically grid searches. 
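<br /><br /> Before turning to the details, it may help to see the object whose tuning parameter is being selected. The following is a minimal sketch of the weighted $ k-NN $ estimator described in the previous subsection, for a fixed number of neighbours (in Python, with simulated data and names of our own choosing; it is separate from the EViews programs provided at the end of this post): <pre>
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel with support on [-1, 1]."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def knn_fit(X, Y, grid, k):
    """Weighted k-NN (LOWESS-style) estimate of m(.) at each point of the grid."""
    m_hat = np.empty(grid.size)
    for j, xj in enumerate(grid):
        d = np.abs(X - xj)                    # distances to the evaluation point
        h = np.sort(d)[k - 1]                 # bandwidth = distance of the k-th nearest point
        w = epanechnikov(d / h)               # kernel weights
        m_hat[j] = np.sum(w * Y) / np.sum(w)  # locally weighted average
    return m_hat

# Simulated data in the spirit of the text: Y_t = m(x) + 0.5*eps_t, X_t = x - 0.5 + eta_t
rng = np.random.default_rng(12345)
x = np.linspace(-6.0, 6.0, 400)
m = np.sin(x) * np.cos(1.0 / x) + np.log(x + np.sqrt(x**2 + 1.0))
Y = m + 0.5 * rng.standard_normal(x.size)
X = x - 0.5 + rng.uniform(size=x.size)

m_hat = knn_fit(X, Y, grid=np.linspace(-6.0, 6.0, 200), k=40)
</pre> Repeating such a fit over a grid of values of $ k $ and scoring each fit out of sample is precisely the cross-validation idea formalized next. 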
In this regard, in order to avoid complicated theoretical discourse, consider momentarily the optimization problem in \eqref{eq.1.7}.<br /><br /> It is not difficult to demonstrate that the estimator $ \widehat{\beta}_{0}(x) $ satisfies: \begin{align*} \widehat{\beta}_{0}(x) &= \frac{T^{-1}\sum_{t=1}^{T}K_{h}\left(X_{t} - x\right)Y_{t}}{T^{-1}\sum_{t=1}^{T}K_{h}\left(X_{t} - x\right)}\\ &=\frac{1}{T}\sum_{t=1}^{T}\left(\frac{K_{h}\left(X_{t} - x\right)}{T^{-1}\sum_{i=1}^{T}K_{h}\left(X_{i} - x\right)}\right)Y_{t} \end{align*} Accordingly, if $ h\rightarrow 0 $, then $ \frac{K_{h}\left(X_{t} - x\right)}{T^{-1}\sum_{i=1}^{T}K_{h}\left(X_{i} - x\right)} \rightarrow T $ and is only defined on $ x = X_{t} $. In other words, as the bandwidth approaches zero, $ \widehat{\beta}_{0}(x) \equiv \widehat{\beta}_{0}(X_{t}) \rightarrow Y_{t} $, and the estimator is effectively an interpolation of the data. Naturally, this estimator has very small bias since it picks up every data point in $ Y_{t} $, but also has very large variance for the same reason.<br /><br /> Alternatively, should $ h \rightarrow \infty $, then $ \frac{K_{h}\left(X_{t} - x\right)}{T^{-1}\sum_{i=1}^{T}K_{h}\left(X_{i} - x\right)} \rightarrow 1 $ for all values of $ x $, and $ \widehat{\beta}_{0}(x) \rightarrow T^{-1}\sum_{t=1}^{T}Y_{t} $. That is, $ \widehat{\beta}_{0}(x) $ is a constant function equal to the mean of $ Y_{t} $, and therefore has zero variance, but suffers from very large modelling bias since it picks up only those points equal to the average.<br /><br /> Between these two extremes is an entire spectrum of models $ \left\{\mathcal{M}_{h} : h \in \left(0, \infty\right) \right\} $ ranging from the most complex $ \mathcal{M}_{0} $, to the least complex $ \mathcal{M}_{\infty} $. In other words, the bandwidth parameter $ h $ governs model complexity. Thus, the optimal bandwidth selection problem selects an $ h^{\star} $ to generate a model $ \mathcal{M}_{h^{\star}} $ best suited for the data under consideration. In other words, it reduces to the classical bias-variance tradeoff.<br /><br /> To demonstrate certain principles, we close this section by returning to the leave-one-out cross-validation procedure discussed earlier. As a matter of fact, the algorithm also applies to local kernel regression and we do so in the context of $ k-NN $ regression, also discussed earlier.<br /><br /> In particular, define a search grid $ \mathcal{K} \equiv \{k_{min}, \ldots, k_{max}\} $ of the number of neighbouring points, select a kernel function $ K(\cdot) $, and iterate the following steps over $ k \in \mathcal{K} $: <ol> <li>For each observation $ t^{\star} \in \left\{1, \ldots, T \right\} $: <ol type="i"> <li>For each $ t \neq t^{\star} \in \{1,\ldots, T\} $, compute $ d_{t \neq t^{\star}} = |X_{t} - X_{t^{\star}}| $. <li>Order the $ d_{t \neq t^{\star}} $ in ascending order to form the ordered set $ \{d_{t \neq t^{\star} (1)} \leq d_{t \neq t^{\star} (2)} \leq \ldots d_{t \neq t^{\star} (T-1)}\} $. <li>Set the bandwidth as $ h_{\setminus t^{\star}} = d_{t \neq t^{\star} (k)} $. <li>For each $ t \neq t^{\star} \in \{1,\ldots, T\} $ , compute a weight $ w_{_{\setminus t^{\star}}t} \equiv K_{h_{\setminus t^{\star}}}(X_{t} - X_{t^{\star}}) $. 
<li>Solve the optimization problem: $$ \arg\!\min_{\hspace{-1em} \beta_{0}} E\left(Y_{t} - \beta_{0}\right)^{2}w_{_{\setminus t^{\star}}t} $$ to derive the parameter estimate: $$ \widehat{m}_{k,\setminus t^{\star}}(X_{t^{\star}}) \equiv \widehat{\beta}_{_{k,\setminus t^{\star}}0}(X_{t^{\star}}) = \frac{\sum_{t\neq t^{\star}}^{T}w_{_{\setminus t^{\star}}t}Y_{t}}{\sum_{t\neq t^{\star}}^{T}w_{_{\setminus t^{\star}}t}} $$ where we use the subscript $ k,\setminus t^{\star} $ to denote explicit dependence on the number of neighbouring points $ k $ and the dropped observation $ t^{\star} $. <li>Derive the forecast error for the dropped observation as follows: $$ e_{_{k}t^{\star}} \equiv Y_{t^{\star}} - \widehat{m}_{k,\setminus t^{\star}}(X_{t^{\star}}) $$ </ol> <li>Derive the cross-validation mean squared error when using $ k $ nearest neighbouring points: $$ MSE_{k} = \frac{1}{T}\sum_{t=1}^{T} e_{_{k}t}^{2} $$ <li>Determine the optimal number of neighbouring points $ k^{\star} $ as the number which minimizes $ MSE_{k} $ across $ \mathcal{K} $. In other words $$ k^{\star} = \arg\!\min_{k\in\mathcal{K}} MSE_{k} $$ </ol> We close this section and blog entry with an illustration of the procedure. In particular, we again consider the function in \eqref{eq.1.6}, and use the cosine kernel to search for the optimal number of neighbouring points over the search grid $ \mathcal{K} \equiv \{40, \ldots, 80\} $.<br /><br /> <!-- :::::::::: FIGURE 9 :::::::::: --><center> <a href="http://www.eviews.com/blog/funcoef/knnregopt.jpeg"><img height="auto" src="http://www.eviews.com/blog/funcoef/knnregopt.jpeg" title="k-NN Regression with Optimal k" width="360" /></a><br /> <small>Figure 9: k-NN Regression with Optimized k</small><br /><br /></center><!-- :::::::::: FIGURE 9 :::::::::: --> <h3 id="sec4">Conclusion</h3> Given the recent introduction of functional coefficient estimation in EViews 11, our aim in this multi-part blog series is to complement this feature release with a theoretical and practical overview. As a first step in this regard, we've dedicated this Part I of the series to gently introducing readers to the principles of nonparametric estimation, and illustrated them using EViews programs. In particular, we've covered principles of sieve and kernel estimation, as well as optimal sieve length and bandwidth selection. In Part II, we'll extend the principles discussed here and cover the theory underlying functional coefficient estimation in greater detail.<br /><br /> <h3 id="sec5">Files</h3>The workfile and program files can be downloaded here.<br /><br /> <ul> <li> <a href="http://www.eviews.com/blog/funcoef/sievereg.prg">sievereg.prg</a> <li> <a href="http://www.eviews.com/blog/funcoef/locavg.prg">locavg.prg</a> <li> <a href="http://www.eviews.com/blog/funcoef/knnreg.prg">knnreg.prg</a></ul><br /><br /> <hr /><h3 id="sec6">References</h3> <ol class="bib2xhtml"> <!-- Authors: Craven Peter and Wahba Grace --><li><a name="craven-1979"></a>Peter Craven and Grace Wahba. Estimating the correct degree of smoothing by the method of generalized cross-validation. <cite>Numerische Mathematik</cite>, 31:377–403, 1979.</li> <!-- Authors: Grenander Ulf --><li><a name="grenander-1981"></a>Ulf Grenander. Abstract inference. Technical report, 1981.</li> <!-- Authors: Li Ker Chau and others --><li><a name="li-1987"></a>Ker-Chau Li et al. Asymptotic optimality for C<sub>p</sub>, C<sub>L</sub>, cross-validation and generalized cross-validation: Discrete index set. 
<cite>The Annals of Statistics</cite>, 15(3):958–975, 1987.</li> <!-- Authors: Mallows Colin L --><li><a name="mallows-1973"></a>Colin L Mallows. Some comments on c p. <cite>Technometrics</cite>, 15(4):661–675, 1973.</li> <!-- Authors: Stone Mervyn --><li><a name="stone-1974"></a>Mervyn Stone. Cross-validation and multinomial prediction. <cite>Biometrika</cite>, 61(3):509–515, 1974.</li> </ol></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com0tag:blogger.com,1999:blog-6883247404678549489.post-69999115724147797532019-04-23T15:09:00.001-07:002019-04-26T07:16:56.719-07:00Generalized Autoregressive Score (GAS) Models: EViews Plays with Python<script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"></script> <span style="font-family: "verdana" sans-serif"> Starting with EViews 11, users can take advantage of communication between EViews and Python. This means that workflow can begin in EViews, switch over to Python, and be brought back into EViews seamlessly. To demonstrate this feature, we will use U.S. macroeconomic data on the unemployment rate to fit a GARCH model in EViews, transfer the data over and estimate a GAS model equivalent of the GARCH model in Python, transfer the data back to EViews, and compare the results.<br /><br /><a name='more'></a> <h3>Table of Contents</h3><ol> <li><a href="#sec1">GAS Models</a> <li><a href="#sec2">Example Description</a> <li><a href="#sec3">Preparatory Work</a> <li><a href="#sec4">Data Analysis in EViews</a> <li><a href="#sec5">Data Analysis in Python</a> <li><a href="#sec6">Back to EViews</a> <li><a href="#sec7">Files</a> <li><a href="#sec8">References</a></ol><br /> <h3 id="sec1">GAS Models</h3> Historically, time varying parameters have received an enormous amount of attention and the literature is saturated with numerous specifications and estimation techniques. Nevertheless, many of these specifications are often difficult to estimate, such as the family of stochastic volatility models, among which GARCH is a canonical example. In this regard, Creal, Koopman, and Lucas (2013) and Harvey (2013) proposed a novel family of time-varying parametric models estimated using the familiar maximum likelihood framework with the score of the conditional density function driving the updating mechanism. The family has now come to be known as the <b>generalized autoregressive score</b> (GAS) family or model.<br /><br /> GAS models are agnostic as to the type of data under consideration as long as the score function and the Hessian are well defined. In particular, the model assumes an input vector of random variables at time $ t $, say $ \pmb{y}_{t} \in \mathbf{R}^{q} $, where $ q=1 $ if the setting is univariate. 
Furthermore, the model assumes a conditional distribution at time $ t $ specified as: $$ \pmb{y}_{t} | \pmb{y}_{1}, \ldots, \pmb{y}_{t-1} \sim p(\pmb{y}_{t}; \pmb{\theta}_{t}) $$ where $ \pmb{\theta}_{t} \equiv \pmb{\theta}_{t} (\pmb{y}_{1}, \ldots, \pmb{y}_{t-1}, \pmb{\xi}) \in \Theta \subset \mathbf{R}^{r}$ is a vector of time varying parameters which fully characterize $ p(\cdot) $ and are functions of past data and possibly time invariant parameters $ \pmb{\xi} $.<br /><br /> What distinguishes GAS models from the rest of the literature is that dynamics in $ \pmb{\theta}_{t} $ are driven by an autoregressive mechanism augmented with the score of the conditional distribution $ p(\cdot) $. In particular, $$ \pmb{\theta}_{t+1} = \pmb{\omega} + \pmb{A}\pmb{s}_{t} + \pmb{B}\pmb{\theta}_{t} $$ where $ \pmb{\omega}, \pmb{A}, $ and $ \pmb{B} $ are matrix coefficients collected in $ \pmb{\xi} $, and $ \pmb{s}_{t} $ is a vector proportional to the score of $ p(\cdot) $: $$ \pmb{s}_{t} = \pmb{S}_{t}(\pmb{\theta}_{t}) \pmb{\nabla}_{t}(\pmb{y}_{t}, \pmb{\theta}_{t}) $$ Above, $ \pmb{S}_{t} $ is an $ r\times r $ positive definite scaling matrix known at time $ t $, and $$ \pmb{\nabla}_{t}(\pmb{y}_{t}, \pmb{\theta}_{t}) \equiv \frac{\partial \log p(\pmb{y}_{t}; \pmb{\theta}_{t})}{\partial \pmb{\theta}_{t}}$$ It turns out that different choices of $ \pmb{S}_{t} $ produce different GAS models. For instance, setting $ \pmb{S}_{t} $ to some power $ \gamma \geq 0 $ of the information matrix of $ \pmb{\theta}_{t} $ will change how the variance of $ \pmb{\nabla}_{t} $ impacts the model. In particular, consider: $$ \pmb{S}_{t} = \pmb{\mathcal{I}}_{t}(\pmb{\theta}_{t})^{-\gamma} $$ where $$ \pmb{\mathcal{I}}_{t}(\pmb{\theta}_{t}) = E_{t-1}\left\{ \pmb{\nabla}_{t}(\pmb{y}_{t}, \pmb{\theta}_{t}) \pmb{\nabla}_{t}(\pmb{y}_{t}, \pmb{\theta}_{t})^{\top} \right\} $$ Typical choices for $ \gamma $ are 0, 1/2, and 1. For instance, if $ \gamma=0 $, $ \pmb{S}_{t} = \pmb{I} $ and no scaling occurs. Alternatively, when $ \gamma = 1/2 $, the scaling results in $ Var_{t-1}(\pmb{s}_{t}) = \pmb{I} $; in other words, standardization occurs.<br /><br /> Regardless of the choice of $ \gamma $, $ \pmb{s}_{t} $ is a martingale difference with respect to the distribution $ p(\cdot) $, and $ E_{t-1}\left\{ \pmb{s}_{t} \right\} = 0 $ for all $ t $. This latter property further implies that $ \pmb{\theta}_{t} $ is in fact a stationary process with long-term mean value $ (\pmb{I} - \pmb{B})^{-1}\pmb{\omega} $, whenever the spectral radius of $ \pmb{B} $ is less than one. Thus, $ \pmb{\omega} $ and $ \pmb{B} $ are respectively responsible for controlling the level and the persistence of $ \pmb{\theta}_{t} $, whereas $ \pmb{A} $ controls the impact of $ \pmb{s}_{t} $. In other words, $ \pmb{s}_{t} $ denotes the direction of updating $ \pmb{\theta}_{t} $ to $ \pmb{\theta}_{t+1} $, acting as a steepest ascent step for improving the model's local fit.<br /><br /> With the above framework established, Creal, Koopman, and Lucas (2013) show that various choices for $ p(\cdot) $ and $ \pmb{S}_{t} $ lead to various GAS specifications, some of which reduce to very familiar and well established existing models. For instance, let $ y_{t} = \sigma_{t}\epsilon_{t} $, and suppose $ \epsilon_{t} $ is a Gaussian random variable with mean zero and unit variance. 
It is readily shown that setting $ S_{t} = \mathcal{I}_{t}^{-1} $ and $ \theta_{t} = \sigma_{t}^{2} $, the GAS updating equation reduces to: $$ \theta_{t+1} = \omega + A(y_{t}^{2} - \theta_{t}) + B\theta_{t} $$ which is equivalent to the standard GARCH(1,1) model $$ \sigma_{t+1}^{2} = \alpha + \beta y_{t}^{2} + \eta \sigma_{t}^{2} $$ where $ \alpha = \omega $, $ \beta = A $, and $ \eta = B - A $. There are of course a number of other examples and configurations, and we refer the reader to the original texts for more details.<br /><br /> <h3 id="sec2">Example Description</h3> Our objective here is to communicate between EViews and Python to estimate a GAS model in Python and compare the results back in EViews. In particular, we will work with the U.S. monthly civilian unemployment rate, defined as the number of unemployed as a percentage of the labor force -- <i>Labor force data are restricted to people 16 years of age and older, who currently reside in 1 of the 50 states or the District of Columbia, who do not reside in institutions (e.g., penal and mental facilities, homes for the aged), and who are not on active duty in the Armed Forces.</i> (See the FRED database at <a href="https://fred.stlouisfed.org/series/UNRATE">https://fred.stlouisfed.org/series/UNRATE</a>) -- to which we will fit a GARCH(1,1) model using the traditional method as well as the GAS approach.<br /><br /> It is well known that unemployment rates are typically very volatile and persistent, particularly in contractionary economic cycles. This is because major firm decisions, such as workforce expansions and contractions, are often accompanied by large sunk costs (e.g. job advertisements, screening, training), and are usually irreversible in the immediate short term (e.g. wage frictions such as labour contracts and dismissal costs). Thus, in contractionary periods, firms typically prefer to defer hiring decisions until more favourable conditions return, resulting in strong unemployment persistence known as <i>spells</i>. On the other hand, these periods are often characterized by frequent labour force transitions and increased search activities, both of which contribute to unemployment volatility.<br /><br /> In light of the above, measuring the volatility of unemployment requires the use of econometric models which are designed to capture both volatility and persistence. While several such models exist in the literature, here we focus on perhaps the most well known such model, proposed by Engle (1982) and Bollerslev (1986): the generalized autoregressive conditional heteroskedasticity (GARCH) model described earlier. In particular, if we let $ y_{t} $ denote the monthly unemployment rate, we are interested in obtaining an estimate $ \widehat{\sigma}_{t} $ of $ \sigma_{t} $, at each point in time, effectively tracing the evolution of unemployment volatility for the period under consideration. Since the GAS model above reduces to the GARCH model when the conditional distribution $ p(\cdot) $ is Gaussian and the time varying parameter is the volatility of the process, we would like to compare the estimates from the GAS model to those generated by EViews' internal GARCH estimation. Note here that while EViews can estimate numerous (G)ARCH models, it cannot yet natively estimate GAS models. Accordingly, we will fit a GARCH model in EViews, transfer our data over to Python, and estimate a GAS model using the Python package <b>PyFlux</b>. 
We will then compare our findings.<br /><br /> <h3 id="sec3">Preparatory Work</h3> Before getting started, please make sure that you have Python 3 installed from <a href="https://www.python.org/downloads/release/python-368/">https://www.python.org/downloads/release/python-368/</a> on your system, and that you also have the following Python packages installed: <ol> <li>NumPy <li>Pandas <li>Matplotlib <li>Seaborn <li>PyFlux </ol> One (certainly not the only) way to install said packages, is to open up a command prompt on your system and navigate to the directory where Python was installed; this is usually <code>C:\Users\USER_NAME\AppData\Local\Programs\Python\Python36_64</code> if you have a 64-bit version. From there, issue the following commands: <pre><br /> python -m pip install --upgrade pip<br /> python -m pip install PACKAGE_NAME<br /></pre> Next, make sure that the path to Python is specified in your EViews options. Specifically, in EViews, go to <b>Options/General Options...</b> and on the left tree select <b>External program interface</b> and ensure that <b>Home Path</b> is correctly pointing to the directory where Python is installed. Usually, you will not have to touch this setting since EViews populates this field by searching your system for the install directory.<br /><br /> Finally, please note that as of writing, the analysis that follows was tested with Python version 3.6.8 and PyFlux version 0.4.15.<br /><br /> <h3 id="sec4">Data Analysis in EViews</h3> Turning to data analysis, in EViews, create a new monthly workfile. To do so, click on <b>File/New/Workfile</b>. Under <b>Frequency</b> select <b>Monthly</b>, and set the <b>Start date</b> to <b>2006M12</b> and the <b>End date</b> to <b>2013M12</b>, and hit <b>OK</b>. Next, fetch the unemployment rate data from the FRED database by clicking on <b>File/Open/Database...</b>. From here, select <b>FRED Database</b> from the <b>Database/File Type</b> dropdown, and hit <b>OK</b>. This opens the FRED database window. To get the series of interest from here, click on the <b>Browse</b> button. This opens a new window with a folder-like overview. Here, click on <b>All Series Search</b> and then type <b>UNRATE</b> in the <b>Search For</b> textbox. This will list a series called <i>Civilian Unemployment Rate (M,SA,%)</i>. Drag the series over to the workfile to make it available for analysis. This will fetch the series <b>UNRATE</b> from the FRED database and place it in the workfile. In particular, we are grabbing data from the period of December 2006 to December 2013 -- effectively the recessionary period characterized by the recent housing loan crisis in the United States. 
<table> <tbody> <tr> <td> <!-- :::::::::: FIGURE 1A :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-Q-8R_IidAy4/XL9_4pjYlVI/AAAAAAAAAwE/fIApBqd5JaM6BUKSTbtyWoVebZ9H3o-6gCLcBGAs/s1600/workfiledlg.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-Q-8R_IidAy4/XL9_4pjYlVI/AAAAAAAAAwE/fIApBqd5JaM6BUKSTbtyWoVebZ9H3o-6gCLcBGAs/s1600/workfiledlg.jpg" title="Workfile Dialog" width="320" /></a><br /> </center> <!-- :::::::::: FIGURE 1A :::::::::: --> </td> <td> <!-- :::::::::: FIGURE 1B :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-l_NxAegKlPA/XL9_1x9o3jI/AAAAAAAAAvk/Ti-KspaNYvcxFHOTFnf01N-fAQwYGr5kwCLcBGAs/s1600/dbasedlg.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-l_NxAegKlPA/XL9_1x9o3jI/AAAAAAAAAvk/Ti-KspaNYvcxFHOTFnf01N-fAQwYGr5kwCLcBGAs/s1600/dbasedlg.jpg" title="Database Dialog" width="320" /></a><br /> </center> <!-- :::::::::: FIGURE 1B :::::::::: --> </td> </tr> <tr> <td> <center> <small>Figure 1A: Workfile Dialog</small><br /><br /> </center> </td> <td> <center> <small>Figure 1B: Database Dialog</small><br /><br /> </center> </td> </tr> <tr> <td> <!-- :::::::::: FIGURE 1C :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-ZXhI23HsbNg/XL-dET3-9WI/AAAAAAAAAxQ/VO7Ei3YNsZo323-Lki9uF8X9gQomoK8-gCLcBGAs/s1600/fredqry.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-ZXhI23HsbNg/XL-dET3-9WI/AAAAAAAAAxQ/VO7Ei3YNsZo323-Lki9uF8X9gQomoK8-gCLcBGAs/s1600/fredqry.jpg" title="FRED Browse" width="320" /></a><br /> </center> <!-- :::::::::: FIGURE 1C :::::::::: --> </td> <td> <!-- :::::::::: FIGURE 1D :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-Oe-7rPF-3B4/XL-dBX3UPMI/AAAAAAAAAxM/CbHRLkPXNZUlu8Bk3NDpY_XJXJFh-x4IwCLcBGAs/s1600/fredqry2.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-Oe-7rPF-3B4/XL-dBX3UPMI/AAAAAAAAAxM/CbHRLkPXNZUlu8Bk3NDpY_XJXJFh-x4IwCLcBGAs/s1600/fredqry2.jpg" title="FRED Search" width="320" /></a><br /> </center> <!-- :::::::::: FIGURE 1D :::::::::: --> </td> </tr> <tr> <td> <center> <small>Figure 1C: FRED Browse</small><br /><br /> </center> </td> <td> <center> <small>Figure 1C: FRED Search</small><br /><br /> </center> </td> </tr> </tbody></table> Also, restrict the sample to the period from January 2007 to December 2013. Why we do this will become apparent later. To do so, issue the following command in EViews: <pre><br /> smpl 2007M01 @last<br /></pre> To see what the data looks like, double click on a <b>UNRATE</b> in the workfile to open the series object. Next, click on <b>View/Graph...</b>. This will open a graph options window. We will stick with the defaults so click on <b>OK</b>. The output is reproduced below.<br /><br /> <!-- :::::::::: FIGURE 2 :::::::::: --><center> <a href="https://lh3.googleusercontent.com/-uHw8WuakTR4/XL9_5hRhC0I/AAAAAAAAAwI/9fNMnaORB0s1VtiXiNExZk2F1lhJtaTogCLcBGAs/s1600/unrategrph.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-uHw8WuakTR4/XL9_5hRhC0I/AAAAAAAAAwI/9fNMnaORB0s1VtiXiNExZk2F1lhJtaTogCLcBGAs/s1600/unrategrph.jpg" title="Time Series Plot of UNRATE" width="320" /></a><br /> <small>Figure 2: Time Series Plot of UNRATE</small><br /><br /></center><!-- :::::::::: FIGURE 2 :::::::::: --> We will now estimate a basic GARCH model on <b>UNRATE</b>. To do this, click on <b>Quick/Estimate Equation...</b>, and under <b>Method</b> choose <b>ARCH - Autoregressive Conditional Heteroskedasticity</b>. 
In the <b>Mean Equation</b> text box type <b>UNRATE</b> and leave everything else as their default values. Click on <b>OK</b>. <table> <tbody> <tr> <td> <!-- :::::::::: FIGURE 3A :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-hhVBxWIvrWQ/XL9_2nSKhHI/AAAAAAAAAvw/4XEXNVqQYKoSJsz-o07VYotr_chidFdBwCLcBGAs/s1600/garchdlg.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-hhVBxWIvrWQ/XL9_2nSKhHI/AAAAAAAAAvw/4XEXNVqQYKoSJsz-o07VYotr_chidFdBwCLcBGAs/s1600/garchdlg.jpg" title="GARCH Estimation Dialog" width="320" /></a><br /> </center> <!-- :::::::::: FIGURE 3A :::::::::: --> </td> <td> <!-- :::::::::: FIGURE 3B :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-qOWoCw0lzlY/XL9_3H7NTOI/AAAAAAAAAv8/s8OAJ2cGWBg2xYkfgrFI9QA9KHsQ51A6ACLcBGAs/s1600/garchoutput.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-qOWoCw0lzlY/XL9_3H7NTOI/AAAAAAAAAv8/s8OAJ2cGWBg2xYkfgrFI9QA9KHsQ51A6ACLcBGAs/s1600/garchoutput.jpg" title="GARCH Estimation Output" width="320" /></a><br /> </center> <!-- :::::::::: FIGURE 3B :::::::::: --> </td> </tr> <tr> <td> <center> <small>Figure 3A: GARCH Estimation Dialog</small><br /><br /> </center> </td> <td> <center> <small>Figure 3B: GARCH Estimation Output</small><br /><br /> </center> </td> </tr> </tbody></table> From the estimation output we can see that model parameters have the following estimates: <ol> <li>$ \alpha = 1.068302 $ <li>$ \beta = 1.236277 $ <li>$ \eta = -0.247753 $ </ol> We can also see the path of the volatility process by clicking on <b>View/Garch Graph/Conditional Variance</b>. This produces a plot of $ \widehat{\sigma}^{2}_{t} $. In fact, we will also create a series object from the data points used to produce the GARCH conditional variance. To do this, from the GARCH conditional variance window, click on <b>Proc/Make GARCH Variance Series...</b> and in the <b>Conditional Variance</b> textbox enter <b>EVGARCH</b> and hit <b>OK</b>. This produces a series object called <b>EVGARCH</b> and places it in the workfile. We will use it a bit later.<br /><br /> <table> <tbody> <tr> <td> <!-- :::::::::: FIGURE 4A :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-8DKMKkkml08/XL9_2yebDjI/AAAAAAAAAv0/JcgxOH_kCo0jlbZwBZS8CwfZsx_7wQguQCLcBGAs/s1600/garchcondvar.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-8DKMKkkml08/XL9_2yebDjI/AAAAAAAAAv0/JcgxOH_kCo0jlbZwBZS8CwfZsx_7wQguQCLcBGAs/s1600/garchcondvar.jpg" title="GARCH Conditional Variance of UNRATE" width="320" /></a><br /> </center> <!-- :::::::::: FIGURE 4A :::::::::: --> </td> <td> <!-- :::::::::: FIGURE 4B :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-MXO4qlsX3nY/XL9_2ixTAAI/AAAAAAAAAvs/9dCFnnP_0EEG6AA0QbFfG1ecA_BsNhiYgCLcBGAs/s1600/garchcondvardlg.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-MXO4qlsX3nY/XL9_2ixTAAI/AAAAAAAAAvs/9dCFnnP_0EEG6AA0QbFfG1ecA_BsNhiYgCLcBGAs/s1600/garchcondvardlg.jpg" title="GARCH Conditional Variance of Proc" width="320" /></a><br /> </center> <!-- :::::::::: FIGURE 4B :::::::::: --> </td> </tr> <tr> <td> <center> <small>Figure 4A: GARCH Conditional Variance of UNRATE</small><br /><br /> </center> </td> <td> <center> <small>Figure 4B: GARCH Conditional Variance Proc</small><br /><br /> </center> </td> </tr> </tbody></table> <h3 id="sec5">Data Analysis in Python</h3> To estimate the GAS equivalent of this model we must first transfer our data over to Python. 
To do so, issue the following command in EViews: <pre><br /> xopen(p)<br /></pre> This tells EViews to open an instance of Python within EViews and open up bi-directional communication. In fact you should see a new command window appear, titled <b>Log: Python Output</b>. Here you can issue commands into Python directly as if you had opened a Python instance at any command prompt. You can also send commands to Python using EViews command prompt. In fact, we will use the latter approach to import packages into our Python instance as follows: <pre><br /> xrun "import numpy as np"<br /> xrun "import pandas as pd"<br /> xrun "import pyflux as pf"<br /> xrun "import matplotlib.pyplot as plt"<br /></pre> For instance, the first command above tells eviews to issue the command <i>import numpy as np</i> in the open Python instance, thereby importing the NumPy package. In fact, all results will be echoed in the Python instance.<br /><br /> <!-- :::::::::: FIGURE 5 :::::::::: --><center> <a href="https://lh3.googleusercontent.com/-GCNeE9hdtPk/XL-dH7SByDI/AAAAAAAAAxU/R2vsxQiCvucN9mvTsxJcr2Gys_PH2YNvACLcBGAs/s1600/pythondlg.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-GCNeE9hdtPk/XL-dH7SByDI/AAAAAAAAAxU/R2vsxQiCvucN9mvTsxJcr2Gys_PH2YNvACLcBGAs/s1600/pythondlg.jpg" title="Python Output Log" width="320" /></a><br /> <small>Figure 5: Python Output Log</small><br /><br /></center><!-- :::::::::: FIGURE 5 :::::::::: --> Next, transfer the <b>UNRATE</b> series over to Python by issuing the following command in EViews: <pre><br /> xput(ptype=dataframe) unrate<br /></pre> The command above sends the series <b>UNRATE</b> to Python and transforms that data into a Pandas DataFrame object.<br /><br /> We now follow the PyFlux documentation and estimate the GAS model by issuing the following commands from EViews: <pre><br /> xrun "model = pf.GAS(ar=1, sc=1, data=unrate, family=pf.Normal())"<br /> xrun "fit = model.fit('MLE')"<br /> xrun "fit.summary()"<br /></pre> The first command above tells PyFlux to create a GAS model object that has one autoregressive and one scaling parameter, sets $ p(\cdot) $ to the Gaussian distribution, and uses the series <b>UNRATE</b> as $ y_{t} $. In other words, the autoregressive and scaling parameters respectively corresponds to the coefficients $ A $ and $ B $ in the first section of this document. The second command tells Python to create a variable <b>FIT</b> which will hold the output from an estimated GAS model which uses maximum likelihood as the estimation technique. We display the output of this estimation by invoking the third command. In particular, we have the following estimates: <ol> <li>$ \omega = 0.0027 $ <li>$ A = 1.2973 $ <li>$ B = 0.9994 $ </ol> In fact, we can also obtain a distributional plot of the autoregressive coefficient $ B $ across the period of estimation. To do this, invoke the following command within EViews: <pre><br /> xrun "model.plot_z([1], figsize=(15,5))"<br /></pre> The latter command tells Python to plot the distribution of the 2nd estimated coefficient (the AR coefficient) and to display a figure which is of size $ 15\times 5 $ inches. This is the distribution of the evolution of $ B $ and is <b>not</b> the time path of the estimated coefficient. 
<br /><br /> <!-- :::::::::: FIGURE 6 :::::::::: --><center> <a href="https://lh3.googleusercontent.com/-jDmWcghD7Ac/XL9_3PK7ppI/AAAAAAAAAv4/qsfrDrSvkG4Ho9epb73PaOgKz3kCI4HrQCLcBGAs/s1600/pyar1.png"><img height="auto" src="https://lh3.googleusercontent.com/-jDmWcghD7Ac/XL9_3PK7ppI/AAAAAAAAAv4/qsfrDrSvkG4Ho9epb73PaOgKz3kCI4HrQCLcBGAs/s1600/pyar1.png" title="Python GAS Distribution of AR Parameter" width="320" /></a><br /> <small>Figure 6: Python GAS Distribution of AR Parameter</small><br /><br /></center><!-- :::::::::: FIGURE 6 :::::::::: --> While we can obtain a distribution of the estimated parameters, unfortunately, PyFlux does not offer a way to extract the time path as a Python data object. Thankfully, we can recreate it manually and easily as a series in EViews.<br /><br /> <h3 id="sec6">Back To EViews</h3> To create the time path of the estimated GAS coefficient, we first need to transfer the coefficients from the estimated GAS model back into EViews. To do this, we invoke the following command in EViews: <pre><br /> xget(name=gascoefs, type=vector) fit.results.x[0:3]<br /></pre> This tells Python to send the first three estimated coefficients back to EViews, and saves the result as a vector called <b>GASCOEFS</b>.<br /><br /> Next, create a new series in the workfile called <b>GASGARCH</b> by issuing the following command in EViews: <pre><br /> series gasgarch<br /></pre> Also, since this is an autoregressive process, we need to set an initial value for <b>GASGARCH</b>. We do this by setting the December 2006 observation to 0.7 -- the default value EViews uses to initialize its internal GARCH estimation. To do so, type the following commands in EViews: <pre><br /> smpl 2006M12 2006M12<br /> gasgarch = 0.7<br /></pre> Next, we set the sample back to the period of interest and fill the values of <b>GASGARCH</b> using the GARCH formula with the coefficients from the GAS model. To do this, issue the following commands in EViews again: <pre><br /> smpl 2007M01 @last<br /> gasgarch = gascoefs(1) + gascoefs(3)*(unrate(-1)^2 - gasgarch(-1)) + gascoefs(2)*gasgarch(-1)<br /></pre> Finally, we plot the GARCH conditional variance path from the internal estimation, <b>EVGARCH</b>, along with the newly created series <b>GASGARCH</b>. We can do this programmatically by issuing the following command in EViews: <pre><br /> plot evgarch gasgarch<br /></pre> <!-- :::::::::: FIGURE 7 :::::::::: --><center> <a href="https://lh3.googleusercontent.com/-4S2DBUuONKQ/XL-DnGwwZSI/AAAAAAAAAw0/eDTMCzULzPI4fnjOvqIxLQ2ZojzxB5i5QCLcBGAs/s1600/garchgascompare.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-4S2DBUuONKQ/XL-DnGwwZSI/AAAAAAAAAw0/eDTMCzULzPI4fnjOvqIxLQ2ZojzxB5i5QCLcBGAs/s1600/garchgascompare.jpg" title="GARCH Conditional Variance Comparison with GAS" width="320" /></a><br /> <small>Figure 7: GARCH Conditional Variance Comparison with GAS</small><br /><br /></center><!-- :::::::::: FIGURE 7 :::::::::: --> It is clear that the two estimation techniques produce the same path despite having different estimates for the coefficients. Finally, note that while GARCH models are estimated using maximum likelihood procedures, parameter estimates are typically numerically unstable and often fail to converge. This often requires a re-specification of the convergence criterion and / or a change in starting values. These drawbacks are also an issue with GAS models. 
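<br /><br /> As an aside before closing, the variance-path reconstruction above can also be sketched directly on the Python side. The snippet below is a rough illustration only (the function and variable names are ours, and the commented example simply reuses the estimates reported earlier); it mirrors the EViews recursion rather than extending PyFlux itself: <pre>
import numpy as np

def gas_variance_path(y, omega, A, B, theta0=0.7):
    """Iterate theta_{t+1} = omega + A*(y_t**2 - theta_t) + B*theta_t,
    starting from theta0 (the same initial value used for GASGARCH above)."""
    theta = np.empty(len(y) + 1)
    theta[0] = theta0
    for t, yt in enumerate(np.asarray(y, dtype=float)):
        theta[t + 1] = omega + A * (yt**2 - theta[t]) + B * theta[t]
    return theta[1:]    # the element computed from y_t corresponds to period t+1

# e.g. path = gas_variance_path(unrate_values, omega=0.0027, A=1.2973, B=0.9994)
</pre>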
<br /><br /> <h3 id="sec7">Files</h3>The workfile and program files can be downloaded here.<br /><br /> <ul> <li> <a href="http://www.eviews.com/blog/pygas/pygas.WF1">pygas.WF1</a> <li> <a href="http://www.eviews.com/blog/pygas/pygas.prg">pygas.prg</a> </ul><br /><br /> <hr /><h3 id="sec8">References</h3> <table> <tr valign="top"> <td align="right" class="bibtexnumber"> <a name="bollerslev-1986">1</a> </td> <td class="bibtexitem"> Tim Bollerslev. Generalized autoregressive conditional heteroskedasticity. <em>Journal of econometrics</em>, 31(3):307--327, 1986. [ <a href="references_bib.html#bollerslev-1986">bib</a> ] </td> </tr> <tr valign="top"> <td align="right" class="bibtexnumber"> <a name="creal-2013">2</a> </td> <td class="bibtexitem"> Drew Creal, Siem Jan Koopman, and André Lucas. Generalized autoregressive score models with applications. <em>Journal of Applied Econometrics</em>, 28(5):777--795, 2013. [ <a href="references_bib.html#creal-2013">bib</a> ] </td> </tr> <tr valign="top"> <td align="right" class="bibtexnumber"> <a name="engle-1982">3</a> </td> <td class="bibtexitem"> Robert F Engle. Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. <em>Econometrica: Journal of the Econometric Society</em>, pages 987--1007, 1982. [ <a href="references_bib.html#engle-1982">bib</a> ] </td> </tr> <tr valign="top"> <td align="right" class="bibtexnumber"> <a name="harvey-2013">4</a> </td> <td class="bibtexitem"> Andrew C Harvey. <em>Dynamic models for volatility and heavy tails: with applications to financial and economic time series</em>, volume 52. Cambridge University Press, 2013. [ <a href="references_bib.html#harvey-2013">bib</a> ] </td> </tr> </table> </span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com0tag:blogger.com,1999:blog-6883247404678549489.post-86737930382696770012019-04-23T07:48:00.000-07:002019-04-23T07:48:17.503-07:00Seasonal Unit Root Tests<script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"></script> <span style="font-family: "verdana" sans-serif"> <i>Author and guest post by Nicolas Ronderos</i><br /><br /> In this blog entry we will offer a brief discussion on some aspects of seasonal non-stationarity and discuss two popular seasonal unit root tests. In particular, we will cover the Hylleberg, Engle, Granger, and Yoo (1990) and Canova and Hansen (1995) tests and demonstrate practically using EViews how the latter can be used to detect the presence of seasonal unit roots in a US macroeconomic time series. All files used in this exercise can be downloaded at the end of the entry.<br /><br /><a name='more'></a> <h3>Deterministic vs Stochastic Seasonality</h3> When we talk about the concept of seasonality in time series, we usually refer to the idea of <i>"... systematic, although not necessarily regular, intra-year movement caused by changes of the weather, the calendar, and timing of decisions..."</i> (Hans Franses). 
Naturally, macroeconomic data observed with high periodicity (sampled more than once a year) usually exhibit this behavior.<br /><br /> Seasonality can be modelled in two ways: deterministically or stochastically. The former arises from systematic cycles such as calendar effects or climatic phenomena and can be removed from the data by seasonal adjustment procedures -- for instance, by including seasonal dummy variables. Formally, this implies deterministic seasonality evolves as:<br /><br /> $$ y_{t} = \mu + \sum_{s=1}^{S-1}\delta_{s}D_{s,t} + e_{t} $$ where $ S $ is the total number of period cycles, $ D_{s,t} $ are seasonal dummy variables which equal 1 in season $ s $ and 0 otherwise, and $ e_{t} $ are the usual innovations. For example, in the case of quarterly data $ (S=4) $, one could postulate that seasonality evolves as:<br /><br /> $$ y_{t} = 15 - D_{1,t} - 4D_{2,t} - 6D_{3,t} + e_{t}$$ The process is visualized below:<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-Rcyh4rXs9xk/XL3zX9LeHbI/AAAAAAAAAtc/TNaYbumDBwko5GG2503X6x6NuwUJZLHSQCEwYBhgL/s1600/ds.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-Rcyh4rXs9xk/XL3zX9LeHbI/AAAAAAAAAtc/TNaYbumDBwko5GG2503X6x6NuwUJZLHSQCEwYBhgL/s1600/ds.jpg" title="Deterministic Seasonality" width="320" /></a><br /> <small>Figure 1: Deterministic Seasonality</small><br /><br /> </center> Notice here that the optimal $ h $-period ahead forecast of $ y_{t} $ in season $ s $ is given by:<br /><br /> $$ \widehat{y}_{S(t+h)-s} = \widehat{\mu} + \widehat{\delta}_{s} $$ where $ s = S-1, \ldots, 0 $. In other words, the optimal forecast of $ y_{t} $ in season $ s $ is the same at each future point in time for said season. It is precisely this property which formalizes the notion of systematic cyclicality.<br /><br /> On the other hand, stochastic seasonality describes nearly systematic cycles which evolve as seasonal ARMA$(p,q)$ processes of the form:<br /><br /> $$ (1 - \eta_{1}L^{S} - \eta_{2}L^{2S} - \ldots - \eta_{p}L^{pS})y_{t} = (1 + \xi_{1}L^{S} + \xi_{2}L^{2S} + \ldots + \xi_{q}L^{qS})e_{t}$$ where $ L $ denotes the usual lag operator. In particular, when $ p = 1 $ and $ q = 0 $, the seasonal AR(1) model with $ \eta_{1} = 0.75 $ is visualized as follows:<br /><br /> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-frISG5yc5Vs/XL3z2QMIM6I/AAAAAAAAAtk/VrmiJrXJp6oFRUAZpTcJBMk-F1dr7ilIACEwYBhgL/s1600/ss.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-frISG5yc5Vs/XL3z2QMIM6I/AAAAAAAAAtk/VrmiJrXJp6oFRUAZpTcJBMk-F1dr7ilIACEwYBhgL/s1600/ss.jpg" title="Stochastic Seasonality" width="320" /></a><br /> <small>Figure 2: Stochastic Seasonality</small><br /><br /> </center> Unlike the deterministic seasonal model however, the $ h $-period ahead forecast of the stochastic seasonal model is not constant. In particular, for the seasonal AR(1) model, the forecast $ h $ periods ahead is given by:<br /><br /> $$ \widehat{y}_{S(t+h)-s} = \widehat{\eta}_{1}^{h}y_{St-s} $$ In other words, the forecast in any given season is a function of past data values, and is therefore considered to be <i>stochastic</i>.<br /><br /> So how does one identify whether a series exhibits deterministic or stochastic seasonality? One useful tool is the <i>periodogram</i> which produces a decomposition of the dominant frequencies (cycles) of a time series. As it turns out, there are at most $ S $ frequencies in a time series exhibiting $ S $ period cycles. 
Formally, these are identified in conjugate pairs as follows:<br /><br /> $$ \omega \in \left\{0, \left(\frac{2\pi}{S}, 2\pi-\frac{2\pi}{S}\right), \left(\frac{4\pi}{S}, 2\pi-\frac{4\pi}{S}\right), \ldots, \pi \right\} $$ if $ S $ is even, and<br /><br /> $$ \omega \in \left\{0, \left(\frac{2\pi}{S}, 2\pi-\frac{2\pi}{S}\right), \left(\frac{4\pi}{S}, 2\pi-\frac{4\pi}{S}\right), \ldots, \left(\frac{2\pi\lfloor S/2 \rfloor}{S}, 2\pi-\frac{2\pi\lfloor S/2\rfloor}{S}\right) \right\} $$ if $ S $ is odd. For quarterly data with $ S=4 $, for instance, these frequencies are $ \omega \in \{0, \pi/2, \pi, 3\pi/2\} $, with $ \pi/2 $ and $ 3\pi/2 $ forming a conjugate pair.<br /><br /> Thus, given a stationary time series with $ S $ period cycles, we expect the periodogram to protrude at the non-zero frequencies. In particular, we present the periodogram for deterministic and stochastic seasonal processes below:<br /><br /> <table> <tbody> <tr> <td> <!-- :::::::::: FIGURE 3A :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-nfQ9gfzbaV8/XL34GxM97eI/AAAAAAAAAt8/zwgbTwDu3MU8-tkF7OdUDvSBvA7j5bCUACEwYBhgL/s1600/dsprdgrm.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-nfQ9gfzbaV8/XL34GxM97eI/AAAAAAAAAt8/zwgbTwDu3MU8-tkF7OdUDvSBvA7j5bCUACEwYBhgL/s1600/dsprdgrm.jpg" title="Deterministic Seasonality Periodogram" width="320" /></a><br /> </center> <!-- :::::::::: FIGURE 3A :::::::::: --> </td> <td> <!-- :::::::::: FIGURE 3B :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-n6oWWlny_30/XL34Gg5AJHI/AAAAAAAAAt4/1hmGndwDR20hVcGhUrZbirTE_uEbAWpmwCEwYBhgL/s1600/ssprdgrm.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-n6oWWlny_30/XL34Gg5AJHI/AAAAAAAAAt4/1hmGndwDR20hVcGhUrZbirTE_uEbAWpmwCEwYBhgL/s1600/ssprdgrm.jpg" title="Stochastic Seasonality Periodogram" width="320" /></a><br /> </center> <!-- :::::::::: FIGURE 3B :::::::::: --> </td> </tr> <tr> <td> <center> <small>Figure 3A: Deterministic Seasonality Periodogram</small><br /><br /> </center> </td> <td> <center> <small>Figure 3B: Stochastic Seasonality Periodogram</small><br /><br /> </center> </td> </tr> </tbody> </table> We can see from the periodograms that the spectrum of deterministic seasonal processes exhibits sharp peaks at the seasonal frequencies $ \omega $, whereas that of stochastic seasonal processes exhibits a window of sharp peaks centered around seasonal frequencies $ \omega $. In the case of stochastic seasonality, the fact that the spectrum spreads around principal frequencies and is not a single peak reaffirms the notion that cycles are stochastically distributed around said frequencies.<br /><br /> <h3>Seasonal Unit Roots</h3> A particularly important form of stochastic seasonality manifests in the form of unit roots at some or all of the frequencies $ \omega $. In particular, consider the following process:<br /><br /> $$ y_{t} = \eta y_{t-S} + e_{t} $$ and note that the characteristic equation associated with the process is defined as:<br /><br /> \begin{align} 1 - \eta z^{S} = 0 \quad \text{or} \quad z^{S} = 1/\eta \label{eq1} \end{align} Analogous to the case of classical unit root processes, when $ |\eta|=1 $ or $ |z| = 1^{1/S} = 1 $, $ y_{t} $ is in fact non-stationary. In contrast to the classical unit root case, however, $ y_{t} $ can possess not one, but up to $ S $ unique unit roots. To see this, note that any complex number $ z = a + ib $ can be written in polar form as:<br /><br /> $$ z = \sqrt{a^{2} + b^{2}}(\cos(\theta) + i\sin(\theta)) = r(\cos(\theta) + i\sin(\theta)) $$ where $ r = |z|$ is called the magnitude of $ z $, but is also the radius of the circle in polar coordinates. 
Accordingly, when $ |\eta | = 1 $ or $ |z|=1 $, $ z $ lies on a circle with radius $ r = 1 $. In other words, $ y_{t} $ is a unit root process. Next, recall Euler's formula:<br /><br /> $$ e^{ix} = \cos(x) + i \sin(x) $$ Clearly, any complex number $ z $ with magnitude $ r=1 $ can, by Euler's formula, be written as $ z = e^{i\theta} $. Since Euler's formula also implies that:<br /><br /> $$ e^{2\pi i k} = 1 \quad \text{for} \quad k=0,1,2,\ldots$$ when $ |\eta|=1 $ or $ |z|=1 $, the characteristic equation \eqref{eq1} can be expressed as:<br /><br /> \begin{align*} z = e^{i\omega} &= 1^{1/S} \notag\\ &= (e^{2\pi i k})^{1/S}\notag\\ &= e^{\frac{2\pi i k}{S}} \end{align*} where the relations above evidently hold for all $ k=0,1,2,\ldots, S-1 $ since the solutions begin to cycle when $ k \geq S $. Now, taking logarithms of both sides, it is clear that:<br /><br /> \begin{align} \omega = \frac{2\pi k}{S} \quad \text{for} \quad k=0,1,2,\ldots, S-1 \label{eq2} \end{align} In other words, the characteristic equation \eqref{eq1} has $ S $ unique solutions identified by the $ S $ relationships in \eqref{eq2}. These solutions are equally spaced (by $ 2\pi/S $ radians) on the unit circle. When $ S $ is even, two of them are real, namely those associated with $ \omega = 0 $ and $ \omega = \pi $, while the remaining $ S-2 $ complex solutions are organized in conjugate (harmonic) pairs; when $ S $ is odd, only the solution at $ \omega = 0 $ is real.<br /><br /> Thus, when we identify $ S $ with a temporal frequency, namely a week, month, quarter, and so on, the problem of identifying roots of the characteristic equation \eqref{eq1} extends the classical unit root literature, in which $ S=1 $ (annual frequency), to that of identifying $ S > 1 $ possible roots on the unit circle.<br /><br /> In fact, just as in the classical unit-root literature, where unchecked unit roots are known to have severe inferential consequences, the presence of unit roots at seasonal frequencies can also give rise to similar inferential inaccuracies and concerns. Accordingly, identifying the presence of unit roots at one or more seasonal frequencies is the subject of the battery of tests known as <i>seasonal unit root tests</i>.<br /><br /> <h3>Seasonal Unit Root Tests</h3> Historically, the first test for a seasonal unit root was proposed by Dickey, Hasza and Fuller (1984) (DHF). In its simplest form, the test is based on running the regression:<br /><br /> $$ (1-L^{S})y_{t} = \eta y_{t-S} + e_{t} $$ and testing the null hypothesis $ H_{0}: \eta = 0 $ against the one-sided alternative $ H_{A}: \eta < 0 $. The test is carried out using the familiar Student's $ t $-statistic for the significance of $ \eta $ which, analogous to the classic augmented Dickey-Fuller (ADF) test, exhibits a non-standard asymptotic distribution under the null. Nevertheless, the DHF test is very restrictive: it imposes the existence of a unit root at all $ S $ seasonal frequencies simultaneously, whereas in reality a process may exhibit a seasonal unit root at some seasonal frequencies but not others.<br /><br /> <h4>HEGY Seasonal Unit Root Test</h4> To correct for the shortcomings of the DHF test, Hylleberg, Engle, Granger and Yoo (1990) (HEGY) proposed a test for the determination of unit roots at each of the $ S $ seasonal frequencies individually, or collectively. 
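To fix ideas, consider the quarterly case $ S = 4 $ that is used throughout the remainder of this post. The characteristic roots and their associated frequencies from \eqref{eq2} are simply:<br /><br /> $$ z \in \left\{1, \; i, \; -1, \; -i\right\}, \qquad \omega \in \left\{0, \; \frac{\pi}{2}, \; \pi, \; \frac{3\pi}{2}\right\} $$ The real root at $ \omega = 0 $ is the familiar non-seasonal unit root, the real root at $ \omega = \pi $ corresponds to a two-quarter (semi-annual) cycle, and the complex conjugate pair at $ \left(\frac{\pi}{2}, \frac{3\pi}{2}\right) $ corresponds to the annual cycle. 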
In particular, following the notation in Smith and Taylor (1999), in its simplest form, the HEGY test is based on regressions of the form:<br /><br /> \begin{align*} (1-L^{S})y_{St-s} &= \mu + \pi_{0}L\left(1 + L + \ldots + L^{S-1}\right)y_{St-s}\\ &+ L\sum_{k=1}^{S^{\star}}\left( \pi_{k,1}\sum_{j=0}^{S-1}\cos\left((j+1)\frac{2\pi k}{S}\right)L^{j} - \pi_{k,2}\sum_{j=0}^{S-1}\sin\left((j+1)\frac{2\pi k}{S}\right)L^{j} \right)y_{St-s}\\ &+ \pi_{S/2}L\left(1 - L + L^{2} - \ldots - L^{S-1}\right)y_{St-s} + e_{t}\\ &\equiv \mu + \pi_{0}y_{St-s-1, 0} + \sum_{k=1}^{S^{\star}}\pi_{k,1}y_{St-s-1,k,1} + \sum_{k=1}^{S^{\star}}\pi_{k,2}y_{St-s-1,k,2} + \pi_{S/2}y_{St-s-1, S/2} +e_{t} \end{align*} where $ S^{\star} = (S/2) - 1 $ if $ S $ is even and $ S^{\star} = \lfloor S/2 \rfloor $ if $ S $ is odd, and as before, $ s = S-1, \ldots, 1, 0 $.<br /><br /> In particular, when the data are quarterly with $ S=4 $ and therefore $ S^{\star} = 1 $, then:<br /><br /> \begin{align*} y_{4t-s, 0} &= (1+L+L^{2}+L^{3})y_{4t-s}\\ y_{4t-s, 1,1} &= -L(1-L^{2})y_{4t-s}\\ y_{4t-s, 1,2} &= -(1-L^{2})y_{4t-s}\\ y_{4t-s, 2} &= -(1-L+L^{2}-L^{3})y_{4t-s} \end{align*} Here, $ y_{4t-s, 0} $ is in fact the series $ y_{4t-s} $ filtered by the 0 frequency filter, $ y_{4t-s, 1,1} $ is the series $ y_{4t-s} $ filtered by the $ \pi/2 $ frequency filter, $ y_{4t-s, 1,2} $ is the series $ y_{4t-s} $ filtered by the $ 3\pi/2 $ frequency filter, and $ y_{4t-s, 2} $ is the series $ y_{4t-s} $ filtered by the $ \pi $ frequency filter.<br /><br /> To visualize the frequency filters, consider the spectral filter functions associated with each of the filtered series above. The latter are computed as $ |\phi(e^{i\theta})| $ where $ \phi(\cdot) $ is the lag polynomial applied to $ y_{St-s} $, and $ \theta \in [0, 2\pi) $. For instance, in the case of quarterly data, the 0 frequency filter is computed as $ |1 + e^{i\theta} + e^{i2\theta} + e^{i3\theta}| $, and so on.<br /><br /> <!-- :::::::::: FIGURE 4 :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-fdsyB7HGBTM/XL4p5kYpm5I/AAAAAAAAAuY/Wr2Pj5D42L4NV178cYYzKjRtq1afj7f7wCLcBGAs/s1600/filters.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-fdsyB7HGBTM/XL4p5kYpm5I/AAAAAAAAAuY/Wr2Pj5D42L4NV178cYYzKjRtq1afj7f7wCLcBGAs/s1600/filters.jpg" title="HEGY Seasonal Filters" width="320" /></a><br /> <small>Figure 4: HEGY Seasonal Filters</small><br /><br /> </center> Like the DHF test, the HEGY test also reduces to verifying parameter significance in the regression equation. Nevertheless, in contrast to DHF, the HEGY test can examine the effect of each seasonal frequency in isolation. In the case of quarterly data, for instance, a $ t $-test of $ \pi_{0} = 0 $ is in fact a test for a unit root at the $ \omega = 0 $ frequency, a $ t $-test of $ \pi_{S/2} = \pi_{2} = 0 $ is a test for the presence of a unit root at the $ \omega = \pi $ frequency, and an $ F $-test for the joint restriction $ \pi_{1,1} = 0 $ and $ \pi_{1,2} = 0 $ is in fact a joint test for the presence of a unit root at the harmonic conjugate pair of frequencies $ (\pi/2, 3\pi/2) $.<br /><br /> It should also be noted here that while we have focused on the simplest form, the HEGY test can accommodate various deterministic specifications in the form of seasonal dummies, constants, and trends. 
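For intuition, it may help to see what the test does under the hood. The sketch below runs the quarterly HEGY regression by hand using the classic Hylleberg et al. (1990) arrangement of the filtered regressors (equivalent, up to relabelling, to the Smith and Taylor form above), with an intercept and trend and, for brevity, none of the lag augmentation discussed below; the series name <b>y</b> and the object names are placeholders:<br /><br /> <pre><br /> ' hand-rolled quarterly HEGY regression (illustration only)<br /> smpl @all<br /> series d4y = y - y(-4)<br /> series y1f = y + y(-1) + y(-2) + y(-3)      ' zero-frequency filter<br /> series y2f = -(y - y(-1) + y(-2) - y(-3))   ' pi-frequency filter<br /> series y3f = -(y - y(-2))                   ' harmonic-pair filter<br /> equation eq_hegy.ls d4y c @trend y1f(-1) y2f(-1) y3f(-2) y3f(-1)<br /> ' joint (Wald) test on the two harmonic-pair terms<br /> eq_hegy.wald c(5)=0, c(6)=0<br /> </pre> The $ t $-statistics on <b>y1f(-1)</b> and <b>y2f(-1)</b> correspond to the 0 and $ \pi $ frequency tests, while the joint test on the two <b>y3f</b> terms covers the $ (\pi/2, 3\pi/2) $ pair. Keep in mind that all of these statistics have non-standard limiting distributions, so the p-values printed by a standard regression do not apply; the built-in view described later supplies the appropriate simulated critical values automatically.<br /><br /> 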
Moreover, in the presence of serial correlation in the innovation process, the HEGY test can also be augmented with lags of the dependent variable as additional regressors to the principal equation presented above, in order to mitigate the effect.<br /><br /> In fact, the HEGY test is very similar to the ADF test, which is effectively a unit root test at the 0-frequency alone. Whereas the latter proceeds as a regression of a differenced series against its lagged level, the former proceeds as a regression of a seasonally differenced series against the lagged levels at each of the constituent seasonal frequencies. In this regard, the HEGY test is considered an extension of the ADF test in the direction of non-zero frequencies. As such, it also suffers from the same shortcomings as the ADF test, and can exhibit low statistical power when the individual frequencies are in fact stationary but exhibit near-unit root behaviour.<br /><br /> <h4>Canova-Hansen Seasonal Unit Root Test</h4> One response to the low power of ADF tests in the presence of near unit root stationarity was the test of Kwiatkowski, Phillips, Schmidt, and Shin (1992) (KPSS), which is in fact a test for stationarity at the 0-frequency alone. The analogous development in the seasonal unit root literature was the test of Canova and Hansen (1995) (CH). Like the KPSS test, the CH test is also a test for stationarity but extends to non-zero seasonal frequencies.<br /><br /> The idea behind the CH test is to suppose that seasonality manifests in the process mean. In other words, given a process $ y_{t} $, if seasonal effects are present, then $ y_{t} $ will exhibit a seasonally dependent average. Traditionally, this is formalized using seasonal dummy variables as:<br /><br /> $$ y_{t} = \sum_{s=0}^{S-1}\delta_{s}D_{s,t} + e_{t} $$ Nevertheless, it is well known that an equivalent representation using discrete Fourier expansions exists in terms of sine and cosine functions. In particular,<br /><br /> $$ y_{t} = \sum_{k=0}^{\lfloor S/2 \rfloor}\left(\delta_{k,1}\cos\left(\frac{2\pi k t}{S}\right) + \delta_{k,2}\sin\left(\frac{2\pi k t}{S}\right)\right) + e_{t} $$ where the sine term vanishes for $ k = 0 $ and, when $ S $ is even, for $ k = S/2 $, so that the representation involves exactly $ S $ coefficients. The coefficients $ \delta_{k,1} $ and $ \delta_{k,2}$ are referred to as <i>spectral intercept</i> coefficients. In either case, the model can be written in vector notation as follows:<br /><br /> \begin{align} y_{t} = \pmb{Z}_{t}^{\top}\pmb{\gamma}_{t} + e_{t} \label{eq3} \end{align} where $ \pmb{Z}_{t} = \left(1, \pmb{z}_{1,t}^{\top}, \ldots, \pmb{z}_{\lfloor S/2 \rfloor,t}^{\top} \right) $ (or $ \pmb{Z}_{t} = \left(1, D_{1,t}, \ldots, D_{S-1,t}\right) $), $ \pmb{\gamma}_{t} = \left(\gamma_{1,t}, \ldots, \gamma_{S,t}\right) $ is an $ S\times 1 $ vector of coefficients, and $ \pmb{z}_{k,t} = \left(\cos\left(\frac{2\pi k t}{S}\right), \sin\left(\frac{2\pi k t}{S}\right)\right) $ for $ k=1,\ldots, \lfloor S/2 \rfloor $, with the convention that $ \pmb{z}_{S/2,t} \equiv \cos(\pi t) = (-1)^{t} $ when $ S $ is even.<br /><br /> Next, to distinguish between stationary and non-stationary seasonality, CH assume that the coefficient vector $ \pmb{\gamma}_{t} $ evolves as the following random walk:<br /><br /> \begin{align*} \pmb{\gamma}_{t} &= \pmb{\gamma}_{t-1} + u_{t}\\ u_{t} &\sim IID(\pmb{0}, \pmb{G})\\ \pmb{G} &= \text{diag}(\theta_{1}, \ldots, \theta_{S}) \end{align*} Observe that when $ \theta_{k} > 0 $, then $ \gamma_{k,t} $ follows a random walk. 
On the other hand, when $ \theta_{k} = 0 $, then $ \gamma_{k,t} = \gamma_{k, t-1} = \gamma_{k} $, a fixed constant for all $ t $. In other words, when $ \theta_{k} > 0 $, the process $ y_{t} $ exhibits a seasonal unit root at the harmonic frequency pair $ (\frac{2\pi k}{S}, 2\pi - \frac{2\pi k}{S}) $ for $ 1\leq k < \lfloor S/2 \rfloor $, and at the single frequency $ \frac{2\pi k}{S} $ if $ k=0 $ or (when $ S $ is even) $ k = S/2 $. In this regard, to test the null hypothesis that $ y_{t} $ exhibits at most deterministic seasonality at certain (possibly all) frequencies, against the alternative hypothesis that $ y_{t} $ exhibits a seasonal unit root at certain (possibly all) frequencies, define $ \pmb{A}_1$ and $ \pmb{A}_2 $ as mutually orthogonal, full column-rank, $(S \times a_1)-$ and $(S \times a_2)$-matrices which respectively consist of $1 \leq a_1 \leq S$ and $a_2 = S - a_1$ sub-columns from the order-$S$ identity matrix $\pmb{I}_S$.<br /><br /> For instance, if one wishes to test whether a seasonal unit root exists at frequency $ \pi $, one would set $ \pmb{A}_{1} = (0,\ldots, 0,1)^{\top} $. Alternatively, if testing for a seasonal unit root at the frequency pair $ \left(\frac{2\pi}{S}, 2\pi - \frac{2\pi}{S}\right) $, then one would set:<br /><br /> $$ \pmb{A}_{1} = \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \vdots & \vdots \\ 0 & 0 \end{bmatrix} $$ Note that one can further rewrite \eqref{eq3} as follows:<br /><br /> $$ y_{t} = \pmb{Z}_{t}^{\top}\pmb{A}_{1}\pmb{A}_{1}^{\top}\pmb{\gamma}_{t} + \pmb{Z}_{t}^{\top}\pmb{A}_{2}\pmb{A}_{2}^{\top}\pmb{\gamma}_{t} + e_{t} $$ Next, define $ \pmb{\Theta} = \left(\theta_{1}, \ldots, \theta_{S}\right)^{\top} $ and observe that the CH hypothesis battery reduces to:<br /><br /> \begin{align*} H_{0}:\ \pmb{A}_{1}^{\top}\pmb{\Theta} = \pmb{0}\\ H_{A}:\ \pmb{A}_{1}^{\top}\pmb{\Theta} > \pmb{0} \end{align*} where in addition to $ H_{0} $, it is implicitly maintained that $H_{M}:\ \pmb{A}_{2}^{\top}\pmb{\Theta} = \pmb{0} $. In particular, notice that when both $ H_{0} $ and $ H_{M} $ hold, equation \eqref{eq3} reduces to:<br /><br /> \begin{align} y_{t} = \pmb{Z}_{t}^{\top}\pmb{\gamma} + e_{t} \label{eq4} \end{align} where $ \pmb{\gamma} $ is now constant across time. In other words, $ y_{t} $ exhibits at most deterministic (stationary) seasonality. 
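To make this construction concrete for the quarterly case revisited in the empirical section below, the spectral-intercept regressors are<br /><br /> $$ \pmb{Z}_{t} = \left(1, \; \cos\left(\frac{\pi t}{2}\right), \; \sin\left(\frac{\pi t}{2}\right), \; (-1)^{t}\right)^{\top} $$ and a joint test for seasonal unit roots at all of the seasonal frequencies, that is, at the harmonic pair $ \left(\frac{\pi}{2}, \frac{3\pi}{2}\right) $ and at $ \pi $, corresponds to<br /><br /> $$ \pmb{A}_{1} = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad \pmb{A}_{2} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} $$ so that $ \pmb{A}_{2} $ isolates the non-seasonal (zero-frequency) intercept.<br /><br /> 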
In this regard, holding $ H_{M} $ implicitly true, Canova and Hansen (1995) propose a consistent test for $ H_{0} $ versus $ H_{A} $, using the statistic:<br /><br /> \begin{align*} \mathcal{L} = T^{-2} \text{tr}\left(\left(\pmb{A}_{1}^{\top}\widehat{\pmb{\Omega}}\pmb{A}_{1}\right)^{-1}\pmb{A}_{1}^{\top}\left(\sum_{t=1}^{T}\widehat{F}_{t}\widehat{F}_{t}^{\top}\right)\pmb{A}_{1}\right) \end{align*} where $ \text{tr}(\cdot) $ is the trace operator, $ \widehat{e}_{t} $ are the OLS residuals from regression \eqref{eq4}, $ \widehat{F}_{t} = \sum_{j=1}^{t} \widehat{e}_{j}\pmb{Z}_{j} $ is the partial sum process of the weighted residuals, and the HAC estimator<br /><br /> $$ \widehat{\pmb{\Omega}} = \sum_{j=-T+1}^{T-1}\kappa\left(\frac{j}{h}\right)\widehat{\pmb{\Gamma}}(j) $$ Above, $ \kappa(\cdot) $ is the kernel function, $ h $ is the bandwidth parameter, and $ \widehat{\pmb{\Gamma}}(j) $ is the autocovariance (at lag $ j $) estimator<br /><br /> $$ \widehat{\pmb{\Gamma}}(j) = T^{-1} \sum_{t=j+1}^{T} \widehat{e}_{t}\pmb{Z}_{t}\widehat{e}_{t-j}\pmb{Z}_{t-j}^{\top} $$ Naturally, we reject the null hypothesis when $ \mathcal{L} $ is larger than some critical value which depends on the rank of $ \pmb{A}_{1} $.<br /><br /> <h4>Unattended Unit Roots</h4> A well-known problem with the CH test concerns the issue of <i>unattended unit roots</i>. In particular, CH tests the null hypothesis $ H_{0} $ while imposing $ H_{M} $, where the latter lies in the complementary space to that generated by the former. In practice, however, one does not know which spectral frequencies exhibit unit roots; if one did know, the exercise of testing for their presence would be nonsensical. In this regard, if $ H_{0} $ is imposed but $ H_{M} $ is violated, Taylor (2003) shows that the CH test is severely undersized. To overcome the shortcoming, Taylor (2003) suggests filtering the regression equation \eqref{eq3} to reduce the order of integration at all spectral frequencies identified in $ \pmb{A}_{2} $. In particular, consider the filter:<br /><br /> $$ \nabla_{2} = \frac{1 - L^{S}}{\nabla_{1}} $$ where $ \nabla_{1} $ reduces, by one, the order of integration at each frequency identified in $ \pmb{A}_{1} $. For instance, if $ \pmb{A}_{1} $ identifies the 0-frequency, then $ \nabla_{1} = (1 - L) $ and $ \nabla_{2} = \frac{1-L^{S}}{1-L} = 1 + L + \ldots + L^{S-1} $. Alternatively, if $ \pmb{A}_{1} $ identifies the harmonic frequency pair $ \left(\frac{2\pi k}{S}, 2\pi - \frac{2\pi k}{S}\right) $, then $ \nabla_{1} = 1 - 2\cos\left(\frac{2\pi k}{S}\right)L + L^{2} $, and so on. Accordingly, if we assume $ \pmb{\gamma}_{t} = \pmb{\gamma}_{t-1} + u_{t} $, it is clear that $ \nabla_{2}y_{t} $ will not admit unit root behaviour at any of the frequencies identified in $ \pmb{A}_{2} $ and the maintained hypothesis $ H_{M} $ will hold. See Taylor (2003) and Busetti and Taylor (2003) for further details.<br /><br /> Furthermore, since $ \nabla_{2} $ acts only on frequencies identified in $ \pmb{A}_{2} $, it can also be formally shown that the regressors $ \nabla_{2}\pmb{Z}_{t}^{\top}\pmb{A}_{1}$ span a space identical to the space spanned by $ \pmb{Z}_{t}^{\top}\pmb{A}_{1}$. 
Accordingly, the strategy in Taylor (2003) is to run the regression:<br /><br /> \begin{align*} \nabla_{2}y_{t} &= \nabla_{2}\pmb{Z}_{t}^{\top}\pmb{A}_{1}\pmb{A}_{1}^{\top}\pmb{\gamma}_{t} + \nabla_{2}\pmb{Z}_{t}^{\top}\pmb{A}_{2}\pmb{A}_{2}^{\top}\pmb{\gamma}_{t} + \nabla_{2}e_{t} \\ &= \pmb{Z}_{t}^{\top}\pmb{A}_{1}\pmb{A}_{1}^{\top}\pmb{\gamma}_{t} + e_{t}^{\star} \end{align*} where $ e_{t}^{\star} = \nabla_{2}\pmb{Z}_{t}^{\top}\pmb{A}_{2}\pmb{A}_{2}^{\top}\pmb{\gamma}_{t} + \nabla_{2}e_{t} $. Naturally, the modified test statistic is now given by:<br /><br /> \begin{align*} \mathcal{L}^{\star} = T^{-2} \text{tr}\left(\left(\pmb{A}_{1}^{\top}\widehat{\pmb{\Omega}}^{\star}\pmb{A}_{1}\right)^{-1}\pmb{A}_{1}^{\top}\left(\sum_{t=1}^{T}\widehat{F}_{t}^{\star}\widehat{F}_{t}^{\star^{\top}}\right)\pmb{A}_{1}\right) \end{align*} where $ \widehat{F}_{t}^{\star} = \sum_{j=1}^{t} \widehat{e}_{j}^{\star}\pmb{Z}_{j} $ and $ \widehat{\pmb{\Omega}}^{\star} $ is computed analogous to $ \widehat{\pmb{\Omega}} $ upon replacing $ \widehat{e}_{t} $ with $ \widehat{e}_{t}^{\star} $.<br /><br /> <h3>Seasonal Unit Root Tests in EViews</h3> Starting with version 11 of EViews, a battery of tests aimed at diagnosing unit roots in the presence of seasonality is supported natively. These include the well-known Hylleberg, Engle, Granger, and Yoo (1990) (HEGY) test as well as its Smith and Taylor (1999) likelihood ratio variant, the Canova and Hansen (1995) (CH) test, and the Taylor (2005) variance ratio test.<br /><br /> Here, we will apply the HEGY and CH tests to detect the presence of seasonal unit roots in quarterly U.S. government consumption expenditures and gross investment data running from 1947 to 2018. We have named the series object containing the data <b>USCONS</b>. The series can either be opened from the workfile associated with this blog entry, or fetched directly from the FRED database by issuing the following commands in the command window:<br /><br /> <pre><br /> wfcreate q 1947q1 2018q4<br /> fetch(d=fred) NA000333Q<br /> rename NA000333Q uscons<br /> </pre> We begin with a plot of the data. To do so, double click on <b>USCONS</b> in the workfile to open the series object. Next, click on <b>View/Graph...</b>. This will open a graph options window. We will stick with the defaults, so click on <b>OK</b>. The output is reproduced below.<br /><br /> <!-- :::::::::: FIGURE 5 :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-GlpQtAZiLJY/XL4xMFbOTqI/AAAAAAAAAuw/J4eehkrtXBciIbp4EZlmBNtiba2-YA58QCLcBGAs/s1600/usconsgrph.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-GlpQtAZiLJY/XL4xMFbOTqI/AAAAAAAAAuw/J4eehkrtXBciIbp4EZlmBNtiba2-YA58QCLcBGAs/s1600/usconsgrph.jpg" title="Time Series Plot of USCONS" width="320" /></a><br /> <small>Figure 5: Time Series Plot of USCONS</small><br /><br /> </center> A visual analysis indicates the data are trending with very prominent seasonal effects. 
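For command-line users, the same graph can be produced directly (the graph object name here is an arbitrary choice):<br /><br /> <pre><br /> ' create and display a line graph of the series<br /> graph gr_uscons.line uscons<br /> show gr_uscons<br /> </pre> 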
To determine statistically whether these seasonal effects exhibit unit roots, we click on <b>View/Unit Root Tests/Seasonal Unit Root Tests...</b> to open the seasonal unit root test window.<br /><br /> <!-- :::::::::: FIGURE 6 :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-IOV0OdjWwG8/XL4xeK3zTvI/AAAAAAAAAu4/NyzUQuwil9YtlTojfaaVzTSRq5HoDzjlQCEwYBhgL/s1600/hegydlg.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-IOV0OdjWwG8/XL4xeK3zTvI/AAAAAAAAAu4/NyzUQuwil9YtlTojfaaVzTSRq5HoDzjlQCEwYBhgL/s1600/hegydlg.jpg" title="HEGY Test Dialog" width="320" /></a><br /> <small>Figure 6: HEGY Test Dialog</small><br /><br /> </center> We will start with the HEGY test, which is the default test. Here, EViews has already filled out the periodicity with 4 to match the cyclicality of the data. Nevertheless, if you wish to test the data under a different periodicity, you may manually adjust this to one of the following supported values: 2, 4, 5, 6, 7, 12. Since our data are trending, we will change the <b>Non-Seasonal deterministics</b> dropdown from <b>None</b> to <b>Intercept and trend</b> and leave the <b>Seasonal Deterministics</b> dropdown unchanged.<br /><br /> As discussed earlier, in the case of serially correlated errors, the HEGY test can be augmented by lags of the dependent variable added as additional regressors to the HEGY regression. To determine the precise number of lags to add, EViews offers both automatic and manual methods. The default is automatic lag selection with the Akaike Information Criterion and a maximum of 12 lags. The details can of course be changed, or, if automatic selection is undesired, a <b>User Selected</b> value can be specified. We will stick with the defaults. Hit <b>OK</b>.<br /><br /> <!-- :::::::::: FIGURE 7 :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-tVC52Adt14g/XL4xkQhZMmI/AAAAAAAAAvQ/cJjgkQnFn4shqvETTxtULjPMQ4emhCGbQCEwYBhgL/s1600/hegytbl.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-tVC52Adt14g/XL4xkQhZMmI/AAAAAAAAAvQ/cJjgkQnFn4shqvETTxtULjPMQ4emhCGbQCEwYBhgL/s1600/hegytbl.jpg" title="HEGY Test Output" width="320" /></a><br /> <small>Figure 7: HEGY Test Output</small><br /><br /> </center> Looking at the output, EViews provides a table, the top portion of which summarizes the testing procedure, whereas the lower portion summarizes the regression output upon which the test is based. In particular, EViews computes the HEGY test statistic for the 0 frequency, each harmonic pair, and the $ \pi $ frequency, in addition to a joint test for all seasonal frequencies (that is, all frequencies other than 0) and a joint test for all frequencies including the frequency 0. As in traditional unit root tests, the null hypothesis postulates the existence of a unit root at the seasonal frequencies under consideration and rejection of the null requires the absolute value of the test statistic to exceed the absolute value of a critical value associated with the limiting distribution. In this regard, EViews summarizes the 1%, 5%, and 10% critical values derived from simulation for sample sizes ranging from 20 to 480 in intervals of 20. To adjust for the actual sample size used in the HEGY regression, EViews also offers an interpolated version of the critical values. Here, it is clear that we will not reject the null hypothesis at any of the individual or harmonic pair frequencies, nor at the two joint tests. 
The overwhelming conclusion is that <b>USCONS</b> exhibits a unit root at each of the quarterly spectral frequencies individually and jointly.<br /><br /> Consider next the CH test applied to the same data. To bring up the CH test options, from the series object, once again click on <b>View/Unit Root Tests/Seasonal Unit Root Tests...</b> and under the <b>Test type</b> dropdown, select <b>Canova and Hansen</b>. As before, we will leave the <b>Periodicity</b> unchanged and will change the <b>Non-Seasonal Deterministics</b> to <b>Intercept and trend</b>. Note here that the traditional Canova and Hansen (1995) paper does not allow for the inclusion of deterministic trends. However, as noted in Busetti and Harvey (2003), we can relax “the conditions of CH by showing that the distribution is unaffected when a deterministic trend is included in the model”.<br /><br /> <!-- :::::::::: FIGURE 8 :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-9HBvymMw6yc/XL4xkG00nHI/AAAAAAAAAvM/tghX_6dEDrk0l3j98eBozXn0XZlQ5Y5MACEwYBhgL/s1600/chdlg.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-9HBvymMw6yc/XL4xkG00nHI/AAAAAAAAAvM/tghX_6dEDrk0l3j98eBozXn0XZlQ5Y5MACEwYBhgL/s1600/chdlg.jpg" title="CH Test Dialog" width="320" /></a><br /> <small>Figure 8: CH Test Dialog</small><br /><br /> </center> Next, change the <b>Seasonal Deterministics</b> dropdown from <b>Seasonal dummies</b> to <b>Seasonal intercepts</b>. Notice that when we do this the <b>Restriction selection</b> box changes to reflect that restrictions are no longer on seasonal dummies, but on seasonal intercepts. Note that we can multi-select which frequencies we would like to test. This is equivalent to specifying the entries of the matrix $ \pmb{A}_{1} $ we considered earlier. If no restrictions are selected, which is the default, then EViews will test all available restrictions. Here we will not select anything.<br /><br /> We will also leave the <b>Include lag of dep. variable</b> option untouched. As noted in Canova and Hansen (1995), the inclusion of a lagged dependent variable in the CH regression “will reduce this serial correlation (we can think of this as a form of pre-whitening), yet not pose a danger of extracting a seasonal root”. Finally, note the <b>HAC Options</b> button, which opens a set of options governing how the long-run variance is computed and lets users customize which kernel and bandwidth are used, and whether further residual whitening is desired. We stick with the default values and simply click on <b>OK</b> to execute the test.<br /><br /> <!-- :::::::::: FIGURE 9 :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-rqSM5wpijrs/XL4xkB7TNYI/AAAAAAAAAvI/T5unGNvBNbIn-TuiEZ41RPWOZ0fS2ipsgCEwYBhgL/s1600/chtbl.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-rqSM5wpijrs/XL4xkB7TNYI/AAAAAAAAAvI/T5unGNvBNbIn-TuiEZ41RPWOZ0fS2ipsgCEwYBhgL/s1600/chtbl.jpg" title="CH Test Output" width="320" /></a><br /> <small>Figure 9: CH Test Output</small><br /><br /> </center> Turning to the output, EViews divides the analysis into four sections. The first is a table summarizing the joint test for all elements in $ \pmb{A}_{1} $. In the example at hand, we have 3 restrictions -- 2 associated with the harmonic pair $ (\frac{\pi}{2}, \frac{3\pi}{2}) $, and one associated with the frequency $ \pi $. 
Since the null hypothesis is that no unit root exists at the specified frequencies and the test statistic 4.53631 is larger than any of the 1%, 5%, or 10% critical values, we conclude that the joint test rejects the null hypothesis.<br /><br /> The next table presents a detailed look at the harmonic pair test. Although we did not explicitly ask for this test, EViews presents a breakdown of the joint test requested into its constituent restrictions. These are harmonic pair tests in which the restriction matrix $ \pmb{A}_{1} $ would be $ S\times 2 $. In this case, the test statistic for no seasonal unit root at the harmonic pair is 2.968384, which is clearly larger than any of the critical values associated with the limiting distribution. In other words, we reject the null and conclude that there is evidence of a unit root at the harmonic pair frequencies. Notice also that in addition to the CH test statistic, EViews offers an additional test statistic marked by an asterisk for differentiation. This is in fact the test statistic that corresponds to the Taylor (2003) version of the CH test robustified to the possible violation of the maintained hypothesis $ H_{M} $ discussed earlier.<br /><br /> The table beneath the harmonic pair tests summarizes the CH tests corresponding to the individual breakdown of all frequencies under consideration. In other words, these are individual tests in which the restriction matrix $ \pmb{A}_{1} $ would be $ S\times 1 $. Since the frequency $ \pi $ was requested as part of the joint test, it is reported here. Clearly, with the test statistic equaling 3.842780, we reject the null hypothesis and conclude that there is evidence supporting the existence of a unit root at the frequency $ \pi $. Note here that below the test statistic associated with the $ \pi $ frequency is an additional statistic differentiated by an asterisk; this, as before, is the Taylor (2003) version of the CH test robustified to unattended unit roots.<br /><br /> Finally, the last table presents the CH regression. The residuals from this regression are used in the computation of the CH test statistics.<br /><br /> <h3>Conclusion</h3> In this entry we gave a brief introduction to the subject of seasonal unit root tests. We highlighted the need to distinguish between deterministic and stochastic cyclicality and discussed several statistical methods designed to do so. Among these, our focus was on the HEGY test, which is effectively an extension of the ADF test in the direction of non-zero seasonal frequencies, and the CH test, which is the analogous extension of the KPSS test. We also looked at some of the mathematical details which underlie these methods. Finally, we closed with a brief application of both tests to US government consumption expenditure and investment data, sampled quarterly from 1947 to 2018. Both tests overwhelmingly supported the presence of unit roots at the individual frequencies as well as jointly.<br /><br /> <h3>Files</h3> The workfile and program files can be downloaded here.<br /><br /> <ul> <li> <a href="http://www.eviews.com/blog/seasuroot/seasuroot.WF1">seasuroot.WF1</a> <li> <a href="http://www.eviews.com/blog/seasuroot/seasuroot.prg">seasuroot.prg</a> </ul> <br /><br /> <hr /> <h3> References</h3><table> <tr valign="top"><td align="right" class="bibtexnumber"><a name="busetti-2003">1</a></td><td class="bibtexitem">Fabio Busetti and AM Robert Taylor. 
Testing against stochastic trend and seasonality in the presence of unattended breaks and unit roots. <em>Journal of Econometrics</em>, 117(1):21--53, 2003. [ <a href="references_bib.html#busetti-2003">bib</a> ] </td></tr> <tr valign="top"><td align="right" class="bibtexnumber"><a name="busetti-2003a">2</a></td><td class="bibtexitem">Fabio Busetti and Andrew Harvey. Seasonality tests. <em>Journal of Business & Economic Statistics</em>, 21(3):420--436, 2003. [ <a href="references_bib.html#busetti-2003a">bib</a> ] </td></tr> <tr valign="top"><td align="right" class="bibtexnumber"><a name="canova-1995">3</a></td><td class="bibtexitem">Fabio Canova and Bruce E Hansen. Are seasonal patterns constant over time? A test for seasonal stability. <em>Journal of Business & Economic Statistics</em>, 13(3):237--252, 1995. [ <a href="references_bib.html#canova-1995">bib</a> ] </td></tr> <tr valign="top"><td align="right" class="bibtexnumber"><a name="hylleberg-1990">4</a></td><td class="bibtexitem">Svend Hylleberg, Robert F Engle, Clive WJ Granger, and Byung Sam Yoo. Seasonal integration and cointegration. <em>Journal of Econometrics</em>, 44(1-2):215--238, 1990. [ <a href="references_bib.html#hylleberg-1990">bib</a> ] </td></tr> <tr valign="top"><td align="right" class="bibtexnumber"><a name="kwiatkowski-1992">5</a></td><td class="bibtexitem">Denis Kwiatkowski, Peter CB Phillips, Peter Schmidt, and Yongcheol Shin. Testing the null hypothesis of stationarity against the alternative of a unit root: How sure are we that economic time series have a unit root? <em>Journal of Econometrics</em>, 54(1-3):159--178, 1992. [ <a href="references_bib.html#kwiatkowski-1992">bib</a> ] </td></tr> <tr valign="top"><td align="right" class="bibtexnumber"><a name="smith-1999">6</a></td><td class="bibtexitem">Richard J Smith and AM Robert Taylor. Likelihood ratio tests for seasonal unit roots. <em>Journal of Time Series Analysis</em>, 20(4):453--476, 1999. [ <a href="references_bib.html#smith-1999">bib</a> ] </td></tr> <tr valign="top"><td align="right" class="bibtexnumber"><a name="taylor-2003">7</a></td><td class="bibtexitem">AM Robert Taylor. Robust stationarity tests in seasonal time series processes. <em>Journal of Business & Economic Statistics</em>, 21(1):156--163, 2003. [ <a href="references_bib.html#taylor-2003">bib</a> ] </td></tr> <tr valign="top"><td align="right" class="bibtexnumber"><a name="taylor-2005">8</a></td><td class="bibtexitem">AM Robert Taylor. Variance ratio tests of the seasonal unit root hypothesis. <em>Journal of Econometrics</em>, 124(1):33--54, 2005. [ <a href="references_bib.html#taylor-2005">bib</a> ] </td></tr></table></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com2tag:blogger.com,1999:blog-6883247404678549489.post-41589892192379248992019-02-01T11:21:00.000-08:002019-02-01T11:21:04.635-08:00Time varying parameter estimation with Flexible Least Squares and the tvpuni add-in<span style="font-family: "verdana" , sans-serif;"><i>Author and guest post by <a href="https://www.linkedin.com/in/eren-ocakverdi-9b673924" target="_blank">Eren Ocakverdi</a></i></span><br /><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;">The professional life of a researcher who follows, or is responsible for, an emerging market can become so miserable when things suddenly change and past experience no longer holds. 
As a practitioner you can get used to it over time, but it’s a whole different story when it comes to identifying empirical relationships between market indicators as part of your job.</span><br /><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;">History can be a really good gauge to understand how such indicators are linked to one another only if you look through a proper glass. Abrupt changes, structural breaks or transition periods may alter such relationships so much that they would be misidentified with those traditional methods where the underlying structure is assumed fixed over the full sample.</span><br /><span style="font-family: "verdana" , sans-serif;"></span><br /><a name='more'></a><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;">EViews already has nice built-in features or add-ins to deal with such cases. Here, I will add another one to this bundle: Meet the tvpuni add-in, which implements the “Flexible Least Squares” approach of Kalaba and Tesfatsion (1989). </span><br /><span style="font-family: "verdana" , sans-serif;">One way to look at parameter stability is to allow coefficients to change over time. A well-known approach in this case is treating these parameters as random walk coefficients and estimating them within a state space framework via the Kalman filter. However, estimation of such models can be troublesome in practice for various reasons and may become a very frustrating experience if you have to deal with convergence problems.</span><br /><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;">Flexible least squares emerges as a useful alternative, since it makes fewer assumptions than the Kalman filter and allows us to determine the degree of smoothness. The help file explains the use of this add-in, so I’ll proceed with demonstrating its abilities through an actual case study.</span><br /><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;">Turkey’s disinflation process since the aftermath of the 2001 crisis was interrupted from time to time due to shocks and stresses originating from different sources. Raw materials constitute more than 70% of the total imports (or 20% of GDP) in the Turkish economy, making it especially vulnerable to developments in exchange rates and prices of imported goods (i.e. crude oil). 
Although Turkey has been an (explicit) inflation targeter since 2006, frequently overshooting the target has made it very difficult for central bank to anchor expectations and weakened its hand in fight against inflation persistence.</span><br /><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;">Following example considers an augmented version of Phillips curve for exploring the determinants of inflation dynamics.</span><br /><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="color: #6aa84f; font-family: "courier new" , "courier" , monospace;">'create a workfile</span><br /><span style="font-family: "courier new" , "courier" , monospace;">wfcreate m 2003 2018</span><br /><span style="font-family: "courier new" , "courier" , monospace;"><br /></span><span style="color: #6aa84f; font-family: "courier new" , "courier" , monospace;">'get the data (retrieve from Bloomberg or open :\tvpuni_data.wf1)</span><br /><span style="font-family: "courier new" , "courier" , monospace;">dbopen(type=bloom) index <span style="color: #6aa84f;">'open database</span></span><br /><span style="font-family: "courier new" , "courier" , monospace;">copy index::<span style="color: #cc0000;">"tucxue index"</span> corecpi <span style="color: #6aa84f;">'Core Consumer Price Index (2003=100)</span></span><br /><span style="font-family: "courier new" , "courier" , monospace;">copy index::<span style="color: #cc0000;">"tues01eu index"</span> infexp12 <span style="color: #6aa84f;">'Inflation expectations over the next 12 months</span></span><br /><span style="font-family: "courier new" , "courier" , monospace;">copy index::<span style="color: #cc0000;">"trtfimvi index"</span> imprice <span style="color: #6aa84f;">'Foreign trade import unit value index (2010=100)</span></span><br /><span style="font-family: "courier new" , "courier" , monospace;">copy index::<span style="color: #cc0000;">"tuiosa"</span> ipi <span style="color: #6aa84f;">'Industrial Production Index (SA, 2015=100)</span></span><br /><span style="font-family: "courier new" , "courier" , monospace;">copy index::<span style="color: #cc0000;">"usdtry curncy"</span> usdtry <span style="color: #6aa84f;">'Exchange rate</span></span><br /><span style="font-family: "courier new" , "courier" , monospace;"><br /></span><span style="color: #6aa84f; font-family: "courier new" , "courier" , monospace;">‘dependent variable</span><br /><span style="font-family: "courier new" , "courier" , monospace;">series coreinf = @pcy(corecpi) <span style="color: #6aa84f;">'core inflation (excl. 
unprocessed food, alcoholic beverages and tobacco)</span></span><br /><span style="font-family: "courier new" , "courier" , monospace;"><br /></span><span style="color: #6aa84f; font-family: "courier new" , "courier" , monospace;">‘generate some regressors</span><br /><span style="font-family: "courier new" , "courier" , monospace;">series impinf = @pcy(imprice*usdtry) <span style="color: #6aa84f;">'inflationary pressure from import prices (converted to local currency)</span></span><br /><span style="font-family: "courier new" , "courier" , monospace;">hpf(power=4) log(ipi)*100 trend @ gap <span style="color: #6aa84f;">'output gap proxy</span></span><br /><br /><span style="color: #6aa84f; font-family: "courier new" , "courier" , monospace;">'simple fixed parameter estimation</span><br /><span style="font-family: "courier new" , "courier" , monospace;">equation fixed.ls coreinf infexp12 coreinf(-1) gap impinf</span><br /><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;">Results suggest that backward indexation matters more than forward looking in price setting. Output gap and import prices both have expected signs. All the coefficients are significant at conventional alpha levels. Explanatory power of the model is more than satisfactory, but we are interested in the stability of this relationship.</span><br /><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="color: #6aa84f; font-family: "courier new" , "courier" , monospace;">'time varying parameter estimation with flexible least squares</span><br /><span style="font-family: "courier new" , "courier" , monospace;">fixed.tvpuni(method=<span style="color: #cc0000;">"1"</span>,lambda=<span style="color: #cc0000;">"100"</span>,savem)</span><br /><span style="color: #6aa84f; font-family: "courier new" , "courier" , monospace;">'plot results</span><br /><br /><span style="font-family: "courier new" , "courier" , monospace;">grbetam.line(m)</span><br /><div><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-Ci1gNckiTkQ/XFSZ1NgbGEI/AAAAAAAAAr8/djyX6jeFbBceXcOhTdBqzXZ2oeOhAaFJACEwYBhgL/s1600/graph01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1003" data-original-width="1600" height="400" src="https://4.bp.blogspot.com/-Ci1gNckiTkQ/XFSZ1NgbGEI/AAAAAAAAAr8/djyX6jeFbBceXcOhTdBqzXZ2oeOhAaFJACEwYBhgL/s640/graph01.png" width="640" /></a></div><div class="separator" style="clear: both; text-align: center;"></div><br /><div class="separator" style="clear: both; text-align: center;"></div><br /></div><div class="separator" style="clear: both; text-align: center;"></div><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><div><div><span style="font-family: "verdana" , sans-serif;">Results suggest that the coefficient of forward looking has risen, whereas the coefficient of backward indexation has fallen over time and they have become more-or-less equal. Fluctuation around zero makes the coefficient of output gap unreliable and difficult to interpret. 
Passthrough from import prices, on the other hand, seems to be on the rise since 2016.</span></div><div><span style="font-family: "verdana" , sans-serif;"><br /></span></div><div><span style="font-family: "verdana" , sans-serif;">Behavioral change in coefficients around 2008 should be an easy one as it can be attributed to global financial crisis. However, it may not be that straightforward to explain the dynamics after the end-2010. This era until the first half of 2018 denotes when Central Bank of Turkey implemented an unconventional monetary policy (i.e. an asymmetric and wide interest rate corridor). </span></div><div><span style="font-family: "verdana" , sans-serif;"><br /></span></div><div><span style="font-family: "verdana" , sans-serif;">An approximating model of flexible least squares approach within a state space framework is possible and may be preferable depending on the case at hand. Although the results would not be the same due to different assumptions behind these frameworks, you can get smoothed estimates of coefficients along with their associated confidence bands.</span></div><div><span style="font-family: "verdana" , sans-serif;"><br /></span></div><div><span style="color: #6aa84f; font-family: "courier new" , "courier" , monospace;">'flexible least squares estimation with Kalman filter</span></div><div><span style="font-family: "courier new" , "courier" , monospace;">fixed.tvpuni(method=<span style="color: #cc0000;">"3"</span>,lambda=<span style="color: #cc0000;">"100"</span>,savem,saves)</span></div><div><span style="font-family: "verdana" , sans-serif;"><br /></span></div><div><span style="font-family: "verdana" , sans-serif;">We can plot the results manipulating the output saved in to the workfile with a little bit effort:</span><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-o320YAiSbXY/XFSZ3cMMRfI/AAAAAAAAAsE/BOdeW6FQFCYcoGMJ4XxgV28PiFcGQm_-gCEwYBhgL/s1600/graph02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1153" data-original-width="1600" height="460" src="https://3.bp.blogspot.com/-o320YAiSbXY/XFSZ3cMMRfI/AAAAAAAAAsE/BOdeW6FQFCYcoGMJ4XxgV28PiFcGQm_-gCEwYBhgL/s640/graph02.png" width="640" /></a></div><span style="font-family: "verdana" , sans-serif;"><br /></span></div></div><div><span style="font-family: Verdana, sans-serif;">Note that the confidence band around the coefficient of output gap reveals the insignificance of this parameter as suspected.</span><br /><span style="font-family: Verdana, sans-serif;"><br /></span><span style="font-family: Verdana, sans-serif;">Add-in also allows you to migrate your original model to state space and to estimate each parameter as a random walk via Kalman filter.</span><br /><span style="font-family: Verdana, sans-serif;"><br /></span><span style="color: #6aa84f; font-family: Courier New, Courier, monospace;">'state space estimation with Kalman filter</span><br /><span style="font-family: Courier New, Courier, monospace;">fixed.tvpuni(method=<span style="color: #cc0000;">"4"</span>,savem,saves)</span><br /><span style="font-family: Verdana, sans-serif;"><br /></span><span style="font-family: Verdana, sans-serif;">Again, we can compare estimated parameters if we organize our output: </span></div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-AfnlMzlY3SU/XFSZ3aX40EI/AAAAAAAAAsM/bfn5U2tINtcKb0G7PGyAveEpxkuk8Q3PgCEwYBhgL/s1600/graph03.png" imageanchor="1" 
style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1110" data-original-width="1600" height="442" src="https://3.bp.blogspot.com/-AfnlMzlY3SU/XFSZ3aX40EI/AAAAAAAAAsM/bfn5U2tINtcKb0G7PGyAveEpxkuk8Q3PgCEwYBhgL/s640/graph03.png" width="640" /></a></div><div><span style="font-family: Verdana, sans-serif;">Results from all three approaches portray similar patterns and therefore yield similar inferences. </span></div><div><span style="font-family: Verdana, sans-serif;"><br /></span></div><div><span style="font-family: Verdana, sans-serif;"><b><i>References</i></b></span></div><div><span style="font-family: Verdana, sans-serif;">Kalaba, R. and Tesfatsion, L., 1989. "Time Varying Linear Regression via Flexible Least Squares", Computers and Mathematics with Applications, Vol. 17, pp. 1215-1245</span></div>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com5tag:blogger.com,1999:blog-6883247404678549489.post-57763061590942396862018-12-11T10:20:00.000-08:002018-12-11T10:20:45.694-08:00Panel Structural VARs and the PSVAR add-in<script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"></script> <i><span style="font-family: "verdana" , sans-serif;">Author and guest blog by Davaajargal Luvsannyam</span></i><br /><i><span style="font-family: "verdana" , sans-serif;"><br /></span></i><span style="font-family: "verdana" , sans-serif;">Panel SVARs have been used to address a variety of issues of interest to policymakers and applied economists. Panel SVARs are particularly suitable to analyze the transmission of idiosyncratic shocks across units and time. For example, <a href="https://www.sciencedirect.com/science/article/pii/S0165188912000942" target="_blank">Canova et al. (2012)</a> have studied how U.S. interest rate shocks are propagated to 10 European economies, 7 in the Euro area and 3 outside of it, and how German shocks are transmitted to the remaining nine economies. </span><br /><span style="font-family: "verdana" , sans-serif;"></span><br /><a name='more'></a><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;">Panel SVARs have also been often used to estimate average effects – possibly across heterogeneous groups of units - and to describe unit specific differences relative to the average. For example, researcher may analyze if monetary policy is more countercyclical, on average, in countries or states. Researcher may also be interested in knowing whether inflation dynamics in states may depend on political, geographical, cultural or institutional features, or on whether monetary and fiscal interactions are related. </span><br /><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;">Alternative potential use of panel SVARs is in studying the importance of interdependencies, and in checking whether reactions are generalized or only involve certain pairs of units. 
Therefore, some researchers want to implement a panel SVAR to evaluate certain exogeneity assumptions or to test the small open economy assumption, often made in the international economics literature.</span><br /><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;"></span><br /><span style="font-family: "verdana" , sans-serif;">In this blog, we describe the econometric estimation and implementation of the Panel SVAR of <a href="https://pdfs.semanticscholar.org/d97e/9fb8b0243975feb6df7364765fed9eb7b5e9.pdf" target="_blank">Pedroni (2013)</a>. The key to Pedroni's (2013) estimation and identification method will be the assumption that structural shocks can be decomposed into both common and idiosyncratic structural shocks, which are mutually orthogonal.</span><br /><div><br /></div><div><br /></div><div><h3>Structural shock representation</h3>Associated with the $M\times1$ vector of demeaned panel data, $z_{it}$, let $\xi_{it} = \left(\bar{\epsilon}_t^\prime, \tilde{\epsilon}_{it}^\prime\right)^\prime$ where $\bar{\epsilon}_t^\prime$ and $\tilde{\epsilon}_{it}^\prime$ are $M\times 1$ vectors of common and idiosyncratic white noise shocks, respectively. Let $\Lambda_i$ be an $M\times M$ diagonal matrix such that the diagonal elements are the loading coefficients $\lambda_{i,m}$, where $m=1,\ldots, M$. Then the composite white noise errors are given by \begin{equation} \epsilon_{it} = \Lambda_i \bar{\epsilon}_t + \tilde{\epsilon}_{it} \end{equation} where $E\left[ \xi_{it}\xi_{it}^\prime \right] = \text{diag} \left\{ \Omega_{i, \bar{\epsilon}}, \Omega_{i, \tilde{\epsilon}} \right\}, \forall i,t$. Moreover, $E\left[\xi_{it}\right] = 0, \forall i,t$, $E\left[\xi_{is}\xi_{it}^\prime\right] = 0, \forall i,s\neq t$, and $E\left[\tilde{\epsilon}_{it}\tilde{\epsilon}_{jt}^\prime\right] = 0, \forall i\neq j, t$.<br /><br /> <h3>Relationships between reduced forms and structural forms</h3> \begin{align*} &\text{Shocks:} \quad \mu_{it} = A_i(0)\epsilon_{it}\\ &\text{Responses:} \quad F_{i}(L)A_i(0) = A_i(L)\\ &\text{Steady states:} \quad F_{i}(1)A_i(0) = A_i(1) \end{align*} where $\mu_{it}$ are the reduced form residuals ($R_i(L) \Delta z_{it} = \mu_{it}$), $F_i(L) = R_i(L)^{-1}$, and $\epsilon_{it}$ are the structural shocks ($\Delta z_{it} = A_i(L)\epsilon_{it}$).<br /><br /> <h3>Typical structural identifying restrictions on dynamics</h3> \begin{align*} &A(0) \text{ decompositions:} \quad \Omega_{\mu,i} = A_i(0)A_i(0)^\prime\\ &\text{Short-run restrictions:} \quad \Omega_{\mu,i} = B_i(0)^{-1}B_i(0)^{-1^\prime}\\ &\text{Long-run restrictions:} \quad \Omega_{\mu,i}(1) = A_i(1)A_i(1)^\prime \end{align*} The adding-up constraints, together with a re-normalization, imply that equation (1) can be rewritten as $$\epsilon_{it} = \Lambda_i \bar{\epsilon}_{t} + (I - \Lambda_i\Lambda_i^\prime)^{1/2} \tilde{\epsilon}_{it}^\star$$ Finally, we can use this re-scaled form to decompose the impulse responses into the common and idiosyncratic shocks as: $$ A_i(L) = \bar{A}_i(L) + \tilde{A}_i(L)$$ where $\bar{A}_i(L)$ is the member-specific response to the common shocks ($\bar{A}_i(L) = A_i(L)\Lambda_i$), and $\tilde{A}_i(L)$ is the member-specific response to the idiosyncratic shocks ($\tilde{A}_i(L) = A_i(L)(I - \Lambda_i\Lambda_i^\prime)^{1/2}$) such that the two responses sum to the total member-specific response to the composite shocks.<br /><br /> The following is a summary of the estimation algorithm for an unbalanced panel $\Delta z_{i,t}$ with dimensions $i = 1, \ldots, 
N$ (member), $t=1, \ldots, T_i$ (time), and $m=1, \ldots, M$ (variable): <ol><li> Compute the time effects, $\Delta \bar{z}_t = N_t^{-1}\sum_{i=1}^{N_t}\Delta z_{it}$, and use these along with $\Delta z_{it}$ to estimate the reduced form VARs, $\bar{R}(L)\Delta \bar{z}_t = \bar{\mu}_t$ and $R_i(L)\Delta z_{it} = \mu_{it}$ for each member $i$, using an information criterion to fit an appropriate member-specific lag truncation, $P_i$. <li> Use appropriate identifying restrictions, such as the short-run (Cholesky) or long-run (BQ) identification method, to obtain structural shock estimates for $\epsilon_{it}$ (composite) and $\bar{\epsilon}_{t}$ (common). <li> Compute the diagonal elements of the loading matrix, $\Lambda_i$, as correlations between $\epsilon_{it}$ and $\bar{\epsilon}_t$ for each member, $i$, and compute the idiosyncratic shocks, $\tilde{\epsilon}_{it}$, using the equation $\epsilon_{it} = \Lambda_i \bar{\epsilon}_t + \tilde{\epsilon}_{it}$. <li> Compute member-specific impulse responses to unit shocks: $A_i(L) = \bar{A}_i(L) + \tilde{A}_i(L)$, where $\bar{A}_i(L) = A_i(L)\Lambda_i$ and $\tilde{A}_i(L) = A_i(L)(I - \Lambda_i\Lambda_i^\prime)^{1/2}$. <li> Use the sample distribution of the estimated $A_i(L), \bar{A}_i(L)$, and $\tilde{A}_i(L)$ responses to describe the confidence interval quantiles. </ol></div><div><span style="font-family: "verdana" , sans-serif;"><br /></span></div><div><span style="font-family: "verdana" , sans-serif;">Now we turn to the implementation of the psvar add-in. First, we need to open the data file named pedroni_ppp.wf1, which is located in the installation folder. </span></div><div><span style="font-family: "courier new" , "courier" , monospace;">wfopen pedroni_ppp.wf1</span></div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-hO0FJvpWZJQ/XA_mbh5p5cI/AAAAAAAAAqg/TK6gL9BtsNkOAgvq8dUeBoinzN1yQaedgCLcBGAs/s1600/workfile.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="455" data-original-width="506" height="358" src="https://3.bp.blogspot.com/-hO0FJvpWZJQ/XA_mbh5p5cI/AAAAAAAAAqg/TK6gL9BtsNkOAgvq8dUeBoinzN1yQaedgCLcBGAs/s400/workfile.png" width="400" /></a></div><div><span style="font-family: "courier new" , "courier" , monospace;"><br /></span></div><div><div style="font-family: Verdana, sans-serif;">For testing purposes, we use this panel dataset. The sample size is 4920 observations (1973m06 to 1993m11 for each of 20 cross-section members). </div><div style="font-family: Verdana, sans-serif;"><br /></div><div><span style="font-family: "verdana" , sans-serif;">Next, we generate the variable </span><span style="font-family: "courier new" , "courier" , monospace;">ereal</span><span style="font-family: "verdana" , sans-serif;"> and take the logarithm of the series </span><span style="font-family: "courier new" , "courier" , monospace;">ereal</span><span style="font-family: "verdana" , sans-serif;">, </span><span style="font-family: "courier new" , "courier" , monospace;">cpi </span><span style="font-family: "verdana" , sans-serif;">and </span><span style="font-family: "courier new" , "courier" , monospace;">ae</span><span style="font-family: "verdana" , sans-serif;">. You don’t need to take the first difference of the variables. The add-in will do it for you. 
</span></div><div style="font-family: Verdana, sans-serif;"><br /></div><div><span style="font-family: "courier new" , "courier" , monospace;">series ereal = ae*uscpi/cpi</span></div><div><span style="font-family: "courier new" , "courier" , monospace;">series logereal = log(Ereal) </span></div><div><span style="font-family: "courier new" , "courier" , monospace;">series logcpi = log(cpi) </span></div><div><span style="font-family: "courier new" , "courier" , monospace;">series logae = log(ae)</span></div><div style="font-family: Verdana, sans-serif;"><br /></div><div style="font-family: Verdana, sans-serif;">Then we apply the psvar add-in to this panel data. We can do this either by command line or menu driven interface. </div><div style="font-family: Verdana, sans-serif;"><br /></div><div><span style="font-family: "courier new" , "courier" , monospace;">psvar(ident=2, horizon=24) 18 @ logereal logcpi logae</span></div><div style="font-family: Verdana, sans-serif;"><br /></div><div style="font-family: Verdana, sans-serif;">or</div><div style="font-family: Verdana, sans-serif;"><br /></div><div><span style="font-family: "courier new" , "courier" , monospace;">psvar(ident=2, horizon=24, ci=0.5, length=5, average=mean, sample=”1976m06 1993 m11”, save=1) 18 @ logereal logcpi logae</span></div><div style="font-family: Verdana, sans-serif;"><br /></div><div style="font-family: Verdana, sans-serif;">Please see the document for the detailed description of the command options. The resulting output will be three graph objects that contains 3x3 charts similar to those produced by EViews’ VAR object: </div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://3.bp.blogspot.com/--q6h1zZvgv8/XA_rpx3FFZI/AAAAAAAAAq0/eVgC7eMqJigAuBfSsg-QeI0Hn0JZBr8KACLcBGAs/s1600/figure1.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="460" data-original-width="720" height="408" src="https://3.bp.blogspot.com/--q6h1zZvgv8/XA_rpx3FFZI/AAAAAAAAAq0/eVgC7eMqJigAuBfSsg-QeI0Hn0JZBr8KACLcBGAs/s640/figure1.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 1: Response Estimates to Composite Shocks</td></tr></tbody></table><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-ArIzG2vLfao/XA_rp7cP74I/AAAAAAAAAqw/RJGXL5vELvI5wbCiiKrzX_GKvisaPOvtACLcBGAs/s1600/figure2.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="460" data-original-width="720" height="408" src="https://4.bp.blogspot.com/-ArIzG2vLfao/XA_rp7cP74I/AAAAAAAAAqw/RJGXL5vELvI5wbCiiKrzX_GKvisaPOvtACLcBGAs/s640/figure2.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 2: Response Estimates to Common Shocks</td></tr></tbody></table><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-czeBa7xyEJM/XA_rpybBxwI/AAAAAAAAAqs/YyNvF7s-Cr80hn-Rf29TTDY4a30cEkxhwCLcBGAs/s1600/figure3.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" 
data-original-height="460" data-original-width="720" height="408" src="https://1.bp.blogspot.com/-czeBa7xyEJM/XA_rpybBxwI/AAAAAAAAAqs/YyNvF7s-Cr80hn-Rf29TTDY4a30cEkxhwCLcBGAs/s640/figure3.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 3: Response Estimates to Idiosyncratic Shocks</td></tr></tbody></table><div><span style="font-family: "verdana" , sans-serif;">Alternatively, you can implement the psvar add-in by the menu driven interface.</span></div></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-Trm7NzFukOc/XA_sDQqHRRI/AAAAAAAAArE/Gs03BzEgzQUE3CkVaifDh9DYb3Az_7a7wCLcBGAs/s1600/diag.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="384" data-original-width="451" height="340" src="https://3.bp.blogspot.com/-Trm7NzFukOc/XA_sDQqHRRI/AAAAAAAAArE/Gs03BzEgzQUE3CkVaifDh9DYb3Az_7a7wCLcBGAs/s400/diag.png" width="400" /></a></div><div><span style="font-family: "verdana" , sans-serif;"></span><br /><div><span style="font-family: "verdana" , sans-serif;"><br /></span></div><span style="font-family: "verdana" , sans-serif;"><div>The first box lets you specify the endogenous variable (logereal, logcpi, logae) for panel SVAR while the second box specify the number of maximum lags (18). Next you can select the shock identification of panel SVAR by the radio box. For example, here chooses the long-run identification. The identification scheme is nonsensical for this particular data and does not correspond to any existing study. For lag length criteria box, we choose GTOS (General to specific). The three main information criteria are the AIC, SBC(BIC) and HQ. However the default lag length criteria is GTOS according to Pedroni (2013)’s suggestion. Like the information criteria, this starts with a large number of lags, but rather than minimizing across all choices for p, it does a sequence of tests for p vs p-1. Lags are dropped as long as they test insignificant. Other boxes specify some optional and self-explanatory inputs. </div><div><br /></div></span></div>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com2tag:blogger.com,1999:blog-6883247404678549489.post-86579187588266862322018-12-04T17:18:00.000-08:002018-12-05T07:54:22.729-08:00Nowcasting GDP on a Daily Basis<i><span style="font-family: "verdana" , sans-serif;">Author and guest blog by Michael Anthonisz, Queensland Treasury Corporation.</span></i><br /><i><span style="font-family: "verdana" , sans-serif; font-size: x-small;">In this blog post, Michael demonstrates the use of MIDAS in EViews to nowcast Australian GDP growth on a daily basis.</span></i><br /><i><span style="font-family: "verdana" , sans-serif; font-size: x-small;"><br /></span></i><span style="font-family: "verdana" , sans-serif;">"Nowcasts" are forecasts of the here and now ("now" + "forecast" = "nowcast"). They are forecasts of the </span><span style="font-family: "verdana" , sans-serif;">present, the near future or the recent past. 
Specifically, nowcasts allow for real-time tracking or </span><span style="font-family: "verdana" , sans-serif;">forecasting of a lower frequency variable based on other series which are released at a similar or higher </span><span style="font-family: "verdana" , sans-serif;">frequency.</span><br /><span style="font-family: "verdana" , sans-serif;"></span><br /><a name='more'></a><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;">For example, one could try to forecast the outcome for the current quarter GDP release using a </span><span style="font-family: "verdana" , sans-serif;">combination of daily, weekly, monthly and quarterly data. In this example, the nowcast could be updated </span><span style="font-family: "verdana" , sans-serif;">on a daily basis – the highest frequency of explanatory data – as new releases for the series being used to </span><span style="font-family: "verdana" , sans-serif;">explain GDP came in. That is, as the daily, weekly, monthly and quarterly data used to explain GDP is </span><span style="font-family: "verdana" , sans-serif;">released, the nowcast for current quarter GDP is updated in real-time on a daily basis.</span><br /><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;">The ability to update one's forecast incrementally in real-time in response to incoming information is an </span><span style="font-family: "verdana" , sans-serif;">attractive feature of nowcasting models. Forecasting in this manner will lower the likelihood of one's </span><span style="font-family: "verdana" , sans-serif;">forecasts becoming "stale". Indeed, nowcasts have been found to be more accurate:</span><br /><br /><ul><li><span style="font-family: "verdana" , sans-serif;">at short-term horizons.</span></li><li><span style="font-family: "verdana" , sans-serif;">as the period of interest (eg, the current quarter) goes on.</span></li><li><span style="font-family: "verdana" , sans-serif;">than traditional forecasting approaches at these horizons.</span></li></ul><div><span style="font-family: "verdana" , sans-serif;">Other key findings in relation to nowcasts are that:</span></div><div><ul><li><span style="font-family: "verdana" , sans-serif;">they also perform similarly to private sector forecasters who are able to also incorporate information in real-time.</span></li><li><span style="font-family: "verdana" , sans-serif;">there are mixed findings as to relative gains from including high frequency financial data.</span></li><li><span style="font-family: "verdana" , sans-serif;">"soft data"<a href="#1" name="top1"><sup>1</sup></a> is most useful early on in the nowcasting cycle and "hard data"<a href="#2" name="top2"><sup>2</sup></a> is of more use later on.</span></li></ul><div><span style="font-family: "verdana" , sans-serif;">There are a number of approaches that can be used to prepare a nowcast including:</span></div></div><div><ul><li><span style="font-family: "verdana" , sans-serif;">Bayesian vector autoregressions (for example, <a href="https://www.newyorkfed.org/medialibrary/media/research/staff_reports/sr830.pdf" target="_blank">Bok et al 2017</a>).</span></li><li><span style="font-family: "verdana" , sans-serif;">Factor-augmented autoregressive models (for example, <a href="https://bank.gov.ua/doccatalog/document?id=62251312" target="_blank">Grui & Lysenko, 2017</a>).</span></li><li><span style="font-family: "verdana" , sans-serif;">Mixed Frequency 
VARs (for example, <a href="http://dept.ku.edu/~empirics/Courses/Econ844/papers/Nowcasting%20GDP.pdf" target="_blank">Giannone, Reichlin & Small, 2008</a>).</span></li><li><span style="font-family: "verdana" , sans-serif;">MIDAS (Mixed Data Sampling) (for example,<a href="http://webspace.qmul.ac.uk/aferreira/jbes08.pdf" target="_blank"> Clements & Galvao, 2007</a>).</span></li><li><span style="font-family: "verdana" , sans-serif;">Accounting-based tracking models<a href="#3" name="top3"><sup>3</sup></a> (for example, <a href="https://www.frbatlanta.org/-/media/documents/research/publications/wp/2014/wp1407.pdf" target="_blank">Higgins, 2014</a><span id="goog_402796007"></span><a href="https://www.blogger.com/"></a><span id="goog_402796008"></span>).</span></li><li><span style="font-family: "verdana" , sans-serif;">Bridge equations<a href="#4" name="top4"><sup>4</sup></a> (for example, <a href="https://www.ecb.europa.eu/pub/conferences/shared/pdf/20180618_forecasting/Paper_Ferrara_et_al.pdf" target="_blank">Ferrara & Simoni, 2018</a>).</span></li></ul><div><span style="font-family: "verdana" , sans-serif;"></span><br /><div><span style="font-family: "verdana" , sans-serif;">Through its broad functionality EViews is able to facilitate the use of all of these approaches. For the purposes of this blog entry and in recognition of its availability from EViews 9.5 <a href="http://www.eviews.com/EViews9/ev95midas.html" target="_blank">onwards</a> as well as its ease of use, MIDAS regressions will be used to provide a daily nowcast of quarterly trend Australian real GDP growth<a href="#5" name="top5"><sup>5</sup></a>. MIDAS models are perfectly suited to handle the nowcasting problem, which at its essence, relates to how to use data for explanatory variables which are released at different frequencies to explain the dependent variable<a href="#6" name="top6"><sup>6</sup></a>.</span></div><span style="font-family: "verdana" , sans-serif;"><div><br /></div><div><div>In this example, the series used in the MIDAS model to nowcast GDP are not just regular economic or financial time series, however. To capture as broad a variety of influences on the dependent variable as possible, as well as to ensure a parsimonious specification, principal components analysis ("PCA") is used<a href="#7" name="top7"><sup>7</sup></a>. This allows us to extract a common trend from a large number of series. Using this approach will enable us to cut down on "noise" and hopefully use more "signal" to estimate GDP.</div></div><div><br /></div><div><div>The data series used to derive these common factors are compiled on a monthly and quarterly basis and are released in advance of, during and following the completion of the current quarter of interest with respect to GDP. The common factors are calculated at the lowest frequency of the underlying data (quarterly) and are complemented in the model by daily financial data which may have some explanatory power over the quarterly change in Australian GDP (for example, the trade weighted exchange rate and the three-year sovereign bond yield).</div></div><div><br /></div><div><div>An outline of the steps required to do this sort of MIDAS-based nowcast is below. 
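<br /><br />Before walking through those steps, here is a minimal sketch of the factor-extraction idea itself, using a group's principal components proc. The indicator names below are hypothetical placeholders rather than the actual series used in this post, and we believe that listing a single output name stores just the first (most principal) component:<br /><PRE><br /> ' hypothetical monthly indicators, assumed already standardized to Z-scores<br /> group g_labour z_employment z_unemployment z_hours z_job_ads<br /> ' save the first principal component of the group as a common "labour market" factor<br /> g_labour.makepcomp(cov=corr) labour_market<br /></PRE><br />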
Keep in mind the helpful <a href="http://www.eviews.com/help/helpintro.html#page/content/midas-MIDAS_Estimation_in_EViews.html" target="_blank">point and click</a> as well as <a href="http://www.eviews.com/help/helpintro.html#page/content/commandcmd-midas.html" target="_blank">command language </a>instructions published by EViews which provide more detail.</div></div><div><ul><li>Create separate tabs in the workfile which correspond to the different frequencies of underlying data you are using.</li><li>Import the underlying data and normalize it to Z-score form (that is, mean of zero and variance of one) <a href="https://www.researchgate.net/post/What_is_the_best_way_to_scale_parameters_before_running_a_Principal_Component_Analysis_PCA" target="_blank">before running the PCA</a>.</li><li>Have the common factors created from the PCA appear on the relevant tab in the workfile<a href="#8" name="top8"><sup>8</sup></a>.</li><li>Clean the data to get rid of any N/A values for data that has not yet been published.<a href="#9" name="top9"><sup>9</sup></a></li><li>Re-run the PCA to reflect that you now have data for the underlying series for the full sample period.</li></ul><div>It is important to note that the variable being nowcast must actually be forecast with the same periodicity as its release. In this instance, GDP is released quarterly so our forecasts of it will be quarterly as well. This means all the work at this stage of the estimation will be done on the quarterly page. We are aiming</div><div>to produce forecasts of a quarterly variable which are updated in a more real-time manner (that is, on a daily basis), but we are not actually producing a forecast of daily GDP.</div><div><br /></div><div>An illustration of the rolling process might make this clearer. For instance:</div></div><div><ul><li>Let's imagine it is currently 1 July 2018.</li><li>We’re interested in forecasting Q3 2018 GDP using one-period lags of GDP and the common factors estimated earlier via PCA. These are quarterly representations of conditions with respect to labour markets and capital investment as well as measures of current and future economic activity. We’ll also be using bond yields and the trade-weighted exchange rate, both of which are available on a daily basis.</li><li>In our MIDAS model, quarterly GDP is the dependent variable and the aforementioned other variables are independent variables. The model is estimated using historical data from Q2 1993 until Q2 2018 (as it is 1 July we have data to 30 June).</li><li>As we want to forecast Q3, and have data on our daily variables until the end of Q2 2018, we can specify the equation so that each quarter’s GDP growth is a function of the previous quarter’s outcomes for the quarterly variable and of (say) the last 45 days’ worth of values for bond yields and the exchange rate ending on the last day of the previous quarter.</li><li>Having estimated the model, we can use the 45 daily values for bond yields and the exchange rate from May to June 2018 to forecast Q3 GDP.</li><li>Now, assume the calendar has turned over and it is now 2 July 2018. We have one more observation for the daily series. We can update the forecast of GDP by estimating a new model on historical data that used 44 days from the previous quarter and the first day from the current quarter, and then forecast Q3 GDP.</li><li>Then, assume it is 3 July 2018. We can now update our forecast by estimating on 43 days of the previous quarter and the first 2 days from the current quarter. 
And so on.</li><li>We will end up with a forecast of quarterly GDP that is updated daily. That doesn't make it a forecast of daily GDP as it is a quarterly variable. We're just able to forecast it using current (now) data and update this forecast continuously on a daily basis.</li></ul><div><div>For our concrete example using Australian macroeconomic variables, we will estimate a MIDAS model where the dependent variable is the quarterly change in the trend measure of Australian real GDP.</div><div><br /></div><div>The independent variables of the model can be seen in Figure 1:</div></div></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-aPxGtXdpZOM/XAV2Ny8xb_I/AAAAAAAAAos/x6HWyaECIqE-_1o8eUlDtRD3PPCLH32EACPcBGAYYCw/s1600/variables.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1482" data-original-width="1384" height="640" src="https://4.bp.blogspot.com/-aPxGtXdpZOM/XAV2Ny8xb_I/AAAAAAAAAos/x6HWyaECIqE-_1o8eUlDtRD3PPCLH32EACPcBGAYYCw/s640/variables.png" width="595" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 1: Independent variables used in MIDAS estimation (click to enlarge)</td></tr></tbody></table><div><div>All data are sourced from the Bloomberg and Thomson Reuters Datastream databases, accessible via EViews.</div><div><br /></div><div>The specific equation in EViews is estimated using the Equation object with the method set to MIDAS, and with variable names of:</div><div><ul><li>gdp_q_trend_3m_chg = quarterly change in the trend measure of Australian GDP.</li><li>gdp_q_trend_3m_chg(-1) = one quarter lag of the quarterly change in the trend measure of Australian GDP.</li><li>activity_current(-1) = one quarter lag of a PCA derived factor representing current economic activity in Australia.</li><li>activity_leading(-1) = one quarter lag of a PCA derived factor representing future economic activity in Australia.</li><li>investment(-1) = one quarter lag of a PCA derived factor representing capital investment in Australia.</li><li>labour_market(-1) = one quarter lag of a PCA derived factor representing labour market conditions in Australia.</li><li>au_midas_daily\atwi_final(-1) = the lag of the trade-weighted Australia Dollar where this data is located on a page with a daily frequency.</li><li>au_midas_daily\gacgb3_final(-1) = the lag of the three-year Australian sovereign bond yield where this data is located on a page with a daily frequency.</li></ul><div><div>In this example we will estimate the dependent variable using historical data from Q2 1993 until Q2 2018. From this we can then do forecasts for the current quarter (in this case Q3 2018) whereby the dependent variable is a function of the previous quarter’s outcomes for the quarterly independent variables and of the last 45 days’ worth of values for bond yields and the exchange rate. 
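<div><br /></div><div>To make the structure concrete, the setup just described corresponds to a stylized MIDAS regression of the form below. This is a generic representation for intuition only; the notation is ours rather than EViews', and the normalized exponential Almon weights shown are just one of several lag-polynomial choices available in EViews. $$ \Delta gdp_t = \beta_0 + \beta_1 \Delta gdp_{t-1} + \gamma^{\prime} F_{t-1} + \lambda_{1} \sum_{j=0}^{44} w_j(\theta^{twi})\, twi_{t-1,j} + \lambda_{2} \sum_{j=0}^{44} w_j(\theta^{yld})\, yld_{t-1,j} + \varepsilon_t, \qquad w_j(\theta) = \frac{\exp(\theta_1 j + \theta_2 j^{2})}{\sum_{k=0}^{44} \exp(\theta_1 k + \theta_2 k^{2})}, $$ where $F_{t-1}$ stacks the four lagged quarterly PCA factors, and $twi_{t-1,j}$ and $yld_{t-1,j}$ denote the $j$-th most recent of the 45 daily observations on the trade-weighted exchange rate and the three-year bond yield available at the end of quarter $t-1$.</div>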
The MIDAS equation estimation window that reflects this would be as follows:</div></div></div></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://4.bp.blogspot.com/-Hf45IdrNSyk/XAV7DRHdxqI/AAAAAAAAApE/1nx9yXWAnmslpNpAWk6RB0uftBaqpANTgCPcBGAYYCw/s1600/EstDlg.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="459" data-original-width="460" height="397" src="https://4.bp.blogspot.com/-Hf45IdrNSyk/XAV7DRHdxqI/AAAAAAAAApE/1nx9yXWAnmslpNpAWk6RB0uftBaqpANTgCPcBGAYYCw/s400/EstDlg.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 2: Estimation specification (click to enlarge)</td></tr></tbody></table><div><br /></div><div>Running the MIDAS model results in the following estimation output:</div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-ySkBQDo7WJs/XAV-AR2CezI/AAAAAAAAApc/BjSvBk2wGO0rAIN01VbNH5fSPad0yL9EQCPcBGAYYCw/s1600/EstOut.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="692" data-original-width="540" height="640" src="https://2.bp.blogspot.com/-ySkBQDo7WJs/XAV-AR2CezI/AAAAAAAAApc/BjSvBk2wGO0rAIN01VbNH5fSPad0yL9EQCPcBGAYYCw/s640/EstOut.png" width="497" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 3: Estimation output (click to enlarge)</td></tr></tbody></table><div><div>This individual estimation gives us a single forecast for GDP based upon the most current data available. Specifically, this estimation uses data up to:</div><div><ul><li>2018Q2 for our dependent variable.</li><li>2018Q1 for our quarterly independent variables (since they are all lagged one period).</li><li>May 30th for our daily independent variables (a one day lag from the last day of Q2). 
Also note that since we are using 45 daily periods for each quarter, the 2018Q2 data point is estimated using data from March 29th - May 30th (we are dealing with regular 5-day data).</li></ul></div><div>From this equation we can then produce a forecast of the 2018Q3 value of GDP by clicking on the Forecast button:</div></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-PmC6u0FmJdw/XAajCaYzEoI/AAAAAAAAAp0/4j-mI9JB6Fk4MDRV88JgQZqn39DlNp9lgCPcBGAYYCw/s1600/ForcDlg.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="601" data-original-width="630" height="381" src="https://1.bp.blogspot.com/-PmC6u0FmJdw/XAajCaYzEoI/AAAAAAAAAp0/4j-mI9JB6Fk4MDRV88JgQZqn39DlNp9lgCPcBGAYYCw/s400/ForcDlg.PNG" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 4: Forecast dialog (click to enlarge)</td></tr></tbody></table><div><div>This single quarter forecast uses data from:</div><div><ul><li>2018Q2 for our quarterly independent variables (since they are all lagged one period).</li><li>July 30th 2018 - September 28th 2018 for our daily independent variables (45 days ending on the last day of Q3 2018 - September 29th/30th are a weekend, so not included in our workfile).</li></ul></div><div>To produce an updated forecast the following day, we could re-estimate our equation using the same data, but with the daily independent variables shifted forwards one day (removing the one day lag on their specification), and then re-forecasting.</div><div><br /></div><div>Or, if we wanted an historical view on how our forecasts would have performed previously, we can re-estimate for the previous day (shifting our daily variables back by one day by increasing their lag to 2) and then re-forecast.</div><div><br /></div><div>Indeed we could repeat the historical procedure going back each day for a number of years, giving us a series of daily updated forecast values. Performing this action manually is a little cumbersome, but an EViews program can make the task simple. 
A rough example of such a program may be downloaded <a href="http://www.eviews.com/blog/AusMIDAS/midasprg.prg">here</a>.</div><div><br /></div><div>Once the series of daily forecasts is created, you can produce a good picture of the accuracy of this procedure:</div></div><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://2.bp.blogspot.com/-dPpqt9h4axc/XAavEyJkOQI/AAAAAAAAAqM/6Nkv-cBXctQnfKyVei8wvq4It8H7KG-eQCPcBGAYYCw/s1600/ForcGraph.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="676" data-original-width="993" height="434" src="https://2.bp.blogspot.com/-dPpqt9h4axc/XAavEyJkOQI/AAAAAAAAAqM/6Nkv-cBXctQnfKyVei8wvq4It8H7KG-eQCPcBGAYYCw/s640/ForcGraph.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Figure 5: Daily updated forecast of Australian GDP Trend (click to expand)</td></tr></tbody></table><div><br /></div></span></div></div> <hr width="80%"><p><span class="Apple-style-span" style="font-size: x-small;"><br /><a name="1"><b>1 </b></a>Such as consumer or business surveys<a href="#top1"><sup>↩</sup></a><br /><a name="2"><b>2 </b></a>Such a retail spending, housing or labour market data<a href="#top2"><sup>↩</sup></a><br /><a name="3"><b>3 </b></a>As GDP, for example, is essentially an accounting identity that represents the sum of different income, expenditure or production measures, it can be calculated using a ‘bottom-up’ approach in which series that proxy for the various components of GDP are used to construct an estimate of it using an accounting type approach.<a href="#top3"><sup>↩</sup></a><br /><a name="4"><b>4 </b></a>Bridge equations are regressions which relate low frequency variables (e.g. quarterly GDP) to higher frequency variables (eg, the unemployment rate) where the higher frequency observations are aggregated to the quarterly frequency. It is often the case that some but not all of the higher frequency variables are available at the end of the quarter of interest. Therefore, the monthly variables which aren’t as yet available are forecasted using auxiliary models (eg, ARIMA). <a href="#top4"><sup>↩</sup></a><br /><a name="5"><b>5 </b></a>Papers using a daily frequency in mixed frequency regression analyses include <a href="https://www.dept.aueb.gr/sites/default/files/Kourtellos24-5-12.pdf">Andreou, Ghsels & Kourtellos, 2010</a>, <a href="https://ink.library.smu.edu.sg/cgi/viewcontent.cgi?article=1948&context=soe_research">Tay, 2006</a> and <a href="https://onlinelibrary.wiley.com/doi/pdf/10.1111/1475-4932.12181">Sheen, Truck & Wang, 2015.</a><a href="#top5"><sup>↩</sup></a><br /><a name="6"><b>6 </b></a>MIDAS models use distributed lags of explanatory variables which are sampled at an equivalent or higher frequency to the dependent variable. A distributed lag polynomial is used to ensure a parsimonious specification. There are different types of lag polynomial structures available in EViews. 
<a href="http://uu.diva-portal.org/smash/get/diva2:783891/FULLTEXT01.pdf">Lindgren & Nilson, 2015</a> discuss the forecasting performance of the different polynomial lag structures.<a href="#top6"><sup>↩</sup></a><br /><a name="7"><b>7 </b></a>See <a href="https://sites.google.com/site/econometricsacademy/econometrics-models/principal-component-analysis">here</a> and <a href="http://blog.eviews.com/2018/10/principal-component-analysis-part-i.html">here</a> for background and <a href="http://www.eviews.com/help/helpintro.html#page/content/groups-Principal_Components.html">here</a> and <a href="http://blog.eviews.com/2018/11/principal-component-analysis-part-ii.html">here</a> for how to do in EViews.<a href="#top7"><sup>↩</sup></a><br /><a name="8"><b>8 </b></a>For example, underlying data on a monthly and quarterly basis will generate a common factor that is on a quarterly basis. This should therefore go on a quarterly workfile tab.<a href="#top8"><sup>↩</sup></a><br /><a name="9"><b>9 </b></a>For example, if there was an NA then you could choose to use the previous value for the latest date instead. For example, X_full series = @recode(X =na, X(-1), X)<a href="#top9"><sup>↩</sup></a><br /></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com17tag:blogger.com,1999:blog-6883247404678549489.post-4335981619631134492018-11-26T14:35:00.000-08:002018-11-28T10:11:15.117-08:00Principal Component Analysis: Part II (Practice)<script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"></script> <span style="font-family: "verdana" , sans-serif;"> In <a href="http://blog.eviews.com/2018/10/principal-component-analysis-part-i.html">Part I</a> of our series on <b>Principal Component Analysis</b> (PCA), we covered a theoretical overview of fundamental concepts and disucssed several inferential procedures. Here, we aim to complement our theoretical exposition with a step-by-step practical implementation using EViews. In particular, we are motivated by a desire to apply PCA to some dataset in order to identify its most important features and draw any inferential conclusions that may exist. We will proceed in the following steps:<a name='more'></a> <ol> <li> Summarize and describe the dataset under consideration. <li> Extract all principal (important) directions (features). <li> Quantify how much variation (information) is explained by each principal direction. <li> Determine how much variation each variable contributes in each principal direction. <li> Reduce data dimensionality. <li> Identify which variables are correlated and which correlations are more principal. <li> Identify which observations are correlated with which variables. </ol> The links to the workfile and program file can be found at the end.</br></br> <h3>Principal Component Analysis of US Crime Data</h3> We will use PCA to study US crime data. In particular, our dataset summarizes the number of arrests per 100,000 residents in each of the 50 US states in 1973. 
The data contains four variables, three of which pertain to arrests associated with (and naturally named) <b>MURDER</b>, <b>ASSAULT</b>, and <b>RAPE</b>, whereas the last, named <b>URBANPOP</b>, contains the percentage of the population living in urban centers.</br></br> <h4>Data Summary</h4> To understand our data, we will first create a <b>group</b> object with the variables of interest. We can do this by selecting all four variables in the workfile by clicking on each while holding down the <b>Ctrl</b> button, right-clicking on any of the highlighted variables, moving the mouse pointer over <b>Open</b> in the context menu, and finally clicking on <b>as Group</b>. This will open a group object in a spreadsheet with the four variables placed in columns. The steps are reproduced in Figures 1a and 1b.</br></br> <table> <tbody> <tr> <td> <!-- :::::::::: FIGURE 1A :::::::::: --> <center> <a href="https://2.bp.blogspot.com/-vLsLi-3fz_Y/W_wusCZp91I/AAAAAAAAAmQ/cEiij6CUgYQ1xGLEQcW1xL6E5nXz8nmpgCLcBGAs/s1600/pcademo1.jpg"><img src="https://2.bp.blogspot.com/-vLsLi-3fz_Y/W_wusCZp91I/AAAAAAAAAmQ/cEiij6CUgYQ1xGLEQcW1xL6E5nXz8nmpgCLcBGAs/s1600/pcademo1.jpg" title="Open Group" width="320" height="auto" /></a><br /><br /> </center> <!-- :::::::::: FIGURE 1A :::::::::: --> </td> <td> <!-- :::::::::: FIGURE 1B :::::::::: --> <center> <a href="https://2.bp.blogspot.com/-rGbiwuah8PI/W_wuujGRY_I/AAAAAAAAAmw/MGBP75MEEpg0ORZw0zE9nYfygv71Lt-xwCLcBGAs/s1600/pcademo2.jpg"><img src="https://2.bp.blogspot.com/-rGbiwuah8PI/W_wuujGRY_I/AAAAAAAAAmw/MGBP75MEEpg0ORZw0zE9nYfygv71Lt-xwCLcBGAs/s1600/pcademo2.jpg" title="Group Window" width="320" height="auto" /></a><br /><br /> </center> <!-- :::::::::: FIGURE 1B :::::::::: --> </td> </tr> <tr> <td><center><small>Figure 1A: Open Group</small><br /><br /></center></td> <td><center><small>Figure 1B: Group Window</small><br /><br /></center></td> <br /><br /> </tr> </tbody> </table> From here, we can derive the usual summary statistics by clicking on <b>View</b> in the group window, moving the mouse over <b>Descriptive Stats</b> and clicking on <b>Common Sample</b>. This produces a spreadsheet with various statistics of interest. We reproduce the steps and output in Figures 2a and 2b.</br></br> <table> <tbody> <tr> <td> <!-- :::::::::: FIGURE 2A :::::::::: --> <center> <a href="https://4.bp.blogspot.com/-12NTMMcAqAs/W_wuvmtfLeI/AAAAAAAAAns/OcHBLa3PhxYjbbp4nwnSM0WxtYgtInXcgCPcBGAYYCw/s1600/pcademo3.jpg"><img src="https://4.bp.blogspot.com/-12NTMMcAqAs/W_wuvmtfLeI/AAAAAAAAAns/OcHBLa3PhxYjbbp4nwnSM0WxtYgtInXcgCPcBGAYYCw/s1600/pcademo3.jpg" title="Descriptive Stats Menu" width="320" height="auto" /></a><br /><br /> </center> <!-- :::::::::: FIGURE 2A :::::::::: --> </td> <td> <!-- :::::::::: FIGURE 2B :::::::::: --> <center> <a href="https://4.bp.blogspot.com/-fbnQuF4naTA/W_wuv3uOCTI/AAAAAAAAAno/bQasdqH9EbwEAibaSxuk_-yaynSqfE-EwCPcBGAYYCw/s1600/pcademo4.jpg"><img src="https://4.bp.blogspot.com/-fbnQuF4naTA/W_wuv3uOCTI/AAAAAAAAAno/bQasdqH9EbwEAibaSxuk_-yaynSqfE-EwCPcBGAYYCw/s1600/pcademo4.jpg" title="Descriptive Stats Output" width="320" height="auto" /></a><br /><br /> </center> <!-- :::::::::: FIGURE 2B :::::::::: --> </td> </tr> <tr> <td><center><small>Figure 2A: Descriptive Stats Menu</small><br /><br /></center></td> <td><center><small>Figure 2B: Descriptive Stats Output</small><br /><br /></center></td> <br /><br /> </tr> </tbody> </table> We can also plot each of the series to get a better visual sense for the data. 
In particular, from the group window, click on <b>View</b> and click on <b>Graph</b>. This brings up the <b>Graph Options</b> window. Here, from the <b>Multiple Series</b> dropdown menu, select <b>Multiple Graphs</b> and click on <b>OK</b>. We summarize the sequence in Figures 3a and 3b.</br></br> <table> <tbody> <tr> <td> <!-- :::::::::: FIGURE 3A :::::::::: --> <center> <a href="https://2.bp.blogspot.com/-UgCZXK2VOMQ/W_wuvuaCKDI/AAAAAAAAAnw/N6f03xO84FwC7XWzYsufGF8HH7OKsKaNACPcBGAYYCw/s1600/pcademo5.jpg"><img src="https://2.bp.blogspot.com/-UgCZXK2VOMQ/W_wuvuaCKDI/AAAAAAAAAnw/N6f03xO84FwC7XWzYsufGF8HH7OKsKaNACPcBGAYYCw/s1600/pcademo5.jpg" title="Graph Options" width="320" height="auto" /></a><br /><br /> </center> <!-- :::::::::: FIGURE 3A :::::::::: --> </td> <td> <!-- :::::::::: FIGURE 3B :::::::::: --> <center> <a href="https://2.bp.blogspot.com/-vmjw95PwcJo/W_w_-fncmGI/AAAAAAAAAoU/zKznvyjfHxUGPXgI87XOKFW9h7H1ZwqugCPcBGAYYCw/s1600/pcademo6.jpg"><img src="https://2.bp.blogspot.com/-vmjw95PwcJo/W_w_-fncmGI/AAAAAAAAAoU/zKznvyjfHxUGPXgI87XOKFW9h7H1ZwqugCPcBGAYYCw/s1600/pcademo6.jpg" title="Multiple Graphs" width="320" height="auto" /></a><br /><br /> </center> <!-- :::::::::: FIGURE 3B :::::::::: --> </td> </tr> <tr> <td><center><small>Figure 3A: Graph Options</small><br /><br /></center></td> <td><center><small>Figure 3B: Multiple Graphs</small><br /><br /></center></td> <br /><br /> </tr> </tbody> </table> At last, we can get a sense for information redundancy (see section <i>Variance Decomposition</i> in <a href="http://blog.eviews.com/2018/10/principal-component-analysis-part-i.html">Part I</a> of this series) by studying correlation patterns. In this regard, we can produce a correlation matrix by clicking on <b>View</b> in the group window and clicking on <b>Covariance Analysis...</b>. This opens a window with further options. Here, deselect (click) the checkbox next to <b>Covariance</b> and select (click) the box next to <b>Correlation</b>. This ensures that EViews will only produce the correlation matrix without any other statistics. Furthermore, in the <b>Layout</b> dropbox, select <b>Single table</b>, and finally click on <b>OK</b>. 
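</br></br> For readers who prefer the command line, the same group can be created and broadly the same summary views produced with a few commands. The snippet below is a sketch using standard group view commands in place of the dialogs just described; the exact view names and options should be checked against the group object reference:</br> <PRE><br /> ' create the group and reproduce the summaries from the command line<br /> group crime murder assault rape urbanpop  ' group with all 4 variables<br /> crime.stats                               ' descriptive statistics (common sample)<br /> crime.line(m)                             ' line graph of each series in its own panel<br /> crime.cor                                 ' correlation matrix<br /> </PRE>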
Figures 4a and 4b reproduce these steps.</br></br> <table> <tbody> <tr> <td> <!-- :::::::::: FIGURE 4A :::::::::: --> <center> <a href="https://3.bp.blogspot.com/-nWgmG7204lA/W_wuv_6KYmI/AAAAAAAAAnk/82lJnwReVzwSdgThQiUGHy_06ioWcLA2QCPcBGAYYCw/s1600/pcademo7.jpg"><img src="https://3.bp.blogspot.com/-nWgmG7204lA/W_wuv_6KYmI/AAAAAAAAAnk/82lJnwReVzwSdgThQiUGHy_06ioWcLA2QCPcBGAYYCw/s1600/pcademo7.jpg" title="Covariance Analysis" width="320" height="auto" /></a><br /><br /> </center> <!-- :::::::::: FIGURE 4A :::::::::: --> </td> <td> <!-- :::::::::: FIGURE 4B :::::::::: --> <center> <a href="https://4.bp.blogspot.com/-FUOB1S3ayG8/W_wuwVlKrUI/AAAAAAAAAn8/Rc2r51LaMDIOpU6fXUp9pX8e-QsPhRHXACPcBGAYYCw/s1600/pcademo8.jpg"><img src="https://4.bp.blogspot.com/-FUOB1S3ayG8/W_wuwVlKrUI/AAAAAAAAAn8/Rc2r51LaMDIOpU6fXUp9pX8e-QsPhRHXACPcBGAYYCw/s1600/pcademo8.jpg" title="Correlation Table" width="320" height="auto" /></a><br /><br /> </center> <!-- :::::::::: FIGURE 4B :::::::::: --> </td> </tr> <tr> <td><center><small>Figure 4A: Covariance Analysis</small><br /><br /></center></td> <td><center><small>Figure 4B: Correlation Table</small><br /><br /></center></td> <br /><br /> </tr> </tbody> </table> A quick interpretation of the correlation structure indicates that murder is highly correlated with assault, whereas the latter exhibits a strong positive correlation with rape. Moreover, whereas murder is nearly uncorrelated with larger urban centers, among the three causes for arrest, rape generally favours larger communities. Intuitively, this is in line with conventional wisdom. Murders are rarely observed on professional levels and typically involve assault as a precursor. Furthermore, due to higher costs of crime visibility and cleanup, murder generally does not favour larger population areas where police presence and witness visibility is generally more pronounced. On the other hand, rape favours larger urban centers due to the fact that there are simply more people and the cost to covering or denying the crime is notoriously very low. Furthermore, victims of rape in smaller communities are typically shamed into staying quiet since connection circles are naturally tighter in such surroundings.</br></br> <h3>Principal Component Analysis of Crime Data</h3> Doing PCA in EViews is trivial. From our group object window, click on <b>View</b> and click on <b>Principal Components...</b>. This opens the main PCA dialog. 
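</br></br> (Equivalently, we believe the same analysis can be launched from the command line via the group's <i>pcomp</i> proc — for example, <PRE> crime.pcomp(cov=corr) </PRE> which should display the table output discussed below; here, though, we will work through the dialog.)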
See Figure 5a and 5b below.</br></br> <table> <tbody> <tr> <td> <!-- :::::::::: FIGURE 5A :::::::::: --> <center> <a href="https://4.bp.blogspot.com/-sPzh2S3skJM/W_wusI_ZpxI/AAAAAAAAAn8/9v0jUSNfYkUykUDHdg-wDZy3UJoDvd01wCPcBGAYYCw/s1600/pcademo12.jpg"><img src="https://4.bp.blogspot.com/-sPzh2S3skJM/W_wusI_ZpxI/AAAAAAAAAn8/9v0jUSNfYkUykUDHdg-wDZy3UJoDvd01wCPcBGAYYCw/s1600/pcademo12.jpg" title="Initiating the PCA dialog" width="320" height="auto" /></a><br /><br /> </center> <!-- :::::::::: FIGURE 5A :::::::::: --> </td> <td> <!-- :::::::::: FIGURE 5B :::::::::: --> <center> <a href="https://1.bp.blogspot.com/-0uK7YlGUkJ0/W_wuwfUubDI/AAAAAAAAAn4/SImjRV0TleUhGnXKo3fXbR5Sr2g0Kf3EwCPcBGAYYCw/s1600/pcademo9.jpg"><img src="https://1.bp.blogspot.com/-0uK7YlGUkJ0/W_wuwfUubDI/AAAAAAAAAn4/SImjRV0TleUhGnXKo3fXbR5Sr2g0Kf3EwCPcBGAYYCw/s1600/pcademo9.jpg" title="Main PCA dialog" width="320" height="auto" /></a><br /><br /> </center> <!-- :::::::::: FIGURE 5B :::::::::: --> </td> </tr> <tr> <td><center><small>Figure 5A: Initiating the PCA dialog</small><br /><br /></center></td> <td><center><small>Figure 5B: Main PCA Dialog</small><br /><br /></center></td> <br /><br /> </tr> </tbody> </table> From here, EViews offers users the ability to apply several tools and protocols readily encountered in the literature on PCA.</br></br> <h4>Summary of Fundamentals</h4> As a first step, we are interested in summarizing PCA fundamentals. In particular, we seek an overview of eigenvalues and eigenvectors that result from applying the principal component decomposition to the covariance or correlation matrix associated with our variables of interest. To do so, consider the <b>Display</b> group, and select <b>Table</b>. The latter produces three tables summarizing the covariance (correlation) matrix, and the associated eigenvectors and eigenvalues.</br></br> Associated to this output are several important options under the <b>Component selection</b> group. These include: <ul> <li> <b>Maximum number</b>: This defaults to the theoretical maximum number of eigenvalues possible, which is the total number of variables in the group under consideration. In our case, this number is 4. <li> <b>Minimum eigenvalue</b>: This defaults to 0. Nevertheless, selecting a positive value requests that all eigenvectors associated with eigenvalues less than this value are not displayed. <li> <b>Cumulative proportion</b>: This defaults to 1. Choosing a value $\alpha < 1$ however, requests that only the most principal $ k $ eigenvalues and eigenvectors associated with explaining $ \alpha*100 \% $ of the variation are retained. Naturally, choosing $ \alpha=1 $ requests that all eigenvalues are displayed. See section <i>Dimension Reduction</i> in <a href="http://blog.eviews.com/2018/10/principal-component-analysis-part-i.html">Part I</a> of this series for further details. </ul> Since we are interested in a global summary, we will leave the <b>Component selection</b> options at their default values.</br></br> Furthermore, consider momentarily the <b>Calculation</b> tab. Here, the <b>Type</b> dropdown offers the choice to apply the principal component decomposition either to the correlation or covariance matrix. For details, see sections <i>Variance Decomposition</i> and <i>Change of Basis</i> in <a href="http://blog.eviews.com/2018/10/principal-component-analysis-part-i.html">Part I</a> of this series. The choice essentially reduces to whether or not the variables under consideration exhibit similar scales. 
In other words, if variances of the underlying variables of interest are similar, then conducting PCA on the covariance matrix is certainly justified. Nevertheless, if the variances are widely different, then selecting the correlation matrix is more appropriate if interpretability and comparability are desired. EViews errs on the side of caution and defaults to using the correlation matrix. Since the table of summary statistics we produced in Figure 2b clearly shows a lack of uniformity in standard deviations across the four variables of interest, we will stick with the default and use the correlation matrix. Hit <b>OK</b>.</br></br> <!-- :::::::::: FIGURE 6 :::::::::: --> <center> <a href="https://3.bp.blogspot.com/-Y8HLv3vXMNU/W_wusIBKn9I/AAAAAAAAAn0/rAFDt602C6YxX3iYKMEeTVo0ScIZmli4ACPcBGAYYCw/s1600/pcademo13.jpg"><img src="https://3.bp.blogspot.com/-Y8HLv3vXMNU/W_wusIBKn9I/AAAAAAAAAn0/rAFDt602C6YxX3iYKMEeTVo0ScIZmli4ACPcBGAYYCw/s1600/pcademo13.jpg" title="PCA Table Output" width="320" height="auto" /></a><br /><br /> <small>Figure 6: PCA Table Output</small><br /><br /> </center> <!-- :::::::::: FIGURE 6 :::::::::: --> The resulting output, which is summarized in Figure 6 above, consists of three tables. The first table summarizes the information on eigenvalues. The latter are sorted in order of principality (importance), measured as the proportion of information explained by each principal direction. Refer to section <i>Principal Directions</i> in <a href="http://blog.eviews.com/2018/10/principal-component-analysis-part-i.html">Part I</a> of this series for more details. In particular, we see that the first principal direction explains roughly 62% of the information contained in the underlying correlation matrix, the second, roughly 25%, and so on. Furthermore, the cumulative proportion of information explained by the first two principal directions is roughly 87% (62% + 25%). In other words, if dimensionality reduction is desired, our analysis indicates that we can halve the underlying dimensionality of the problem from 4 to 2, while retaining nearly 90% of the original information. This is evidently a profitable trade-off. For theoretical details, see section <i>Dimension Reduction</i> in <a href="http://blog.eviews.com/2018/10/principal-component-analysis-part-i.html">Part I</a> of this series. At last, observe that EViews reports that the average of the 4 eigenvalues is 1. This will in fact always be the case when extracting eigenvalues from a correlation matrix.</br></br> The second (middle) table summarizes the eigenvectors associated with each of the principal eigenvalues. Naturally, the eigenvectors are also arranged in order of principality. Furthermore, whereas the eigenvalues highlight how much of the overall information is extracted in each principal direction, the eigenvectors reveal how much weight each variable has in each direction.</br></br> Recall from <a href="http://blog.eviews.com/2018/10/principal-component-analysis-part-i.html">Part I</a> of this series that all eigenvectors have length unity. Accordingly, the relative importance of any variable in a given principal direction is effectively the proportion of the eigenvector length (unity) attributed to that variable. For instance, in the case of the first eigenvector, $ [0.535899, 0.583184, 0.543432, 0.278191]^{\top} $, <b>MURDER</b> accounts for $ 0.535899^{2} \times 100\% \approx 28.72\% $ of the overall direction length. Similarly, <b>ASSAULT</b> accounts for roughly 34.01% of the direction, and <b>RAPE</b> contributes roughly 29.53%. Evidently, the least important variable in the first principal direction is <b>URBANPOP</b>, which accounts for only about 7.74% of the direction length.
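</br></br> As a quick check on this arithmetic, recall that each eigenvector has unit length, so the squared entries must sum to exactly one and the four shares above exhaust the direction length: $$ 0.535899^{2} + 0.583184^{2} + 0.543432^{2} + 0.278191^{2} \approx 0.2872 + 0.3401 + 0.2953 + 0.0774 = 1. $$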
</br></br> On the other hand, in the second principal direction, it is <b>URBANPOP</b> that carries most weight, contributing $ 0.872806^{2} \times 100\% \approx 76.18\% $ to the direction length. Accordingly, if feature extraction is the goal, it is clear (and rather obvious) that the first principal direction is roughly equally dominated by <b>MURDER</b>, <b>ASSAULT</b>, and <b>RAPE</b>, whereas the second principal direction is almost entirely governed by <b>URBANPOP</b>. For a theoretical exposition, see section <i>Principal Components</i> in <a href="http://blog.eviews.com/2018/10/principal-component-analysis-part-i.html">Part I</a> of this series.</br></br> At last, the third table is just the correlation matrix to which the eigen-decomposition is applied. The latter, while important, is provided only as a reference.</br></br> <h4>Eigenvalue Plots and Dimensionality</h4> Now that we have a rough picture of PCA fundamentals associated with our dataset, it is natural to ask whether we can proceed with dimensionality reduction in a more formal manner. One such way (albeit arbitrary, but widely popular) is to look at several eigenvalue plots and visually identify how many eigenvalues to retain.</br></br> From the previous PCA output, click again on <b>View</b>, then <b>Principal Components...</b>, and select <b>Eigenvalue Plots</b> under the <b>Display</b> group. This is summarized in Figure 7 below.</br></br> <!-- :::::::::: FIGURE 7 :::::::::: --> <center> <a href="https://3.bp.blogspot.com/-ehsCXXsh38E/W_wusvmkQGI/AAAAAAAAAns/xxGhVsV3uKoHyO9kejwn2TSh9UtWTJZwQCPcBGAYYCw/s1600/pcademo14.jpg"><img src="https://3.bp.blogspot.com/-ehsCXXsh38E/W_wusvmkQGI/AAAAAAAAAns/xxGhVsV3uKoHyO9kejwn2TSh9UtWTJZwQCPcBGAYYCw/s1600/pcademo14.jpg" title="PCA Dialog: Eigenvalue Plots" width="320" height="auto" /></a><br /><br /> <small>Figure 7: PCA Dialog: Eigenvalue Plots</small><br /><br /> </center> <!-- :::::::::: FIGURE 7 :::::::::: --> Here, EViews offers several graphical representations for the underlying eigenvalues. These include the scree plot, the plot of differences between successive eigenvalues, and the plot of the cumulative proportion of information associated with the first $ k $ eigenvalues. Go ahead and select all three. As before, we will leave the default values under the <b>Component Selection</b> group. Hit <b>OK</b>. Figure 8 summarizes the output.</br></br> <!-- :::::::::: FIGURE 8 :::::::::: --> <center> <a href="https://3.bp.blogspot.com/-rT3soZNWiPQ/W_wuspsZLcI/AAAAAAAAAnw/yQlwxrPW9jMaCpMpz8GP3aU5lUprK2gcQCPcBGAYYCw/s1600/pcademo15.jpg"><img src="https://3.bp.blogspot.com/-rT3soZNWiPQ/W_wuspsZLcI/AAAAAAAAAnw/yQlwxrPW9jMaCpMpz8GP3aU5lUprK2gcQCPcBGAYYCw/s1600/pcademo15.jpg" title="Eigenvalue Plots Output" width="320" height="auto" /></a><br /><br /> <small>Figure 8: Eigenvalue Plots Output</small><br /><br /> </center> <!-- :::::::::: FIGURE 8 :::::::::: --> EViews now produces three graphs. The first is the scree plot - a line graph of eigenvalues arranged in order of principality. Superimposed on this graph is a red dotted horizontal line with a value equal to the average of the eigenvalues, which, as we mentioned earlier, in our case is 1. 
The idea here is to look for a kink point, or an elbow, and retain all eigenvalues, and by extension their associated eigenvectors, that form the first portion of the kink, and discard the rest. From the plot, it is evident that a kink occurs at the 2nd eigenvalue, indicating that we should retain the first two eigenvalues.</br></br> A slightly more numeric approach discards all eigenvalues significantly below the eigenvalue average. Referring to the first table in Figure 6, we see that the average of the eigenvalues is 1, and the 2nd eigenvalue is in fact just below this cutoff. Since the 2nd value is so close to this average, while using the visual support we mentioned in the previous paragraph, it is safe to conclude that the scree plot analysis indicates that only the first two eigenvalues ought to be retained.</br></br> The second graph plots a line graph of the differences between successive eigenvalues. Superimposed on this graph is another horizontal line, this time with a value equal to the average of the differences of successive eigenvalues. Although EViews does not report this number, using the top table in Figure 6, it is not difficult to show that the average in question is $ (1.490476+0.633202+0.183133)/3 = 0.768937 $. The idea here is to retain all eigenvalues whose differences are above this threshold. Clearly, only the first two eigenvalues satisfy this criterion.</br></br> The final graph is a line graph of the cumulative proportion of information explained by successive principal eigenvalues. Superimposed on this graph is a line with a slope equal to the average of the eigenvalues, namely 1. The idea here is to retain those eigenvalues that form segments of the cumulative curve whose slopes are at least as steep as the line with slope 1. In our case, only two eigenvalues seem to form such a segment: eigenvalues 1 and 2.</br></br> All three graphical approaches indicate that one ought to retain the first two eigenvalues and their associated eigenvectors. There is however an entirely data driven methodology adapted from Bai and Ng (2002). We discussed this approach in section <i>Dimension Reduction</i> in <a href="http://blog.eviews.com/2018/10/principal-component-analysis-part-i.html">Part I</a> of this series. Nevertheless, EViews currently doesn't support its implementation via dialogs and it must be programmed manually. In this regard, we temporarily move away from our dialog-based exposition, and offer a code snippet which implements the aforementioned protocol.</br></br> <PRE><br /> ' --- Bai and Ng (2002) Protocol ---<br /> group crime murder assault rape urbanpop ' create group with all 4 variables<br /> !obz = murder.@obs ' get number of observations<br /> !numvar = @columns(crime) ' get number of variables<br /> equation eqjr ' equation object to hold regression<br /> matrix(!numvar, !numvar) SSRjr' matrix to store SSR from each regression eqjr<br /><br /> crime.makepcomp(cov=corr) s1 s2 s3 s4 ' get all score series<br /><br /> for !j = 1 to !numvar<br /> for !r = 1 to !numvar<br /> %scrstr = "" ' holds score specification to extract<br /><br /> ' generate string to specify which scores to use in regression<br /> for !r2 = 1 to !r<br /> %scrstr = %scrstr + " s" + @str(!r2)<br /> next<br /><br /> eqjr.ls crime(!j) {%scrstr} ' estimate regression<br /><br /> SSRjr(!j, !r) = (eqjr.@ssr)/!obz ' take average of SSR<br /> next<br /> next<br /> ' get column means of SSRjr. 
namely, get r means, averaging across regressions j.<br /> vector SSRr = @cmean(SSRjr)<br /><br /> vector(!numvar) IC ' stores critical values<br /> for !r = 1 to !numvar<br /> IC(!r) = @log(SSRr(!r)) + !r*(!obz + !numvar)/(!obz*!numvar)*@log(!numvar)<br /> next<br /><br /> ' take the index of the minimum value of IC as number of principal components to retain<br /> scalar numpc = @imin(IC)<br /> </PRE> Unlike our graphical analysis, the protocol above suggests that the number of retained eigenvalues is 1. Nevertheless, for sake of greater analytical exposition below, we will stick with the original suggestion of retaining the first two principal directions instead.</br></br> <h4>Principal Direction Analysis</h4> The next step in our analysis is to look at what, if any, meaningful patterns emerge by studying the principal directions themselves. To do so, we again bring up the main principal component dialog and this time select <b>Variable Loading Plots</b> under the <b>Display</b> group. See Figure 9 below.</br></br> <!-- :::::::::: FIGURE 9 :::::::::: --> <center> <a href="https://1.bp.blogspot.com/-RkhgeidLzVU/W_wus3VgIcI/AAAAAAAAAns/NI3ayYMGOG0F5Kcx_-vQuyukO8iJDBWlACPcBGAYYCw/s1600/pcademo16.jpg"><img src="https://1.bp.blogspot.com/-RkhgeidLzVU/W_wus3VgIcI/AAAAAAAAAns/NI3ayYMGOG0F5Kcx_-vQuyukO8iJDBWlACPcBGAYYCw/s1600/pcademo16.jpg" title="PCA Dialog: Variable Loading Plots" width="320" height="auto" /></a><br /><br /> <small>Figure 9: PCA Dialog: Variable Loading Plots</small><br /><br /> </center> <!-- :::::::::: FIGURE 9 :::::::::: --> Variable loading plots produce ``$ XY $ ''-pair plots of loading vectors. See section <i>Loading Plots</i> in <a href="http://blog.eviews.com/2018/10/principal-component-analysis-part-i.html">Part I</a> of this series for further details. The user specifies which loading vectors to compare and selects one among the following loading (scaling) protocols: <ul> <li> <i>Normalize Loadings</i>: In this case, scaling is unity and loading vectors are in fact the eigenvectors themselves. <li> <i>Normalize Scores</i>: Here, the scaling factor is the square root of the eigenvalue vector. In other words, the $ k^{\text{th}} $ element of the $ i^{\text{th}} $ loading vector is the $ k^{\text{th}} $ element of the $ i^{\text{th}} $ eigenvector, multiplied by the square root of the $ k^{\text{th}} $ eigenvalue. <li> <i>Symmetric Weights</i>: In this scenario, the scaling factor is the quartic (fourth) root of the eigenvalue vector. Namely, the $ k^{\text{th}} $ element of the $ i^{\text{th}} $ loading vector is the $ k^{\text{th}} $ element of the $ i^{\text{th}} $ eigenvector, multiplied by the fourth root of the $ k^{\text{th}} $ eigenvalue. <li> <i>User Loading Weight</i>: If $ 0 \leq \omega \leq 1 $ denotes the user defined scaling factor, then the loading vectors are formed by scaling the $ k^{\text{th}} $ element of the corresponding eigenvector by the $ k^{\text{th}} $ eigenvalue raised to the power $ \omega/2 $ . </ul> For the time being, stick with all default values. That is, we will look at the loading plots across the first two principal directions, and we will use the <b>Normalize Loadings</b> scaling protocol. In other words, we will plot the true eigenvectors since scaling is unity. Note that the choice of looking at only the first two principal directions is, among other things, motivated by our previous analysis on dimension reduction where we decided to retain only the first two principal eigenvalues and discard the rest. 
Go ahead and click on <b>OK</b>. Figure 10 summarizes the output.</br></br> <!-- :::::::::: FIGURE 10 :::::::::: --> <center> <a href="https://4.bp.blogspot.com/--7MFx125ijM/W_wutjX3pxI/AAAAAAAAAno/K0eIADkrkZgN7K9Hvxc_JRWg46zJMJ61ACPcBGAYYCw/s1600/pcademo17.jpg"><img src="https://4.bp.blogspot.com/--7MFx125ijM/W_wutjX3pxI/AAAAAAAAAno/K0eIADkrkZgN7K9Hvxc_JRWg46zJMJ61ACPcBGAYYCw/s1600/pcademo17.jpg" title="Variable Loading Plots Output" width="320" height="auto" /></a><br /><br /> <small>Figure 10: Variable Loading Plots Output</small><br /><br /> </center> <!-- :::::::::: FIGURE 10 :::::::::: --> As discussed in section <i>Loading Plots</i> in <a href="http://blog.eviews.com/2018/10/principal-component-analysis-part-i.html">Part I</a> of this series, the angle between the vectors in a loading plot is related to the correlation between the original variables to which the loading vectors are associated. Accordingly, we see that <b>MURDER</b> and <b>ASSAULT</b> are moderately positively correlated, as are <b>ASSAULT</b> and <b>RAPE</b>, although the latter two less so than the former two. Moreover, it is clear that <b>RAPE</b> and <b>URBANPOP</b> are positively correlated, whereas <b>MURDER</b> and <b>URBANPOP</b> are nearly uncorrelated since they form a near 90 degree angle. In other words, we have a two-dimensional graphical representation of the four-dimensional correlation matrix in Figure 4b. This ability to represent higher dimensional information in a lower dimensional space is arguably the most useful feature of PCA.</br></br> Furthermore, all three variables, <b>MURDER</b>, <b>ASSAULT</b>, and <b>RAPE</b>, are strongly correlated with the first principal direction, whereas <b>URBANPOP</b> is strongly correlated with the second principal direction. In fact, looking at vector lengths, we can also see that <b>MURDER</b>, <b>ASSAULT</b>, and <b>RAPE</b> are roughly equally dominant in the first direction, whereas <b>URBANPOP</b> is significantly more dominant than either of the former three, albeit in the second direction. Of course, this simply confirms our preliminary analysis of the middle table in Figure 6.</br></br> Above, we started with the basic loading vector with scale unity. We could have, of course, resorted to other scaling options such as normalizing to the score vectors, using symmetric weights, or using some other custom weighting. Since each of these would yield a different but similar perspective, we won't delve further into details. Nevertheless, as an exercise in exhibiting the steps involved, we provide below small snippets of code to manually generate loading vectors using only the eigenvalues and eigenvectors associated with the underlying correlation matrix. This is done for each of the four scaling protocols. These manually generated vectors are then compared to the loading vectors generated by EViews' internal code and shown to be identical.</br></br> <PRE><br /> ' --- Verify Loading Plot Vectors ---<br /> group crime murder assault rape urbanpop ' create group with all 4 variables<br /><br /> ' make eigenvalues and eigenvectors based on the corr. 
matrix<br /> crime.pcomp(eigval=eval, eigvec=evec, cov=corr)<br /><br /> 'normalize loadings<br /> crime.makepcomp(loading=load, cov=corr) s1 s2 s3 s4 ' EViews generated loading vectors<br /> matrix evaldiag = @makediagonal(eval) ' create diagonal matrix of eigenvalues<br /> matrix loadverify = evec*evaldiag ' manually create loading vector with scaling unity<br /> matrix loaddiff = loadverify - load ' get difference between custom and eviews output<br /> show loaddiff ' display results<br /><br /> 'normalize scores<br /> crime.makepcomp(scale=normscores, loading=load, cov=corr) s1 s2 s3 s4<br /> loadverify = evec*@epow(evaldiag, 0.5)<br /> loaddiff = loadverify - load<br /> show loaddiff<br /><br /> 'symmetric weights<br /> crime.makepcomp(scale=symmetrics, loading=load, cov=corr) s1 s2 s3 s4<br /> loadverify = evec*@epow(evaldiag, 0.25)<br /> loaddiff = loadverify - load<br /> show loaddiff<br /><br /> 'user weights<br /> crime.makepcomp(scale=0.36, loading=load, cov=corr) s1 s2 s3 s4<br /> loadverify = evec*@epow(evaldiag, 0.18)<br /> loaddiff = loadverify - load<br /> show loaddiff<br /> </PRE> <h4>Score Analysis</h4> Whereas loading vectors reveal information on which variables dominate (and by how much) each principal direction, it is only when they are used to create the principal component vectors (score vectors) that they are truly useful in a data exploratory sense. In this regard, we again open the main principal component dialog and select <b>Component scores plots</b> in the <b>Display</b> group of options. We capture this in Figure 11 below.</br></br> <!-- :::::::::: FIGURE 11 :::::::::: --> <center> <a href="https://2.bp.blogspot.com/-x39Nb4R9EFE/W_wutkF2RPI/AAAAAAAAAns/1ZD_HLUKwbQT6hsJOGvThjzeOdTsPhhuACPcBGAYYCw/s1600/pcademo18.jpg"><img src="https://2.bp.blogspot.com/-x39Nb4R9EFE/W_wutkF2RPI/AAAAAAAAAns/1ZD_HLUKwbQT6hsJOGvThjzeOdTsPhhuACPcBGAYYCw/s1600/pcademo18.jpg" title="PCA Dialog: Component Scores Plots" width="320" height="auto" /></a><br /><br /> <small>Figure 11: PCA Dialog: Component Scores Plots</small><br /><br /> </center> <!-- :::::::::: FIGURE 11 :::::::::: --> Analogous to the loading vector plots, here, EViews produces ``$ XY $ ''-pair plots of score vectors. As in the case of loading plots, the user specifies which score vectors to compare, and selects one among the following loading (scaling) protocols: <ul> <li> <i>Normalize Loadings</i>: Score vectors are scaled by unity. In other words, no scaling occurs. <li> <i>Normalize Scores</i>: The $ k^{\text{th}} $ score vector is scaled by the inverse of the square root of the $ k^{\text{th}} $ eigenvalue. <li> <i>Symmetric Weights</i>: The $ k^{\text{th}} $ score vector is scaled by the inverse of the quartic root of the $ k^{\text{th}} $ eigenvalue. <li> <i>User Loading Weight</i>: If $ 0 \leq \omega \leq 1 $ denotes the user defined scaling factor, the $ k^{\text{th}} $ score vector is scaled by the $ k^{\text{th}} $ eigenvalue raised to the power $ -\omega/2 $. </ul> Furthermore, if outlier detection is desired, EViews allows users to specify a p-value as a detection threshold. See sections <i>Score Plots</i> and <i>Outlier Detection</i> in <a href="http://blog.eviews.com/2018/10/principal-component-analysis-part-i.html">Part I</a> of this series for further details. Since we are currently interested in interpretive exercises, we will forgo outlier detection and choose to display all observations. To do so, under the <b>Graph options</b> group of options, change the <b>Obs. 
Labels</b> to <b>Label all obs.</b> and hit <b>OK</b>. We replicate the output in Figure 12.</br></br> <!-- :::::::::: FIGURE 12 :::::::::: --> <center> <a href="https://4.bp.blogspot.com/--kE3u3wBndY/W_wut8kI3yI/AAAAAAAAAnw/5KcYOP_fyNAdPDYSDat65ynDyNSGQn8gQCPcBGAYYCw/s1600/pcademo19.jpg"><img src="https://4.bp.blogspot.com/--kE3u3wBndY/W_wut8kI3yI/AAAAAAAAAnw/5KcYOP_fyNAdPDYSDat65ynDyNSGQn8gQCPcBGAYYCw/s1600/pcademo19.jpg" title="Component Scores Plots Output" width="320" height="auto" /></a><br /><br /> <small>Figure 12: Component Scores Plots Output</small><br /><br /> </center> <!-- :::::::::: FIGURE 12 :::::::::: --> The output produced is a scatter plot of principal component 1 (score vector 1) vs. principal component 2 (score vector 2). There are several important observations to be made here.</br></br> First, the further east of the zero vertical axis a state is located, the more positively correlated it is with the first principal direction. Since the latter is dominated positively (east of the zero vertical axis) by the three crime categories <b>MURDER</b>, <b>ASSAULT</b>, and <b>RAPE</b> (see Figure 10), we conclude that such states are positively correlated with said crimes. Naturally, converse conclusions hold as well. In particular, we see that <b>CALIFORNIA</b>, <b>NEVADA</b>, and <b>FLORIDA</b> are most positively correlated with the three crimes under consideration. If this is indeed the case, then it is perhaps little surprise that so many Hollywood crime thrillers are set in these three states. Conversely, <b>NORTH DAKOTA</b> and <b>VERMONT</b> are typically least associated with the crimes under consideration.</br></br> Second, the further north of the zero horizontal axis a state is located, the more positively correlated it is with the second principal direction. Since the latter is dominated positively (north of the zero horizontal axis) by the variable <b>URBANPOP</b> (see Figure 10), we conclude that such states are positively correlated with urbanization. Again, the converse conclusions hold as well. In particular, <b>HAWAII</b>, <b>CALIFORNIA</b>, <b>RHODE ISLAND</b>, <b>MASSACHUSETTS</b>, <b>UTAH</b>, and <b>NEW JERSEY</b> are the states most positively associated with urbanization, whereas those least so are <b>SOUTH CAROLINA</b>, <b>NORTH CAROLINA</b>, and <b>MISSISSIPPI</b>.</br></br> Lastly, it is worth recalling that like loading vectors, score vectors can also be scaled. In this regard, we provide code snippets below to show how to manually compute scaled score vectors, exposing the algorithm that EViews uses to do the same in its internal computations.</br></br> <PRE><br /> ' --- Verify Score Vectors ---<br /> ' make eigenvalues and eigenvectors based on the corr. matrix<br /> crime.pcomp(eigval=eval, eigvec=evec, cov=corr)<br /><br /> matrix evaldiag = @makediagonal(eval) ' create diagonal matrix of eigenvalues<br /><br /> stom(crime, crimemat) ' create matrix from crime group<br /> vector means = @cmean(crimemat) ' get column means<br /> vector popsds = @cstdevp(crimemat) ' get population standard deviations<br /><br /> ' initialize matrix for normalized crimemat<br /> matrix(@rows(crimemat), @columns(crimemat)) crimematnorm<br /><br /> ' normalize (remove mean and divide by pop. s.d.)
every column of crimemat<br /> for !k = 1 to @columns(crimemat)<br /> colplace(crimematnorm,(@columnextract(crimemat,!k) - means(!k))/popsds(!k),!k)<br /> next<br /><br /> 'normalize loadings<br /> crime.makepcomp(cov=corr) s1 s2 s3 s4 ' get score series<br /> group scores s1 s2 s3 s4 ' put scores into group<br /> stom(scores, scoremat) ' put scores group into matrix<br /> matrix scoreverify = crimematnorm*evec ' create custom score matrix<br /> matrix scorediff = scoreverify - scoremat ' get difference between custom and eviews output<br /> show scorediff<br /><br /> 'normalize scores<br /> crime.makepcomp(scale=normscores, cov=corr) s1 s2 s3 s4<br /> group scores s1 s2 s3 s4<br /> stom(scores, scoremat)<br /> scoreverify = crimematnorm*evec*@inverse(@epow(evaldiag, 0.5))<br /> scorediff = scoreverify - scoremat<br /> show scorediff<br /><br /> 'symmetric weights<br /> crime.makepcomp(scale=symmetrics, cov=corr) s1 s2 s3 s4<br /> group scores s1 s2 s3 s4<br /> stom(scores, scoremat)<br /> scoreverify = crimematnorm*evec*@inverse(@epow(evaldiag, 0.25))<br /> scorediff = scoreverify - scoremat<br /> show scorediff<br /><br /> 'user weights<br /> crime.makepcomp(scale=0.36, cov=corr) s1 s2 s3 s4<br /> group scores s1 s2 s3 s4<br /> stom(scores, scoremat)<br /> scoreverify = crimematnorm*evec*@inverse(@epow(evaldiag, 0.18))<br /> scorediff = scoreverify - scoremat<br /> show scorediff<br /> </PRE> Above, observe that we derived eigenvalues and eigenvectors of the correlation matrix. Accordingly, to derive the score vectors manually, we needed to standardize the original variables first. In this regard, when using the covariance matrix instead, one need only demean the original variables and disregard scaling information. We leave this as an exercise to interested readers.</br></br> <h4>Biplot Analysis</h4> As a last exercise, we superimpose the loading vectors and score vectors onto a single graph called the biplot. To do this, again, bring up the main principal component dialog and under the <b>Display</b> group select <b>Biplot (scores & loadings)</b>. As in the previous exercise, under the <b>Graph options</b> group, select <b>Label all obs.</b> from the <b>Obs. labels</b> dropdown, and hit <b>OK</b>. We summarize these steps in Figure 13.</br></br> <!-- :::::::::: FIGURE 13 :::::::::: --> <center> <a href="https://3.bp.blogspot.com/-5ZJZITYjMFs/W_wuu3eXzaI/AAAAAAAAAn8/KIws_X7kqy80YUyLa531b2qBMEWTTURDgCPcBGAYYCw/s1600/pcademo20.jpg"><img src="https://3.bp.blogspot.com/-5ZJZITYjMFs/W_wuu3eXzaI/AAAAAAAAAn8/KIws_X7kqy80YUyLa531b2qBMEWTTURDgCPcBGAYYCw/s1600/pcademo20.jpg" title="PCA Dialog: Biplots (scores & loadings)" width="320" height="auto" /></a><br /><br /> <small>Figure 13: PCA Dialog: Biplots (scores & loadings)</small><br /><br /> </center> <!-- :::::::::: FIGURE 13 :::::::::: --> From an inferential standpoint, there's little to contribute beyond what we laid out in each of the previous two sections. Nevertheless, having both the loading and score vectors appear on the same graph visually reinforces our previous analysis.
Accordingly, we close this section with just the graphical output.</br></br> <!-- :::::::::: FIGURE 14 :::::::::: --> <center> <a href="https://1.bp.blogspot.com/-fAFKcwUp_GQ/W_wuvN2sTHI/AAAAAAAAAnw/gyAc9evYMk0pu4pdYEkUKvPJZup0CM9fACPcBGAYYCw/s1600/pcademo21.jpg"><img src="https://1.bp.blogspot.com/-fAFKcwUp_GQ/W_wuvN2sTHI/AAAAAAAAAnw/gyAc9evYMk0pu4pdYEkUKvPJZup0CM9fACPcBGAYYCw/s1600/pcademo21.jpg" title="Biplots (scores & loadings) Output" width="320" height="auto" /></a><br /><br /> <small>Figure 14: Biplots (scores & loadings) Output</small><br /><br /> </center> <!-- :::::::::: FIGURE 14 :::::::::: --> <h3>Concluding Remarks</h3> In <a href="http://blog.eviews.com/2018/10/principal-component-analysis-part-i.html">Part I</a> of this series we laid out the theoretical foundations underlying PCA. Here, we used EViews to conduct a brief data exploratory implementation of PCA on serious crimes across 50 US states. Our aim was to illustrate the use of numerous PCA tools available in EViews with brief interpretations associated with each.</br></br> In closing, we would like to point out that apart from the main principal component dialog we used above, EViews also offers a <b>Make Principal Components...</b> proc function which provides a unified framework for producing vectors and matrices of the most important objects related to PCA. These include the vector of eigenvalues, the matrix of eigenvectors, the matrix of loading vectors, as well as the matrix of scores. To access this function, open the crime group from the workfile, click on <b>Proc</b> and click on <b>Make Principal Components...</b>. We summarize this in Figures 15a and 15b below.</br></br> <table> <tbody> <tr> <td> <!-- :::::::::: FIGURE 15A :::::::::: --> <center> <a href="https://4.bp.blogspot.com/-qJTuFEKJwqc/W_wuvBI9YRI/AAAAAAAAAn4/x7sxipltfmoNhmHp_QHDkwMuisTLavLLACPcBGAYYCw/s1600/pcademo22.jpg"><img src="https://4.bp.blogspot.com/-qJTuFEKJwqc/W_wuvBI9YRI/AAAAAAAAAn4/x7sxipltfmoNhmHp_QHDkwMuisTLavLLACPcBGAYYCw/s1600/pcademo22.jpg" title="Group Proc: Make Principal Components..." width="320" height="auto" /></a><br /><br /> </center> <!-- :::::::::: FIGURE 15A :::::::::: --> </td> <td> <!-- :::::::::: FIGURE 15B :::::::::: --> <center> <a href="https://2.bp.blogspot.com/-soMYratGHJQ/W_wuvP3PpUI/AAAAAAAAAn0/mUEZAjHolnkVuPg9JRHQDZQkwg0WArtigCPcBGAYYCw/s1600/pcademo23.jpg"><img src="https://2.bp.blogspot.com/-soMYratGHJQ/W_wuvP3PpUI/AAAAAAAAAn0/mUEZAjHolnkVuPg9JRHQDZQkwg0WArtigCPcBGAYYCw/s1600/pcademo23.jpg" title="Make Principal Components Dialog" width="320" height="auto" /></a><br /><br /> </center> <!-- :::::::::: FIGURE 15B :::::::::: --> </td> </tr> <tr> <td><center><small>Figure 15a: Group Proc: Make Principal Components...</small><br /><br /></center></td> <td><center><small>Figure 15b: Make Principal Components Dialog</small><br /><br /></center></td> </tr> </tbody> </table> From here, one can insert names for all objects one wishes to place in the workfile, select the scaling one wishes to use in the creation of the loading and score vectors, and hit <b>OK</b>.</br></br> <hr> <h3>Files</h3> The EViews workfile can be downloaded here: <a href="http://www.eviews.com/blog/PCA/usarrests.wf1">usarrests.wf1</a></br> The EViews program file can be downloaded here: <a href="http://www.eviews.com/blog/PCA/usarrests.prg">usarrests.prg</a></br></br> <hr> <h3>References</h3> <table> <tr valign="top"> <td align="right" class="bibtexnumber"> [<a name="bai-2002">1</a>] </td> <td class="bibtexitem"> Jushan Bai and Serena Ng. 
Determining the number of factors in approximate factor models. <em>Econometrica</em>, 70(1):191--221, 2002. </td> </tr> </table> </span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com0tag:blogger.com,1999:blog-6883247404678549489.post-50967236099759991142018-10-15T14:04:00.000-07:002018-11-26T15:03:04.234-08:00Principal Component Analysis: Part I (Theory)<script type="text/x-mathjax-config">MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"></script> <span style="font-family: "verdana" , sans-serif;"> Most students of econometrics are taught to appreciate the value of data. We are generally taught that more data is better than less, and that throwing data away is almost "taboo". While this is generally good practice when it concerns the number of observations per variable, it is not always recommended when it concerns the number of variables under consideration. In fact, as the number of variables increases, it becomes increasingly more difficult to rank the importance (impact) of any given variable, and can lead to problems ranging from basic overfitting, to more serious issues such as multicollinearity or model invalidity. In this regard, selecting the smallest number of the most <i>meaningful</i> variables -- otherwise known as <i>dimensionality reduction</i> -- is not a trivial problem, and has become a staple of modern data analytics, and a motivation for many modern techniques. One such technique is <b>Principal Component Analysis</b> (PCA).<a name='more'></a></br></br> <h3>Variance Decomposition</h3> Consider a linear statistical system -- a random matrix (multidimensional set of random variables) $ \mathbf{X} $ of size $ n \times m $ where the first dimension denotes observations and the second variables. Moreover, recall that linear statistical systems are characterized by two inefficiencies: 1) noise and 2) redundancy. The former is commonly measured through the <i>signal (desirable information) to noise (undesirable information) ratio</i> $ \text{SNR} = \sigma^{2}_{\text{signal}} / \sigma^{2}_{\text{noise}} $, and implies that systems with larger signal variances $ \sigma^{2}_{\text{signal}} $ relative to their noise counterpart, are more informative. Assuming that noise is a nuisance equally present in observing each of the $ m $ variables of our system, it stands to reason that variables with larger variances have larger SNRs, therefore carry relatively richer signals, and are in this regard relatively more important, or <i>principal</i>.</br></br> Whereas relative importance reduces to relative variances across system variables, redundancy, or relative <i>uniqueness</i> of information, is captured by system covariances. Recall that covariances (or normalized covariances called correlations) are measures of variable dependency or co-movement (direction and magnitude of joint variability). In other words, variables with overlapping (redundant) information will typically move in the same direction with similar magnitudes, and will therefore have non-zero covariances. 
Conversely, when variables share little to no overlapping information, they exhibit small to zero linear dependency, although statistical dependence could still manifest nonlinearly.</br></br> Together, system variances and covariances quantify the amount of information afforded by each variable, and how much of that information is truly unique. In fact, the two are typically derived together using the familiar <i>variance-covariance</i> matrix formula: $$ \mathbf{\Sigma}_{X} = E \left( \mathbf{X}^{\top}\mathbf{X} \right) $$ where $ \mathbf{\Sigma}_{X} $ is an $ m\times m $ <i>square symmetric</i> matrix with (off-)diagonal elements as (co)variances, and where we have <i>a priori</i> assumed that all variables in $ \mathbf{X} $ have been demeaned. Thus, systems where all variables are unique will result in a diagonal $ \mathbf{\Sigma}_{X} $, whereas those exhibiting redundancy will have non-zero off-diagonal elements. In this regard, systems with zero redundancy have a particularly convenient feature known as <i>variance decomposition</i>. Since covariance terms in these systems are zero, total system variation (and therefore information) is the sum of all variance terms, and the proportion of total system information contributed by a variable is the ratio of its variance to total system variation.</br></br> Although the variance-covariance matrix is typically not diagonal, suppose there exists a way to diagonalize $ \mathbf{\Sigma}_{X} $, and by extension transform $ \mathbf{X} $, while simultaneously preserving information. If such transformation exists, one is guaranteed a new set of at most $ m $ variables (some variables may be perfectly correlated with others) which are uncorrelated, and therefore linearly independent. Accordingly, discarding any one of those new variables would have no linear statistical impact on the $ m-1 $ remaining variables, and would reduce dimensionality at the cost of losing information to the extent contained in the discarded variables. In this regard, if one could also quantify the amount of information captured by each of the new variables, order the latter in descending order of information quantity, one could discard variables from the back until sufficient dimensionality reduction is achieved, while maintaining the maximum amount of information within the preserved variables. We summarize these objectives below: <ol> <li> Diagonalize $ \mathbf{\Sigma}_{X} $. <li> Preserve information. <li> Identify principal (important) information. <li> Reduce dimensionality. </ol> So how does one realize these objectives? It is precisely this question which motivates the subject of this entry.</br></br> <h3>Principal Component Analysis</h3> Recall that associated with every matrix $ \mathbf{X} $ is a <i>basis</i> -- a set (matrix) of <i>linearly independent</i> vectors such that <i>every</i> row vector in $ \mathbf{X} $ is a linear combination of the vectors in the basis. In other words, the row vectors are <i>projections</i> onto the column vectors in $ \mathbf{B} $. Since the covariance matrix contains all noise and redundancy information associated with a matrix, the idea driving <i>principal component analysis</i> is to re-express the original covariance matrix using a basis that results in a new, diagonal covariance matrix -- in other words, off-diagonal elements in the original covariance matrix are driven to zero and redundancy is eliminated.</br></br> <h4>Change of Basis</h4> The starting point of PCA is the <i>change of basis</i> relationship. 
In particular, if $ \mathbf{B} $ is an $ m\times p $ matrix of geometric transformations with $ p \leq m $, the $ n\times p $ matrix $ \mathbf{Q}=\mathbf{XB} $ is a projection of the $ n\times m $ matrix $ \mathbf{X} = [\mathbf{X}_{1}^{\top}, \ldots, \mathbf{X}_{n}^{\top}]^{\top}$ onto $ \mathbf{B} $. In other words, the rows of $ \mathbf{X} $ are linear combinations of the column vectors in $ \mathbf{B} = [\mathbf{B}_{1}, \ldots, \mathbf{B}_{p}]$. Formally, \begin{align*} \mathbf{Q} & = \begin{bmatrix} \mathbf{X}_{1}\\ \vdots\\ \mathbf{X}_{n} \end{bmatrix} \begin{bmatrix} \mathbf{B}_{1} &\cdots &\mathbf{B}_{p} \end{bmatrix}\\ &= \begin{bmatrix} \mathbf{X}_{1}\mathbf{B}_{1} &\cdots &\mathbf{X}_{1}\mathbf{B}_{p}\\ \vdots &\ddots &\vdots\\ \mathbf{X}_{n}\mathbf{B}_{1} &\cdots &\mathbf{X}_{n}\mathbf{B}_{p} \end{bmatrix} \end{align*} More importantly, if the column vectors $ \left\{ \mathbf{B}_{1}, \ldots, \mathbf{B}_{p} \right\} $ are also linearly independent, then $ \mathbf{B} $, by definition, characterizes a matrix of basis vectors for $ \mathbf{X} $. Furthermore, the covariance matrix of this transformation formalizes as: \begin{align} \mathbf{\Sigma}_{Q} = E\left( \mathbf{Q}^{\top}\mathbf{Q} \right) = E\left( \mathbf{B}^{\top}\mathbf{X}^{\top}\mathbf{XB} \right) = \mathbf{B}^{\top}\mathbf{\Sigma}_{X}\mathbf{B} \label{eq1} \end{align} It is important to reflect here on the dimensionality of $ \mathbf{\Sigma}_{Q} $, which, unlike $ \mathbf{\Sigma}_{X} $, is of dimension $ p\times p $ where $ p \leq m $. In other words, the covariance matrix under the transformation $ \mathbf{B} $ is at most the size of the original covariance matrix, and possibly smaller. Since dimensionality reduction is clearly one of our objectives, the transformation above is certainly poised to do so. However, the careful reader may remark here: <i>if the objective is simply dimensionality reduction, then any matrix $ \mathbf{B} $ of size $ m \times p $ with $ p\leq m $ will suffice; so why especially does $ \mathbf{B} $ have to characterize a basis?</i></br></br> The answer is simple: dimensionality reduction is not the <i>only</i> objective, but one among <i>preservation of information</i> and <i>importance of information</i>. As to the former, we recall that what makes a set of basis vectors special is that they characterize <i>entirely</i> the space on which an associated matrix takes values and therefore <i>span</i> the multidimensional space on which that matrix resides. Accordingly, if $ \mathbf{B} $ characterizes a basis, then information contained in $ \mathbf{X} $ is never lost during the transformation to $ \mathbf{Q} $. Furthermore, recall that the channel for dimensionality reduction that motivated our discussion earlier was never intended to go through a sparser basis. Rather, the mechanism of interest was a diagonalization of the covariance matrix followed by variable exclusion. Accordingly, any dimension reduction that reflects basis sparsity via $ p \leq m $, is a consequence of perfect co-linearity (correlation) among some of the original system variables. In other words, $ p = \text{rk}\left( \mathbf{X} \right) $, where $ \text{rk}(\cdot) $ denotes the matrix <i>rank</i>, or the number of its linearly independent columns (or rows).</br></br> <h4>Diagonalization</h4> We argued earlier that any transformation from $ \mathbf{X} $ to $ \mathbf{Q} $ that preserves information must operate through a basis transformation $ \mathbf{B} $. Suppose momentarily that we have in fact found such $ \mathbf{B} $. 
Our next objective would be to ensure that $ \mathbf{B} $ also produces a diagonal $ \mathbf{\Sigma}_{Q} $. In this regard, we remind the reader of two famous results in linear algebra: <ol> <li>[Thm. 1:] <i>A matrix is symmetric if and only if it is orthogonally diagonalizable.</i> <ul> <li> In other words, if a matrix $ \mathbf{A} $ is symmetric, there exists a diagonal matrix $ \mathbf{D} $ and a matrix $ \mathbf{E} $ which <i>diagonalizes</i> $ \mathbf{A} $, such that $ \mathbf{A} = \mathbf{EDE}^{\top} $. The converse statement holds as well. </ul> <li>[Thm. 2:] <i>A symmetric matrix is diagonalized by a matrix of its orthonormal eigenvectors.</i> <ul> <li> Extending the result above, if a $ q\times q $ matrix $ \mathbf{A} $ is symmetric, the diagonalizing matrix $ \mathbf{E} = [\mathbf{E}_{1}, \ldots, \mathbf{E}_{q}]$, the diagonal matrix $ \mathbf{D} = \text{diag} [\lambda_{1}, \ldots, \lambda_{q}] $, and $ \mathbf{E}_{i} $ and $ \lambda_{i} $ are respectively the $ i^{\text{th}} $ <i>eigenvector</i> and associated <i>eigenvalue</i> of $ \mathbf{A} $. <li> Note that a set of vectors is <i>orthonormal</i> if each vector is of length unity and orthogonal to all other vectors in the set. Accordingly, if $ \mathbf{V} = [\mathbf{V}_{1}, \ldots, \mathbf{V}_{q}]$ is orthonormal, then $ \mathbf{V}_{j}^{\top}\mathbf{V}_{j} = 1 $ and $ \mathbf{V}_{j}^{\top}\mathbf{V}_{k} = 0 $ for all $ j \neq k $. Furthermore, $ \mathbf{V}^{\top}\mathbf{V} = \mathbf{I}_{q} $ where $ \mathbf{I}_{q} $ is the identity matrix of size $ q $, and therefore, $ \mathbf{V}^{\top} = \mathbf{V}^{-1} $. <li> Recall further that eigenvectors of a linear transformation are those vectors which only change magnitude but not direction when subject to said transformation. Since any matrix is effectively a linear transformation, if $ \mathbf{v} $ is an eigenvector of some matrix $ \mathbf{A} $, it satisfies the relationship $ \mathbf{Av} = \lambda \mathbf{v} $. Here, associated with each eigenvector is the eigenvalue $ \lambda $ quantifying the resulting change in magnitude. <li> Finally, observe that matrix rank determines the maximum number of eigenvectors (eigenvalues) one can extract for said matrix. In particular, if $ \text{rk}(\mathbf{A}) = r \leq q $, there are in fact only $ r $ orthonormal eigenvectors associated with $ \mathbf{A} $. To see this, use a geometric interpretation to note that $ q- $dimensional objects reside in spaces with $ q $ orthogonal directions. Since any $ n\times q $ matrix is effectively a $ q- $dimensional object of vectors, the maximum number of orthogonal directions that characterize these vectors is $ q $. Nevertheless, if the (column) rank of this matrix is in fact $ r \leq q $, then $ q - r $ of the $ q $ orthogonal directions are never used. For instance, think of 2$ d $ drawings in 3$ d $ spaces. It makes no difference whether the drawing is characterized in the $ xy $, the $ xz $, or the $ yz $ plane -- the drawing still has 2 dimensions and in any of those configurations, the dimension left out is a linear combination of the others. In particular, if the $ xz $ plane is used, then the $ z- $direction is a linear combination of the $ y- $direction since the drawing can be equivalently characterized in the $ xy $ plane, and so on. In other words, one of the three dimensions is never used, although it exists and can be characterized if necessary. 
Along the same lines, if $ \mathbf{A} $ indeed has rank $ r \leq q $, we can construct $ q - r $ additional orthogonal eigenvectors to ensure dimensional equality in the diagonalization $ \mathbf{A} = \mathbf{EDE}^{\top} $, although their associated eigenvalues will in fact be 0, essentially negating their presence. <li> By extension of the previous point, since $ \mathbf{A} $ is a $ q- $dimensional object of $ q- $dimensional column vectors, it can afford at most $ q $ orthogonal directions to characterize its space. Since all $ q $ such vectors are collected in $ \mathbf{E} $, we are guaranteed that $ \mathbf{E} $ is a spanning set and therefore constitutes an <i>eigenbasis</i>. </ul> </ol> Since $ \mathbf{\Sigma}_{X} $ is a symmetric matrix by construction, the $ 1^{\text{st}} $ result above affords a re-expression of equation (\ref{eq1}) as follows: \begin{align} \mathbf{\Sigma}_{Q} &= \mathbf{B}^{\top} \mathbf{\Sigma}_{X} \mathbf{B} \notag \\ &= \mathbf{B}^{\top}\mathbf{E}_{X}\mathbf{D}_{X}\mathbf{E}_{X}^{\top} \mathbf{B} \label{eq2} \end{align} where $ \mathbf{E}_{X} = [\mathbf{E}_{1}, \ldots, \mathbf{E}_{m}] $ is the orthonormal matrix of eigenvectors of $ \mathbf{\Sigma}_{X} $ and $ \mathbf{D}_{X} = \text{diag} [\lambda_{1}, \ldots, \lambda_{m}] $ is the diagonal matrix of associated eigenvalues.</br></br> Now, since we require $ \mathbf{\Sigma}_{Q} $ to be diagonal, we can set $ \mathbf{B}^{\top} = \mathbf{E}_{X}^{-1} $ in order to reduce $ \mathbf{\Sigma}_{Q} $ to the diagonal matrix $ \mathbf{D}_{X} $. Since the $ 2^{\text{nd}} $ linear algebra result above guarantees that $ \mathbf{E}_{X} $ is orthonormal, we know that $ \mathbf{E}_{X}^{-1} = \mathbf{E}_{X}^{\top} $. Accordingly, \begin{align} \mathbf{\Sigma}_{Q} = \mathbf{D}_{X} \quad \text{if and only if} \quad \mathbf{B} = \mathbf{E}_{X} \label{eq3} \end{align} The entire idea is visualized below in Figures 1 and 2. In particular, Figure 1 demonstrates the ``data perspective'' view of the system in relation to an alternate basis. That is, two alternate basis axes, labeled as ``Principal Direction 1'' and ``Principal Direction 2'', are superimposed on the familiar $ x $ and $ y $ axes. Since the vectors of a basis are mutually orthogonal, the principal direction axes are naturally drawn at $ 90 $° angles. Alternatively, Figure 2 demonstrates the view of the system when the perspective uses the principal directions as the reference axes. <table> <tbody> <tr> <td><!-- :::::::::: FIGURE 1 :::::::::: --><center><a href="https://4.bp.blogspot.com/-W4UMHPURwds/W8TwEvK80eI/AAAAAAAAAlM/9TDa99SG5a8vOPshslHobrzsPfAgb3yWQCLcBGAs/s1600/pcadta.jpg"><img src="https://4.bp.blogspot.com/-W4UMHPURwds/W8TwEvK80eI/AAAAAAAAAlM/9TDa99SG5a8vOPshslHobrzsPfAgb3yWQCLcBGAs/s1600/pcadta.jpg" title="" width="320" height="auto" /></a><br /><br /></center><!-- :::::::::: FIGURE 1 :::::::::: --> </td> <td><!-- :::::::::: FIGURE 2 :::::::::: --><center><a href="https://2.bp.blogspot.com/-JTwqThzlduY/W8TwElonl8I/AAAAAAAAAlI/mBD1-k36W0kayxm-egMxS1Ew3gQHYR9vQCLcBGAs/s1600/pcaeig.jpg"><img src="https://2.bp.blogspot.com/-JTwqThzlduY/W8TwElonl8I/AAAAAAAAAlI/mBD1-k36W0kayxm-egMxS1Ew3gQHYR9vQCLcBGAs/s1600/pcaeig.jpg" title="" width="320" height="auto" /></a><br /><br /></center><!-- :::::::::: FIGURE 2 :::::::::: --> </td> </tr> </tbody></table> <h4>Consistency</h4> In practice, $ \mathbf{\Sigma}_{X} $, and by extension $ \mathbf{\Sigma}_{Q}, \mathbf{E}_{X}, $ and $ \mathbf{D}_{X} $, are typically not observed.
Nevertheless, we can apply the analysis above using sample covariance matrices $$ \mathbf{S}_{Q} = \frac{1}{n}\mathbf{Q}^{\top}\mathbf{Q} \xrightarrow[n \to \infty]{p} \mathbf{\Sigma}_{Q} \quad \text{and} \quad \mathbf{S}_{X} = \frac{1}{n}\mathbf{X}^{\top}\mathbf{X} \xrightarrow[n \to \infty]{p} \mathbf{\Sigma}_{X} $$ where $ \xrightarrow[\color{white}{n \to \infty}]{p} $ indicates convergence in probability to the asymptotic counterparts. In this regard, the result analogous to equation (\ref{eq2}) for estimated $ 2^{\text{nd}} $ moment matrices states that \begin{align} \mathbf{S}_{Q} = \widehat{\mathbf{E}}_{X}^{\top} \mathbf{S}_{X} \widehat{\mathbf{E}}_{X} = \widehat{\mathbf{E}}_{X}^{\top} \left( \widehat{\mathbf{E}}_{X}\widehat{\mathbf{D}}_{X}\widehat{\mathbf{E}}_{X}^{\top} \right) \widehat{\mathbf{E}}_{X} = \widehat{\mathbf{D}}_{X} \label{eq4} \end{align} where $ \widehat{\mathbf{E}}_{X} $ and $ \widehat{\mathbf{D}}_{X} $ now represent the eigenbasis and respective eigenvalues associated with the square symmetric matrix $ \mathbf{S}_{X} $. It is important to understand here that while $ \widehat{\mathbf{E}}_{X} \neq \mathbf{E}_{X} $ and $ \widehat{\mathbf{D}}_{X} \neq \mathbf{D}_{X} $, there is a long-standing literature far beyond the scope of this entry which guarantees that $ \widehat{\mathbf{E}}_{X} $ and $ \widehat{\mathbf{D}}_{X} $ are both consistent estimators of $ \mathbf{E}_{X} $ and $ \mathbf{D}_{X} $, provided $ m/n \to 0 $ as $ n \to \infty $. In other words, as in classical regression paradigms, consistency of PCA holds only under the usual ``large $ n $ and small $ m $ '' framework. There are modern results which address cases for $ m/n \to c > 0 $; however, they too are beyond the scope of this text. In proceeding, however, in order to contain notational complexity, unless otherwise stated, we will maintain that $ \mathbf{E}_{X} $ and $ \mathbf{D}_{X} $ now represent the eigenbasis and respective eigenvalues associated with the square symmetric matrix $ \mathbf{S}_{X} $.</br></br> <h4>Preservation of Information</h4> In addition to diagonalizing $ \mathbf{S}_{Q} $, we also require preservation of information. For this we need to guarantee that $ \mathbf{B} $ is a basis. Here, we recall the final remark under the $ 2^{\text{nd}} $ linear algebra result above, which argues that $ \mathbf{S}_{X} $ affords at most $ m $ orthonormal eigenvectors and associated eigenvalues, with the former also forming an eigenbasis. Since all $ m $ eigenvectors are collected in $ \mathbf{E}_{X} = \mathbf{B} $, we are guaranteed that $ \mathbf{B} $ is indeed a basis. In this regard, we transform $ \mathbf{X} $ into $ m $ statistically uncorrelated, but exhaustive <i>directions</i>. We are careful not to use the word <i>variables</i> (although technically they are), since the transformation $ \mathbf{Q} = \mathbf{XE}_{X} $ does not preserve variable interpretation. That is, the $ j^{\text{th}} $ column of $ \mathbf{Q} $ no longer retains the interpretation of the $ j^{\text{th}} $ variable (column) in $ \mathbf{X} $. In fact, the $ j^{\text{th}} $ column of $ \mathbf{Q} $ is a projection (linear combination) of <i>all</i> $ m $ variables in $ \mathbf{X} $, in the direction of the $ j^{\text{th}} $ eigenvector $ \mathbf{E}_{j} $. Accordingly, we can interpret $ \mathbf{XE}_{X} $ as $ m $ orthogonal weighted averages of the $ m $ variables in $ \mathbf{X} $.
Furthermore, since $ \mathbf{E}_{X} $ is an eigenbasis, the total variation (information) of the original system $ \mathbf{X} $, namely $ \mathbf{S}_{X} $, is preserved in the transformation to $ \mathbf{Q} $. Unlike $ \mathbf{S}_{X} $ however, $ \mathbf{S}_{Q} = \mathbf{D}_{X}$ is diagonal, and total variation in $ \mathbf{X} $ is now distributed across $ \mathbf{Q} $ without redundancy.</br></br> <h4>Principal Directions</h4> Since preservation of information is guaranteed under the transformation $ \mathbf{Q} = \mathbf{XE}_{X} $, the proportion of information in $ \mathbf{S}_{X} $ associated with the $ j^{\text{th}} $ column of $ \mathbf{S}_{Q}$ is in fact $ \lambda_{j} $. By extension, each column in $ \mathbf{Q} $ has standard deviation $ \sqrt{\lambda_{j}} $ or variance $ \lambda_{j} $. Moreover, since $ \mathbf{S}_{Q} $ is diagonal and information redundancy is not an issue, it stands to reason that the total amount of system variation is the sum of variations due to each column in $ \mathbf{Q} $. In other words, total system variation is $ \text{tr}\left( \mathbf{S}_{Q} \right) = \lambda_{1} + \ldots + \lambda_{m} $, where $ \text{tr}(\cdot) $ denotes the matrix trace operator, and the $ j^{\text{th}} $ orthogonalized direction contributes to $$ \frac{\lambda_{j}}{\lambda_{1} + \ldots + \lambda_{m}} \times 100 \% $$ of total system variation (information). If we now arrange the columns of $ \mathbf{Q} $, or equivalently those of $ \mathbf{E}_{X} $, according to the order $ \lambda_{(1)} \geq \lambda_{(2)} \geq \ldots \geq \lambda_{(m)} $, where $ \lambda_{(j)} $ are ordered versions of their counterparts $ \lambda_{j} $, we are guaranteed to have the directions arranged from most principal to least, measured as the proportion of total system variation contributed by that direction.</br></br> Another useful feature of the vectors in $ \mathbf{E}_{X} $ is that they quantify the proportion of directionality each original variable contributes toward the overall direction of that vector. In particular, let $ e_{i,j} $ denote the $ i^{\text{th}} $ element in $ \mathbf{E}_{j} = [e_{1,j}, \ldots, e_{m,j} ]$, where $ i \in {1, \ldots, m} $, and observe that since $ \mathbf{E}_{j} $ are the eigenvectors of $ \mathbf{S}_{X} $, each element $ e_{i,j} $ is in fact associated with the $ i^{\text{th}} $ variable (column) of $ \mathbf{X} $. Furthermore, since the vectors $ \mathbf{E}_{j} $ each have unit length due to (ortho)normality, we know that they must lie inside the unit circle and that $ e_{i,j}^{2} \times 100 \% $ of the direction $ \mathbf{E}_{j} $ is due to variable $ i $. In other words, we can quantify how principal each variable is in each direction.</br></br> <h4>Principal Components</h4> Principal directions, the eigenvectors in $ \mathbf{E}_{X} $, are often mistakenly called principal components. Nevertheless, correct literature reserves the term <i>principal components</i> for the projections of the original system variables <i>onto</i> the principal directions. That is, principal components refer to the column vectors in $ \mathbf{Q} = [\mathbf{Q}_{1}, \ldots, \mathbf{Q}_{m}] = \mathbf{XE}_{X} $, and are sometimes also referred to as <i>scores</i>. 
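</br></br> Before turning to their properties, a small numerical illustration (our own, purely for exposition) may help fix ideas. Consider a hypothetical two-variable system ($ m = 2 $) with variance-covariance matrix $$ \mathbf{\Sigma}_{X} = \begin{bmatrix} 1 & 0.6\\ 0.6 & 1 \end{bmatrix} $$ Its eigenvalues are $ \lambda_{1} = 1.6 $ and $ \lambda_{2} = 0.4 $, with orthonormal eigenvectors collected in $$ \mathbf{E}_{X} = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1\\ 1 & -1 \end{bmatrix} \quad \text{so that} \quad \mathbf{E}_{X}^{\top}\mathbf{\Sigma}_{X}\mathbf{E}_{X} = \begin{bmatrix} 1.6 & 0\\ 0 & 0.4 \end{bmatrix} = \mathbf{D}_{X} $$ Accordingly, the first principal direction accounts for $ 1.6/(1.6 + 0.4) = 80\% $ of total system variation and the second for the remaining $ 20\% $, while each original variable contributes $ (1/\sqrt{2})^{2} \times 100\% = 50\% $ of the directionality of either direction. The corresponding principal components $ \mathbf{Q} = \mathbf{X}\mathbf{E}_{X} $ then have variances $ 1.6 $ and $ 0.4 $, respectively, and are uncorrelated.</br></br>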
Like their principal direction counterparts, principal components contain several important properties worth observing.</br></br> As a direct consequence of the diagonalization properties discussed earlier, the variance of each principal component is in fact the eigenvalue associated with the underlying principal direction, and principal components are mutually uncorrelated. To see this formally, let $ \mathbf{C}_{j} = [0, \ldots, 0, \underbrace{1}_j, 0, \ldots, 0 ]^{\top} $ denote the canonical basis vector in the $ j^{\text{th}} $ dimension. Then, using the result in equation (\ref{eq4}), the covariance between the $ j^{\text{th}} $ and $ k^{\text{th}} $ principal components $ \mathbf{Q}_{j} = \mathbf{QC}_{j} $ and $ \mathbf{Q}_{k} = \mathbf{QC}_{k} $, respectively, is obviously: \begin{align*} s_{Q_{j}, Q_{k}} &= \frac{1}{n}\mathbf{Q}_{j}^{\top}\mathbf{Q}_{k} \\ &= \mathbf{C}_{j}^{\top} \left( \frac{1}{n} \mathbf{Q}^{\top}\mathbf{Q} \right) \mathbf{C}_{k} \\ &= \mathbf{C}_{j}^{\top} \mathbf{S}_{Q} \mathbf{C}_{k} \\ &= \mathbf{C}_{j}^{\top} \mathbf{D}_{X} \mathbf{C}_{k} \\ \end{align*} which equals $ \lambda_{j} $ when $ j = k $ and $ 0 $ otherwise.</br></br> Moreover, we can quantify how (co)related the original variables are with the principal directions. In particular, consider the covariance between the $ i^{\text{th}} $ variable $ \mathbf{X}_{i}=\mathbf{XC}_{i} $ and the $ j^{\text{th}} $ principal component $ \mathbf{Q}_{j} $, formalized as: \begin{align} \mathbf{S}_{X_{i}Q_{j}} & = \frac{1}{n} \mathbf{X}_{i}^{\top}\mathbf{Q}_{j} \notag\\ &= \mathbf{C}_{i}^{\top} \left( \frac{1}{n}\mathbf{X}^{\top}\mathbf{Q} \right) \mathbf{C}_{j}\notag\\ &= \mathbf{C}_{i}^{\top} \left( \frac{1}{n}\mathbf{X}^{\top}\mathbf{X}\mathbf{E}_{X} \right) \mathbf{C}_{j}\notag\\ &= \mathbf{C}_{i}^{\top} \mathbf{S}_{X} \mathbf{E}_{X} \mathbf{C}_{j}\notag\\ &= \mathbf{C}_{i}^{\top} \mathbf{E}_{X}\mathbf{D}_{X} \mathbf{E}_{X}^{\top} \mathbf{E}_{X} \mathbf{C}_{j}\notag\\ &= \mathbf{C}_{i}^{\top} \mathbf{E}_{X}\mathbf{D}_{X} \mathbf{C}_{j}\notag\\ &= e_{i,j} \lambda_{j} \label{eq5} \end{align} where the antepenultimate line applies Theorem 1 to $ \mathbf{S}_{X} $, the cancelation to identity in the penultimate line follows by Theorem 2 and orthonormality of $ \mathbf{E}_{X} $, and the ultimate line is the product of the $ i^{\text{th}} $ element of the principal direction $ \mathbf{E}_{j} $ and the $ j^{\text{th}} $ principal eigenvalue.</br></br> <h4>Dimension Reduction</h4> At last, we arrive at the issue of dimensionality reduction. Assuming that the columns of $ \mathbf{Q} $ are arranged in decreasing order of importance (more principal columns come first), we can discard the $ g < m $ least principal columns of $ \mathbf{Q} $ until sufficient dimension reduction is achieved, and rest assured that the remaining (first) $ m - g $ columns are in fact most principal. In other words, the $ m - g $ directions which are retained contribute $$ \frac{ \sum \limits_{j=1}^{m-g}\lambda_{(j)}}{\lambda_{1} + \ldots + \lambda_{m}} \times 100 \% $$ of the original variation in $ \mathbf{X} $. Since directions are ordered in decreasing order of importance, the first few directions will capture the majority of variation, leaving the less principal directions to contribute information only marginally. Accordingly, one can significantly reduce dimensionality whilst retaining the majority of information. This is particularly important when we want to measure the complexity of our data set.
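</br></br> As a small illustration of how one might tabulate these shares in practice, the sketch below computes the cumulative proportion of variation captured by the first $ r $ directions from the eigenvalues of the correlation matrix. This is only a sketch: the group name <b>g</b> and the object names are placeholders of our own choosing, and the eigenvalues returned by <b>pcomp</b> are assumed to be ordered from largest to smallest.</br></br> <PRE><br /> ' --- cumulative share of variation captured by the leading principal directions ---<br /> ' (illustrative sketch: "g" is a placeholder group containing the system variables)<br /> g.pcomp(eigval=eval, eigvec=evec, cov=corr) ' eigenvalues of the correlation matrix<br /><br /> scalar total = 0 ' total system variation = sum of all eigenvalues<br /> for !j = 1 to @rows(eval)<br /> total = total + eval(!j)<br /> next<br /><br /> vector(@rows(eval)) cumshare ' cumulative proportion of variation explained<br /> scalar runsum = 0<br /> for !j = 1 to @rows(eval)<br /> runsum = runsum + eval(!j)<br /> cumshare(!j) = runsum/total<br /> next<br /> show cumshare ' element r gives the share captured by the first r directions<br /> </PRE> Inspecting such cumulative shares is one simple way of gauging the effective complexity of a data set.</br></br>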
In particular, if the $ r $ most principal directions account for the majority of variance, it stands to reason that our underlying data set is in fact only $ r- $dimensional, with the remaining $ m-r $ dimensions being noise. In other words, dimensionality reduction naturally leads to data <i>denoising</i>.</br></br> So how does one select how many principal directions to retain? There are several approaches; we list only a few below: <ol> <li> A very popular approach is to use a <i>scree plot</i> -- a plot of the ordered eigenvalues from most to least principal. The idea here is to look for a sharp drop in the function, and select the <i>bend</i> or <i>elbow</i> as the cutoff value, retaining all eigenvalues (and by extension principal directions) to the left of this value. <li> Another popular alternative is to use the cumulative proportion of variation explained by the first $ r $ principal directions. In other words, select the first $ r $ principal directions such that $ \frac{ \sum \limits_{j=1}^{r}\lambda_{(j)}}{\lambda_{1} + \ldots + \lambda_{m}} \geq 1 - \alpha $, where $ \alpha \in [0,1] $. Typical usage sets $ \alpha = 0.1 $ in order to retain the $ r $ most principal directions that capture at least 90% of the system variation. <li> A more data-driven result is known as the Guttman-Kaiser (Guttman (1954), Kaiser (1960), Kaiser (1961)) criterion. This criterion advocates the retention of all eigenvalues, and by extension, the associated principal directions, that exceed the average of all eigenvalues. In other words, select the largest $ r $ such that $ \lambda_{(r)} \geq \bar{\lambda} $, where $ \bar{\lambda} = \frac{1}{m} \sum\limits_{j = 1}^{m}\lambda_{j} $. <li> An entirely data-driven approach akin to classical information criteria selection methods borrows from the Bai and Ng (2002) paper on factor models. In this regard, consider $$ \mathbf{X}_{j} = \beta_{1}\mathbf{Q}_{1} + \ldots + \beta_{r}\mathbf{Q}_{r} + \mathbf{U}(j,r) $$ as the regression of the $ j^{\text{th}} $ variable in $ \mathbf{X} $ on the first $ r $ principal components of $ \mathbf{S}_{X} $, and let $ \widehat{\mathbf{U}}(j,r) $ denote the corresponding residual vector. Furthermore, define $ SSR(j,r) = \frac{1}{n} \widehat{\mathbf{U}}(j,r)^{\top} \widehat{\mathbf{U}}(j,r) $ as the sum of squared residuals from said regression, and define $ SSR(r) = \frac{1}{m}\sum \limits_{j=1}^{m}SSR(j,r) $ as the average of all $ SSR(j,r) $ across all variables $ j $ for a given $ r $. We can then select $ r $ as the value that minimizes a particular penalized objective. In other words, the problem reduces to: $$ \min\limits_{r} \left\{ \ln\left( SSR(r) \right) + rg(n,m) \right\} $$ where $ g(n,m) $ is a penalty term which leads to one of several criteria proposed in Bai and Ng (2002). For instance, when $ n > m $, one such option is the $ IC_{p2}(r) $ criterion, and the problem above formalizes as: $$ \min\limits_{r} \left\{ \ln\left( SSR(r) \right) + r\left( \frac{n + m}{nm} \right) \ln(m) \right\} $$ </ol> Of course, it goes without saying that discarding information comes at its own cost, although, if dimensionality reduction is desired, it may well be a price worth paying.</br></br> <h3>Inference</h3> Although PCA is deeply rooted in linear algebra, it is also a very visual experience. In this regard, a particularly convenient feature is the ability to visualize multidimensional structures across two-dimensional summaries.
In particular, comparing two principal directions provides a wealth of information that is typically inaccessible in traditional multidimensional contexts.</br></br> <h4>Loading Plots</h4> A powerful inferential tool unique to PCA is element-wise comparison of two principal directions. In particular, consider two principal directions $ \mathbf{E}_{j} = [e_{1,j}, \ldots, e_{m,j}]$ and $ \mathbf{E}_{k} = [e_{1,k}, \ldots, e_{m,k}]$, and let $ \left\{ \mathbf{V}_{1,j,k}, \ldots, \mathbf{V}_{m,j,k} \right\}$ denote the set of vectors from the origin $ (0,0) $ to $ \left( e_{i,j}, e_{i,k} \right) $ for $ i \in {1, \ldots, m} $. In other words, $ \mathbf{V}_{i,j,k} = \left( e_{i,j}, e_{i,k} \right)^{\top}$. Then, for any $ (j,k) $ principal direction pairs, a plot of all $ m $ vectors $ \mathbf{V}_{i,j,k} $, for $ i \in {1, \ldots, m} $, on a single plot, is called a <i>loading plot</i>.</br></br> There is an important connection between the vectors $ \mathbf{V}_{i,j,k} $ and original variable covariances. In particular, consider $ \mathbf{S}_{X_{i},X_{s}} $ -- the finite sample covariance between $ \mathbf{X}_{i} $ and $ \mathbf{X}_{s} $ -- and, assuming we have ordered eigenvalues from most principal to least, note that: \begin{align*} \mathbf{S}_{X_{i},X_{s}} &= \mathbf{C}_{i}^{\top} \mathbf{S}_{X} \mathbf{C}_{s}\\ &= \mathbf{C}_{i}^{\top} \mathbf{E}_{X} \mathbf{D}_{X} \mathbf{E}_{X}^{\top} \mathbf{C}_{s}\\ &= \lambda_{(1)}e_{i,1}e_{s,1} + \lambda_{(2)}e_{i,2}e_{s,2} + \ldots + \lambda_{(m)}e_{i,m}e_{s,m}\\ &= \mathbf{V}_{i,1,2}^{\top}\mathbf{L}_{1,2}\mathbf{V}_{s,1,2} + \ldots + \mathbf{V}_{i,m-1,m}^{\top}\mathbf{L}_{m,m-1}\mathbf{V}_{s,m-1,m} \end{align*} where $ \mathbf{L}_{j,k} = \text{diag} \left[\lambda_{(j)}, \lambda_{(k)} \right] $ denotes the appropriate scaling matrix. In other words, for any $ (j,k) $ principal direction pairs, $ \mathbf{V}_{i,j,k}^{\top} \mathbf{L}_{j,k} \mathbf{V}_{s,j,k} $ explains a proportion of the covariance $ \mathbf{S}_{X_{i},X_{s}} $. Accordingly, when $ \mathbf{X}_{i} $ and $ \mathbf{X}_{s} $ are highly correlated, we can expect $ \mathbf{V}_{i,j,k}^{\top} \mathbf{L}_{j,k} \mathbf{V}_{s,j,k} $ to be larger values. In this regard, let $ \theta_{i,s,j,k} $ denote the angle between any two vectors $ \mathbf{V}_{i,j,k} $ and $ \mathbf{V}_{s,j,k} $, and recall that \begin{align*} \cos \theta_{i,s,j,k} &= \frac{\mathbf{V}_{i,j,k}^{\top}\mathbf{V}_{s,j,k}}{\norm{\mathbf{V}_{i,j,k}} \norm{\mathbf{V}_{s,j,k}}} \end{align*} To accommodate the use of the scaling matrices $ \mathbf{L}_{j,k} $, observe that we can modify this result as follows: \begin{align} \mathbf{V}_{i,j,k}^{\top} \mathbf{L}_{j,k} \mathbf{V}_{s,j,k} = \mathbf{V}_{i,j,k}^{\top} \mathbf{L}_{j,k} \left(\mathbf{V}_{i,j,k}\mathbf{V}_{i,j,k}^{\top} \right)^{-1} \mathbf{V}_{i,j,k} \norm{\mathbf{V}_{i,j,k}} \norm{\mathbf{V}_{s,j,k}} \cos \theta_{i,s,j,k} \label{eq6} \end{align} Now, when $ \theta_{i,s,j,k} $ is small, say between $ 0 $ and $ \pi/2 $, we can expect $ \mathbf{V}_{i,j,k}^{\top} \mathbf{L}_{j,k} \mathbf{V}_{s,j,k} $ to be large, and by extension, $ \mathbf{X}_{i} $ and $ \mathbf{X}_{s} $ to be more correlated. In other words, vectors that are close to one another in a loading plot indicate stronger correlations of their underlying variables. Figure 3 below gives a visual representation. 
<!-- :::::::::: FIGURE 3 :::::::::: --><center><a href="https://4.bp.blogspot.com/-UoZIl4Zcy-M/W8TwEqX3v6I/AAAAAAAAAlQ/JOSR21yYSJos50X0obVJfctkd5fPwnhngCLcBGAs/s1600/pcacorr.jpg"><img src="https://4.bp.blogspot.com/-UoZIl4Zcy-M/W8TwEqX3v6I/AAAAAAAAAlQ/JOSR21yYSJos50X0obVJfctkd5fPwnhngCLcBGAs/s1600/pcacorr.jpg" title="" width="640" height="auto" /></a><br /><br /></center><!-- :::::::::: FIGURE 3 :::::::::: --> It is important to realize here that since $ \theta_{i,s,j,k} $ is in fact the angle between $ \mathbf{V}_{i,j,k} $ and $ \mathbf{V}_{s,j,k} $, the interpretation of how exhibitive $ \theta_{i,s,j,k} $ is of the underlying correlation $ \mathbf{S}_{X_{i}, X_{s}} $ is made more complicated by the presence of $ \mathbf{L}_{j,k} $ in equation (\ref{eq6}). Accordingly, to ease interpretation, the vectors $ \mathbf{V}_{i,j,k} $ are sometimes scaled appropriately, or <i>loaded</i> with scaling information, leading to the term <i>loadings</i>. In this regard, consider the vectors $ \widetilde{\mathbf{V}}_{i,j,k} = \mathbf{V}_{i,j,k} \mathbf{L}_{j,k}^{1/2} $. Here, loading is done via $ \mathbf{L}_{j,k}^{1/2} $, and we have: $$ \mathbf{S}_{X_{i}, X_{s}} = \widetilde{\mathbf{V}}_{i,1,2}^{\top}\widetilde{\mathbf{V}}_{s,1,2} + \ldots + \widetilde{\mathbf{V}}_{i,m-1,m}^{\top}\widetilde{\mathbf{V}}_{s,m-1,m} $$ and $$ \widetilde{\mathbf{V}}_{i,j,k}^{\top}\widetilde{\mathbf{V}}_{s,j,k} = \norm{\widetilde{\mathbf{V}}_{i,j,k}} \norm{\widetilde{\mathbf{V}}_{s,j,k}} \cos \widetilde{\theta}_{i,s,j,k} $$ As such, $ \widetilde{\theta}_{i,s,j,k} $ more closely exhibits the true angle between $ \mathbf{X}_{i} $ and $ \mathbf{X}_{s} $ than $ \theta_{i,s,j,k} $, and loading plots using $ \widetilde{\mathbf{V}}_{i,j,k} $ tend to be more exhibitive of the underlying correlations $ \mathbf{S}_{X_{i}, X_{s}} $ than those based on $ \mathbf{V}_{i,j,k} $. Of course, one does not have to resort to the use of $ \mathbf{L}_{j,k}^{1/2} $ as the loading matrix. In principle, one can use $ \mathbf{L}_{j,k}^{\alpha} $ for some $ 0 \leq \alpha \leq 1 $, although the underlying interpretation of what such a loading means ought to be understood first.</br></br> Of course, it is not difficult to see that $ \widetilde{\mathbf{V}}_{i,j,k} = \mathbf{V}_{i,j,k} \mathbf{L}_{j,k}^{\alpha} $ is in fact the $ i^{\text{th}} $ "XY"-pair between $ \mathbf{E}_{j}\lambda_j^{\alpha} $ and $ \mathbf{E}_{k}\lambda_k^{\alpha} $. In other words, it is the $ i^{\text{th}} $ "XY"-pair using the "loaded" $ j^{\text{th}} $ and $ k^{\text{th}} $ principal directions. Accordingly, the term <i>loading vector</i> is sometimes used to denote a loaded principal direction. In particular, the entire matrix of loading vectors $ \widetilde{\mathbf{E}}_X $ can be obtained as follows: $$ \widetilde{\mathbf{E}}_X = \mathbf{E}_X \mathbf{D}_X^{\alpha} $$ Figure 4 below demonstrates the impact of using a loading weight. In particular, the vectors in Figure 3 are superimposed on the set of loaded vectors where the loading factor is $ \mathbf{D}_{X}^{1/2} $. Clearly, the loaded vectors are much more correlated with the general shape of the data as represented by the ellipse. 
<!-- :::::::::: FIGURE 4 :::::::::: --><center><a href="https://2.bp.blogspot.com/--jkDij4_4Ik/W8TwFPoDq_I/AAAAAAAAAlU/uBh58i1d_4cV_R20a6b-rzcZNkW7BI0EwCLcBGAs/s1600/pcaload.jpg"><img src="https://2.bp.blogspot.com/--jkDij4_4Ik/W8TwFPoDq_I/AAAAAAAAAlU/uBh58i1d_4cV_R20a6b-rzcZNkW7BI0EwCLcBGAs/s1600/pcaload.jpg" title="" width="640" height="auto" /></a><br /><br /></center><!-- :::::::::: FIGURE 4 :::::::::: --> <h4>Score Plots</h4> A <i>score plot</i> across principal direction pairs $ (j,k) $ is essentially a scatter plot of the principal component vector $ \mathbf{Q}_{j} $ vs. $ \mathbf{Q}_{k} $. In fact, it is the analogue of the loading plot, but for observations as opposed to variables. In this regard, whereas the angle between two loading vectors is exhibitive of the underlying correlation between some variables, the distance between observations in a score plot exhibits homogeneity across observations. Accordingly, observations which tend to cluster together tend to move together, and one typically looks to identify important clusters when conducting inference.</br></br> Recall also the expression derived in the last line of \ref{eq5}, namely, $ \mathbf{S}_{X_{i}Q_{j}} = e_{i,j} \lambda_{j} = \left(e_{i,j} \lambda_{j}^{1/2}\right)\lambda_{j}^{1/2} $. Notice that the latter expression states that the covariance between the $ i^{\text{th}} $ variable and the $ j^{\text{th}} $ score vector is in fact a product of the $i^{\text{th}}$ element of the loaded $j^{\text{th}}$ principal direction ($j^{\text{th}}$ loading vector), and $ \lambda_{j}^{1/2} $. Accordingly, in order to achieve a more natural interpretation, one can proceed in a manner analogous to the creation of loading vectors, and either scale or entirely remove the remaining scaling factor. This leads to the idea of <i>loaded score vectors</i>. In particular, using the context above, if one wishes to interpret the covariance between the $ i^{\text{th}} $ variable and the $ j^{\text{th}} $ score vector as just the $ i^{\text{th}} $ element of the $ j^{\text{th}} $ loading vector, without the additional factor $ \lambda_{j}^{1/2} $, then doing so is as simple as computing $$ \mathbf{S}_{X_{i}Q_{j}\lambda_j^{-1/2}} = e_{i,j} \lambda_{j}^{1/2} $$ where we now interpret $ Q_{j}\lambda_j^{-1/2} $ as a loaded score vector. Of course, an infinite array of such scaling options is achievable using $ Q_{j}\lambda_j^{-\alpha} $, although, as before, their interpretation ought to be understood first.</br></br> <h4>Outlier Detection</h4> An important application of PCA is to <i>outlier detection</i>. The general principle exploits the first few principal directions to explain the majority of variation in the original system, and uses <i>data reconstruction</i> to generate an approximation of the original system using the first few principal components.</br></br> Formally, if we start from the matrix of all principal components $ \mathbf{Q} $, it is trivial to reconstruct the original system $ \mathbf{X} $ using the inverse: $$ \mathbf{Q}\mathbf{E}_{X}^{\top} = \mathbf{X}\mathbf{E}_{X}\mathbf{E}_{X}^{\top} = \mathbf{X}$$ On the other hand, if we restrict our principal components to the first $ r \ll m $ most principal directions, then $ \widetilde{\mathbf{Q}}\widetilde{\mathbf{E}}_{X}^{\top} = \widetilde{\mathbf{X}} \approx \mathbf{X} $, where $ \widetilde{\mathbf{Q}} $ and $ \widetilde{\mathbf{E}}_{X} $ are respectively the matrix $ \mathbf{Q} $ and $ \mathbf{E}_{X} $ with the last $ m - r $ columns removed, and $ \approx $ denotes an approximation.
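</br></br> The sketch below illustrates one way such a reconstruction might be carried out with EViews matrix operations. It is purely illustrative: <b>g</b> is a placeholder group whose variables are assumed to have already been standardized (demeaned and divided by their population standard deviations), and the eigenvectors returned by <b>pcomp</b> are assumed to be ordered from most to least principal.</br></br> <PRE><br /> ' --- reconstruction from the first r principal directions (illustrative sketch) ---<br /> !r = 2 ' number of retained directions<br /> g.pcomp(eigvec=evec, cov=corr) ' eigenvectors of the correlation matrix<br /> stom(g, xmat) ' standardized data as a matrix<br /><br /> matrix(@columns(xmat), !r) etilde ' will hold the first r eigenvectors<br /> for !j = 1 to !r<br /> colplace(etilde, @columnextract(evec, !j), !j)<br /> next<br /><br /> matrix qtilde = xmat*etilde ' first r principal components (scores)<br /> matrix xtilde = qtilde*@transpose(etilde) ' reconstructed (approximate) data<br /> matrix recon_err = xtilde - xmat ' difference between reconstructed and original data<br /> show recon_err<br /> </PRE>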
The difference $$ \mathbf{\xi} = \widetilde{\mathbf{X}} - \mathbf{X} $$ is known as the <i>reconstruction error</i>, and if the first $ r $ principal directions explain the original variation well, we can expect $ \norm{\mathbf{\xi}}_{D}^{2} \approx \mathbf{0}$ where $ \norm{\cdot}_{D} $ denotes some measure of distance.</br></br> We would now like to define a statistic associated with outlier identification, and as in usual regression analysis, the reconstruction error (residuals) plays a key role. In particular, we follow the contributions of Jackson and Mudholkar (1979) and define $$ \mathbf{SPE} = \mathbf{\xi} \mathbf{\xi}^{\top} $$ as the <i>squared prediction error</i>, most closely resembling the usual sum of squared residuals. Moreover, Jackson and Mudholkar (1979) show that if observations (row vectors) in $ \mathbf{X} $ are independent and identically distributed Gaussian random variables, $ \mathbf{SPE} $ has the following distribution $$ \mathbf{SPE} \sim \sum\limits_{j=r+1}^{m}\lambda_{(j)}Z_{j}^{2} \equiv \Psi(r) $$ where $ \chi^{2}_{p} $ denotes the $ \chi^{2}- $distribution with $ p $ degrees of freedom, and the $ Z_{j}^{2} $ are independent $ \chi^{2}_{1} $ variables. Noting that the $ i^{\text{th}} $ diagonal element of $ \mathbf{SPE} $, namely $ \mathbf{SPE}_{ii} = \mathbf{C}_{i}^{\top} (\mathbf{SPE}) \mathbf{C}_{i} $, is associated with the $ i^{\text{th}} $ observation, we can now derive a rule for outlier detection. In particular, should $ \mathbf{SPE}_{ii} $, for any $ i $, fall into the critical region defined by the upper $ (1 - \alpha) $ percentile of $ \Psi(r) $, that observation would be considered an outlier.</br></br> <h3>Closing Remarks</h3> Principal component analysis is an extremely important multivariate statistical technique that is often misunderstood and abused. The hope is that in reading this entry you will have found the intuition one often seeks in complicated subject matters, with just enough mathematical rigour to ease any serious future undertakings. In Part II of this series, we will use EViews to exhibit a PCA case study and demonstrate just how easy this is with a few clicks.</br></br> <!-- If you would like a PDF copy of this post, please download <a href="http://www.eviews.com/blog/Images/dhpanel/dhmcstudy.prg">here</a>.</br></br>--> <hr><h3>References</h3> <table> <tr valign="top"><td align="right" class="bibtexnumber">[<a name="bai-2002">1</a>] </td><td class="bibtexitem">Jushan Bai and Serena Ng. Determining the number of factors in approximate factor models. <em>Econometrica</em>, 70(1):191--221, 2002. </td></tr> <tr valign="top"><td align="right" class="bibtexnumber">[<a name="guttman-1954">2</a>] </td><td class="bibtexitem">Louis Guttman. Some necessary conditions for common-factor analysis. <em>Psychometrika</em>, 19(2):149--161, 1954. </td></tr> <tr valign="top"><td align="right" class="bibtexnumber">[<a name="jackson-1979">3</a>] </td><td class="bibtexitem">J Edward Jackson and Govind S Mudholkar. Control procedures for residuals associated with principal component analysis. <em>Technometrics</em>, 21(3):341--349, 1979. </td></tr> <tr valign="top"><td align="right" class="bibtexnumber">[<a name="kaiser-1960">4</a>] </td><td class="bibtexitem">Henry F Kaiser. The application of electronic computers to factor analysis. <em>Educational and psychological measurement</em>, 20(1):141--151, 1960. </td></tr> <tr valign="top"><td align="right" class="bibtexnumber">[<a name="kaiser-1961">5</a>] </td><td class="bibtexitem">Henry F Kaiser.
A note on Guttman's lower bound for the number of common factors. <em>British Journal of Statistical Psychology</em>, 14(1):1--2, 1961. </td></tr></table> </span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com0tag:blogger.com,1999:blog-6883247404678549489.post-76068721939620945812018-09-19T15:30:00.002-07:002018-09-19T15:30:41.836-07:00Dissecting the business cycle and the BBQ add-in<i>Authors and guest blog by Davaajargal Luvsannyam and Khuslen Batmunkh</i><br /><i><br /></i>Dating of the business cycle is crucial for policy makers and businesses. The business cycle is the upward and downward movement of production or business activity. The macroeconomic business cycle in particular, which reflects general economic prospects, plays an important role in policy and management decisions. For instance, when the economy is in a downturn, companies tend to act more conservatively. In contrast, when the economy is in an upturn, companies tend to act more aggressively in order to expand their market share. Keynesian business cycle theory suggests that the business cycle is an important indicator for monetary policy, which can help stabilize economic fluctuations. Therefore, accurate dating of the business cycle is fundamental to efficient and practical policy decisions.<br /><br /><a name='more'></a>In the academic literature, the dating of the business cycle has shifted from a graphical orientation towards quantitative measures extracted from parametric models. For instance, Burns and Mitchell (1946) explained the main concepts of the business cycle and introduced a graphical (classical) approach that aims to identify the peaks and troughs of the cycle, while Cooley and Prescott (1995) characterized the cycle using the moments of variables from parametric (detrended) models.<br /><br />Burns and Mitchell define the business cycle as a pattern seen in any series, <i>Y<span style="font-size: xx-small;">t </span></i>, taken to represent aggregate economic activity. In the process of defining a cycle, we usually work with the logarithm of the series <i>Y<span style="font-size: xx-small;">t . </span></i>Business cycles are identified as having four distinct phases: trough, expansion, peak, and contraction (Figure 1).<br /><br /><b>Figure 1. Business Cycle</b><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-94DmCoHIAyk/W6JiBuVIB6I/AAAAAAAAAik/MbSe4UfmCR0L9B-5y3TK0HVGFv3BU3YGwCLcBGAs/s1600/Figure%2B1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="281" data-original-width="397" src="https://1.bp.blogspot.com/-94DmCoHIAyk/W6JiBuVIB6I/AAAAAAAAAik/MbSe4UfmCR0L9B-5y3TK0HVGFv3BU3YGwCLcBGAs/s1600/Figure%2B1.png" /></a></div><div class="separator" style="clear: both; text-align: center;"></div><br />The main characteristics of a cycle are as follows. Peak (A) is the turning point at which the expansion transitions into the contraction phase. Trough (C) is the turning point at which the contraction transitions into the expansion phase. Duration (AB length) is the number of quarters between the peak and the trough. Amplitude (BC length) is the height of the difference between the peak and the trough.<br /><br /><b>Figure 2. 
<b>Figure 2. Illustration of the Contraction Phase</b><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-5Pr2gVYOMcE/W6JiOaDtGBI/AAAAAAAAAio/MpR2rWcN7fAqwIELDtqn-XKbrH0FmZlagCLcBGAs/s1600/Figure%2B2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="324" data-original-width="462" src="https://4.bp.blogspot.com/-5Pr2gVYOMcE/W6JiOaDtGBI/AAAAAAAAAio/MpR2rWcN7fAqwIELDtqn-XKbrH0FmZlagCLcBGAs/s1600/Figure%2B2.png" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">The EViews add-in “BBQ” implements the methodology outlined in Harding and Pagan (2002). Harding and Pagan (2002) chose three countries (the US, the UK and Australia) and established turning points for each country using the Bry-Boschan algorithm. The algorithm performs the following three steps.</div><div class="separator" style="clear: both; text-align: left;"></div><ol><li>Estimation of the possible turning points, i.e. the troughs and peaks in a series.</li><li>A procedure for ensuring that the troughs and peaks alternate.</li><li>A set of censoring rules that impose pre-determined criteria on the duration and amplitude of phases and complete cycles after steps 1 and 2.</li></ol><br />We will replicate the results in Table 1 of Harding and Pagan (2002). The example program file (bbq_ex1.prg) generates these results. First we open the workfile hpagan.wf1.<br /><br /><span style="font-family: Courier New, Courier, monospace;">wfopen hpagan.wf1</span><br /><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-yzGqyFRT-E4/W6LM2f-sVqI/AAAAAAAAAjE/I45lORKfAS8w-INm8YvnMFQvwE1gfws8QCLcBGAs/s1600/wf1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="459" data-original-width="508" height="360" src="https://4.bp.blogspot.com/-yzGqyFRT-E4/W6LM2f-sVqI/AAAAAAAAAjE/I45lORKfAS8w-INm8YvnMFQvwE1gfws8QCLcBGAs/s400/wf1.png" width="400" /></a></div><div><br /></div><div><br /></div><div><div>The workfile hpagan.wf1 contains quarterly real GDP for the three countries. The sample for the US runs from 1947q1 to 1997q1, for the UK from 1955q1 to 1997q1, and for Australia from 1959q1 to 1997q1.</div><div><br /></div><div>Next we take the logarithm of the series us, uk and aust.</div></div><div><br /></div><div><div><span style="font-family: Courier New, Courier, monospace;">series lus=log(us)</span></div><div><span style="font-family: Courier New, Courier, monospace;">series luk=log(uk)</span></div><div><span style="font-family: Courier New, Courier, monospace;">series laust=log(aust)</span></div><div><br /></div><div>Then we apply the BBQ add-in to each series.
We can do this either from the command line or through the menu-driven interface.</div><div><br /></div><div><span style="font-family: Courier New, Courier, monospace;">lus.bbq(turnphase=2, phase=2, cycle=5, thresh=10.4)</span></div><div><span style="font-family: Courier New, Courier, monospace;">luk.bbq(turnphase=2, phase=2, cycle=4, thresh=10.4)</span></div><div><span style="font-family: Courier New, Courier, monospace;">laust.bbq(turnphase=2, phase=2, cycle=5, thresh=10.4)</span></div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-1ZcScYTyjC8/W6LNnp1XiDI/AAAAAAAAAjM/H2KQEmTtvjY-70dhM3mHVfl5vuheKacvwCLcBGAs/s1600/output.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="579" data-original-width="1027" height="360" src="https://3.bp.blogspot.com/-1ZcScYTyjC8/W6LNnp1XiDI/AAAAAAAAAjM/H2KQEmTtvjY-70dhM3mHVfl5vuheKacvwCLcBGAs/s640/output.png" width="640" /></a></div><div><br /></div><div><br /></div><div>By definition, a peak occurs at time <i>t</i> if <i>Y<span style="font-size: xx-small;">t-k</span>,…,Y<span style="font-size: xx-small;">t-1</span></i> &lt; <i>Y<span style="font-size: xx-small;">t</span></i> &gt; <i>Y<span style="font-size: xx-small;">t+1</span>,…,Y<span style="font-size: xx-small;">t+k</span></i>, and a trough is defined symmetrically with the inequalities reversed. <i>k</i> must be set in advance: for example, <i>k</i>=2 for quarterly data, <i>k</i>=5 for monthly data and <i>k</i>=1 for yearly data. <i>k</i> is called the symmetric window parameter (turn phase).</div>
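<div><br /></div><div>As a rough illustration of this definition only (it is not the add-in's actual implementation), the lines below flag candidate peaks and troughs for quarterly data with <i>k</i>=2 by comparing the lus series created above with its own lags and leads. The series names peak_cand and trough_cand are arbitrary, and observations near the sample endpoints, where the required lags or leads are unavailable, are left as NA. This corresponds only to step 1 of the algorithm, before the alternation and censoring rules are applied.</div><div><br /></div><div><span style="font-family: Courier New, Courier, monospace;">' illustrative sketch: candidate turning points for quarterly data (k = 2)</span></div><div><span style="font-family: Courier New, Courier, monospace;">smpl @all</span></div><div><span style="font-family: Courier New, Courier, monospace;">series peak_cand = (lus&gt;lus(-1)) and (lus&gt;lus(-2)) and (lus&gt;lus(1)) and (lus&gt;lus(2))</span></div><div><span style="font-family: Courier New, Courier, monospace;">series trough_cand = (lus&lt;lus(-1)) and (lus&lt;lus(-2)) and (lus&lt;lus(1)) and (lus&lt;lus(2))</span></div>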
<div><div><br /></div><div>Other restrictions are often imposed on the phases. A minimum of two quarters for both expansions and contractions is often applied, in line with the rules used by the NBER when dating these phases; this is the minimum phase (phase). A complete cycle length (contraction plus expansion duration) of at least five quarters is also common for quarterly data; this is the minimum cycle (cycle). Finally, it may sometimes be desirable to overrule the minimum phase restriction: if the fall in a series is very large, one might allow the contraction to be quite short. The parameter controlling this is the threshold (thresh).</div><div><br /></div><div>The add-in also produces dummy variables for expansions and contractions (state, state1 and state2).</div><div>Alternatively, you can run the BBQ add-in through the menu-driven interface. To do so, first open the series (e.g. lus), then go to the Proc/Add-ins menu and choose <i>Bry-Broschan-Pagan-Harding BC dating</i>.</div></div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-e1l95Dt0bEs/W6LN_DL_aJI/AAAAAAAAAjU/l2I9n1K1gEY-C1x85k7G02y5Iu6i5eJtwCLcBGAs/s1600/dlg.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="260" data-original-width="224" src="https://1.bp.blogspot.com/-e1l95Dt0bEs/W6LN_DL_aJI/AAAAAAAAAjU/l2I9n1K1gEY-C1x85k7G02y5Iu6i5eJtwCLcBGAs/s1600/dlg.png" /></a></div><div><br /></div><div><br /></div><div><div><i><b>References:</b></i></div><div>Bry, G. and Boschan, C. (1971). "<i>Cyclical Analysis of Time Series: Selected Procedures and Computer Programs</i>", NBER, New York.</div><div>Burns, A. F. and Mitchell, W. C. (1946). "<i>Measuring Business Cycles (Vol. 2)</i>", New York, NY: National Bureau of Economic Research.</div><div>Cooley, T. F. and Prescott, E. C. (1995). "<i>Economic Growth and Business Cycles</i>", in Frontiers of Business Cycle Research, ed. Thomas F. Cooley, Princeton University Press, 1-38.</div><div>Harding, D. and Pagan, A. (2002). "<i>Dissecting the cycle: a methodological investigation</i>", Journal of Monetary Economics, Volume 49, Issue 2, 365-381.</div></div><div><br /></div><div><br /></div><div><br /></div>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com7