Simulation and Bootstrap Forecasting from Univariate GARCH Models<style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { border: 1px solid black; } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } .seccol { } .subseccol { color: #fa5e5e } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: ['{\\left(}'], rb: ['{\\right)}'], rbrace: ['{\\left(#1\\right)}', 1], cbrace: ['{\\left\\{#1\\right\\}}', 1], sbrace: ['{\\left[#1\\right]}', 1], bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1], series: ['{\\left\\{#1_{#2}\\right\\}_{#2=#3}^{#4}}', 4], xsum: ['{\\sum_{#1=#2}^{#3}{#4}}', 4], var: ['{\\operatorname\{var\}}'], sign: ['{\\operatorname\{sign\}}'], diag: ['{\\operatorname\{diag\}}'], med: ['{\\operatorname\{median\}}'], vec: ['{\\operatorname\{vec\}}'], tr: ['{\\operatorname\{tr\}}'], min: ['{\\operatorname\{min\}}'] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"> </script> <span style="font-family: 'Verdana', sans-serif"> <i>A guest post by Eren Ocakverdi</i><br /><br /> This blog piece introduces a new add-in (
<a href='http://www.eviews.com/Addins/simulugarch.aipz'>SIMULUGARCH</a>) that extends the current capability of EViews’ available features for the forecasting of univariate GARCH models. <a name='more'></a><br /><br /> <h3 class="seccol">Table of Contents</h3> <ol> <li><a href="#sec1">Introduction</a> <li><a href="#sec2">Forecasting with Simulation or Bootstrap</a> <li><a href="#sec3">Application to price of Bitcoin</a> <li><a href="#sec4">Files</a> <li><a href="#sec5">References</a> </ol><br /> <h3 class="seccol", id="sec1">Introduction</h3> Estimation of conditional volatility is not an easy task as it is an unobserved phenomenon and therefore certain assumptions need to be made for that purpose. Once the model parameters are identified, it is relatively straightforward to produce forecasts. However, unlike the regular mean models (e.g. OLS, ARIMA etc.), generating a confidence interval around the forecast of conditional volatility requires an additional effort.<br/><br/><br/><br/> <h3 class="seccol", id="sec2">Forecasting with Simulation or Bootstrap</h3> Suppose that we prefer a GARCH(1,1) model to explain the volatility dynamics of the logarithmic return of a financial asset: <br /><br /> \begin{align*} \Delta \log(P_t) &= r_t = \bar{r} + e_t\\ e_t &= \epsilon_t \sigma_t\\ \sigma_t^2 &= \omega + \alpha_1 e_{t - 1}^2 + \beta_1\sigma_{t - 1}^2 \end{align*} where $ \epsilon_t \sim IID(0,1) $. 
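The three equations above can be turned into a data-generating recursion directly. As a minimal pure-Python sketch (the parameter values, the constant mean, and the Gaussian choice for $ \epsilon_t $ are assumptions for illustration only, not estimates from the application below):

```python
import random

# Hypothetical GARCH(1,1) parameters (illustration only)
omega, alpha1, beta1 = 0.1, 0.1, 0.8
rbar = 0.05  # assumed constant mean return

def simulate_garch11(n, seed=0):
    """Simulate returns r_t = rbar + e_t, where e_t = eps_t * sigma_t and
    sigma_t^2 = omega + alpha1 * e_{t-1}^2 + beta1 * sigma_{t-1}^2."""
    rng = random.Random(seed)
    sigma2 = omega / (1 - alpha1 - beta1)  # start at the unconditional variance
    e = 0.0
    returns, variances = [], []
    for _ in range(n):
        sigma2 = omega + alpha1 * e ** 2 + beta1 * sigma2
        e = rng.gauss(0, 1) * sigma2 ** 0.5  # eps_t ~ N(0,1) here
        returns.append(rbar + e)
        variances.append(sigma2)
    return returns, variances

returns, variances = simulate_garch11(1000)
```

Note that with $ \alpha_1 + \beta_1 = 0.9 < 1 $ in this sketch, the simulated conditional variance fluctuates around the unconditional value $ \omega/(1 - \alpha_1 - \beta_1) = 1 $.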
As shown by Enders (2014), the h-step-ahead forecast of the conditional variance is obtained as follows:<br /><br /> \begin{align*} \sigma_{t + h}^2 &= \omega + \alpha_1 e_{t + h - 1}^2 + \beta_1\sigma_{t + h - 1}^2\\ E(\sigma_{t+h}^2) &= \omega + \alpha_1 E(e_{t + h - 1}^2) + \beta_1 E(\sigma_{t + h - 1}^2)\\ E(e_{t+h}^2) &= E(\epsilon_{t + h}^2\sigma_{t + h}^2) = E(\sigma_{t + h}^2)\\ E(\sigma_{t + h}^2) &= \omega + (\alpha_1 + \beta_1)E(\sigma_{t + h - 1}^2) \end{align*} If $ (\alpha_1 + \beta_1) < 1 $, forecasts of the conditional variance converge to the long-run value $ E(\sigma_t^2) = \omega/(1 - \alpha_1 - \beta_1) $.<br /><br /> The median of the conditional variance is a useful gauge of central tendency, since the variance is a squared quantity and therefore has a distribution skewed towards larger values. In order to compute the median value along with an associated confidence interval, we need different realizations of the forecasted conditional variance. One can either simulate or bootstrap the values of the innovations (i.e. $ \epsilon_t $) to do so. Simulation generates random samples of innovations from the theoretical distribution assumed in estimating the model.
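The simulation approach can be sketched concretely: propagate the GARCH recursion over the forecast horizon many times, each time drawing fresh innovations, and then read off the median and percentile bands of the simulated conditional variances. A minimal pure-Python illustration follows (the fitted parameters and end-of-sample state are hypothetical placeholders, not output from the add-in):

```python
import random
import statistics

# Hypothetical fitted parameters and end-of-sample state (illustration only)
omega, alpha1, beta1 = 0.1, 0.1, 0.8
last_e, last_sigma2 = 0.5, 1.2

def simulate_variance_path(horizon, rng):
    """One simulated path of the conditional variance over the horizon."""
    e, sigma2 = last_e, last_sigma2
    path = []
    for _ in range(horizon):
        sigma2 = omega + alpha1 * e ** 2 + beta1 * sigma2
        e = rng.gauss(0, 1) * sigma2 ** 0.5  # draw from the assumed N(0,1)
        path.append(sigma2)
    return path

rng = random.Random(12345)
horizon, reps = 22, 10000
terminal = sorted(simulate_variance_path(horizon, rng)[-1] for _ in range(reps))
median = statistics.median(terminal)
lo, hi = terminal[int(0.05 * reps)], terminal[int(0.95 * reps)]  # 90% band
```

Replacing the Gaussian draw with resampling (with replacement) of the model's standardized residuals would yield the bootstrap variant discussed next.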
Bootstrap, on the other hand, resamples the innovations (with replacement) and therefore mimics the sampling process well, as long as the observed sample distribution resembles that of the population.<br/><br/><br/><br/> <h3 class="seccol", id="sec3">Application to price of Bitcoin</h3> Bitcoin has emerged as the newest and best-known kid on the block (of investment products) and its value has been quite volatile so far (<b>XBTUSD.WF1</b>).<br /><br /> A simple visual inspection of the price level and log returns reveals explosive dynamics and large fluctuations during the analysis period of 2011-2021 (<b>SIMULUGARCH_EXAMPLE.PRG</b>).<br /><br /> <!-- :::::::::: FIGURES 1a and 1b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 1a :::::::::: --> <center> <a href="http://www.eviews.com/blog/simulugarch/images/xbtusd.png"><img height="auto" src="http://www.eviews.com/blog/simulugarch/images/xbtusd.png" title="XBTUSD" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 1b :::::::::: --> <center> <a href="http://www.eviews.com/blog/simulugarch/images/dlogxbtusd.png"><img height="auto" src="http://www.eviews.com/blog/simulugarch/images/dlogxbtusd.png" title="Log Difference of XBTUSD" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1a: XBTUSD</small> </center> </td> <td class="nb"> <center> <small>Figure 1b: Log Difference of XBTUSD</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 1a and 1b :::::::::: --> In order to estimate the conditional variance of returns, a simple GARCH(1,1) model is fitted to the log returns of Bitcoin.<br /><br /> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/simulugarch/images/est.png"><img height="auto" src="http://www.eviews.com/blog/simulugarch/images/est.png" title="GARCH(1,1)" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center>
<small>Figure 2: GARCH(1,1)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 2 :::::::::: --> The level series shows severe price fluctuations during 2021, whereas the estimated conditional variance of the return series suggests that the largest spikes occurred during 2013.<br /><br /> <!-- :::::::::: FIGURE 3 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/simulugarch/images/condvar.png"><img height="auto" src="http://www.eviews.com/blog/simulugarch/images/condvar.png" title="CONDVAR" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3: CONDVAR</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 3 :::::::::: --> Before forecasting the price level, one needs to generate future values of the estimated conditional variance, either by simulation or bootstrap. This is where the add-in comes in handy:<br /><br /> <!-- :::::::::: FIGURE 4 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/simulugarch/images/dialog.png"><img height="auto" src="http://www.eviews.com/blog/simulugarch/images/dialog.png" title="SIMULUGARCH Dialog" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 4: SIMULUGARCH Dialog</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 4 :::::::::: --> Details of the input parameters are explained in the help document that comes with the add-in package. Here, we change the default number of repetitions and the forecast horizon to 10K and 22 steps, respectively. A fan chart is also chosen to summarize the output.<br /><br /> The median scenario for volatility is a gradual increase over the coming month (i.e. 22 business days). This should be expected, as the long-run value (i.e. the unconditional variance) is calculated to be around 156.
However, keep in mind that the median is smaller than the mean in right-skewed distributions.<br /><br /> <!-- :::::::::: FIGURES 5a and 5b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 5a :::::::::: --> <center> <a href="http://www.eviews.com/blog/simulugarch/images/forecast_dep.png"><img height="auto" src="http://www.eviews.com/blog/simulugarch/images/forecast_dep.png" title="Forecast of Dependent Variable" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 5b :::::::::: --> <center> <a href="http://www.eviews.com/blog/simulugarch/images/forecast_condvar.png"><img height="auto" src="http://www.eviews.com/blog/simulugarch/images/forecast_condvar.png" title="Forecast of Conditional Variance" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5a: Forecast of Dependent Variable</small> </center> </td> <td class="nb"> <center> <small>Figure 5b: Forecast of Conditional Variance</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 5a and 5b :::::::::: --> The role of volatility in forecast uncertainty becomes visible as we simulate future values of the price level, which has important financial implications (e.g. for the computation of Value-at-Risk). Even by the end of the next month, for instance, the USD price of Bitcoin might climb as high as 70K or drop as low as 35K!<br /><br /><br /><br /> <hr /> <h3 class="seccol", id="sec4">Files</h3> <ul> <li><a href="http://www.eviews.com/blog/simulugarch/workfiles/xbtusd.wf1"><b class="wf">XBTUSD.WF1</b></a></li> <li><a href="http://www.eviews.com/blog/simulugarch/workfiles/simulugarch_example.prg"><b class="wf">SIMULUGARCH_EXAMPLE.PRG</b></a></li> </ul><br /><br /> <hr /> <h3 class="seccol", id="sec5">References</h3> <ol class="bib2xhtml"> <li id="enders-2004"> Enders, W. (2014), <i>Applied Econometric Time Series, Fourth Edition</i>, John Wiley & Sons.
</li> </ol></span>SpecEval Add-In - Part 2<style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { border: 1px solid black; } .classic_table { border: 1px solid black; border-collapse: collapse; border-spacing: 0px; } .classic_table tr { border-bottom: 1px solid black; border-top: 1px solid black; } .classic_table tr:first-child { border-top: none; } .classic_table tr:last-child { border-bottom: none; } .classic_table td { border-left: 1px solid black; border-right: 1px solid black; padding-right: 10px; padding-left: 10px; } .classic_table td:first-child { border-left: none; } .classic_table td:last-child { border-right: none; } .break_row { border-bottom: 3px solid #fa5e5e !important } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } .seccol { } .subseccol { color: #fa5e5e } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: ['{\\left(}'], rb: ['{\\right)}'], rbrace: ['{\\left(#1\\right)}', 1], cbrace: ['{\\left\\{#1\\right\\}}', 1], sbrace: ['{\\left[#1\\right]}', 1], bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1], series: ['{\\left\\{#1_{#2}\\right\\}_{#2=#3}^{#4}}', 4], xsum: ['{\\sum_{#1=#2}^{#3}{#4}}', 4], var: ['{\\operatorname\{var\}}'], sign: ['{\\operatorname\{sign\}}'], diag: ['{\\operatorname\{diag\}}'], med: ['{\\operatorname\{median\}}'], vec: ['{\\operatorname\{vec\}}'], tr: ['{\\operatorname\{tr\}}'], min: ['{\\operatorname\{min\}}']
} } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"> </script> <span style="font-family: 'Verdana', sans-serif"> <i>A guest post by Kamil Kovar</i><br /><br /> This is the second in a series of blog posts (the first can be found <a href="http://www.eviews.com/blog/speceval/images/overview_est.png">here</a>) that present a new EViews add-in, SpecEval, aimed at facilitating the development of time series models used for forecasting. This blog post focuses on illustrating the basic outputs of the add-in through a simple application, which will also illustrate the model development process that the add-in aims to facilitate. The next section provides a brief discussion of this process, while the following section discusses the data and models considered. The main content of this blog post is contained in the next two sections, which discuss basic execution before presenting the actual application. <a name='more'></a><br /><br /> <h3 class="seccol">Table of Contents</h3> <ol> <li><a href="#sec1">Model Development Process</a> <li><a href="#sec2">Data and Models</a> <li><a href="#sec3">Execution</a> <li><a href="#sec4">Model Forecasting Performance</a> <li><a href="#sec5">Model Sensitivity</a> <li><a href="#sec6">Concluding Remarks</a> <li><a href="#sec7">Footnotes</a> </ol><br /> <h3 class="seccol", id="sec1">Model Development Process</h3> The SpecEval add-in was created with a particular model development process in mind. Specifically, the add-in is based on the belief that the model development process should be both iterative and – more importantly – interactive. It should be iterative in that it proceeds in steps, each improving on the earlier version of the model, be it in the form of additional regressors or the modification of regressors already included.
It should be interactive in that the improvements should be based on information about the shortcomings of the earlier model. Importantly, this means that the development process should be carried out by a human developer, rather than relying on a computer algorithm, since it requires a modicum of imagination.<br /><br /> The workflow of the model development process is shown in the figure below. The process starts with an initial proposed model, which is then evaluated using the outputs of the add-in. These outputs contain multiple relevant pieces of information, from basic model properties contained in the estimation output, such as regression coefficients, to forecast performance and finally sensitivity properties. Each of these can be used to identify shortcomings of the current model and to propose modifications that address them, in an interactive model development process on the part of the model developer.<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/speceval_part2/images/model_development_process.png"><img height="auto" src="http://www.eviews.com/blog/speceval_part2/images/model_development_process.png" title="Model Development Process" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1: Model Development Process</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 1 :::::::::: --> Since in most situations the information can be ordered in terms of importance – e.g. "correct" coefficient signs are necessary, while a desired degree of sensitivity often is not – one can view the process as linear, proceeding from basic properties through forecasting performance to sensitivity.
We will roughly follow this model development process in the remainder of this blog post.<br /><br /><br /><br /> <h3 class="seccol", id="sec2">Data and Models</h3> The add-in will be illustrated by modelling a relatively simple time series – industrial production in Czechia.<sup><a href="#fn1" id="ref1">1</a></sup> The quarterly series is displayed in the figure below. It is clear that the series is trending, but that it does not follow a deterministic trend. Correspondingly, in what follows we will use the log-difference of industrial production as the dependent variable.<br /><br /> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/speceval_part2/images/czechia_ip.png"><img height="auto" src="http://www.eviews.com/blog/speceval_part2/images/czechia_ip.png" title="Czechia Industrial Production" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2: Czechia Industrial Production</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 2 :::::::::: --> What model should we use for forecasting industrial production? The answer depends on the environment in which one is forecasting the given series. The models can vary from simple univariate reduced-form ARIMA models, through their multivariate multi-equation cousins, VAR models, to structural single- or multiple-equation models. Here we will illustrate the SpecEval add-in on multivariate single-equation models, for which the add-in is most suitable. This choice corresponds to an environment where one has forecasts available for multiple potential right-hand-side variables, such as GDP, and wants to “expand” these forecasts to industrial production, i.e. produce forecasts of industrial production that are consistent with the forecasts of other macroeconomic variables.
This is a fairly common task, especially in the context of macroeconomic stress testing.<br/><br/> Within this class of models, our starting point is a simple regression linking the log-difference of industrial production to the log-difference of GDP: $$ \text{dlog}(IP_{t}) = \beta_0 + \beta_1 \text{dlog}(GDP_t) $$ This equation simply postulates that the current growth rate of industrial production can be well predicted by the current growth rate of GDP – a reasonable postulate, given that both are measures of economic activity. Later we will enrich this model with additional variables/regressors based on the analysis of this model. Before considering additional multivariate models, though, we will use a simple ARIMA(0,1,2) model as our benchmark. The first equation is called <b>EQ_GDP</b> while the second is called <b>EQ_ARIMA</b>.<br/><br/><br/><br/> <h3 class="seccol", id="sec3">Execution</h3> SpecEval allows the modeler to produce a report either by executing it through the GUI or by issuing the relevant command from a given equation object, the approach we take here: <pre><code><br /> eq_gdp.speceval(noprompt)<br /> </code></pre> This command produces and displays a spool with several output objects that can be used to evaluate the given equation (see the left panel of the figure below). However, it is more interesting to consider the given equation in the context of the benchmark ARIMA equation and hence execute SpecEval for both equations, which can be done by simply adding another equation to the list of specifications: <pre><code><br /> eq_gdp.speceval(spec_list=eq_arima)<br /> </code></pre> Here we have specified that the list of specifications for which the add-in is executed should also include the ‘eq_arima’ equation. As a result, the add-in produces and displays a spool organized by the type of output, so that the same outputs for different specifications are next to each other, facilitating quick comparison.
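For readers working outside EViews, the baseline specification is just a bivariate OLS regression on log-differences. The following pure-Python sketch fits it on synthetic data (the series, the coefficient values, and the <code>dlog</code> helper are illustrative constructions, not the Czech data or EViews output):

```python
import math
import random

def dlog(series):
    """Log-difference transformation: dlog(x_t) = log(x_t) - log(x_{t-1})."""
    return [math.log(b) - math.log(a) for a, b in zip(series, series[1:])]

def ols_simple(y, x):
    """Closed-form OLS estimates for y = b0 + b1 * x."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    b0 = my - b1 * mx
    return b0, b1

# Synthetic levels: IP growth tracks GDP growth with coefficient ~1.5
rng = random.Random(7)
gdp, ip = [100.0], [100.0]
for _ in range(200):
    g = rng.gauss(0.005, 0.01)
    gdp.append(gdp[-1] * math.exp(g))
    ip.append(ip[-1] * math.exp(0.001 + 1.5 * g + rng.gauss(0, 0.002)))

b0, b1 = ols_simple(dlog(ip), dlog(gdp))
```

On this synthetic sample the slope estimate recovers the assumed sensitivity of industrial production growth to GDP growth.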
See the right panel of the figure below.<br/><br/> <!-- :::::::::: FIGURE 3 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/speceval_part2/images/output_spools.png"><img height="auto" src="http://www.eviews.com/blog/speceval_part2/images/output_spools.png" title="Output Spools" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3: Output Spools</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 3 :::::::::: --> <h3 class="seccol", id="sec4">Model Forecasting Performance</h3> The starting point for analyzing any forecasting model is of course its estimation output, and so SpecEval includes it among its outputs. Rather than using the standard estimation output reported by EViews, SpecEval reports an estimation output that is enhanced in several ways, such as color coding and formatting of numbers, as well as information about the included variables:<br/><br/> <!-- :::::::::: FIGURE 4 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/speceval_part2/images/czechia_ip_estimation.png"><img height="auto" src="http://www.eviews.com/blog/speceval_part2/images/czechia_ip_estimation.png" title="Czechia Industrial Production - Estimation" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 4: Czechia Industrial Production - Estimation</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 4 :::::::::: --> The estimation output provides some basic information about the model. However, it provides limited information about forecasting performance. True, statistics like the R-squared, the standard deviation of the residuals, or the Durbin-Watson statistic can be re-interpreted as indicators of forecasting performance, but only as very limited ones. Addressing this shortcoming is one of the key motivations for SpecEval, and hence the report includes explicit information about forecasting performance.
First, there is a table of forecast precision metrics, such as the Root Mean Square Percentage Error (RMSPE), color-coded according to their rank. For our application this table shows that the proposed model is worse in terms of forecasting performance than the benchmark ARIMA model at longer forecasting horizons, a dispiriting conclusion given that our model includes additional information. <br/><br/> <!-- :::::::::: FIGURE 5 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/speceval_part2/images/czechia_ip_rmspe.png"><img height="auto" src="http://www.eviews.com/blog/speceval_part2/images/czechia_ip_rmspe.png" title="Czechia Industrial Production - RMSPE" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5: Czechia Industrial Production - RMSPE</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 5 :::::::::: --> Before despairing and concluding that GDP is not useful for forecasting industrial production, it is useful to look at forecasting performance in more detail than what is incorporated in the summary statistics. Specifically, we can leverage the second output focused on forecasting performance, the forecast summary graphs. The motivation for these is simple: precision metrics are summary statistics over the whole backtesting sample, and hence it is possible that they mask important heterogeneity across the sample, something that forecast summary graphs will immediately reveal.
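For reference, the RMSPE metric used in the precision table above is computed, for each forecast horizon, from the percentage errors across all backtest forecasts. A minimal sketch of the metric itself (the numbers are made up for illustration):

```python
import math

def rmspe(actuals, forecasts):
    """Root Mean Square Percentage Error, expressed in percent."""
    errors = [((f - a) / a) ** 2 for a, f in zip(actuals, forecasts)]
    return 100 * math.sqrt(sum(errors) / len(errors))

# Illustrative backtest: actual values vs. forecasts at a given horizon
actuals = [100.0, 102.0, 105.0, 103.0]
forecasts = [101.0, 103.0, 103.0, 104.0]
metric = rmspe(actuals, forecasts)
```

In a full backtest this calculation is repeated horizon by horizon, which is what fills one row per horizon in the table.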
This is indeed the case in our application, since the bad forecasting performance of our model is concentrated in the early periods – after 2000 the forecasting performance looks much better than that of the benchmark model.<br/><br/> <!-- :::::::::: FIGURE 6 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/speceval_part2/images/czechia_ip_forecast_summary.png"><img height="auto" src="http://www.eviews.com/blog/speceval_part2/images/czechia_ip_forecast_summary.png" title="Czechia Industrial Production - Forecast Summary" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 6: Czechia Industrial Production - Forecast Summary</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 6 :::::::::: --> SpecEval provides the flexibility to explore this issue in further detail. For example, the forecasting performance at the beginning of the sample is so bad that one would likely suspect issues with the estimated coefficients. To check this, we can include coefficient stability graphs among the outputs: <pre><code><br /> eq_gdp.speceval(spec_list=eq_arima, exec_list="normal stability")<br /> </code></pre> Here we just specified that the execution list should also include the stability outputs, in addition to the normal outputs. The resulting graph displayed below shows the full time series of recursive regression coefficients, together with their standard errors.
What is crucial from our perspective is that the graph indeed confirms our suspicions: the coefficient on GDP in the early part of the sample is negative, which is at odds with our expectations and likely reflects the very small number of observations used for estimation at the beginning of the backtesting sample.<br/><br/> <!-- :::::::::: FIGURE 7 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/speceval_part2/images/czechia_ip_coef_stability.png"><img height="auto" src="http://www.eviews.com/blog/speceval_part2/images/czechia_ip_coef_stability.png" title="Czechia Industrial Production - Coefficient Stability" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 7: Czechia Industrial Production - Coefficient Stability</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 7 :::::::::: --> Another way to explore this issue is to switch from out-of-sample to in-sample forecasting. In other words, we can use the actual equation estimated on the full available sample to make the individual backtest forecasts. Alternatively, and more simply, we can stick with out-of-sample forecasts but limit the evaluation sample to start in 2000q1. The two execution commands corresponding to these options are the following: <pre><code><br /> eq_gdp.speceval(spec_list=eq_arima, oos="f")<br /> eq_gdp.speceval(spec_list=eq_arima, tfirst_test="2000q1") <br /> </code></pre> Either of these approaches shows that the initial superiority of the ARIMA model was a consequence of bad forecasts based on a short estimation sample, as evidenced by the tables below. Crucially, these early forecasts do not approximate what the forecast would have been at that point in time: any economist operating the model would likely discard forecasts from a model with a negative coefficient on GDP.
However, without knowledge of this artifact of the results – such as when we rely on precision metrics alone, as is customary – we would potentially discard the model altogether. This shows both the value added by SpecEval and its flexibility, and the value of incorporating graphical information about forecasting performance. The document ‘SpecEval illustrated’ provides many additional examples of this flexibility and how it can be leveraged in developing forecasting models.<br/><br/> <!-- :::::::::: FIGURE 8 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/speceval_part2/images/czechia_ip_is_rmspe.png"><img height="auto" src="http://www.eviews.com/blog/speceval_part2/images/czechia_ip_is_rmspe.png" title="Czechia Industrial Production - In-Sample RMSPE" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 8: Czechia Industrial Production - In-Sample RMSPE</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 8 :::::::::: --> <br/><br/><br/><br/> <h3 class="seccol", id="sec5">Model Sensitivity</h3> The second main focus of SpecEval outputs – in addition to forecasting performance – is the evaluation of model sensitivity, that is, how the proposed model responds to outside shocks. Three types of outputs belong to this category. First, SpecEval allows the user to specify a set of historical sub-samples for which forecast performance can be analyzed separately, be it in terms of forecast precision metrics or in terms of forecast graphs, on which we will focus here. The above figures captured the forecasting performance over the whole sample, but sometimes performance during a particular historical period is of special interest given its unusual nature relative to the rest of the backtest sample. Examples from credit risk modelling are recessionary periods or periods of financial stress.
To analyze such periods in the context of our example, we simply need to specify the sub-samples of interest: <pre><code><br /> eq_gdp.speceval(subsamples="2008q3-2009q4, 2011q3-2013q2", oos="f")<br /> </code></pre> The top panels of the figure below show the resulting graphs, which capture the forecasts from our model over the Great Recession and the European Sovereign Debt Crisis. The conclusion is not very positive, since the model fails to predict the magnitude of the decline in industrial production, especially during the Great Recession.<br/><br/> <!-- :::::::::: FIGURE 9 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/speceval_part2/images/czechia_ip_subsample_forecast.png"><img height="auto" src="http://www.eviews.com/blog/speceval_part2/images/czechia_ip_subsample_forecast.png" title="Czechia Industrial Production - Subsample Forecast" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 9: Czechia Industrial Production - Subsample Forecast</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 9 :::::::::: --> One potential solution is to allow the relationship between GDP and industrial production to differ between normal and recessionary periods by adding an interaction with a dummy variable indicating recessionary periods: $$ \text{dlog}(IP_{t}) = \beta_0 + \beta_1 \text{dlog}(GDP_t) + \beta_2 \text{dlog}(GDP_t) D_t^{recession} $$ <!-- :::::::::: FIGURE 10 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/speceval_part2/images/czechia_ip_estimation_recession.png"><img height="auto" src="http://www.eviews.com/blog/speceval_part2/images/czechia_ip_estimation_recession.png" title="Czechia Industrial Production - Estimation (Recession)" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 10: Czechia Industrial Production - Estimation (Recession)</small> </center> </td> </tr> </table>
<br /> </center> <!-- :::::::::: FIGURE 10 :::::::::: --> The forecasts from the resulting model, captured in the bottom panels of the above figure, show significant improvement over the original model in terms of forecasting during recessionary periods in the context of in-sample forecasting.<br/><br/> The second category of outputs focused on model sensitivity displays conditional scenario forecasts made using the given model specification. This entails making forecasts for the dependent variable under alternative scenario paths for the independent variables. While this is especially useful when such scenario forecasting is itself of interest, it is useful more generally in model development as a source of alternative information about the model and its behavior, something we illustrate here. To obtain conditional scenario forecasts using SpecEval we just need to specify a list of scenarios as one of the arguments, as in the following command: <pre><code><br /> eq_gdp_dummy.speceval(scenarios="bl sd", exec_list="normal scenarios_individual", tfirst_sgraph="2006q1", graph_add_scenarios="gdp[r]", trans="deviation")<br /> </code></pre> Here, apart from the list of scenarios, we have specified several other options: we have indicated that we want individual scenario graphs as the output (rather than graphs showing all scenarios together); that the scenario graphs should start in 2006q1; that they should also include GDP (as opposed to only industrial production); and that the transformation charts should be in terms of deviations from baseline. The top panels of the figure below show the graph capturing the level of the forecast and the graph capturing the deviation from baseline, respectively. These leave us with mixed feelings about the model.
On the positive side, the decline in industrial production seems appropriate given the decline in GDP - as was historically the case, industrial production falls significantly more than GDP, reflecting the fact that the combined coefficient is above 2. On the negative side, industrial production remains significantly below GDP even in the long run, which seems counterintuitive - one would expect both the drop and the rebound in industrial production to be larger, so that the permanent effect on industrial production is only slightly larger than that on GDP.<br/><br/> <!-- :::::::::: FIGURE 11 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/speceval_part2/images/czechia_ip_forecast_scenario.png"><img height="auto" src="http://www.eviews.com/blog/speceval_part2/images/czechia_ip_forecast_scenario.png" title="Czechia Industrial Production - Forecast Scenario" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 11: Czechia Industrial Production - Forecast Scenario</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 11 :::::::::: --> The reason the model fails to make such a forecast is that it makes industrial production more sensitive to movements in GDP only during recessions, not during recoveries. One simple way to address this is to replace the recession dummy with a dummy that captures both recessions and recoveries.
Here, we simply use a new dummy that is also equal to 1 in the 4 quarters after the end of each recession: $$ \text{dlog}(IP_{t}) = \beta_0 + \beta_1 \text{dlog}(GDP_t) + \beta_2 \text{dlog}(GDP_t) (\text{@movav}(D_t^{recession}, 4) > 0) $$ The resulting scenario forecasts, shown in the bottom panels of the figure above, indicate that the modification addressed our initial concerns: industrial production still falls more than GDP, but then also rebounds more strongly, so that in the long run the shortfall in industrial production is only slightly larger than that of GDP.<br/><br/> The inclusion of the recession dummy was motivated by shortcomings of the model in historical forecasts during recessionary periods, while its replacement by the recession-and-recovery dummy was motivated by shortcomings in scenario forecasts. However, it turns out that both modifications also help a lot with overall forecasting performance, as evidenced in the table below. In this sense, analysis of model sensitivity - and especially of model behavior in conditional scenarios - is complementary to analysis of overall forecasting performance, and hence useful for model development purposes even if model sensitivity and scenario forecasting are not themselves of importance.<br/><br/> <!-- :::::::::: FIGURE 12 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/speceval_part2/images/czechia_ip_scenario_rmspe.png"><img height="auto" src="http://www.eviews.com/blog/speceval_part2/images/czechia_ip_scenario_rmspe.png" title="Czechia Industrial Production - In-Sample RMSPE (Scenario)" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 12: Czechia Industrial Production - In-Sample RMSPE (Scenario)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 12 :::::::::: --> The final category of model sensitivity outputs is composed of shock response graphs.
The concept should be familiar from the VAR literature: one studies how the dependent variable responds to shocks to individual independent variables.<sup><a href="#fn2" id="ref2">2</a></sup> SpecEval implements this procedure for single-equation multivariate time series models; one simply needs to include shocks in the execution list: <pre><code><br /> eq_gdp_dummy2.speceval(exec_list="normal shocks", shock_type="transitory")<br /> </code></pre> As a result, the report will now include two types of figures corresponding to two types of shocks, depending on whether the underlying independent variable or the actual regressor is being shocked. In either case the corresponding figure shows 4 graphs: (1) a graph with two paths for the dependent variable, one without the shock and one with it; (2) a graph with the deviation/difference between the two paths; (3 & 4) analogous graphs for the shocked variable/regressor. Below is an example for a modified version of our model with the dummy variable, which now also includes a lagged dependent variable and a lag of the GDP regressor. This means that the model now belongs to the Autoregressive Distributed Lag (ARDL) family, making its shock responses dynamic and hence hard to gauge from the estimation output alone. For such models, visualizing the exact shock responses can be very valuable.
For example, in the current context a transitory decrease in GDP (see bottom panels) leads to an initial drop in industrial production, which is then reversed so strongly that industrial production rises above the no-shock path for several quarters (see top panels).<br/><br/> <!-- :::::::::: FIGURE 13 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/speceval_part2/images/czechia_ip_ir.png"><img height="auto" src="http://www.eviews.com/blog/speceval_part2/images/czechia_ip_ir.png" title="Czechia Industrial Production - Shock-Response" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 13: Czechia Industrial Production - Shock-Response</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 13 :::::::::: --> This shock response might be unappealing from a scenario perspective, because it can easily lead to downside scenarios - characterized by recession and recovery in GDP - featuring industrial production that temporarily rises above baseline. In this way, studying shock responses can be an important tool when models will be used in scenario forecasting. However, the value is not limited to this use case: the above shock response would probably alert the modeler that a different model structure - for example, replacing the lagged dependent variable with an autoregressive error - might be preferable from a forecasting perspective.
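This kind of reversal is easy to reproduce outside EViews. Below is a toy Python simulation of an ARDL(1,1) shock response; all coefficients are invented purely for illustration and are not the estimates from the model above:

```python
import numpy as np

# Toy ARDL(1,1): y_t = rho*y_{t-1} + b0*x_t + b1*x_{t-1}
# Coefficients are made up; a negative b1 combined with rho > 0 is what
# generates the "drop then overshoot" pattern discussed in the text.
rho, b0, b1 = 0.5, 2.0, -1.2

def simulate(x):
    """Dynamically simulate y along a given path of the regressor x."""
    y = np.zeros(len(x))
    for t in range(1, len(x)):
        y[t] = rho * y[t - 1] + b0 * x[t] + b1 * x[t - 1]
    return y

T = 12
x_base = np.zeros(T)      # no-shock baseline path
x_shock = x_base.copy()
x_shock[1] = -1.0         # transitory (one-period) drop in the regressor

# Shock response = shocked simulation minus baseline simulation
response = simulate(x_shock) - simulate(x_base)
print(np.round(response, 4))
```

With these made-up numbers the response is -2 on impact, flips to +0.2 the following period (0.5·(-2) + 1.2), and then decays geometrically - the same qualitative overshoot above the no-shock path as described above.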
Indeed, while the ARDL model has worse forecasting performance than the model without any lagged components, the model that includes only an AR(1) error – and hence does not feature the shock response reversals – has significantly better forecasting performance, as shown in the table below.<br/><br/> <!-- :::::::::: FIGURE 14 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/speceval_part2/images/czechia_ip_compare_rmspe.png"><img height="auto" src="http://www.eviews.com/blog/speceval_part2/images/czechia_ip_compare_rmspe.png" title="Czechia Industrial Production - RMSPE Comparison" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 14: Czechia Industrial Production - RMSPE Comparison</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 14 :::::::::: --> <br/><br/><br/><br/> <h3 class="seccol", id="sec6">Concluding Remarks</h3> This part of the blog post series dedicated to SpecEval focused on showcasing how SpecEval can be operated, what the basic outputs are, and how they can be leveraged in the model development process. However, for the sake of brevity the possibilities highlighted here were far from exhaustive – the reader should consult the ‘SpecEval illustrated’ document for a more detailed discussion. That said, the next blog post in this series will focus on one particular functionality of SpecEval – the use and value of transformations in the model development process.<br/><br/><br/><br/> <hr /> <h3 id="sec7">Footnotes</h3> <sup id="fn1">1. The data, together with a program that will replicate the outputs reported here, can be found on my personal <a href="https://drive.google.com/open?id=1gNdUVrCOVY2xCfsO1nBhxonTxa0bQRyT">website</a>.<a href="#ref1" title="Jump back to footnote 1 in the text.">↩</a></sup><br/> <sup id="fn2">2. This kind of analysis is readily available in EViews (or other statistical packages) for VAR models.
However, this type of analysis is puzzlingly uncommon in the case of single-equation multivariate time series models, and correspondingly is not supported by EViews or other statistical packages – a gap SpecEval tries to fill. Note that for univariate ARIMA models EViews – unlike most other statistical packages – does support this kind of analysis.<a href="#ref2" title="Jump back to footnote 2 in the text.">↩</a></sup><br/> <sup id="fn3">3. Note that these two features – in-sample forecasting and inclusion of multiple equations in the forecasting model – are possible thanks to in-built EViews functionality and hard to replicate in other statistical programs. The former is thanks to the separation between estimation and forecasting samples, the latter thanks to flexible model objects.<a href="#ref3" title="Jump back to footnote 3 in the text.">↩</a></sup></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com0tag:blogger.com,1999:blog-6883247404678549489.post-79586036652011853442021-05-13T08:29:00.000-07:002021-05-13T08:29:38.682-07:00Box-Cox Transformation and the Estimation of Lambda Parameter<style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { border: 0px solid black; } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } .seccol { } .subseccol { color: #fa5e5e } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: ['{\\left(}'], rb: ['{\\right)}'], rbrace: ['{\\left(#1\\right)}', 1], cbrace: ['{\\left\\{#1\\right\\}}', 1], sbrace: ['{\\left[#1\\right]}', 1], bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1], series:
['{\\left\\{#1_{#2}\\right\\}_{#2=#3}^{#4}}', 4], xsum: ['{\\sum_{#1=#2}^{#3}{#4}}', 4], var: ['{\\operatorname\{var\}}'], sign: ['{\\operatorname\{sign\}}'], diag: ['{\\operatorname\{diag\}}'], med: ['{\\operatorname\{median\}}'], vec: ['{\\operatorname\{vec\}}'], tr: ['{\\operatorname\{tr\}}'], min: ['{\\operatorname\{min\}}'] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"> </script> <span style="font-family: "verdana" sans-serif"> <i>A guest post by Eren Ocakverdi</i><br /><br /> This blog piece introduces a new add-in (<a href='http://www.eviews.com/Addins/boxcox.aipz'><b>BOXCOX</b></a>) that applies power transformations to a series of interest and provides alternative methods for estimating the optimal lambda parameter used in the transformation. <a name='more'></a><br /><br /> <h3 class="seccol">Table of Contents</h3> <ol> <li><a href="#sec1">Introduction</a> <li><a href="#sec2">Box-Cox family of transformations</a> <li><a href="#sec3">Application to Turkey’s tourism data</a> <li><a href="#sec4">Files</a> <li><a href="#sec5">References</a> </ol><br /> <h3 class="seccol", id="sec1">Introduction</h3> A stationary time series requires a stable mean and variance, which can then be modelled through ARMA-type models. If a series does not have a finite variance, it violates this condition and will lead to ill-defined models. Common practice in dealing with time-varying volatility is to model the variance explicitly through GARCH-type models.
However, when the variance of a given series changes with its level, there is a practical alternative: transforming the original series so as to scale down (up) the large (small) values.<br/><br/><br/><br/> <h3 class="seccol", id="sec2">Box-Cox family of transformations</h3> Box and Cox (1964) proposed a family of power transformations, which later became a popular tool in time series analysis for dealing with skewness in the data:<br /><br /> $$ \tilde{y}_t = \begin{cases} \frac{y_t^\lambda - 1}{\lambda} & \text{if } \lambda \neq 0\\ \log(y_t) & \text{if } \lambda = 0 \end{cases} $$ Transformation of a series is straightforward once the value of $\lambda$ is known. One way to determine the value of $\lambda$ is to maximize the (regular or profile) log likelihood of a linear regression model fitted to the data. For trending and/or seasonal data, appropriate dummy variables are added to the regression to capture such effects. Guerrero (1993) proposed a model-independent method that selects the $\lambda$ minimizing the coefficient of variation across subsets of the series.<br /><br /><br /><br /> <h3 class="seccol", id="sec3">Application to Turkey’s Tourism Data</h3> With its pervasive trend and seasonal components, monthly tourism statistics emerge as a natural candidate for implementation (<a href="http://www.eviews.com/blog/boxcox/workfiles/tourism.wf1"><b>TOURISM.WF1</b></a>). Suppose that we want to carry out a counterfactual analysis to estimate the potential loss of visitors to Turkey in 2020 due to the COVID-19 pandemic.
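Before walking through the EViews steps, the profile-likelihood selection of $\lambda$ (and the back-transformation used later for forecasts) can be sketched in a few lines of Python. This is an illustrative grid search on synthetic data with a constant-only model; the add-in's actual estimator also accommodates trend/seasonal dummies and Guerrero's method:

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox power transform of a positive series."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / lam

def inv_boxcox(z, lam):
    """Back-transform, e.g. for forecasts made on the transformed scale."""
    z = np.asarray(z, dtype=float)
    return np.exp(z) if abs(lam) < 1e-8 else (lam * z + 1.0) ** (1.0 / lam)

def profile_loglik(y, lam):
    """Profile log-likelihood of lambda assuming i.i.d. normal errors
    around the mean of the transformed series (constant-only model)."""
    z = boxcox(y, lam)
    return -0.5 * len(y) * np.log(z.var()) + (lam - 1.0) * np.log(y).sum()

# Synthetic log-normal data, for which the log transform (lambda = 0) is ideal
rng = np.random.default_rng(1)
y = np.exp(rng.normal(0.0, 0.5, 500))

# Select lambda by maximizing the profile log-likelihood over a grid
grid = np.linspace(-1.0, 2.0, 61)
lam_hat = grid[np.argmax([profile_loglik(y, g) for g in grid])]
print(lam_hat)  # close to 0 for log-normal data
```

The grid search stands in for the numerical optimizer; the key point is that $\lambda$ is chosen to make the transformed series as close to homoskedastic normality as possible, and forecasts made on the transformed scale are mapped back with the inverse transform.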
First, set the training sample to cover the period until the end of 2019.<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/boxcox/images/visitors.png"><img height="auto" src="http://www.eviews.com/blog/boxcox/images/visitors.png" title="Visitors" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1: Visitors</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 1 :::::::::: --> Next, run the add-in. The following dialog pops up.<br /><br /> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/boxcox/images/boxcox.png"><img height="auto" src="http://www.eviews.com/blog/boxcox/images/boxcox.png" title="Box-Cox Dialog" width="180" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2: Box-Cox Dialog</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 2 :::::::::: --> The add-in computes the optimal value of lambda to be 0.106.<br /><br /> <!-- :::::::::: FIGURE 3 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/boxcox/images/visitors_transfrom.png"><img height="auto" src="http://www.eviews.com/blog/boxcox/images/visitors_transfrom.png" title="Visitors (Box-Cox Transformation)" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3: Visitors (Box-Cox Transformation)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 3 :::::::::: --> We can then apply the Auto ARIMA method to the original series and supply the estimated lambda to the Box-Cox transformation as the power parameter.
Forecasts produced by the Auto ARIMA method can also be combined via Bayesian Model Averaging.<br /><br /> <!-- :::::::::: FIGURE 4 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/boxcox/images/arima.png"><img height="auto" src="http://www.eviews.com/blog/boxcox/images/arima.png" title="ARIMA Forecasting" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 4: ARIMA Forecasting Dialog</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 4 :::::::::: --> As an alternative approach, one can apply the ETS Exponential Smoothing method to the transformed series to select the best model and then back-transform the forecasted values.<br /><br /> <!-- :::::::::: FIGURE 5 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/boxcox/images/visitors_loss.png"><img height="auto" src="http://www.eviews.com/blog/boxcox/images/visitors_loss.png" title="Visitors Loss" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5: Visitors Loss</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 5 :::::::::: --> The ARIMA model results imply that the number of visitors to Turkey might have decreased by 29 million during 2020. The ETS model portrays an even worse picture, estimating a potential loss of 42 million visitors!<br /><br /><br /><br /> <hr /> <h3 class="seccol", id="sec4">Files</h3> <ul> <li><a href="http://www.eviews.com/blog/boxcox/workfiles/tourism.wf1"><b class="wf">TOURISM.WF1</b></a></li> <li><a href="http://www.eviews.com/blog/boxcox/workfiles/boxcox_example.prg"><b class="wf">BOXCOX_EXAMPLE.PRG</b></a></li> </ul><br /><br /> <hr /> <h3 class="seccol", id="sec5">References</h3> <ol class="bib2xhtml"> <li id="boxcox-1964"> Box, G.E.P., and Cox, D.R. (1964), "An analysis of transformations", <i>Journal of the Royal Statistical Society</i>, Series B, vol. 26, no. 2, pp. 211-246.
</li> <li id="guerrero-1993"> Guerrero V.M. (1993), "Time-series analysis supported by power transformations", <i>Journal of Forecasting</i>, vol. 12, pp. 37-48. </li> </ol></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com3tag:blogger.com,1999:blog-6883247404678549489.post-27693608770802674652021-05-04T14:13:00.003-07:002021-11-01T11:08:27.926-07:00SpecEval Add-In<style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { border: 1px solid black; } .classic_table { border: 1px solid black; border-collapse: collapse; border-spacing: 0px; } .classic_table tr { border-bottom: 1px solid black; border-top: 1px solid black; } .classic_table tr:first-child { border-top: none; } .classic_table tr:last-child { border-bottom: none; } .classic_table td { border-left: 1px solid black; border-right: 1px solid black; padding-right: 10px; padding-left: 10px; } .classic_table td:first-child { border-left: none; } .classic_table td:last-child { border-right: none; } .break_row { border-bottom: 3px solid #293d5c !important } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } .seccol { } .subseccol { color: #fa5e5e } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: ['{\\left(}'], rb: ['{\\right)}'], rbrace: ['{\\left(#1\\right)}', 1], cbrace: ['{\\left\\{#1\\right\\}}', 1], sbrace: ['{\\left[#1\\right]}', 1], bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1], series: ['{\\left\\{#1_{#2}\\right\\}_{#2=#3}^{#4}}', 4], xsum: ['{\\sum_{#1=#2}^{#3}{#4}}', 4], var: ['{\\operatorname\{var\}}'], sign: ['{\\operatorname\{sign\}}'], diag: 
['{\\operatorname\{diag\}}'], med: ['{\\operatorname\{median\}}'], vec: ['{\\operatorname\{vec\}}'], tr: ['{\\operatorname\{tr\}}'], min: ['{\\operatorname\{min\}}'] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"> </script> <span style="font-family: "verdana" sans-serif"> <i>A guest post by Kamil Kovar</i><br /><br /> This is the first in a series of blog posts that will present a new EViews add-in, <b>SpecEval</b>, aimed at facilitating time series model development. This blog post focuses on the motivation for the add-in and an overview of its functionality. The remaining blog posts in this series will illustrate its use. <a name='more'></a><br /><br /> <h3 class="seccol">Table of Contents</h3> <ol> <li><a href="#sec1">Basic Principles</a> <li><a href="#sec2">Comprehensiveness: What Does SpecEval Do?</a> <li><a href="#sec3">Flexibility in Practice</a> <li><a href="#sec4">What’s Next?</a> <li><a href="#sec5">Footnotes</a> </ol><br /> <h3 class="seccol", id="sec1">Basic Principles</h3> The idea behind SpecEval is simple: to do model development effectively – especially in a time-constrained environment – one should have a tool that can quickly produce and summarize information about a particular model. Such a tool should satisfy three key requirements:<br /><br /> <ol> <li>It should be very <b>easy</b> to use, so that its use does not introduce additional costs into the model development process.</li> <li>It should be <b>comprehensive</b> in the sense that it includes all relevant information one would like to have when evaluating a particular model.</li> <li>It should be <b>flexible</b> so that the user can easily change what information is included in particular situations.
Flexibility is a necessary counterpart of comprehensiveness so that one avoids congestion.</li> </ol> The first requirement is facilitated by the EViews add-in functionality, which allows execution either through the GUI or a command, so that model evaluation can be performed repeatedly through one quick action. Apart from this, the add-in's functionality and options are designed in a way that allows the user to easily adjust the execution settings. For example, the add-in can be executed either for one model at a time or for multiple models at the same time. Furthermore, including multiple models is as simple as just listing them (wildcards are acceptable). Meanwhile, each output type can be specified as part of the execution list, making it easy to include additional outputs.<br /><br /><br /><br /> <h3 class="seccol", id="sec2">Comprehensiveness: What Does SpecEval Do?</h3> So what does the SpecEval add-in do? In broad terms, it <b>produces tables and graphs that provide information about the model, and especially its behavior</b>. Note here that discussing the full set of possible outputs (listed in the table below) is beyond the scope of this blog post, since most functionality will be illustrated in the blog posts to follow.
Instead, the table should highlight that the add-in is indeed comprehensive from a model development perspective.<sup><a href="#fn1" id="ref1">1</a></sup><br /><br /> <center> <table class='classic_table'> <tr style="color: white; background-color: #293d5c"> <td><b>Object Name</b></td> <td><b>Description</b></td> </tr> <tr> <td>Estimation output table</td> <td>Adjusted regression output table</td> </tr> <tr> <td>Coefficient stability graph</td> <td>Graph with recursive equation coefficients</td> </tr> <tr class='break_row'> <td>Model stability graph</td> <td>Graph with recursive lag orders</td> </tr> <tr> <td>Performance metrics tables</td> <td>Table with values of forecast performance metrics</td> </tr> <tr> <td>Performance metrics tables (multiple specifications)</td> <td>Table with values of forecast performance metrics for given metric for all specifications</td> </tr> <tr> <td>Forecast summary graph</td> <td>Graph with all recursive forecasts with given horizons</td> </tr> <tr> <td>Sub-sample forecast graph</td> <td>Graph with forecast for given sub-sample</td> </tr> <tr> <td>Sub-sample forecast decomposition graph</td> <td>Graph with decomposition of sub-sample forecast</td> </tr> <tr class='break_row'> <td>Forecast bias graph</td> <td>Scatter plot of forecast and actual values for given forecast horizon (Mincer-Zarnowitz plot)</td> </tr> <tr> <td>Individual conditional scenario forecast graph (level)</td> <td>Graph with forecast for single scenario and specification</td> </tr> <tr> <td>Individual conditional scenario forecast graph (transformation)</td> <td>Graph with transformation of forecast for single scenario and specification</td> </tr> <tr> <td>All conditional scenario forecast graph</td> <td>Graph with forecasts for all scenarios for single specification</td> </tr> <tr> <td>Multiple specification conditional scenario forecast graph</td> <td>Graph with forecasts for single scenario for multiple specifications</td> </tr> <tr> <td>Shock response
graphs</td> <td>Graphs with response to shock to individual independent variable/regressor</td> </tr> </table> <br /> </center> The first category of outputs includes information about the model in the form of the estimation output, with several enhancements that facilitate quick evaluation, such as suitable color-coding. Moreover, the information about the model is not limited to the final model estimates, but also includes information about recursive model estimates (e.g. recursive coefficients and/or lag orders). See the figures below for illustrations of both outputs.<br/><br/> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/speceval/images/overview_est.png"><img height="auto" src="http://www.eviews.com/blog/speceval/images/overview_est.png" title="Estimation Example" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1: Estimation Example</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 1 :::::::::: --> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/speceval/images/overview_stability.png"><img height="auto" src="http://www.eviews.com/blog/speceval/images/overview_stability.png" title="Coefficient Stability" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2: Coefficient Stability</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 2 :::::::::: --> Nevertheless, far more stress is put on information about forecasting performance, which is the key focus of the add-in. Correspondingly, the add-in contains several outputs that either visualize historical (backtest) forecasts<sup><a href="#fn2" id="ref2">2</a></sup> or provide numerical information about the precision of these forecasts.
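For reference, the precision metrics themselves reduce to one-line formulas. Here is a hypothetical Python helper (the names are mine, not SpecEval's) computing MAE, RMSE and bias for the forecast errors at one horizon:

```python
import numpy as np

def precision_metrics(actual, forecast):
    """MAE, RMSE and bias of forecasts against actuals for one horizon.
    Hypothetical helper; SpecEval computes these per horizon and model."""
    err = np.asarray(forecast, dtype=float) - np.asarray(actual, dtype=float)
    return {
        "MAE": np.abs(err).mean(),           # average absolute miss
        "RMSE": np.sqrt((err ** 2).mean()),  # penalizes large misses more
        "bias": err.mean(),                  # systematic over/under-prediction
    }

m = precision_metrics(actual=[1.0, 2.0, 3.0, 4.0],
                      forecast=[1.5, 1.5, 3.5, 4.5])
print(m)  # MAE 0.5, RMSE 0.5, bias 0.25
```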
The main graph – indeed in some sense the workhorse graph of the add-in – displays all available historical forecasts together with the actuals; see the figure below. Apart from listing multiple horizons, the user can also include additional series in the graph or decide to use one of four alternative transformations.<br/><br/> <!-- :::::::::: FIGURE 3 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/speceval/images/overview_forecast.png"><img height="auto" src="http://www.eviews.com/blog/speceval/images/overview_forecast.png" title="Conditional Forecasts" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3: Conditional Forecasts</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 3 :::::::::: --> The next table summarizes measures of precision of the historical forecasts. The table displays the values of particular precision metrics (MAE, RMSE or bias) for alternative specifications and for multiple horizons. Crucially, this table is color-coded, facilitating quick comparison across specifications.<br/><br/> <!-- :::::::::: FIGURE 4 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/speceval/images/overview_precision.png"><img height="auto" src="http://www.eviews.com/blog/speceval/images/overview_precision.png" title="Forecast Precision" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 4: Forecast Precision</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 4 :::::::::: --> Lastly, the add-in also provides detailed information about the behavior of the model under different conditions. This includes two types of exercises. The first exercise consists of creating and visualizing conditional scenario forecasts.
This is useful as a goal in itself when scenario forecasting is an important use of the model, but more importantly also for instrumental reasons: thanks to their controlled-experiment nature, scenario forecasts can help identify problems with the model. The add-in produces several types of graphs visualizing scenario forecasts; see the figure below for an illustration.<br/><br/> <!-- :::::::::: FIGURE 5 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/speceval/images/overview_scenarios.png"><img height="auto" src="http://www.eviews.com/blog/speceval/images/overview_scenarios.png" title="Model Scenarios" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5: Model Scenarios</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 5 :::::::::: --> The second exercise is creating and visualizing impulse shock responses, i.e. introducing shocks to a single independent variable or regressor and studying the response of the dependent variable. This allows the modeler to assess the influence a particular independent variable/regressor has on the dependent variable, as well as the dynamic profile of responses. See the figure below for an illustration.<br/><br/> <!-- :::::::::: FIGURE 6 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/speceval/images/overview_ir.png"><img height="auto" src="http://www.eviews.com/blog/speceval/images/overview_ir.png" title="Impulse Responses" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 6: Impulse Responses</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 6 :::::::::: --> The above discussion makes it clear that the <b>focus here is on graphical information, rather than on numerical information</b> as is more customary in model development toolkits. This is motivated by two considerations.
First, graphical information is significantly more suitable for the interactive model development process, in which the modeler comes up with improvements to the current model based on information about its performance. Second, the human brain is able to process graphical information faster than numerical information; hence even when numerical information is presented, it is associated with graphical cues to increase the processing speed, such as the color-coding of the estimation output.<br/><br/><br/><br/> <h3 class="seccol", id="sec3">Flexibility in Practice</h3> The third basic principle – flexibility – is in practice embodied in the ability of the user to adjust the processes or the outputs via add-in options. There are altogether almost 40 user settings – all listed and explained in the add-in documentation – which can be divided into several categories.<br/><br/> First, general options focus on which of the in-built functionality is going to be performed and on which objects/specifications. Next, there is a group of options that allows customization of the outputs, such as specification of horizons for tables and/or graphs, transformations used in graphs, or additional series to be included in graphs. A third group of options allows for some basic customization of the forecasting processes. For example, one can choose between in-sample and out-of-sample forecasting, or one can specify additional equations/identities to be treated as part of the forecasting model.<sup><a href="#fn3" id="ref3">3</a></sup> These are just two examples of how the forecasting process can be customized.<br/><br/> The final two groups focus on control of the samples used in the various procedures and on customization of storage settings. The former includes, for example, an option to manually specify sample boundaries for the backtesting procedures, or for the conditional scenario forecasts.
The latter then allows the user to determine which objects will be kept in the workfile after the execution, and under what names or aliases.<br/><br/><br/><br/> <h3 id="sec4">What's Next</h3> Future blog posts in this series will illustrate both the use of the add-in – highlighting its ease of use and flexibility – and its outputs. Each will follow a particular application, always focusing on particular features of the add-in. The first in the series will provide an overview of the basics of using the add-in, highlighting the key outputs and the customization of the process and the outputs. The second will then stress the ability – and power – of using transformations in model development. The third post will focus on creating unconditional forecasts, while the last post will conclude with a brief look at recursive model structures.<br/><br/><br/><br/> <hr /> <h3 id="sec5">Footnotes</h3> <sup id="fn1">1. Of course, comprehensiveness is more a goal than a state, in that there will always be additional functionalities that could/should be included. See the model development list on the add-in's GitHub site for what additional functionality is on the roadmap, but feel free to also make suggestions there.<br /> Also, the add-in is comprehensive in terms of its focus, which is the forecasting behavior of a given model – as opposed to the econometric characteristics of the model. This means that currently the add-in does not include any information in the form of outputs of econometric tests.<a href="#ref1" title="Jump back to footnote 1 in the text.">↩</a></sup><br/> <sup id="fn2">2. By historical forecasts I mean conditional forecasts, which are potentially multistep and dynamic, and/or recursive.<a href="#ref2" title="Jump back to footnote 2 in the text.">↩</a></sup><br/> <sup id="fn3">3.
Note that these two features – in-sample forecasting and inclusion of multiple equations in the forecasting model – are possible thanks to in-built EViews functionality and hard to replicate in other statistical programs. The former is thanks to the separation between estimation and forecasting samples, the latter thanks to flexible model objects.<a href="#ref3" title="Jump back to footnote 3 in the text.">↩</a></sup></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com0tag:blogger.com,1999:blog-6883247404678549489.post-12390203530726640622021-04-06T12:13:00.004-07:002021-04-12T16:28:06.674-07:00Time series cross-validation in ENET<style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { border: 1px solid black; } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } .seccol { } .subseccol { color: #fa5e5e } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: ['{\\left(}'], rb: ['{\\right)}'], rbrace: ['{\\left(#1\\right)}', 1], cbrace: ['{\\left\\{#1\\right\\}}', 1], sbrace: ['{\\left[#1\\right]}', 1], bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1], series: ['{\\left\\{#1_{#2}\\right\\}_{#2=#3}^{#4}}', 4], xsum: ['{\\sum_{#1=#2}^{#3}{#4}}', 4], var: ['{\\operatorname\{var\}}'], sign: ['{\\operatorname\{sign\}}'], diag: ['{\\operatorname\{diag\}}'], med: ['{\\operatorname\{median\}}'], vec: ['{\\operatorname\{vec\}}'], tr: ['{\\operatorname\{tr\}}'], min: ['{\\operatorname\{min\}}'] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" 
type="text/javascript"> </script> <span style="font-family: "verdana" sans-serif"> EViews 12 has added several new enhancements to <b>ENET</b> (elastic net), such as the ability to add observation and variable weights and additional cross-validation methods.<br /><br /> In this blog post we will show one of the new methods for time series cross-validation. The demonstration will compare the forecasting performance of rolling window cross-validation with that of models constructed from least squares and from a simple split of our dataset into training and test sets.<br /><br /> We will be evaluating the out-of-sample prediction abilities of this new technique on some important macroeconomic variables. The analysis will show the promising forecast performance obtained on the variables in this dataset by using a time-series-specific cross-validation method compared with simpler methods. <a name='more'></a><br /><br /> <h3 class="seccol">Table of Contents</h3> <ol> <li><a href="#sec1">Background</a> <li><a href="#sec2">Dataset</a> <li><a href="#sec3">Analysis</a> <li><a href="#sec4">Files</a> </ol><br /> <h3 class="seccol" id="sec1">Background</h3> When performing model selection for a time series forecasting problem, it is important to be aware of the temporal properties of the data. The time series may be generated by an underlying process that changes over time, resulting in data that are not independent and identically distributed (i.i.d.). For example, time series data are frequently serially correlated, and the ordering of the data is important.<br /><br /> Traditional time series econometrics solves this problem by splitting the data into training and test sets, with the test set coming from the end of the dataset. While this preserves the temporal aspects of the data, not all of the information in the dataset is used, because the data in the test set are not used to train the model.
Any characteristics unique to the training or test dataset may negatively affect the forecast performance of the model on new data.<br /><br /> Meanwhile, other model selection procedures such as cross-validation typically assume the data to be i.i.d., but have often been applied to time series data without regard to temporal structure. For example, the very popular k-fold cross-validation is done by splitting the data into k sets, treating k-1 of them collectively as the training set, and using the remaining set as the test set. While the data within each set retain their original ordering, the test set may occur before portions of the training data. So while cross-validation makes full use of the data, it partly ignores its time ordering.<br /><br /> The two time series cross-validation methods introduced in EViews 12 combine the temporal awareness of traditional time series econometrics with cross-validation's use of the entire dataset. More details about these procedures can be found in the <a href='https://help.eviews.com/helpintro.html#page/content%2Fenet-Estimating_an_Elastic_Net_Regression_in_EViews.html%23ww273241'>EViews documentation</a>. We have chosen to demonstrate ENET with rolling time series cross-validation, which “rolls” a window of constant length forward through the dataset, keeping the test set after the training set.<br /><br /> To illustrate another member of the elastic net family of shrinkage estimators, we use ridge regression for this analysis. Ridge regression is another penalized estimator that is related to Lasso (more details are in <a href='http://blog.eviews.com/2021/02/lasso-variable-selection.html'>this blog post</a>).
Instead of adding an L1 penalty term to the linear regression cost function as in Lasso, we add an L2 penalty term: \begin{align*} J = \frac{1}{2m}\xsum{i}{1}{m}{\rbrace{y_i - \beta_0 -\xsum{j}{1}{p}{x_{ij}\beta_j}}^2} {\color{red}{+\lambda\xsum{j}{1}{p}{\beta_j^2}}} \end{align*} where the regularization parameter $\lambda$ is chosen by cross-validation.<br /><br /><br /><br /> <h3 class="seccol" id="sec2">Dataset</h3> The data for this demonstration consist of 108 monthly US macroeconomic series from January 1959 to December 2007. This was part of the <a href='http://www.princeton.edu/~mwatson/ddisk/stock_watson_generalized_shrinkage_June_2012.zip'>dataset</a> used in <a href='http://www.princeton.edu/~mwatson/papers/Stock_Watson_JBES_2012.pdf'>Stock and Watson (2012)</a> (we only use the data on “Sheet1”). Each time series is transformed to stationarity according to the specification in the data heading. The stationarity transformation is important to ensure that the series are identically distributed, and so that the simple split into training and test data in the first part of our analysis does not produce a test set that is significantly different from our training set.
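As an aside, for a single centered regressor the ridge objective above has a simple closed form, which makes the shrinkage effect of $\lambda$ easy to see. A minimal pure-Python sketch (a generic illustration, not EViews code; the data are hypothetical):

```python
def ridge_beta(x, y, lam):
    """Univariate ridge estimate for centered data (no intercept).

    Minimizes J = (1/(2m)) * sum((y_i - b*x_i)**2) + lam * b**2,
    whose closed form is b = sum(x*y) / (sum(x**2) + 2*m*lam).
    """
    m = len(x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    return sxy / (sxx + 2 * m * lam)

# Hypothetical centered data: y is roughly 2*x plus noise.
x = [-2.0, -1.0, 0.0, 1.0, 2.0]
y = [-4.1, -1.9, 0.2, 2.1, 3.9]

b_ols = ridge_beta(x, y, 0.0)    # lam = 0 reproduces the OLS slope
b_ridge = ridge_beta(x, y, 0.5)  # a positive lam shrinks the slope toward zero
```

At $\lambda = 0$ the estimate is just the OLS slope; any positive $\lambda$ pulls it toward zero, which is exactly the shrinkage the penalty term buys.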
In the table below we show part of the data used for this example.<br/><br/> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/enet_tscv/images/data_overview.png"><img height="auto" src="http://www.eviews.com/blog/enet_tscv/images/data_overview.png" title="Data Preview" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1: Data Preview</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 1 :::::::::: --> Additional information about the data can be found <a href='http://www.princeton.edu/~mwatson/papers/stock_watson_generalized_shrinkage_supplement_June_2012.pdf'>here</a>.<br/><br/><br/><br/> <h3 class="seccol" id="sec3">Analysis</h3> We take each series in turn as the dependent variable and treat the other 107 variables as independent variables for estimation and forecasting. Each regression thus involves all 108 variables, one as the dependent and the other 107 as regressors, plus an intercept. The independent variables are lagged by one observation, which is one month. The first 80% of the dataset is used to estimate the model (the "estimation sample") and the last 20% is reserved for forecasting (the "forecasting sample").<br/><br/> Because we want to compare each model type (least squares, simple split, and rolling) on an equal basis, we have chosen to take the coefficients estimated from each model and keep them fixed over the forecast period. In addition, while it might be more interesting to use pseudo out-of-sample forecasting over the forecast period rather than fixed coefficients, rolling cross-validation is time intensive and we preferred to keep the analysis tractable.<br/><br/> The first model is a least squares regression on each series over the estimation sample as a baseline. With the coefficients estimated from OLS we forecast over the forecast sample.<br/><br/> Next, we use ridge regression with a simple split on the estimation sample as a comparison.
(Simple Split is a new addition to ENET cross-validation in EViews 12 that divides the data into an initial training set and subsequent test set.) We then split this first 80% of the dataset further into training and test sets using the default parameters. Cross-validation chooses a set of coefficients that minimize the mean squared error (MSE). Using these coefficients we again forecast over the remaining forecast sample.<br/><br/> Finally, we apply rolling time series cross-validation to the same split of the data for each series: the estimation sample as a training and test set for rolling cross-validation and the forecast sample for forecasting using the coefficients chosen for each series. We use the default parameters for rolling cross validation and again minimize the MSE.<br/><br/> After generating 324 forecasts with our 108 variables and three different models, we collected the root mean squared error (RMSE) of each forecast into a table. This table is shown below.<br/><br/> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/enet_tscv/images/rmse.png"><img height="auto" src="http://www.eviews.com/blog/enet_tscv/images/rmse.png" title="Root Mean Squared Error" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2: Root Mean Squared Error</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 2 :::::::::: --> Each row of the table has, in order, the name of the dependent in the regression and the RMSE for the least squares, simple split, and time series CV models. The minimum value in each row is highlighted in yellow. If a row contains duplicate values, then none of the cells are highlighted because we are only counting instances when one model has the lowest error measure compared with the others. 
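The tallying rule just described, highlighting the strictly lowest error in each row and skipping ties, is easy to express in code. A small generic Python sketch with hypothetical RMSE values (not the actual results from the tables):

```python
def tally_winners(error_rows, model_names):
    """Count how many rows each model wins outright (strictly lowest error).

    Rows whose minimum is tied are skipped, mirroring the table's rule of
    leaving cells unhighlighted when no single model has the lowest error.
    """
    counts = {name: 0 for name in model_names}
    for row in error_rows:
        low = min(row)
        if row.count(low) > 1:  # tie: nothing is highlighted
            continue
        counts[model_names[row.index(low)]] += 1
    return counts

# Hypothetical RMSEs for three series (columns: OLS, simple split, rolling).
rows = [
    (0.91, 0.88, 0.85),
    (1.10, 1.10, 1.20),  # tied minimum: excluded from the tally
    (0.40, 0.42, 0.39),
]
counts = tally_winners(rows, ["ols", "split", "rolling"])
```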
At the bottom of the table is a row with the total number of times each method had the minimum value, summed across all series. For example, OLS had the minimum RMSE 21 times, or 25% of the total, while rolling cross-validation had the minimum RMSE 38 times, for 45% of the total. Simple split makes up the remaining 31% (the percentages do not add up to 100% because of rounding).<br/><br/> Below we include the equivalent table for mean absolute error (MAE). Percentages for this error measure are 20% for OLS, 31% for simple split cross-validation, and 50% for rolling cross-validation.<br/><br/> <!-- :::::::::: FIGURE 3 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/enet_tscv/images/mae.png"><img height="auto" src="http://www.eviews.com/blog/enet_tscv/images/mae.png" title="Mean Absolute Error" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3: Mean Absolute Error</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 3 :::::::::: --> In the two tables above we can see some interesting highlighted clusters of series that belong in the same categories as defined in the <a href='http://www.princeton.edu/~mwatson/papers/Stock_Watson_JBES_2012.pdf'>paper</a> and <a href='http://www.princeton.edu/~mwatson/papers/stock_watson_generalized_shrinkage_supplement_June_2012.pdf'>supplemental materials</a>. For example, looking only at the "Rolling" column, the five EXR* series in group 11 are the exchange rates of four currencies with the USD as well as the effective exchange rate of the dollar.
Other groups with the lowest forecast errors after using rolling cross-validation include the three CES*R series, for hourly earnings, and the FS* series, representing various measures of the S&P 500.<br /><br /> We leave further investigation of these time series, and their estimation and forecasting properties with methods that are temporally aware, to the reader.<br /><br /> <hr /> <h3 id="sec4">Files</h3> <ol> <li><a href="http://www.eviews.com/blog/enet_tscv/stock_watson.wf1">stock_watson.WF1</a> <li><a href="http://www.eviews.com/blog/enet_tscv/stock_watson.prg">stock_watson.PRG</a> </ol><br /></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com0tag:blogger.com,1999:blog-6883247404678549489.post-36357222750144735832021-03-03T09:56:00.002-08:002021-05-25T07:49:57.615-07:00New Variable Selection Diagnostics and Data MembersThe 2021/03/03 update to EViews 12 has two new smaller Variable Selection features. These will help you extract information on the outcome of any selection method and obtain diagnostics on the selection process for a subset of methods. <span><a name='more'></a></span><div><br /></div><div>The first new feature is a way to extract lists of the search variables that have been kept or rejected by the selection procedure. Naturally, they are the data members <span style="font-family: courier;">@varselkept</span> and <span style="font-family: courier;">@varselrejected</span>. For any Equation object (say, “<span style="font-family: courier;">EQ</span>”) that has been estimated with any of the variable selection techniques, the calls </div><div><span style="font-family: courier;"> eq.@varselkept </span></div><div><span style="font-family: courier;"> eq.@varselrejected </span></div><div>will return space-delimited lists of the variables in EQ that were kept or rejected by variable selection, not including the always included regressors. 
</div><div><br /></div><div>The second new feature is additions to the views for Variable Selection. For the Uni-directional, Stepwise, and Swapwise methods, there is a new Selection Diagnostics menu. The first two have six items in this menu: R-squared, t-Stats, and Alpha-squared Graphs, and corresponding Tables. Swapwise has R-squared and Alpha-squared Graphs and Tables. Each graph and table shows the chosen statistic at each step in the selection process. Choosing R-squared Graph for forward stepwise selection in an example dataset displays:</div><div class="separator" style="clear: both; text-align: center;"><br /></div><br /><div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-yEsc0Mp--ME/YD_Y_aEBaAI/AAAAAAAAA9k/vqACo8HOWvoj6Qh0jdqqB9C-CDryQl6xACNcBGAsYHQ/s458/varsel1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="423" data-original-width="458" height="370" src="https://1.bp.blogspot.com/-yEsc0Mp--ME/YD_Y_aEBaAI/AAAAAAAAA9k/vqACo8HOWvoj6Qh0jdqqB9C-CDryQl6xACNcBGAsYHQ/w400-h370/varsel1.png" width="400" /></a></div><br /></div><div>showing the increase in the R-squared statistic with each step in the selection. It is interesting to see the large contributions to R-squared in just the first few steps.
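The idea behind this diagnostic, greedy forward selection that records R-squared after each added regressor, can be sketched generically in Python. This is a toy illustration with hypothetical data, not the EViews implementation:

```python
def ols_r2(x_cols, y):
    """R-squared from regressing y on an intercept plus the given columns.

    Solves the normal equations with Gaussian elimination (partial pivoting).
    """
    m = len(y)
    X = [[1.0] + [col[i] for col in x_cols] for i in range(m)]
    k = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(m)) for j in range(k)] for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(m)) for i in range(k)]
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][c] * beta[c] for c in range(i + 1, k))) / A[i][i]
    y_bar = sum(y) / m
    sse = sum((y[r] - sum(X[r][j] * beta[j] for j in range(k))) ** 2 for r in range(m))
    sst = sum((yi - y_bar) ** 2 for yi in y)
    return 1.0 - sse / sst

def forward_stepwise_r2(candidates, y, steps):
    """Greedy forward selection: at each step add the candidate regressor
    that yields the largest R-squared, and record R-squared after each step
    (the quantity plotted in a step-by-step R-squared graph)."""
    chosen, path = [], []
    pool = dict(candidates)
    for _ in range(steps):
        best = max(pool, key=lambda n: ols_r2([c for _, c in chosen] + [pool[n]], y))
        chosen.append((best, pool.pop(best)))
        path.append(ols_r2([c for _, c in chosen], y))
    return [n for n, _ in chosen], path

# Hypothetical data: y is driven almost entirely by x1.
x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x2 = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
x3 = [5.0, 3.0, 4.0, 4.0, 3.0, 5.0]
y = [2.0, 3.9, 6.2, 8.1, 9.8, 12.1]

names, path = forward_stepwise_r2({"x1": x1, "x2": x2, "x3": x3}, y, 2)
```

As in the graph above, most of the R-squared is typically captured by the first few additions, after which the path flattens out.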
</div><div><br /></div><div>R-squared Table shows the same information in table form:</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div><br /></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-FLK_mqnYbtA/YD_ZHiQ-uoI/AAAAAAAAA9o/Nz3AcwZGVPQRTBW-W13WnCRNBR6Mq5CowCNcBGAsYHQ/s291/varsel2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="265" data-original-width="291" height="364" src="https://1.bp.blogspot.com/-FLK_mqnYbtA/YD_ZHiQ-uoI/AAAAAAAAA9o/Nz3AcwZGVPQRTBW-W13WnCRNBR6Mq5CowCNcBGAsYHQ/w400-h364/varsel2.png" width="400" /></a></div><br />IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com2tag:blogger.com,1999:blog-6883247404678549489.post-25978748104437565982021-02-16T11:11:00.008-08:002021-04-12T16:28:23.393-07:00Lasso Variable Selection<style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { border: 0px solid black; } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } .seccol { } .subseccol { color: #fa5e5e } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: ['{\\left(}'], rb: ['{\\right)}'], rbrace: ['{\\left(#1\\right)}', 1], cbrace: ['{\\left\\{#1\\right\\}}', 1], sbrace: ['{\\left[#1\\right]}', 1], bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1], series: ['{\\left\\{#1_{#2}\\right\\}_{#2=#3}^{#4}}', 4], xsum: ['{\\sum_{#1=#2}^{#3}{#4}}', 4], var: ['{\\operatorname\{var\}}'], sign: ['{\\operatorname\{sign\}}'], diag: ['{\\operatorname\{diag\}}'], med: 
['{\\operatorname\{median\}}'], vec: ['{\\operatorname\{vec\}}'], tr: ['{\\operatorname\{tr\}}'], min: ['{\\operatorname\{min\}}'] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"> </script> <span style="font-family: "verdana" sans-serif"> In this blog post we will show how Lasso variable selection works in EViews by comparing it with a baseline least squares regression. We will be evaluating the prediction and variable selection properties of this technique on the same <a href="https://web.stanford.edu/~hastie/StatLearnSparsity_files/DATA/diabetes.html">dataset</a> used in the well-known paper “Least Angle Regression” by Efron, Hastie, Johnstone, and Tibshirani. The analysis will show the generally superior in-sample fit and out-of-sample forecast performance of Lasso variable selection compared with a baseline least squares model. <a name='more'></a><br /><br /> Lasso variable selection, <a href="http://eviews.com/EViews12/ev12ecest_n.html#varsel">new to EViews 12</a> and also known as the Lasso-OLS hybrid, post-Lasso OLS, the relaxed Lasso (under certain conditions), or post-estimation OLS, uses Lasso as a variable selection technique followed by ordinary least squares estimation on the selected variables.<br /><br /> <h3 class="seccol">Table of Contents</h3> <ol> <li><a href="#sec1">Background</a> <li><a href="#sec2">Dataset</a> <li><a href="#sec3">Analysis</a> </ol><br /> <br /><br /> <h3 class="seccol", id="sec1">Background</h3> In today’s data-rich environment it is useful to have methods of extracting information from complex datasets with large numbers of variables. A popular way of doing this is with dimension reduction techniques such as principal components analysis or dynamic factor models. 
By reducing the number of variables in a model, we can reduce overfitting, reduce the complexity of the model and make it easier to interpret, and decrease computation time. However, dimension reduction methods have the risk of losing useful information contained in variables that are not included in the reduced set, and may potentially have poorer predictive power.<br/><br/> Lasso is useful because it is a shrinkage estimator: it shrinks the size of the coefficients of the independent variables depending on their predictive power. Some coefficients may shrink down to zero, allowing us to restrict the model to variables with nonzero coefficients.<br/><br/> Lasso is just one method out of a family of penalized least squares estimators (other members include ridge regression and elastic net). Starting with the linear regression cost function: \begin{align*} J = \frac{1}{2m}\xsum{i}{1}{m}{\rbrace{y_i - \beta_0 -\xsum{j}{1}{p}{x_{ij}\beta_j}}^2} \end{align*} where $y_i$ is the dependent variable, $x_{ij}$ are the independent variables, $\beta_j$ are the coefficients, $m$ is the number of data points, and $p$ the number of independent variables, we obtain the coefficients $\beta_j$ by minimizing $J$. If the model based on linear regression is overfit and does not make good predictions on new data, then one solution is to construct a Lasso model by adding a penalty term: \begin{align*} J = \frac{1}{2m}\xsum{i}{1}{m}{\rbrace{y_i - \beta_0 -\xsum{j}{1}{p}{x_{ij}\beta_j}}^2} {\color{red}{+\lambda\xsum{j}{1}{p}{|\beta_j|}}} \end{align*} where the parameters are the same as before with the addition of the regularization parameter $\lambda$. By adding this penalty term, the cost of large $\beta_j$ is increased, so to minimize the cost function the values of $\beta_j$ have to be reduced. Smaller values of $\beta_j$ will "smooth out" the function so it fits the data less tightly, leaving it more likely to generalize well to new data.
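To see why the L1 penalty can drive coefficients exactly to zero, consider the one-regressor case with standardized $x$: the minimizer is a soft-thresholding of the OLS estimate. A minimal generic Python sketch (hypothetical numbers, not EViews code):

```python
def soft_threshold(rho, lam):
    """Lasso solution for one standardized regressor (no intercept).

    Minimizes (1/(2m)) * sum((y_i - b*x_i)**2) + lam * abs(b) when
    (1/m) * sum(x**2) == 1, where rho = (1/m) * sum(x*y) is the OLS
    estimate under that scaling.
    """
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0  # small coefficients are set exactly to zero

# Hypothetical OLS estimate rho = 0.8, swept over increasing penalties.
path = [(lam, soft_threshold(0.8, lam)) for lam in (0.0, 0.3, 0.8, 1.2)]
```

Unlike the L2 penalty, which only shrinks coefficients proportionally, the kink in $|\beta_j|$ at zero means that once $\lambda$ is large enough the coefficient is set to exactly zero, which is what makes Lasso usable for variable selection.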
The regularization parameter $\lambda$ determines how much the cost of $\beta_j$ is increased. Lasso estimation in EViews can automatically select an appropriate value with cross-validation, which is a data-driven method of choosing $\lambda$ based on its predictive ability.<br/><br/> If we have a dataset with many independent variables, ordinary least squares models may produce estimates with large variances and therefore unstable forecasts. By applying Lasso regression to the data and removing variables that have been shrunk to zero, then applying OLS to the reduced number of variables, we may be able to improve forecasting performance. In this way we can perform dimension reduction on our data based on the predictive accuracy of our model. <br/><br/><br/><br/> <h3 class="seccol", id="sec2">Dataset</h3> In the table below we show part of the data used for this example.<br/><br/> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/lassosel/images/spreadsheet.png"><img height="auto" src="http://www.eviews.com/blog/lassosel/images/spreadsheet.png" title="Data Preview" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1: Data Preview</small><br/> <small>(Click to enlarge)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 1 :::::::::: --> The ten original variables are age, sex, body mass index (bmi), average blood pressure (bp), and six blood serum measurements for 442 patients. They have all been standardized as described in the paper. The dependent variable is a measure of disease progression one year after the other measurements were taken and has been scaled to have mean zero. 
We are interested in the accuracy of the fit and predictions from any model we develop of this data and in the relative importance of each regressor.<br/><br/><br/><br/> <h3 class="seccol", id="sec3">Analysis</h3> We first perform an OLS regression on the dataset to give us a baseline for comparison.<br/><br/> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/lassosel/images/ls_all.png"><img height="auto" src="http://www.eviews.com/blog/lassosel/images/ls_all.png" title="OLS Regression" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2: OLS Regression</small><br/> <small>(Click to enlarge)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 2 :::::::::: --> One thing to note in this estimation result is that the adjusted R-squared for this model is .5066, indicating that the model explains approximately 51% of the variation in the dependent variable. We see that certain variables (BMI, BP, LTG, and SEX) have both a greater impact on the progression of diabetes after one year and are the most statistically significant.<br/><br/> Next, we run a Lasso regression over the same dataset and look at the plot of the coefficients against the L1 norm of the coefficients. This gives us a sense of how each coefficient contributes to the dependent variable. 
We can see that as the degree of regularization decreases (the L1 norm increases) more coefficients enter the model.<br/><br/> <!-- :::::::::: FIGURE 3 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/lassosel/images/coef_evol.png"><img height="auto" src="http://www.eviews.com/blog/lassosel/images/coef_evol.png" title="Coefficient Evolution" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3: Coefficient Evolution</small><br/> <small>(Click to enlarge)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 3 :::::::::: --> Let’s take a closer look at the coefficients.<br /><br /> <!-- :::::::::: FIGURE 4 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/lassosel/images/lasso.png"><img height="auto" src="http://www.eviews.com/blog/lassosel/images/lasso.png" title="Lasso Regression" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 4: Lasso Regression</small><br/> <small>(Click to enlarge)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 4 :::::::::: --> The coefficients at the minimum value of lambda (.004516) are all nonzero. However, when we move to the lambda value in the next column (6.401), which is the largest value of lambda whose cross-validation error is within one standard error of the minimum, we see that only four of the original ten regressors are nonzero. Compared with least squares, most of the coefficients in the first column have shrunk slightly toward zero, and more so in the next column with a larger regularization penalty (with the exception of an interesting sign change for HDL). Three of the variables retained (BMI, BP, and LTG) are the same as the variables identified by least squares as being both more influential on the outcome and statistically significant. But compared to least squares, this is a less complex model.
Does reducing the number of variables in this way lead to a better-fitting model? We estimate a Lasso variable selection model with the same options to find out.<br /><br /> <!-- :::::::::: FIGURE 5 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/lassosel/images/lasso_vs.png"><img height="auto" src="http://www.eviews.com/blog/lassosel/images/lasso_vs.png" title="Lasso Variable Selection" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5: Lasso Variable Selection</small><br/> <small>(Click to enlarge)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 5 :::::::::: --> The unimpressive result of OLS applied to the variables selected from the Lasso fit is that adjusted R-squared has increased ever-so-slightly to .5068. Another thing to note is that while Lasso generally shrinks, or biases, the coefficients toward zero, OLS applied to Lasso expands, or debiases, them away from zero. This results in a decrease in the variance of the final model, as you can see by comparing the errors for the Lasso variable selection model with the first OLS model.<br /><br /> You may have noticed that the set of nonzero coefficients here is different from that of the Lasso example earlier. That’s because Lasso variable selection uses a different measure (AIC) to select the preferred model compared to Lasso. This is the same measure used for the other variable selection methods in EViews.<br /><br /> What about out-of-sample predictive power? We have randomly labeled each of the 442 observations as either training or test datapoints (the split is 70% training, 30% test).
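The out-of-sample comparison that follows relies on standard error measures. As a reference, minimal generic definitions in Python (the SMAPE formula shown is one common convention among several; the values are hypothetical, not our actual results):

```python
import math

def forecast_errors(actual, forecast):
    """Standard forecast error measures.

    The SMAPE formula follows one common convention; several variants
    exist in the literature. MAPE assumes no actual value is zero.
    """
    n = len(actual)
    err = [f - a for a, f in zip(actual, forecast)]
    rmse = math.sqrt(sum(e * e for e in err) / n)
    mae = sum(abs(e) for e in err) / n
    mape = 100.0 * sum(abs(e / a) for a, e in zip(actual, err)) / n
    smape = 100.0 * sum(2.0 * abs(e) / (abs(a) + abs(f))
                        for a, f, e in zip(actual, forecast, err)) / n
    return {"rmse": rmse, "mae": mae, "mape": mape, "smape": smape}

# Hypothetical actual and forecast values over a small test set.
stats = forecast_errors([2.0, 4.0, 5.0], [2.5, 3.5, 5.0])
```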
After doing least squares and Lasso variable selection on the training data, we use Series->View->Forecast Evaluation to compare the forecasts for least squares and Lasso variable selection over the test set:<br /><br /> <!-- :::::::::: FIGURE 6 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/lassosel/images/fcomp_orig.png"><img height="auto" src="http://www.eviews.com/blog/lassosel/images/fcomp_orig.png" title="Lasso Predictive Evaluation" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 6: Lasso Predictive Evaluation</small><br/> <small>(Click to enlarge)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 6 :::::::::: --> We have achieved very slightly better predictive performance for some measures (MAE, MAPE) and very slightly worse for others (RMSE, SMAPE).<br /><br /> This is all mildly interesting. But the real power of variable selection techniques comes when you have a larger dataset and want to reduce the set of variables under consideration to a more manageable set. 
To this end, we use the “extended” dataset provided by the authors that includes the ten original variables plus squares of nine variables and forty-five interaction terms, for a total of sixty-four variables.<br /><br /> First, we repeat the OLS regression from earlier with the new extended dataset:<br /><br /> <!-- :::::::::: FIGURE 7 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/lassosel/images/ls_extended.png"><img height="auto" src="http://www.eviews.com/blog/lassosel/images/ls_extended.png" title="Extended OLS" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 7: Extended OLS</small><br/> <small>(Click to enlarge)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 7 :::::::::: --> Adjusted R-squared is actually higher than it was for the original ten variables, at .5233, so the additional variables have added some explanatory power to the model.<br /><br /> Next, let’s go straight to Lasso variable selection on the extended dataset.<br /><br /> <!-- :::::::::: FIGURE 8 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/lassosel/images/lassovs_all.png"><img height="auto" src="http://www.eviews.com/blog/lassosel/images/lassovs_all.png" title="Extended Lasso Variable Selection" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 8: Extended Lasso Variable Selection</small><br/> <small>(Click to enlarge)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 8 :::::::::: --> Out of sixty-four original search variables, the selection procedure has kept fourteen. This is a significant reduction in complexity. The adjusted R-squared has increased from .5233 to .5308, and the standard error of the regression has decreased.<br /><br /> The in-sample R-squared and errors have moved in a modest but promising direction. What about out-of-sample prediction? 
We again compare the forecasts for least squares and Lasso variable selection over the test set:<br /><br /> <!-- :::::::::: FIGURE 9 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/lassosel/images/fcomp_ext.png"><img height="auto" src="http://www.eviews.com/blog/lassosel/images/fcomp_ext.png" title="Extended Lasso Predictive Evaluation" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 9: Extended Lasso Predictive Evaluation</small><br/> <small>(Click to enlarge)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 9 :::::::::: --> Now we can see a meaningful improvement in forecasting performance. All of the error measures have improved, some significantly. Applying Lasso variable selection to this larger dataset has led to reduced model complexity, a slight improvement in the in-sample fit, and improved forecasting performance over least squares.<br /><br /><br /><br /> <h3>Request a Demonstration</h3> If you would like to experience Lasso methods in EViews for yourself, you can request a demonstration copy <a href="http://www.eviews.com/demo">here</a>. 
</span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com1tag:blogger.com,1999:blog-6883247404678549489.post-6688321458965513982021-02-02T10:51:00.001-08:002021-02-02T10:51:19.453-08:00Univariate GARCH Models with Skewed Student’s-t Errors<style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { border: 0px solid black; } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } .seccol { } .subseccol { color: #fa5e5e } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: ['{\\left(}'], rb: ['{\\right)}'], rbrace: ['{\\left(#1\\right)}', 1], cbrace: ['{\\left\\{#1\\right\\}}', 1], sbrace: ['{\\left[#1\\right]}', 1], bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1], series: ['{\\left\\{#1_{#2}\\right\\}_{#2=#3}^{#4}}', 4], xsum: ['{\\sum_{#1=#2}^{#3}{#4}}', 4], var: ['{\\operatorname\{var\}}'], sign: ['{\\operatorname\{sign\}}'], diag: ['{\\operatorname\{diag\}}'], med: ['{\\operatorname\{median\}}'], vec: ['{\\operatorname\{vec\}}'], tr: ['{\\operatorname\{tr\}}'], min: ['{\\operatorname\{min\}}'] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"> </script> <span style="font-family: "verdana" sans-serif"> <i>Authors and guest post by Eren Ocakverdi</i><br /><br /> This blog piece intends to introduce a new add-in (i.e. <b>SKEWEDUGARCH</b>) that extends the current capability of EViews’ available features for the estimation of univariate GARCH models. 
<a name='more'></a><br /><br /> <h3 class="seccol">Table of Contents</h3> <ol> <li><a href="#sec1">Introduction</a> <li><a href="#sec2">Skewed Student’s-t Distribution </a> <li><a href="#sec3">Application to USDTRY currency </a> <li><a href="#sec4">Files</a> <li><a href="#sec5">References</a> </ol><br /> <h3 class="seccol", id="sec1">Introduction</h3> Volatility is an important concept in itself, but it has a special place in finance as it is usually associated with risk. Although investors believe in a higher-risk, higher-reward trade-off, it is not an easy one to exploit. The price of an asset can change dramatically over a short period of time and in either direction, which makes it exceedingly difficult to predict. Volatility is responsible for such sharp movements, so it is important to develop a gauge to measure it and identify its dynamics.<br/><br/> One of the critical observations regarding the returns of financial assets is that their volatilities are not fixed over time and tend to cluster around large changes. GARCH models are specifically designed to capture this behavior and describe the movement of volatility more accurately. Details of GARCH estimation in EViews can be found <a href='http://www.eviews.com/help/helpintro.html#page/content%2Farch-ARCH_and_GARCH_Estimation.html%23'>here</a>.<br/><br/> The conditional distribution of the error terms of returns (i.e. the mean equation) plays an important role in the estimation of GARCH-type models. Currently, EViews offers <a href='http://www.eviews.com/help/helpintro.html#page/content%2Farch-Basic_ARCH_Specifications.html%23ww165096'>three different assumptions</a> regarding the specification of this distribution.<br/><br/><br/><br/> <h3 class="seccol", id="sec2">Skewed Student’s-t Distribution</h3> Consistent with the stylized facts of financial markets, the distribution of returns has fat tails (i.e. excess kurtosis) and is not symmetrical (i.e. it is positively skewed). 
Although Student’s-t and GED specifications can account for the excess kurtosis, they are symmetrical densities by design. Lambert and Laurent (2001) suggest the use of a skewed Student’s-t density within the GARCH framework. The log likelihood contributions of a standardized skewed Student’s-t are as follows:<br /><br /> \begin{align*} l_t &= -\frac{1}{2} \log \rbrace{ \frac{\pi(\nu - 2) \Gamma \rbrace{\frac{\nu}{2}}^2}{\Gamma \rbrace{\frac{\nu + 1}{2}}^2 } } + \log \rbrace{\frac{2}{\xi + \frac{1}{\xi}}} + \log(s)\\ &\quad -\frac{1}{2}\log(\sigma^2_t) - \frac{\nu + 1}{2} \log \rbrace{1 + \frac{\rbrace{s z_t + m}^2}{\nu - 2}\xi^{-2I_t}} \end{align*} where $ z_t = \rbrace{y_t - X_t^\top \theta}/\sigma_t $ is the standardized residual, $\xi$ is the asymmetry parameter and $\nu$ is the degrees-of-freedom of the distribution. The other parameters, $m$, $s$ and $I_t$, are given by: \begin{align*} m &= \frac{\Gamma \rbrace{\frac{\nu - 1}{2}} \sqrt{\nu - 2}}{\sqrt{\pi}\Gamma\rbrace{\frac{\nu}{2}}}\rbrace{\xi - \frac{1}{\xi}}\\ s &= \sqrt{\rbrace{\xi^2 + \frac{1}{\xi^2} - 1} - m^2}\\ I_t &= \begin{cases} \phantom{-}1 \quad \text{if} \quad z_t \geq - \frac{m}{s}\\ -1 \quad \phantom{\text{if}}\text{otherwise} \end{cases} \end{align*} For a symmetrical distribution, $\xi = 1$; since the add-in estimates the logarithmic transformation of the parameter, you should test $\log(\xi) = 0$ for the null hypothesis of symmetry.<br /><br /> Below is a comparison of the theoretical distribution of Student’s-t and its (positively) skewed version. 
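This density is straightforward to evaluate numerically. Below is a minimal Python sketch of the standardized skewed Student's-t log density (our own illustration, independent of the add-in's code); the GARCH contribution $l_t$ is then this value minus $\tfrac{1}{2}\log\sigma_t^2$:

```python
import numpy as np
from scipy.special import gammaln

def skewt_logpdf(z, nu, xi):
    """Log density of the standardized skewed Student's-t at z = (y - X'theta)/sigma."""
    # skewness-induced location m and scale s
    m = (np.exp(gammaln((nu - 1) / 2) - gammaln(nu / 2))
         * np.sqrt(nu - 2) / np.sqrt(np.pi) * (xi - 1 / xi))
    s = np.sqrt(xi ** 2 + 1 / xi ** 2 - 1 - m ** 2)
    I = np.where(z >= -m / s, 1.0, -1.0)          # side indicator
    # normalizing constant of the symmetric standardized Student's-t
    const = gammaln((nu + 1) / 2) - gammaln(nu / 2) - 0.5 * np.log(np.pi * (nu - 2))
    return (const + np.log(2 / (xi + 1 / xi)) + np.log(s)
            - (nu + 1) / 2 * np.log1p((s * z + m) ** 2 / (nu - 2) * xi ** (-2 * I)))
```

Setting $\xi = 1$ makes $m = 0$ and $s = 1$, so the expression collapses to the symmetric standardized Student's-t, which is a useful sanity check.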
Skewness increases the chance of observing extreme values, which has important implications in finance.<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/skewedtdist.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/skewedtdist.png" title="Skewed t-Distribution" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1: Skewed t-Distribution</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 1 :::::::::: --> <h3 class="seccol", id="sec3">Application to USDTRY currency</h3> FX markets are convenient places for studying the dynamics of volatility and Turkish Lira has recently come to the fore among emerging markets due to sudden capital outflows as well as currency shocks (<b>USDTRY.WF1</b>).<br /><br /> A simple visual inspection of squared returns shows us the magnitude of the shock that hit the markets on August 10th, 2018 (<b>SKEWEDUGARCH_EXAMPLE.PRG</b>). The impact was so severe that it dwarfed all other volatilities experienced during the analysis period of 2005-2020.<br /><br /> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/returnssq.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/returnssq.png" title="Squared Returns" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2: Squared Returns</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 2 :::::::::: --> In order to estimate the conditional variance of returns, we start by fitting two alternative models (i.e. GARCH(1,1) and TGARCH(1,1)) with two different distributional assumptions (i.e. Normal and Student’s-t). 
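For reference, the conditional variance in both candidate specifications is a simple recursion in lagged residuals; a numpy sketch with hypothetical parameter values ($\gamma = 0$ reduces the TGARCH filter to plain GARCH(1,1)):

```python
import numpy as np

def tgarch_variance(e, omega, alpha, beta, gamma=0.0):
    """sigma2_t = omega + alpha*e_{t-1}^2 + gamma*e_{t-1}^2*1(e_{t-1}<0) + beta*sigma2_{t-1}."""
    sigma2 = np.empty(len(e))
    # seed with the implied unconditional variance (leverage hits half the time)
    sigma2[0] = omega / (1.0 - alpha - beta - gamma / 2.0)
    for t in range(1, len(e)):
        lev = gamma * e[t - 1] ** 2 if e[t - 1] < 0 else 0.0
        sigma2[t] = omega + alpha * e[t - 1] ** 2 + lev + beta * sigma2[t - 1]
    return sigma2
```

The leverage term only fires after negative residuals, which is exactly the asymmetry the TGARCH specification adds on top of GARCH(1,1).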
The mean equation is the same for all models: \begin{align*} r_t &= \bar{r} + e_t\\ e_t &= \epsilon_t \sigma_t \end{align*} \begin{align*} \textbf{Model 1}: \quad \sigma_t^2 &= \omega + \alpha_1 e_{t-1}^2 + \beta_1\sigma_{t-1}^2, \quad \text{where} \quad \epsilon_t \sim N(0,1)\\ \textbf{Model 2}: \quad \sigma_t^2 &= \omega + \alpha_1 e_{t-1}^2 + \beta_1\sigma_{t-1}^2 + \gamma_1 e_{t-1}^2(e_{t-1} < 0), \quad \text{where} \quad \epsilon_t \sim N(0,1)\\ \textbf{Model 3}: \quad \sigma_t^2 &= \omega + \alpha_1 e_{t-1}^2 + \beta_1\sigma_{t-1}^2, \quad \text{where} \quad \epsilon_t \sim \text{Student}(0,1,\nu)\\ \textbf{Model 4}: \quad \sigma_t^2 &= \omega + \alpha_1 e_{t-1}^2 + \beta_1\sigma_{t-1}^2 + \gamma_1 e_{t-1}^2(e_{t-1} < 0), \quad \text{where} \quad \epsilon_t \sim \text{Student}(0,1,\nu) \end{align*} <!-- :::::::::: FIGURES 3a and 3b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 3a :::::::::: --> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/model1.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/model1.png" title="Model 1 Results" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 3b :::::::::: --> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/model2.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/model2.png" title="Model 2 Results" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3a: Model 1 Results</small> </center> </td> <td class="nb"> <center> <small>Figure 3b: Model 2 Results</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 3a and 3b :::::::::: --> <!-- :::::::::: FIGURES 4a and 4b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 4a :::::::::: --> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/model3.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/model3.png" title="Model 3 Results" width="360" /></a><br /> 
</center> </td> <td> <!-- :::::::::: FIGURE 4b :::::::::: --> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/model4.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/model4.png" title="Model 4 Results" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 4a: Model 3 Results</small> </center> </td> <td class="nb"> <center> <small>Figure 4b: Model 4 Results</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 4a and 4b :::::::::: --> From a purely statistical point of view (that is, $p$-values and information criteria), fat tails and/or leverage effects better represent the Turkish Lira’s volatility dynamics. The distribution fit to standardized residuals and the analysis of the news impact curve provide supporting evidence in that respect.<br /><br /> <!-- :::::::::: FIGURES 5a and 5b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 5a :::::::::: --> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/leverage.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/leverage.png" title="Leverage" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 5b :::::::::: --> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/nic.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/nic.png" title="News Impact Curve" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5a: Leverage</small> </center> </td> <td class="nb"> <center> <small>Figure 5b: News Impact Curve</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 5a and 5b :::::::::: --> Extreme events seem to occur more often than suggested by the normal distribution, and the volatility response to these shocks is more severe in the case of depreciation than in that of appreciation.<br /><br /> At this point, one may also wonder if there is any long memory 
effect in the volatility of returns. To find out, we first estimate an ARFIMA model for the squared return series and a simple FIGARCH model for the variance of the regular return series: \begin{align*} &\textbf{Fractional Mean Model}: \quad \rbrace{1 - L}^d(r_t^2 - \mu) = e_t, \quad \text{where} \quad e_t \sim N(0,\bar{\sigma}^2)\\ &\textbf{Fractional Variance Model}: \quad \sigma_t^2 = \omega + \rbrace{1 - \beta_1 - \rbrace{1 - \alpha_1}\rbrace{1 - L}^d}e_{t-1}^2 + \beta_1\sigma_{t-1}^2, \quad \text{where} \quad \epsilon_t \sim \text{Student}(0,1,\nu) \end{align*} <!-- :::::::::: FIGURES 6a and 6b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 6a :::::::::: --> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/model5.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/model5.png" title="Fractional Mean Model" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 6b :::::::::: --> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/model6.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/model6.png" title="Fractional Variance Model" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 6a: Fractional Mean Model</small> </center> </td> <td class="nb"> <center> <small>Figure 6b: Fractional Variance Model</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 6a and 6b :::::::::: --> The fractional difference parameter is significantly different from both 0 and 1 in the two models, and it is also significantly smaller than 0.5 in the ARFIMA model, suggesting that the squared return series has long memory properties. 
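The fractional difference operator $(1-L)^d$ appearing above expands into an infinite-order filter whose weights follow a simple recursion, which is what makes the truncation choice matter in practice; a minimal sketch:

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n weights of the (1 - L)^d expansion: w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w
```

With $d = 1$ the weights collapse to the ordinary first difference $(1, -1, 0, \dots)$; for fractional $d$ they decay slowly, so the filter must be truncated at some finite lag.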
However, by modelling the variance of the return series explicitly, we have successfully explained the behaviour of volatility and mitigated the impact of (and the need for) long memory.<br /><br /> Since the estimation of the fractional difference parameter can be sensitive to the choice of truncation limits, it may not be worth the effort unless the statistical properties of the FIGARCH results are significantly better than those of rival GARCH models. Here, our previous TGARCH(1,1) model with Student’s-t errors is still the frontrunner in that respect.<br /><br /> What if positive shocks (i.e. depreciation) happen less frequently but are more severe than the negative shocks (i.e. appreciation) implied by a symmetric distribution? To test this hypothesis, one needs to look for asymmetry towards larger positive extreme values. We can estimate our final model via the add-in, assuming a skewed Student’s-t distribution, and see if we can further improve the fit.<br /><br /> <!-- :::::::::: FIGURE 7 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/skewedgarch.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/skewedgarch.png" title="Skewed GARCH Estimates" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 7: Skewed GARCH Estimates</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 7 :::::::::: --> Estimated parameter values change slightly vis-à-vis our original TGARCH model, but the asymmetry parameter is positive and significant, supporting the evidence of skewness. Information criteria favor this version of the model over all other specifications above.<br /><br /> One of the main uses of GARCH models in financial institutions is the estimation of Value-at-Risk (VaR), a measure that tracks the potential loss that might be incurred during trading activity of any sort. 
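Concretely, a one-day parametric VaR threshold is a tail quantile of the assumed error distribution scaled by the conditional volatility forecast. A sketch with illustrative inputs (not the workfile's estimates):

```python
import numpy as np
from scipy import stats

def var_threshold(sigma, alpha=0.99, nu=None, mu=0.0):
    """Right-tail daily VaR: mu + sigma * (alpha-quantile of the standardized errors)."""
    if nu is None:
        q = stats.norm.ppf(alpha)                            # Gaussian errors
    else:
        q = stats.t.ppf(alpha, nu) * np.sqrt((nu - 2) / nu)  # unit-variance Student's t
    return mu + sigma * q
```

Fat tails push the 99% quantile out relative to the normal, which is the gap between the VaR lines compared in the chart below.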
Commonly used symmetric error distributions may lead to underestimation of right-tail risk (i.e. in short trading positions). The chart below compares daily VaR estimates from commonly used distributions and depicts the effects of fat tails and skewness for a long position in TL (or a short position in USDTRY).<br /><br /> <!-- :::::::::: FIGURE 8 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/skewedugarch/images/valueatrisk.png"><img height="auto" src="http://www.eviews.com/blog/skewedugarch/images/valueatrisk.png" title="Value-at-Risk" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 8: Value-at-Risk</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 8 :::::::::: --> At its peak around the summer of 2018, the currency shock pushed the 99% VaR threshold of a TL-denominated asset or portfolio to a daily loss of 14.5%. This would have been considered an astronomical event a year earlier, when the threshold stood at only around 1%. Increasing the likelihood of extreme events and incorporating the asymmetric tail behaviour of the shocks would add a further 5.1 and 3.5 percentage points, respectively, carrying this limit to 23.1%!<br /><br /><br /><br /> <hr /> <h3 class="seccol", id="sec4">Files</h3> <ul> <li><a href="http://www.eviews.com/blog/skewedugarch/workfiles/usdtry.wf1"><b class="wf">USDTRY.WF1</b></a></li> <li><a href="http://www.eviews.com/blog/skewedugarch/workfiles/skewedugarch_example.prg"><b class="wf">SKEWEDUGARCH_EXAMPLE.PRG</b></a></li> </ul><br /><br /> <hr /> <h3 class="seccol", id="sec5">References</h3> <ol class="bib2xhtml"> <li id="lambert-laurent-2001"> Lambert, P. and Laurent, S. (2001), <i>"Modelling Financial Time Series Using GARCH-Type Models and a Skewed Student Density"</i>, Mimeo, Université de Liège. 
</li> </ol></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com1tag:blogger.com,1999:blog-6883247404678549489.post-85782422361418923402021-01-20T09:28:00.003-08:002021-01-20T09:35:09.379-08:00Automatic Factor Selection: Working with FRED-MD Data<style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { border: 0px solid black; } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } .seccol { } .subseccol { color: #fa5e5e } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: ['{\\left(}'], rb: ['{\\right)}'], rbrace: ['{\\left(#1\\right)}', 1], cbrace: ['{\\left\\{#1\\right\\}}', 1], sbrace: ['{\\left[#1\\right]}', 1], bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1], series: ['{\\left\\{#1_{#2}\\right\\}_{#2=#3}^{#4}}', 4], xsum: ['{\\sum_{#1=#2}^{#3}{#4}}', 4], var: ['{\\operatorname\{var\}}'], sign: ['{\\operatorname\{sign\}}'], diag: ['{\\operatorname\{diag\}}'], med: ['{\\operatorname\{median\}}'], vec: ['{\\operatorname\{vec\}}'], tr: ['{\\operatorname\{tr\}}'], min: ['{\\operatorname\{min\}}'] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"> </script> <span style="font-family: "verdana" sans-serif"> This is the first of two posts devoted to automatic factor selection and panel unit root tests with cross-sectional dependence. Both features were recently released with EViews 12. Here, we summarize and work with two seminal contributions to automatic factor selection by Bai and Ng (2002) and Ahn and Horenstein (2013). 
<a name='more'></a><br /><br /> <h3 class="seccol">Table of Contents</h3> <ol> <li><a href="#sec1">Introduction</a> <li><a href="#sec2">Overview of Automatic Factor Selection</a> <ul> <li><a href="#sec2.1">Bai and Ng (2002)</a> <li><a href="#sec2.2">Ahn and Horenstein (2013)</a> </ul> <li><a href="#sec3">Working with FRED-MD</a> <ul> <li><a href="#sec3.1">Factor Selection using Bai and Ng (2002)</a> <li><a href="#sec3.2">Factor Selection using Ahn and Horenstein (2013)</a> <li><a href="#sec3.3">Factor Model Estimation</a> <li><a href="#sec3.4">Forecasting Industrial Production</a> </ul> <li><a href="#sec4">Files</a> <li><a href="#sec5">References</a> </ol><br /> <h3 class="seccol", id="sec1">Introduction</h3> Recent trends in empirical economics (particularly in macroeconomics) indicate increased use of and demand for large dimensional datasets. Since the temporal dimension ($T$) is typically thought to be large anyway, the term <b>large dimensional</b> here refers to the number of variables ($N$), otherwise referred to as <b>factors</b> or <b>cross-sectional</b> units. This is in contrast with traditional paradigms where the variables are few in number, but the temporal dimension is long. This paradigm shift is largely the result of theoretical advancements in <b>dimension-aware</b> techniques such as factor-augmented and panel models.<br /><br /> At the heart of all dimension-aware methods is <b>factor selection</b>, or the correct specification (estimation) of the number of factors. Traditionally, this parameter was often assumed. Recently, however, several contributions have offered data-driven (semi-)autonomous factor selection methods, most notably those of Bai and Ng (2002) and Ahn and Horenstein (2013).<br /><br /> These automatic factor selection techniques have come to play important roles in factor augmented (vector auto)regressions, panel unit root tests with cross-sectional dependence, and data manipulation. 
A particularly important example of the latter is <a href='https://research.stlouisfed.org/econ/mccracken/fred-databases/'><b>FRED-MD</b></a> - a regularly updated and freely distributed macroeconomic database designed for the empirical analysis of <i>big data</i>. What is notable here is that the dataset is constructed by collecting a vast number of important macroeconomic variables (factors), which are then optimally reduced in dimensionality using the Bai and Ng (2002) factor selection procedure.<br /><br /> In this post, we will demonstrate how to perform this dimensionality reduction using EViews' native Bai and Ng (2002) and Ahn and Horenstein (2013) factor selection procedures. Both procedures were introduced with the release of EViews 12. In particular, we will download the raw FRED-MD data, transform each series according to the FRED-MD instructions, and then proceed to perform dimensionality reduction. We will next estimate a traditional factor model with the optimally selected factors, and then proceed to forecast industrial production.<br /><br /> We pause briefly in the next section to provide a quick overview of the aforementioned factor selection procedures. <br /><br /><br /><br /> <h3 class="seccol", id="sec2">Overview of Automatic Factor Selection</h3> Recall that the maximum number of factors cannot exceed the number of observable variables. Accordingly, factor selection is often used as a <b>dimension reduction</b> technique. In other words, the goal is always to optimally select the smallest number of the most representative or <b>principal</b> variables in a set. Since dimensional principality (or importance) is typically quantified in terms of <b>eigenvalues</b>, virtually all dimension reduction techniques in this literature go through <b>principal component analysis</b> (PCA). 
For detailed theoretical and empirical discussions of PCA, please refer to our blog entries: <a href='http://blog.eviews.com/2018/10/principal-component-analysis-part-i.html'>Principal Component Analysis: Part I (Theory)</a> and <a href='http://blog.eviews.com/2018/11/principal-component-analysis-part-ii.html'>Principal Component Analysis: Part II (Practice)</a>.<br /><br /> Although PCA can identify which dimensions are most principal in a set, it is not designed to offer guidance on how many dimensions to retain. As a result, this parameter was traditionally assumed rather than driven by the data. To address this inadequacy, Bai and Ng (2002) proposed to cast the problem of factor selection as a model selection problem, whereas Ahn and Horenstein (2013) achieve automatic factor selection by maximizing the ratio of two adjacent eigenvalues. In either case, optimal factor selection is data driven.<br /><br /> <h4 class="subseccol", id="sec2.1">Bai and Ng (2002)</h4> Bai and Ng (2002) handle the problem of optimal factor selection as the more familiar model selection problem. In particular, candidate models are judged on a tradeoff between goodness of fit and parsimony. To formalize matters, consider the traditional factor augmented model: $$ Y_{i,t} = \mathbf{\lambda}_{i}^{\top} \mathbf{F}_{t} + e_{i,t} $$ where $ \mathbf{F}_{t} $ is a vector of $ r $ <b>common factors</b>, $ \mathbf{\lambda}_{i} $ denotes a vector of <b>factor loadings</b>, and $ e_{i,t} $ is the <b>idiosyncratic component</b>, which is cross-sectionally independent provided $ \mathbf{F}_{t} $ accounts for all inter-cross-sectional correlations. When the $ e_{i,t} $ are not cross-sectionally independent, the factor model is said to be <i>approximate</i>.<br /><br /> The objective here is to identify the optimal number of factors. 
In particular, $ \mathbf{\lambda}_{i}$ and $ \mathbf{F}_{t} $ are estimated through the optimization problem: \begin{align} \min_{\mathbf{\Lambda}, \mathbf{F}}\frac{1}{NT} \xsum{i}{1}{N}{\xsum{t}{1}{T}{\rbrace{ Y_{i,t} - \mathbf{\lambda}_{i}^{\top}\mathbf{F}_{t} }^{2}}} \label{eq1} \end{align} subject to the normalization $ \frac{1}{T}\mathbf{F}^{\top}\mathbf{F} = \mathbf{I} $ where $ \mathbf{I} $ is the identity matrix.<br /><br /> Traditionally, the estimated factors $\widehat{\mathbf{F}}_{t}$ are proportional to the $T \times \min(N,T)$ matrix of eigenvectors associated with all eigenvalues of the $T\times T$ matrix $\mathbf{Y}\mathbf{Y}^{\top}$. This generates the full set of $ \min(N,T) $ factors. The objective then is to choose $ r < \min(N,T) $ factors that best capture the variation in $ \mathbf{Y} $.<br /><br /> Since the minimization problem in \eqref{eq1} is linear, once the factor matrix is estimated (observed), estimation of the factor loadings reduces to an ordinary least squares problem for a given set of regressors (factors). In particular, let $ \mathbf{F}^{r} $ denote the factors associated with the $ r $ largest eigenvalues of $ \mathbf{Y}\mathbf{Y}^{\top} $, and let $ \mathbf{\lambda}_{i}^{r} $ denote the associated factor loadings. Then, the problem of estimating $ \mathbf{\lambda}_{i}^{r} $ is cast as: $$ V \rbrace{ r, \widehat{\mathbf{F}}^{r} } = \min_{\mathbf{\Lambda}}\frac{1}{NT} \xsum{i}{1}{N}{\xsum{t}{1}{T}{\rbrace{ Y_{i,t} - \mathbf{\lambda}_{i}^{r^{\top}}\widehat{\mathbf{F}}_{t}^{r} }^{2}}} $$ Since a model with $ r+1 $ factors fits no worse than a model with $ r $ factors, while efficiency decreases with the number of regressors, optimally selecting $ r $ becomes a classical model selection problem. Furthermore, observe that $ V \rbrace{ r, \widehat{\mathbf{F}}^{r} } $ is the (scaled) sum of squared residuals from a regression of $ \mathbf{Y}_{i} $ on the $ r $ factors, for all $ i $. 
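To fix ideas, here is a bare-bones numeric sketch of computing $ V \rbrace{ r, \widehat{\mathbf{F}}^{r} } $ by principal components and picking $r$ with one of the Bai and Ng (2002) penalized criteria (the $IC_1$ variant); an illustration on synthetic data, not EViews' implementation:

```python
import numpy as np

def bai_ng_select(Y, rmax):
    """Y: T x N demeaned data. Return r minimizing IC_1 = log V(r) + r*((N+T)/NT)*log(NT/(N+T))."""
    T, N = Y.shape
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    ic = np.empty(rmax)
    for r in range(1, rmax + 1):
        F = np.sqrt(T) * U[:, :r]            # factors, normalized so F'F/T = I
        lam = Y.T @ F / T                    # least-squares loadings given F
        V = ((Y - F @ lam.T) ** 2).sum() / (N * T)
        ic[r - 1] = np.log(V) + r * (N + T) / (N * T) * np.log(N * T / (N + T))
    return int(np.argmin(ic)) + 1
```

On data generated from a known two-factor structure with modest idiosyncratic noise, the criterion recovers $r^{\star} = 2$.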
Thus, to determine $ r $ optimally, one can use a loss function of the form $$ V \rbrace{ r, \widehat{\mathbf{F}}^{r} } + rg(N,T) $$ where $ g(N,T) $ is a penalty for overfitting. Bai and Ng (2002) propose six such loss functions that yield consistent estimates, labeled $PC_1$ through $PC_3$ and $IC_1$ through $IC_3$. The optimal number of factors then derives as the minimizer of the loss function across $ r \leq r_{\text{max}} < \min(N,T) $, where $r_{\text{max}}$ is some known maximum number of factors under consideration. In other words: $$ r^{\star} \equiv \underset{1 \leq r \leq r_{\text{max}}}{\arg\min} \sbrace{ V \rbrace{ r, \widehat{\mathbf{F}}^{r} } + rg(N,T) } $$ Note that since $r_{\text{max}}$ must be specified <i>a priori</i>, its choice will play a role in optimization. <br /><br /> <h4 class="subseccol", id="sec2.2">Ahn and Horenstein (2013)</h4> In contrast to Bai and Ng (2002), Ahn and Horenstein (2013) exploit the fact that the $ r $ largest eigenvalues of the relevant matrix grow unboundedly with the sample, whereas the remaining eigenvalues stay bounded. The optimization strategy is then simply to maximize the ratio of two adjacent eigenvalues. One advantage of this contribution is that it is far less sensitive to the choice of $ r_{\text{max}} $ than Bai and Ng (2002). Furthermore, the procedure is significantly easier to compute, requiring only eigenvalues.<br /><br /> To further the discussion, let $ \psi_{r} $ denote the $ r^{\text{th}} $ largest eigenvalue of the positive semi-definite matrix $ \mathbf{Q} \equiv \mathbf{Y}\mathbf{Y}^{\top} $ or $ \mathbf{Q} \equiv \mathbf{Y}^{\top}\mathbf{Y} $. Furthermore, define: $$ \tilde{\mu}_{NT,\, r} \equiv \frac{1}{NT}\psi_{r} $$ Ahn and Horenstein (2013) propose the following two estimators of the number of factors. 
For some $ 1 \leq r_{max} < \min(N,T) $, the optimal number of factors, $ r^{\star} $, is derived as: <ul> <li><b>Eigenvalue Ratio</b> (ER) $$ r^{\star} \equiv \displaystyle \underset{r \leq r_{max}}{\arg\max}\, ER(r), \quad ER(r) \equiv \frac{\tilde{\mu}_{NT,\, r}}{\tilde{\mu}_{NT,\, r + 1}} $$ </li> <li><b>Growth Ratio</b> (GR) $$ r^{\star} \equiv \displaystyle \underset{r \leq r_{max}}{\arg\max}\, GR(r), \quad GR(r) \equiv \frac{\log \rbrace{ 1 + \widehat{\mu}_{NT,\, r} }}{\log \rbrace{ 1 + \widehat{\mu}_{NT,\, r + 1} }} $$ where $$ \widehat{\mu}_{NT,\, r} \equiv \frac{\tilde{\mu}_{NT,\, r}}{\displaystyle \xsum{k}{r+1}{\min(N,T)}{\tilde{\mu}_{NT,\, k}}} $$ </li> </ul> Lastly, we note that Ahn and Horenstein (2013) suggest demeaning the data in both the time and the cross-section dimensions. While not strictly necessary for consistency, this step is extremely useful in small samples.<br /><br /><br /><br /> <h3 class="seccol", id="sec3">Working with FRED-MD Data</h3> The FRED-MD data is a large dimensional dataset updated in real time and publicly distributed by the Federal Reserve Bank of St. Louis. In its raw form, it consists of 128 time series at either quarterly or monthly frequency. Here, we will work with the monthly frequency, which can be downloaded in raw form from <a href='https://s3.amazonaws.com/files.fred.stlouisfed.org/fred-md/monthly/current.csv'><b>current.csv</b></a>. Furthermore, associated with the raw dataset is a set of instructions on how to process each variable in the dataset for empirical work. This can be obtained from <a href='https://s3.amazonaws.com/files.fred.stlouisfed.org/fred-md/Appendix_Tables_Update.pdf'><b>Appendix_Tables_Update.pdf</b></a>.<br /><br /> As a first step, we will write a brief EViews program to download the raw dataset and process each variable according to the aforementioned instructions. 
The program is summarized below: <pre><code><br /> <span style="color: green;">'documentation on the data:</span><br /> <span style="color: green;">'https://s3.amazonaws.com/files.fred.stlouisfed.org/fred-md/Appendix_Tables_Update.pdf</span><br /> <br /> close @wf<br /> <br /> <span style="color: green;">'get the latest data (monthly only):</span><br /> wfopen https://s3.amazonaws.com/files.fred.stlouisfed.org/fred-md/monthly/current.csv colhead=2 namepos=firstatt<br /> pagecontract if sasdate<>na<br /> pagestruct @date(sasdate)<br /> <br /> <span style="color: green;">'perform transformations</span><br /> %serlist = @wlookup("*", "series")<br /> <span style="color: blue;">for</span> %j {%serlist}<br /> %tform = {%j}.@attr("Transform:")<br /> <span style="color: blue;">if</span> @len(%tform) <span style="color: blue;">then</span><br /> <span style="color: blue;">if</span> %tform="1" <span style="color: blue;">then</span><br /> series temp = {%j} 'no transform<br /> <span style="color: blue;">endif</span><br /> <span style="color: blue;">if</span> %tform="2" <span style="color: blue;">then</span><br /> series temp = d({%j}) 'first difference<br /> <span style="color: blue;">endif</span><br /> <span style="color: blue;">if</span> %tform="3" <span style="color: blue;">then</span><br /> series temp = d({%j},2) 'second difference<br /> <span style="color: blue;">endif</span><br /> <span style="color: blue;">if</span> %tform="4" <span style="color: blue;">then</span><br /> series temp = log({%j}) 'log<br /> <span style="color: blue;">endif</span><br /> <span style="color: blue;">if</span> %tform="5" <span style="color: blue;">then</span><br /> series temp = dlog({%j}) 'log difference<br /> <span style="color: blue;">endif</span><br /> <span style="color: blue;">if</span> %tform="6" <span style="color: blue;">then</span><br /> series temp = dlog({%j},2) 'log second difference<br /> <span style="color: blue;">endif</span><br /> <span style="color: blue;">if</span> %tform="7" <span style="color: blue;">then</span><br /> series temp = d({%j}/{%j}(-1) -1) 'difference of relative change<br /> <span style="color: blue;">endif</span><br /> <br /> {%j} = temp<br /> {%j}.clearhistory<br /> d temp <br /> <span style="color: blue;">endif</span><br /> <span style="color: blue;">next</span><br /> <br /> <span style="color: green;">'drop auxiliary series</span><br /> group grp *<br /> grp.drop resid<br /> grp.drop sasdate<br /><br /> smpl 1960:03 @last<br /> </code></pre> This program processes and collects the variables in a group which we've labeled <b class="wfobj">GRP</b>. Additionally, we've dropped the variable <b class="wfobj">SASDATE</b> from this group since it is a date variable. In other words, <b class="wfobj">GRP</b> is a collection of 127 variables. Furthermore, as suggested by the FRED-MD paper, the sample under consideration should start from March 1960, so the final line of the code above sets that sample.<br /><br /> A brief glance at the variables indicates that some of them have missing values. Unfortunately, neither the Bai and Ng (2002) nor the Ahn and Horenstein (2013) procedure handles missing values particularly well. Accordingly, as suggested in the original FRED-MD paper, missing values are initially set to the mean of non-missing observations for any given series. 
This is easily achieved with a quick program as follows: <pre><code><br /> <span style="color: green;">'impute missing values with mean of non-missing observations</span><br /> <span style="color: blue;">for</span> !k=1 to grp.@count<br /> <span style="color: green;">'compute mean of non-missing observations</span><br /> series tmp = grp(!k)<br /> !mu = @mean(tmp)<br /> <br /> <span style="color: green;">'set missing observations to mean</span><br /> grp(!k) = @nan(grp(!k), !mu)<br /> <br /> <span style="color: green;">'clean up before next series</span><br /> smpl 1960:03 @last<br /> d tmp <br /> <span style="color: blue;">next</span><br /> </code></pre> The original FRED-MD paper next suggests a second-stage updating of missing observations. Nevertheless, for the sake of simplicity, we will skip this step and proceed to estimating the optimal number of factors.<br /><br /> Although we will later estimate a factor model which will handle factor selection within its scope, here we demonstrate automatic factor selection as a standalone exercise. To do so, we will proceed through the principal component dialog. In particular, we open the group <b class="wfobj">GRP</b>, and then proceed to click on <b>View/Principal Components...</b>.<br /><br /> Notice that the principal components dialog here is changed from previous versions. This is to allow for the additional selection procedures we've introduced in EViews 12. Because of these changes, we briefly pause to explain the options available to users. In particular, the method dropdown offers several factor selection procedures. The first two, <b>Bai and Ng</b> and <b>Ahn and Horenstein</b>, are automatic selection procedures. The remaining two, <b>Simple</b> and <b>User</b>, are legacy principal component methods that were available in EViews versions prior to 12.<br /><br /> Next, associated with each method is a criterion to use in selection. In the case of Bai and Ng, this offers seven possibilities.
One for each of the six criteria, and the default <b>Average of criteria</b>, which reports all six criteria along with their average.<br /><br /> Also, associated with each method is a dropdown which determines how the maximum number of factors is determined. Here EViews offers five possibilities, the specifics of which can be obtained by referring to the <a href='http://www.eviews.com/help/helpintro.html#page/content/groups-Principal_Components.html'><b>EViews manual</b></a>. Recall that both the Bai and Ng (2002) as well as the Ahn and Horenstein (2013) methods require the specification of this parameter. Although EViews offers several automatic selection mechanisms, in keeping with the suggestions in the FRED-MD paper, exercises below will use a user-defined value of 8.<br /><br /> Finally, EViews offers the option of demeaning and standardizing the dataset across both the time and cross-section dimensions. In fact, since the FRED-MD paper suggests that data should be demeaned and standardized, exercises below will proceed by demeaning and standardizing each of the variables.
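For readers who want to see the mechanics outside EViews, the two preparation steps above (transformation codes, mean imputation, demeaning and standardization) can be sketched in Python. This is illustrative only, not EViews code: the `apply_tcode` and `prepare` names are our own, the data here are synthetic rather than the actual FRED-MD file, and the tcode mapping follows the FRED-MD appendix.

```python
import numpy as np
import pandas as pd

def apply_tcode(x, tcode):
    """Apply a FRED-MD transformation code (1-7) to a series:
    1 level, 2 first difference, 3 second difference, 4 log,
    5 log difference, 6 second log difference,
    7 first difference of the percent change."""
    x = pd.Series(x, dtype=float)
    if tcode == 1:
        return x
    if tcode == 2:
        return x.diff()
    if tcode == 3:
        return x.diff().diff()
    if tcode == 4:
        return np.log(x)
    if tcode == 5:
        return np.log(x).diff()
    if tcode == 6:
        return np.log(x).diff().diff()
    if tcode == 7:
        return (x / x.shift(1) - 1.0).diff()
    raise ValueError("unknown transform code: %s" % tcode)

def prepare(df, tcodes):
    """Transform each column, impute missing values with the column mean,
    then demean and standardize each variable."""
    out = pd.DataFrame({c: apply_tcode(df[c], tcodes[c]) for c in df})
    out = out.iloc[2:]                     # drop rows lost to differencing
    out = out.fillna(out.mean())           # mean imputation, as in the text
    return (out - out.mean()) / out.std()  # demean and standardize
```

In EViews all of this is handled by the two programs above; the sketch only fixes ideas about what each transformation code does before the factor selection step.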
We next demonstrate how to obtain the Bai and Ng (2002) estimate of the optimal number of factors.<br /><br /> <h4 class="subseccol", id="sec3.1">Factor Selection using Bai and Ng (2002)</h4> From the open principal component dialog, we proceed as follows:<br /><br /> <ol> <li>Change the <b>Method</b> dropdown to <b>Bai and Ng</b>.</li> <li>Set the <b>User maximum factors</b> to <b>8</b>.</li> <li>Check the <b>Time-demean</b> box.</li> <li>Check the <b>Time-standardize</b> box.</li> <li>Click on <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/autofactsel/images/pca_dialog.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/pca_dialog.png" title="Principal Components Dialog" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1: Principal Components Dialog</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 1 :::::::::: --> Upon clicking <b>OK</b>, EViews produces a spool output. The first part of this output is a summary of the principal component analysis.
<!-- :::::::::: FIGURES 2a and 2b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 2a :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/pca_bn1.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/pca_bn1.png" title="Bai and Ng Summary: PCA Results" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 2b :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/pca_bn2.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/pca_bn2.png" title="Bai and Ng Summary: Factor Selection Results" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2a: Bai and Ng Summary: PCA Results</small> </center> </td> <td class="nb"> <center> <small>Figure 2b: Bai and Ng Summary: Factor Selection Results</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 2a and 2b :::::::::: --> The second part of the output, <b>Component Selection Results</b>, displays the summary of the Bai and Ng factor selection procedure. In particular, we see that each of the 6 selection criteria selected 8 factors. Naturally, the average number of selected factors is also 8. This result corresponds to the findings in the original FRED-MD paper, although the latter insists on using the PCP2 criterion. Accordingly, we can repeat the exercise above and show the specifics of the PCP2 selection. 
To do so, from the open group window, we again click on <b>View/Principal Components...</b>, and proceed as follows: <ol> <li>Change the <b>Method</b> dropdown to <b>Bai and Ng</b>.</li> <li>Change the <b>Criterion</b> dropdown to <b>PCP2</b>.</li> <li>Set the <b>User maximum factors</b> to <b>8</b>.</li> <li>Check the <b>Time-demean</b> box.</li> <li>Check the <b>Time-standardize</b> box.</li> <li>Click on <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURE 3 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/autofactsel/images/pca_bn3.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/pca_bn3.png" title="Bai and Ng PCP2: Factor Selection Results" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3: Bai and Ng PCP2: Factor Selection Results</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 3 :::::::::: --> The output above is a detailed look at the selection procedure. In particular, for each number of factors from 1 to 8, EViews displays the PCP2 statistic. Clearly, the minimum is achieved with 8 factors, where the statistic equals 0.904325. Again, the number of factors selected matches that obtained in the FRED-MD paper.<br /><br /> <h4 class="subseccol", id="sec3.2">Factor Selection using Ahn and Horenstein (2013)</h4> Similar steps can be undertaken to obtain the Ahn and Horenstein (2013) factor selection results.
From the open principal component dialog, we proceed as follows:<br /><br /> <ol> <li>Change the <b>Method</b> dropdown to <b>Ahn and Horenstein</b>.</li> <li>Set the <b>User maximum factors</b> to <b>8</b>.</li> <li>Check the <b>Time-demean</b> box.</li> <li>Check the <b>Time-standardize</b> box.</li> <li>Check the <b>Cross-demean</b> box.</li> <li>Check the <b>Cross-standardize</b> box.</li> <li>Click on <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURES 4a and 4b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 4a :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/pca_ah1.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/pca_ah1.png" title="Ahn and Horenstein Summary: PCA Results" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 4b :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/pca_ah2.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/pca_ah2.png" title="Ahn and Horenstein: Factor Selection Results" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 4a: Ahn and Horenstein: PCA Results</small> </center> </td> <td class="nb"> <center> <small>Figure 4b: Ahn and Horenstein: Factor Selection Results</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 4a and 4b :::::::::: --> The results of the Ahn and Horenstein (2013) procedure are markedly different. Unlike the preceding Bai and Ng exercises, here we have chosen to demean and standardize the cross-section (factor) dimension in addition to the time dimension. This is in keeping with Ahn and Horenstein (2013), who suggest that demeaning the cross-sectional dimension can yield superior results. In particular, the optimal number of factors selected is 1 using both the Eigenvalue Ratio and the Growth Ratio statistics.
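To fix ideas about how the two Ahn and Horenstein statistics behave, the following sketch computes ER and GR from the ordered eigenvalues of the scaled second-moment matrix, following the definitions given earlier. This is illustrative Python on synthetic data, not the EViews implementation; the `er_gr` name and the random panel are our own.

```python
import numpy as np

def er_gr(X, rmax):
    """Return (ER choice, GR choice) for a T x N demeaned data matrix X.
    mu[r] holds the (r+1)-th largest eigenvalue of X'X/(NT)."""
    T, N = X.shape
    mu = np.sort(np.linalg.eigvalsh(X.T @ X / (N * T)))[::-1]
    m = min(N, T)
    # tails[j] = sum of eigenvalues below the (j+1)-th: the mu_hat denominator
    tails = np.array([mu[r:m].sum() for r in range(1, rmax + 2)])
    er = mu[:rmax] / mu[1:rmax + 1]                  # ER(r) = mu_r / mu_{r+1}
    gr = (np.log1p(mu[:rmax] / tails[:rmax]) /       # GR(r) via mu_hat
          np.log1p(mu[1:rmax + 1] / tails[1:rmax + 1]))
    return int(np.argmax(er)) + 1, int(np.argmax(gr)) + 1

# Synthetic panel with two strong factors: both statistics should pick 2.
rng = np.random.default_rng(0)
F = rng.standard_normal((200, 2))
L = rng.standard_normal((20, 2))
X = F @ L.T + 0.1 * rng.standard_normal((200, 20))
X = X - X.mean(axis=0)
```

Because both criteria look for the largest jump in the eigenvalue spectrum, they agree here; on real data such as FRED-MD, where factor strength tapers off gradually, they can disagree sharply with the Bai and Ng criteria, as seen above.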
Clearly, this is very different from the 8 selected factors in the previous exercises.<br /><br /> <h4 class="subseccol", id="sec3.3">Factor Model Estimation</h4> Typically, the objective of factor selection mechanisms is not to find the number of factors outside of some context. Rather, it's a precursor to some form of estimation, such as a factor model or second-generation panel unit root tests. Here, we estimate a factor model using the full FRED-MD dataset and specify that the number of factors should be selected with the Bai and Ng (2002) procedure.<br /><br /> We start by creating a factor object. This is easily done by issuing the following command: <pre><code><br /> factor fact<br /> </code></pre> This will create a factor object in the workfile called <b class="wfobj">FACT</b>. We double-click it to open it and then proceed to click on the <b>Estimate</b> button to bring up the estimation dialog. <!-- :::::::::: FIGURES 5a and 5b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 5a :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/fact_dialog1.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/fact_dialog1.png" title="Factor Dialog: Data Tab" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 5b :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/fact_dialog2.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/fact_dialog2.png" title="Factor Dialog: Estimation Tab" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5a: Factor Dialog: Data Tab</small> </center> </td> <td class="nb"> <center> <small>Figure 5b: Factor Dialog: Estimation Tab</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 5a and 5b :::::::::: --> The rest of the steps proceed as follows: <ol> <li>Under the <b>Data</b> tab, enter <b class="wfobj">GRP</b>.</li> <li>Click on the
<b>Estimation</b> tab.</li> <li>From the <b>Number of factors</b> group, set the <b>Method</b> dropdown to <b>Bai and Ng</b>.</li> <li>From the <b>Max. Factors</b> dropdown select <b>User</b>.</li> <li>In the <b>User maximum factors</b> textbox write <b>8</b>.</li> <li>Check the <b>Time-demean</b> box.</li> <li>Check the <b>Time-standardize</b> box.</li> <li>Click on <b>OK</b>.</li> </ol><br /> This tells EViews to estimate a factor model of at most 8 factors, with the number of factors chosen from the full FRED-MD set of variables using the Bai and Ng (2002) procedure. The output is reproduced below:<br /><br /> <!-- :::::::::: FIGURES 6a and 6b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 6a :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/fact_est1.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/fact_est1.png" title="Factor Estimation: Part 1" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 6b :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/fact_est2.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/fact_est2.png" title="Factor Estimation: Part 2" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 6a: Factor Estimation: Part 1</small> </center> </td> <td class="nb"> <center> <small>Figure 6b: Factor Estimation: Part 2</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 6a and 6b :::::::::: --> <h4 class="subseccol", id="sec3.4">Forecasting Industrial Production</h4> Having estimated a factor model, we now repeat the exercise of forecasting industrial production.
The exercise is considered in the original FRED-MD paper where the forecast dynamics are summarized as follows: $$ y_{t+h} = \alpha_h + \beta_h(L)\hat{f}_t + \gamma_h(L)y_t $$ In other words, this is an $h$-step-ahead AR forecast with a constant and estimated factors as exogenous variables. In particular, to maintain comparability with the original exercise, we consider an 11-month-ahead forecast where $\hat{f}_t$ is obtained from the previously estimated factor model. That is, we'll forecast for the period of available data in 2020. This exercise is repeated for the first estimated factor, the sum of the first two estimated factors, and no estimated factors, respectively.<br /><br /> As a first step in this exercise, we must extract the estimated factors. Although the factors are unobserved, they may be estimated from the estimated factor model as scores. In particular, proceed as follows: <ol> <li>From the open factor model, click on <b>Proc</b> and then <b>Make Scores...</b>.</li> <li>Under the <b>Output specification</b> enter <b>1 2</b>.</li> <li>Click on <b>OK</b>.</li> </ol><br /> This will produce two series in the workfile: <b class="wfobj">F1</b> and <b class="wfobj">F2</b>.<br /><br /> Next, let's forecast industrial production by leveraging EViews' native autoregressive forecast engine. To do so, double-click on the series <b class="wfobj">INDPRO</b> to open it. Next, click on <b>Proc/Automatic ARIMA Forecasting...</b> to open the dialog.
We now proceed with the following steps: <ol> <li>In the <b>Estimation sample</b> textbox, enter <b>1960M03 2019M12</b>.</li> <li>Under <b>Forecast length</b> enter <b>11</b>.</li> <li>Under the <b>Regressors</b> textbox, enter <b>C F1</b>.</li> <li>Click on the <b>Options</b> tab.</li> <li>Under the <b>Output forecast name</b>, enter <b>INDPRO_F1</b>.</li> <li>Ensure the <b>Forecast comparison graph</b> is checked.</li> <li>Click on <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURES 8a and 8b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 8a :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/forecast_dialog1.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/forecast_dialog1.png" title="Forecast Dialog: Specification" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 8b :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/forecast_dialog2.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/forecast_dialog2.png" title="Forecast Dialog: Options" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 8a: Forecast Dialog: Specification</small> </center> </td> <td class="nb"> <center> <small>Figure 8b: Forecast Dialog: Options</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 8a and 8b :::::::::: --> The options above specify that we wish to forecast the last 11 months of available data. 
Since our available sample runs from March 1960 to November 2020, we will estimate over the sample March 1960 through December 2019, and forecast out to November 2020.<br /><br /> <!-- :::::::::: FIGURES 9a and 9b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 9a :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/forecast_11m1.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/forecast_11m1.png" title="Forecast: Actuals vs Forecast" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 9b :::::::::: --> <center> <a href="http://www.eviews.com/blog/autofactsel/images/forecast_11m2.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/forecast_11m2.png" title="Forecast: Forecast Comparison Graph" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 9a: Forecast: Actuals vs Forecast</small> </center> </td> <td class="nb"> <center> <small>Figure 9b: Forecast: Forecast Comparison Graph</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 9a and 9b :::::::::: --> For comparison, the same type of forecast is produced using <b>C (F1 + F2)</b> as exogenous variables, and <b>C</b> as the only exogenous variable. All three forecasts are superimposed on top of the original series. This is reproduced below.
<!-- :::::::::: FIGURE 10 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/autofactsel/images/forecast_11m3.png"><img height="auto" src="http://www.eviews.com/blog/autofactsel/images/forecast_11m3.png" title="Forecast Comparison" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 10: Forecast Comparison</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 10 :::::::::: --> <hr /> <h3 class="seccol", id="sec4">Files</h3> <ul> <li><a href="http://www.eviews.com/blog/autofactsel/workfiles/fred-md.wf1"><b class="wf">FRED-MD.WF1</b></a></li> <li><a href="http://www.eviews.com/blog/autofactsel/workfiles/fred-md.prg"><b class="wf">FRED-MD.PRG</b></a></li> </ul><br /><br /> <hr /> <h3 class="seccol", id="sec5">References</h3> <ol class="bib2xhtml"> <li id="bai-ng-2002"> Bai J and Ng S (2002), <i>"Determining the Number of Factors in Approximate Factor Models"</i>, Econometrica, Vol. 70, pp. 191-221. Wiley Online Library. </li> <li id="ahn-horenstein-2013"> Ahn SC and Horenstein AR (2013), <i>"Eigenvalue Ratio Test for the Number of Factors"</i>, Econometrica, Vol. 81, pp. 1203-1227. Wiley Online Library. </li> <li id="mcracken-ng-2013"> McCracken MW and Ng S (2016), <i>"FRED-MD: A Monthly Database for Macroeconomic Research"</i>, Journal of Business & Economic Statistics, Vol. 34, pp. 574-589. Taylor & Francis. 
</li> </ol></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com0tag:blogger.com,1999:blog-6883247404678549489.post-47321381959468296912020-12-21T09:09:00.000-08:002020-12-21T09:09:28.715-08:00Using Indicator Saturation to Detect Outliers and Structural Shifts<style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { //border: 1px solid black; } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } .seccol { } .subseccol { color: #fa5e5e } .wf { } .wfobj { } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { rbrace: ['{\\left(#1\\right)}', 1], cbrace: ['{\\left\\{#1\\right\\}}', 1], sbrace: ['{\\left[#1\\right]}', 1], bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1], series: ['{\\left\\{#1_{#2}\\right\\}_{#2=#3}^{#4}}', 4], xsum: ['{\\sum_{#1=#2}^{#3}{#4}}', 4], var: ['{\\operatorname\{var\}}'], sign: ['{\\operatorname\{sign\}}'], diag: ['{\\operatorname\{diag\}}'], med: ['{\\operatorname\{median\}}'], vec: ['{\\operatorname\{vec\}}'], tr: ['{\\operatorname\{tr\}}'] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"> </script> <span style="font-family: "verdana" sans-serif"> One of the potential pitfalls when working with time series datasets is that the data may have temporary or permanent changes to its levels. 
These changes could be single time-period outliers, or a fundamental structural shift.<br /><br /> EViews 12 introduces a new technique to detect and model these outliers and structural changes through <a href='http://eviews.com/help/helpintro.html#page/content%2FRegress2-Indicator_Saturation.html%23'>indicator saturation</a>. As this is one of the new features in the recently released EViews 12, we thought we'd give a demonstration. <a name='more'></a><br /><br /> <h3 class="seccol">Table of Contents</h3> <ol> <li><a href="#sec1">Indicator Saturation</a> <li><a href="#sec2">AutoSearch/GETS</a> <li><a href="#sec3">An Application with Consumption and Income</a> </ol><br /> <h3 class="seccol", id="sec1">Indicator Saturation</h3> Identifying changes in data is essential if we are to properly estimate models based upon these data. One way to detect changes would be to include dummy or indicator variables for potential observations where the change occurs in your regression, and then decide whether that included indicator is a valid regressor. Such variables could include: <ul> <li><b>Impulse Indicators</b> (IIS): a dummy variable equal to zero everywhere other than a single value of one at period $ t $. This indicator can be used to model single observation outliers, and is equivalent to the <b>@isperiod</b> EViews function used at the date corresponding to $ t $.</li> <li><b>Step Indicators</b> (SIS): a step function variable equal to zero until $ t $ and one thereafter. This indicator can be used to model a shift in the intercept of an equation, and is equivalent to the <b>@after</b> EViews function used at the date corresponding to $ t $.</li> <li><b>Trend Indicators</b> (TIS): a trend-break variable that is equal to zero until period $ t $ and then follows a trend afterward. 
This indicator can be used to model a change in the trend of an equation (or the introduction of a trend term if one didn’t previously exist), and is equivalent to the <b>@trendbr</b> function used at the date corresponding to $ t $.</li> </ul><br /> The problem with the approach of including these variables in a traditional regression setting is that unless you know the specific dates where changes occur, you can quickly run into a situation where you have more variables than observations (since you’ll be adding at least one indicator variable for each observation in your estimation sample!).<br /><br /> Fortunately, recent advancements in variable selection techniques have meant that we can now perform variable selection on models with many more variables than observations, and so can saturate our regression with complex combinations of indicator variables and let the variable selection technique choose which are the most appropriate indicators to use.<br /><br /><br /> <h3 class="seccol", id="sec2">AutoSearch/GETS</h3> One of the new technologies introduced in EViews 12 is the <a href='http://eviews.com/help/helpintro.html#page/content%2FVarsel-Background.html%23ww277256'><b>AutoSearch/GETS</b></a> algorithm for variable selection.<br /><br /> AutoSearch/GETS is a method of variable selection that follows the steps suggested by the AutoSEARCH algorithm of <a href='http://www.sucarrat.net/research/autofim.pdf'>Escribano and Sucarrat (2011)</a>, which in turn builds upon the work in <a href='http://www.sucarrat.net/research/autofim.pdf'>Hoover and Perez (1999)</a>, and is similar to the technology behind the <b>Autometrics™</b> module in <a href='https://www.doornik.com/products.html#PcGive'><b>PcGive™</b></a>.<br /><br /> Mechanically, the algorithm is similar to a <a href='http://eviews.com/help/helpintro.html#page/content%2FVarsel-Background.html%23ww277180'>backwards uni-directional stepwise</a> method: <ol> <li>The model with all search variables (termed the general 
unrestricted model, GUM) is estimated, and checked with a set of diagnostic tests.</li> <li>A number of search paths are defined, one for each insignificant search variable in the GUM.</li> <li>For each path, the insignificant variable defined in 2) is removed and then a series of further variable removal steps is taken, each time removing the most insignificant variable, and each time checking whether the current model passes the set of diagnostic tests. If the diagnostic tests fail after the removal of a variable, that variable is placed back into the model and prevented from being removed again along this path. Variable removal finishes once there are no more insignificant variables, or it is impossible to remove a variable without failing the diagnostic tests.</li> <li>Once all paths have been calculated, the final models produced by the paths are compared using an information criterion. The best model is then selected.</li> </ol><br /> One of the advantages of AutoSearch/GETS is that the set of candidate variables can be split into sets, with search performed on each set one at a time, then the selected variables from each set can be combined into a final set to be searched. This allows you to test more candidate variables than you have observations without creating singularities (as long as enough candidate variables are rejected), which means it is a perfect algorithm for indicator saturation studies.<br /><br /><br /> <h3 class="seccol", id="sec3">An Application with Consumption and Income</h3> To demonstrate this feature, we will estimate a simple personal consumption equation, using the log-difference of personal consumption as the dependent variable against a constant and log-differenced disposable income.
This estimation is purely for demonstration of the saturation features in EViews 12, and should not be taken as worthy macroeconomic research!<br /><br /> Both data series were downloaded directly from the Federal Reserve Bank of St. Louis database, FRED, and contain monthly observations between 2002 and April 2020:<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/indicators/images/FRED.gif"><img height="auto" src="http://www.eviews.com/blog/indicators/images/FRED.gif" title="FRED" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1: FRED</small> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 1 :::::::::: --> We begin by estimating a simple equation without any indicators included, using the following steps: <ol> <li>Select <b>Quick/Estimate Equation</b> to bring up the equation estimation dialog.</li> <li>Enter our dependent variable <b>DLOG(CONS)</b> followed by a constant and our regressor <b>DLOG(INCOME)</b>.</li> <li>Click <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURES 2a and 2b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 2a :::::::::: --> <center> <a href="http://www.eviews.com/blog/indicators/images/SimpleEqDiag.png"><img height="auto" src="http://www.eviews.com/blog/indicators/images/SimpleEqDiag.png" title="Simple Estimation Dialog" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 2b :::::::::: --> <center> <a href="http://www.eviews.com/blog/indicators/images/SimpleEqRes.png"><img height="auto" src="http://www.eviews.com/blog/indicators/images/SimpleEqRes.png" title="Simple Estimation Output" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2a: Simple Estimation Dialog</small> <small>(Click to expand)</small> </center> </td> <td class="nb"> <center> <small>Figure 2b: Simple Estimation Output</small> <small>(Click to 
expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 2a and 2b :::::::::: --> Note that the coefficient on log-differenced income is negative and statistically significant. Also note we have an R-squared of 35%.<br /><br /> If we click on the <b>Resids</b> button we can view a graph of the equation residuals.<br /><br /> <!-- :::::::::: FIGURE 3 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/indicators/images/SimpleEqResid.png"><img height="auto" src="http://www.eviews.com/blog/indicators/images/SimpleEqResid.png" title="Estimation Residuals" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3: Estimation Residuals</small> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 3 :::::::::: --> A quick eyeball test suggests that something happened towards the end of 2004, again in the middle of 2008 and then 2013. And obviously there was a huge shift at the start of the Covid-19 crisis in March/April 2020.<br /><br /> Now we’ll estimate a new equation where we will instruct EViews to detect both impulse (outlier) and step-shift (change in intercept) indicators, with the following steps: <ol> <li>Select <b>Quick/Estimate Equation</b> to bring up the equation estimation dialog.</li> <li>Enter our dependent variable <b>DLOG(CONS)</b> followed by a constant and our regressor <b>DLOG(INCOME)</b>.</li> <li>Switch to the <b>Options Tab</b> and select <b>Auto-detection</b> under <b>Outliers/indicator saturation</b>.</li> <li>Press the <b>Options</b> button and select both <b>Impulse</b> and <b>Step-shift</b> indicators.</li> <li>Change the <b>Terminal condition p-value</b> to <b>0.01</b> (which will allow for more indicators entering the equation).</li> <li>Click <b>OK</b> twice.</li> </ol><br /> <!-- :::::::::: FIGURES 4a and 4b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 4a :::::::::: --> 
<center> <a href="http://www.eviews.com/blog/indicators/images/ImpulseEst.gif"><img height="auto" src="http://www.eviews.com/blog/indicators/images/ImpulseEst.gif" title="Impulse Estimation" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 4b :::::::::: --> <center> <a href="http://www.eviews.com/blog/indicators/images/ImpulseRes.png"><img height="auto" src="http://www.eviews.com/blog/indicators/images/ImpulseRes.png" title="Impulse Estimation Output" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 4a: Impulse Estimation</small> <small>(Click to expand)</small> </center> </td> <td class="nb"> <center> <small>Figure 4b: Impulse Estimation Output</small> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 4a and 4b :::::::::: --> You can see that five indicators have been added to the equation, with three single observation indicators (2018M12, 2020M03, 2020M04), and two level shift indicators (2008M5, 2013M1).<br /><br /> The impact of these variables on the log-differenced income coefficient is dramatic, as is the resulting R-squared.<br /><br /> Viewing the residual graph shows that the large outliers have been removed, and the location of detected indicators, as shown by the vertical lines, corresponds to the outliers we eyeballed in the original equation.<br /><br /> <!-- :::::::::: FIGURE 5 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/indicators/images/ImpulseResid.png"><img height="auto" src="http://www.eviews.com/blog/indicators/images/ImpulseResid.png" title="Impulse Residuals" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5: Impulse Residuals</small> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 5 :::::::::: --> 
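As a footnote to the estimation above, the three indicator types are simple to construct by hand. The sketch below is illustrative Python, not EViews code (in EViews the equivalent functions are <b>@isperiod</b>, <b>@after</b> and <b>@trendbr</b>, as noted earlier); the break date is 0-based, the trend-break convention (1, 2, 3, ... from the break onward) is one common choice, and the `indicators`/`saturate` names are our own.

```python
import numpy as np

def indicators(T, t):
    """IIS, SIS and TIS indicators breaking at 0-based observation t."""
    obs = np.arange(T)
    iis = (obs == t).astype(float)                  # impulse: one at t only
    sis = (obs >= t).astype(float)                  # step: zero before t, one after
    tis = np.maximum(obs - t + 1, 0).astype(float)  # trend: 1, 2, 3, ... from t
    return iis, sis, tis

def saturate(T, kinds=("iis", "sis")):
    """Fully saturated indicator set: one column per observation and type
    (skipping t = 0, whose indicators are collinear with the constant)."""
    cols = []
    for t in range(1, T):
        iis, sis, tis = indicators(T, t)
        chosen = {"iis": iis, "sis": sis, "tis": tis}
        cols.extend(chosen[k] for k in kinds)
    return np.column_stack(cols)
```

A GETS-style search then prunes this saturated block of columns down to the handful of significant indicators, exactly as in the estimation above.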
</span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com1tag:blogger.com,1999:blog-6883247404678549489.post-58039045047729172952020-12-08T07:42:00.007-08:002020-12-08T08:04:04.203-08:00Nowcasting GDP with PMI using MIDAS-GETS<style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { border: 1px solid black; } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } .seccol { } .subseccol { color: #fa5e5e } .wf { } .wfobj { } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { rbrace: ['{\\left(#1\\right)}', 1], cbrace: ['{\\left\\{#1\\right\\}}', 1], sbrace: ['{\\left[#1\\right]}', 1], bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1], series: ['{\\left\\{#1_{#2}\\right\\}_{#2=#3}^{#4}}', 4], xsum: ['{\\sum_{#1=#2}^{#3}{#4}}', 4], var: ['{\\operatorname\{var\}}'], sign: ['{\\operatorname\{sign\}}'], diag: ['{\\operatorname\{diag\}}'], med: ['{\\operatorname\{median\}}'], vec: ['{\\operatorname\{vec\}}'], tr: ['{\\operatorname\{tr\}}'] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"> </script> <span style="font-family: "verdana" sans-serif"> <b>Nowcasting</b>, <a href='https://en.wikipedia.org/wiki/Nowcasting_(economics)'>the act of predicting the current or near-future state of a macro-economic variable</a>, has become one of the more popular research topics performed in EViews over the past decade.<br /><br /> Perhaps the most important technique in nowcasting is mixed data sampling, or MIDAS. 
We have discussed <a href='https://en.wikipedia.org/wiki/Mixed-data_sampling'>MIDAS</a> estimation in EViews in a couple of prior guest <a href='http://blog.eviews.com/2018/12/nowcasting-gdp-on-daily-basis.html'>blog posts</a>, but with the introduction of a <a href='http://eviews.com/EViews12/ev12ecest_n.html#midas'>new MIDAS technique</a> in the recently released EViews 12, we thought we'd give another demonstration. <a name='more'></a><br /><br /> <h3 class="seccol">Table of Contents</h3> <ol> <li><a href="#sec1">MIDAS – A Brief Background</a> <ul> <li><a href="#sec1.1">MIDAS-GETS</a> </ul> <li><a href="#sec2">MIDAS as a Nowcasting Tool</a> <ul> <li><a href="#sec2.1">PMI as a Nowcasting Instrument</a> </ul> <li><a href="#sec3">Nowcasting Exercises</a> <ul> <li><a href="#sec3.1">MIDAS-PDL</a> <li><a href="#sec3.2">MIDAS-GETS</a> <li><a href="#sec3.3">MIDAS-GETS with Indicator Saturation</a> <li><a href="#sec3.4">Evaluating Nowcasting Models</a> </ul> </ol><br /> <h3 class="seccol", id="sec1">MIDAS – A Brief Background</h3> <b>MIxed DAta Sampling</b> (MIDAS) is a regression technique that handles the case where the dependent variable is sampled or reported at a lower frequency than that of one, or more, of the independent regressors. This is common in macroeconomics where a number of important indicators, such as GDP, are usually reported on a quarterly basis, and other indicators, such as unemployment or stock prices, are reported on a monthly or even weekly basis.<br /><br /> The traditional approach to dealing with this mixed-frequency problem is to aggregate the higher-frequency variable into the same frequency as the lower. For example, when dealing with quarterly GDP and monthly unemployment, it's common practice to use the average monthly unemployment rate over the three months in a quarter as a single quarterly observation. Whilst simple to implement, this approach loses fidelity in the higher-frequency variables.
Any within-quarter movements in unemployment are lost, and the dataset is reduced by 2/3 (converting three observations into one).<br /><br /> MIDAS alleviates this issue by adding the individual components of the higher-frequency variable as independent regressors, allowing a separate coefficient for each component. For example, unemployment could have three separate regressors, one for the first month of the quarter, one for the second, and one for the third. This simple approach is called <b>U-MIDAS</b>.<br /><br /> A drawback of creating a regressor for each high-frequency component is that, in certain cases, one quickly saturates the equation with many regressors (curse of dimensionality). For instance, whereas monthly unemployment and quarterly GDP would generate 3 regressors for the one underlying variable, annual data would generate 12 regressors. If we had daily interest rates regressed with quarterly data, we would have over 90 regressors for the one underlying variable.<br /><br /> To mitigate this expansion of regressors, traditional MIDAS utilizes a selection of weighting schemes that parameterize the higher frequency variables into a smaller number of coefficients. The most common of these weighting schemes is <b>Almon/PDL</b> weighting.<br /><br /> A last note on MIDAS – although it is natural to want to include a number of high-frequency variables equal to the number of high-frequency periods per low frequency period (i.e. include three monthly variables since there are three months in a quarter), there is nothing that mathematically imposes this restriction in the MIDAS framework, and it is quite common to use many more variables than the natural number. <br /><br /> Going back to our unemployment/GDP example, you may want to utilize 9 months of unemployment data to explain GDP, and thus create 9 variables. 
In other words, you may posit that Q1 GDP is determined by unemployment in March, February, January (the three natural months), as well as the 6 previous months (December, November, October, September, August, July). <br /><br /> Of course, you can also impose a lag structure to postulate that Q1 GDP is determined by February, January, …, June (a one-month lag), or is determined by December, November, …, April (a three-month lag). These 9 variables may then be reduced to a smaller number of coefficients using MIDAS weighting schemes, or, if the sample size permits, kept at 9 separate regressors.<br /><br /> <h4 class="subseccol", id="sec1.1">MIDAS-GETS</h4> EViews 12 introduces a new MIDAS estimation method, <a href='http://eviews.com/help/helpintro.html#page/content%2Fmidas-Background.html%23ww331980'><b>MIDAS-GETS</b></a>. Rather than using a weighting scheme to reduce the number of variables, MIDAS-GETS controls the curse of dimensionality by using the <a href='http://eviews.com/help/helpintro.html#page/content%2FVarsel-Background.html%23ww277256'><b>Auto-Search/GETS</b></a> variable selection algorithm to select which of the high frequency variables to include in the regression.<br /><br /> Since the Auto-Search/GETS algorithm is also used in EViews' indicator saturation detection routines, <a href='http://eviews.com/help/helpintro.html#page/content%2FRegress2-Indicator_Saturation.html%23'>indicator saturation</a> is available to MIDAS-GETS too.
This means that the estimation can automatically include indicator variables that allow for outliers and structural changes, which can dramatically enhance the forecasting performance of the model.<br /><br /><br /> <h3 class="seccol", id="sec2">MIDAS as a Nowcasting Tool</h3> Although MIDAS was not necessarily introduced as a tool for nowcasting, its applicability to nowcasting is obvious; whilst traditional macroeconomic variables are typically sampled at low frequencies and with a reporting delay, high frequency data is often available in a timely fashion and can be used to estimate the current state of a low frequency variable.<br /><br /> More concretely, take Eurozone GDP. This important macro variable is released by <a href='https://ec.europa.eu/eurostat/news/release-calendar'>Eurostat</a> on a quarterly basis, usually 3 months after the quarter has ended. Thus, if you are at the end of July and want to know what the current GDP is, you must wait until December to receive the official statistics.<br /><br /> However, there may be monthly, or even daily, variables available without a delay. Unlike their delayed low-frequency counterparts, these can be used to estimate the current value of GDP immediately.<br /><br /> <h4 class="subseccol", id="sec2.1">PMI as a Nowcasting Instrument</h4> Among the more popular variables used in nowcasting exercises are economic surveys. Surveys can be released at a high frequency with little delay and are often highly correlated with more traditional macroeconomic variables. Here at EViews we're fans of the <a href='https://www.markiteconomics.com/'><b>Purchasing Manager's Index</b></a> (PMI). The latter is derived from surveys of senior executives at private sector companies, is released monthly, and reflects the current state of the economy (i.e., has little delay between the survey and the release).
In particular, we like the Eurozone composite measure which consistently shows a high correlation with growth in Eurozone GDP:<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/midas_gets/images/correlation.png"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/correlation.png" title="Eurozone PMI" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1: Eurozone PMI</small><br /> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 1 :::::::::: --> <h3 class="seccol", id="sec3">Nowcasting Exercises</h3> As a simple demonstration of nowcasting with various MIDAS approaches, we're going to run a little exercise that uses monthly Eurozone composite PMI to nowcast quarterly Eurozone GDP growth.<br /><br /> Specifically, we have an EViews workfile with two pages: the first contains quarterly data from 1998q3 to 2020q3 with Eurozone GDP Growth (<b class="wfobj">GDP_GR</b>), whereas the second contains monthly data over the same period with Eurozone Composite PMI (<b class="wfobj">PMICMPEMU</b>).<br /><br /> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/midas_gets/images/workfile.gif"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/workfile.gif" title="Workfile" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2: Workfile</small><br /> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 2 :::::::::: --> <h4 class="subseccol", id="sec3.1">MIDAS-PDL</h4> To begin, we'll pretend we are currently at the start of March 2019 and wish to nowcast the current (2019Q1) value of Eurozone GDP growth. We have our February PMI data handy (and all previous months). 
We'll estimate a standard MIDAS equation in EViews, using data until Q4 2018 to estimate our model, then use the February PMI with that equation to nowcast Q1 2019. We'll assume that GDP growth is explained by 12 months of PMI data and by the previous quarterly value of GDP growth. The steps we perform are: <ol> <li>Ensure we have the Quarterly page selected.</li> <li>Quick->Estimate Equation</li> <li>Select <b>MIDAS</b> as the <b>Method</b>.</li> <li>Enter <b>GDP_GR C GDP_GR(-1)</b> as the dependent variable and quarterly regressors (a constant and the lagged value of GDP growth).</li> <li>Enter <b>Monthly\PMICMPEMU(-1)</b> as the high frequency regressor. The (-1) here indicates that we wish to use data up until the second month of the quarter (the default is the third/last month of the quarter, so by lagging it one month, we use data until the second month).</li> <li>Set the <b>Sample</b> to end in 2018q4.</li> </ol><br /> <!-- :::::::::: FIGURE 3 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/midas_gets/images/MIDASPDL.gif"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/MIDASPDL.gif" title="MIDAS PDL Estimation Dialog" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3: MIDAS PDL Estimation Dialog</small><br /> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 3 :::::::::: --> The default MIDAS weighting method in EViews is PDL/Almon weighting with a polynomial degree of 3, which is what we'll use if we just click <b>OK</b>:<br /><br /> <!-- :::::::::: FIGURE 4 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/midas_gets/images/MIDASPDL.png"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/MIDASPDL.png" title="MIDAS PDL Estimation Output" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 4: MIDAS PDL 
Estimation Output</small><br /> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 4 :::::::::: --> Since this is a forecasting/nowcasting exercise, we won't delve into interpretation of these results, other than to note that all three MIDAS PDL terms are statistically significant.<br /><br /> Now, to perform the nowcast, we can simply use EViews' built in forecast engine and forecast for the “current” quarter (2019Q1). This is done with the following steps: <ol> <li>Click the <b>Forecast</b> button to bring up the forecast dialog.</li> <li>Change the <b>Forecast sample</b> to <b>2019Q1 2019Q1</b> (just a single period).</li> <li>Click <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURE 5 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/midas_gets/images/forecastdlg.png"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/forecastdlg.png" title="Forecast Dialog" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5: Forecast Dialog</small><br /> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 5 :::::::::: --> The forecast will produce a new series in the workfile, <b class="wfobj">GDP_GRF</b> containing actual values for all observations other than 2019Q1, where it will contain the forecasted value. 
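As an aside on the mechanics: the estimation output reports only a handful of PDL terms, even though 12 monthly lags enter the equation, because Almon/PDL weighting restricts the lag coefficients to a low-order polynomial in the lag index. A minimal NumPy sketch on synthetic data (here a quadratic with three free parameters; the exact parameterization of EViews' degree-3 default, e.g. any end-point restrictions, may differ):

```python
import numpy as np

rng = np.random.default_rng(1)
n_quarters, n_lags = 80, 12

# One column per monthly lag of the high-frequency regressor.
X = rng.normal(size=(n_quarters, n_lags))

# PDL/Almon restriction: lag weight w_k is a polynomial in the lag
# index k, so 12 lag coefficients collapse to 3 free parameters.
P = np.vander(np.arange(n_lags), 3, increasing=True)  # columns: 1, k, k^2
theta_true = np.array([0.8, -0.15, 0.008])
w_true = P @ theta_true                               # implied lag weights
y = X @ w_true + 0.1 * rng.normal(size=n_quarters)

Z = X @ P                                             # 12 columns -> 3
theta_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)     # estimate 3 params
w_hat = P @ theta_hat                                 # recovered lag weights
```

Estimating the three polynomial parameters rather than 12 unrestricted coefficients is what keeps the MIDAS equation parsimonious; U-MIDAS is the unrestricted special case in which every lag gets its own coefficient.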
We can open this series together with the actual series in a group, and then graph it to see how close the single forecasted value is to the historical actual:<br /><br /> <!-- :::::::::: FIGURE 6 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/midas_gets/images/forecast.png"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/forecast.png" title="MIDAS Forecast" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 6: MIDAS Forecast</small><br /> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 6 :::::::::: --> The results seem a little underwhelming, though this is just a single observation. Let's see if we can improve this forecast with the new MIDAS-GETS weighting method.<br /><br /> <h4 class="subseccol", id="sec3.2">MIDAS-GETS</h4> To perform the new estimation, we undertake the same steps as before, but additionally change the weighting method: <ol> <li>Quick->Estimate Equation</li> <li>Select <b>MIDAS</b> as the <b>Method</b>.</li> <li>Enter <b>GDP_GR C GDP_GR(-1)</b> as the dependent variable and quarterly regressors.</li> <li>Enter <b>Monthly\PMICMPEMU(-1)</b> as the high frequency regressor.</li> <li>Enter <b>12</b> as the <b>Fixed Lags</b> parameter to indicate each quarter is explained by 12 months of data.</li> <li>Set the <b>Sample</b> to end in 2018q4.</li> <li>Switch to the <b>Options Tab</b>.</li> <li>Change <b>MIDAS weights</b> to <b>Auto/GETS</b>.</li> </ol><br /> <!-- :::::::::: FIGURES 7a and 7b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 7a :::::::::: --> <center> <a href="http://www.eviews.com/blog/midas_gets/images/MIDASGETS.gif"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/MIDASGETS.gif" title="MIDAS-GETS Estimation Dialog" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 7b :::::::::: --> <center> <a 
href="http://www.eviews.com/blog/midas_gets/images/MIDASGETS.png"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/MIDASGETS.png" title="MIDAS-GETS Estimation Output" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 7a: MIDAS-GETS Estimation Dialog</small><br /> <small>(Click to expand)</small> </center> </td> <td class="nb"> <center> <small>Figure 7b: MIDAS-GETS Estimation Output</small><br /> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 7a and 7b :::::::::: --> Again, we won't delve into interpretation of these results, other than to mention that out of the 12 months of possible PMI data that could be used to explain each quarter, the equation chose to use only the two most recent months (denoted lags). We'll follow the exact same steps as previously to produce a forecast from this equation:<br /><br /> <!-- :::::::::: FIGURE 8 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/midas_gets/images/forecast2.png"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/forecast2.png" title="MIDAS-GETS Forecast" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 8: MIDAS-GETS Forecast</small><br /> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 8 :::::::::: --> The nowcast looks better than the previous model's, although again it is only a single data point.<br /><br /> <h4 class="subseccol", id="sec3.3">MIDAS-GETS with Indicator Saturation</h4> Finally, we'll estimate a MIDAS-GETS model that includes indicator saturation. This will automatically model outliers and structural changes in our equation. We follow the same steps as before but use the Auto/GETS options button to include searching for indicator variables. 
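The "two most recent months" result above is characteristic of the Auto/GETS weighting: rather than forcing a weight polynomial across all 12 lags, the algorithm starts from the general model and removes lags that carry no explanatory power. EViews' Auto-Search/GETS implementation is considerably more elaborate (multi-path search with diagnostic checking), but the general-to-specific idea can be conveyed with a toy backward-elimination sketch on synthetic data in which only the two most recent lags truly matter:

```python
import numpy as np

def backward_eliminate(X, y, names, t_crit=1.96):
    """Toy general-to-specific search: start from all regressors and
    repeatedly drop the one with the smallest |t| until all survive."""
    keep = list(range(X.shape[1]))
    while len(keep) > 1:
        Xk = X[:, keep]
        beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        resid = y - Xk @ beta
        s2 = resid @ resid / (len(y) - len(keep))
        se = np.sqrt(s2 * np.diag(np.linalg.inv(Xk.T @ Xk)))
        t = np.abs(beta / se)
        worst = np.argmin(t)
        if t[worst] >= t_crit:
            break          # every remaining regressor is significant
        keep.pop(worst)    # drop the least significant lag and refit
    return [names[i] for i in keep]

# Synthetic data: 12 candidate monthly lags, but only the two most
# recent (lag0, lag1) actually drive the quarterly target.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 12))
y = 0.9 * X[:, 0] + 0.6 * X[:, 1] + 0.1 * rng.normal(size=200)
names = [f"lag{k}" for k in range(12)]
sel = backward_eliminate(X, y, names)
# sel retains lag0 and lag1 (plus, occasionally, a chance survivor)
```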
We will, in this case, search for outliers by only selecting impulse indicators.<br /><br /> <!-- :::::::::: FIGURES 9a and 9b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 9a :::::::::: --> <center> <a href="http://www.eviews.com/blog/midas_gets/images/MIDASGETSIS.gif"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/MIDASGETSIS.gif" title="MIDAS-GETS (Indicator Saturation) Estimation Dialog" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 7b :::::::::: --> <center> <a href="http://www.eviews.com/blog/midas_gets/images/MIDASGETSIS.png"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/MIDASGETSIS.png" title="MIDAS-GETS (Indicator Saturation) Estimation Output" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 9a: MIDAS-GETS (Indicator Saturation) Estimation Dialog</small><br /> <small>(Click to expand)</small> </center> </td> <td class="nb"> <center> <small>Figure 9b: MIDAS-GETS (Indicator Saturation) Estimation Output</small><br /> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 9a and 9b :::::::::: --> The results are worth a quick mention. The GETS routine selected eight periods with outliers. In particular, it included dummy variables for 8 quarters (2001Q1, 2005Q3, 2008Q2, 2008Q3, 2009Q1, 2010Q2, 2011Q2, 2013Q2), <b>and</b> chose to include more months of PMI data: namely, the first and second months of the current quarter, as well as 6, 9 and 12 months prior. 
In concrete terms, this means, for example, in 2018Q1, the equation chose to use February 2018, January 2018, September 2017, June 2017 and March 2017 as regressors.<br /><br /> Forecasting is performed in the same way, and produces a similar looking forecast to the previous MIDAS-GETS model:<br /><br /> <!-- :::::::::: FIGURE 10 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/midas_gets/images/forecast3.png"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/forecast3.png" title="MIDAS-GETS (Indicator Saturation) Forecast" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 10: MIDAS-GETS (Indicator Saturation) Forecast</small><br /> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 10 :::::::::: --> <h4 class="subseccol", id="sec3.4">Evaluating Nowcasting Models</h4> The previous examples all performed a single point nowcast of GDP growth and a quick eyeball-test showed that MIDAS-GETS performed well. Here we'll demonstrate a formal nowcast evaluation exercise. In particular, we'll estimate a handful of different models on a rolling basis. The first estimation will again assume we are in February 2018, estimating on data from 1999Q3 through 2017Q4, and will then nowcast 2018Q1. We'll then move a quarter and assume we're in May 2018, estimate through 2018Q1 and nowcast 2018Q2. 
Next, we'll move another quarter and so on until 2019Q4, meaning we have eight rolling nowcasts.<br /><br /> We'll estimate and nowcast from six different equation specifications: <ol> <li>A simple AR(1) model with no PMI (GDP growth regressed against a lag and a constant).</li> <li>Simple AR(1) model with aggregated PMI (average of the available monthly PMI data).</li> <li>PDL/Almon MIDAS with 12 monthly lags of PMI and lagged GDP growth.</li> <li>U-MIDAS with 12 monthly lags of PMI and lagged GDP growth.</li> <li>MIDAS-GETS with 12 monthly lags of PMI and lagged GDP growth and no indicators.</li> <li>MIDAS-GETS with 12 monthly lags of PMI and lagged GDP growth with impulse indicators.</li> </ol><br /> Models 3, 5 and 6 are identical to those we estimated in the early examples. We've written a quick EViews program that will perform these nowcasts: <pre><code><br /> <span style="color: green;">'create gdp growth series</span><br /> series gdp_gr = @pca(eur_gdp)<br /> <br /> <span style="color: green;">'keep a list of equation names for easier referencing later</span><br /> %eqlist = "eq_umid eq_agg eq_pdl eq_simple eq_getsis eq_gets"<br /> <br /> <span style="color: green;">'create empty forecast series for each equation</span><br /> group forcs gdp_gr<br /> <span style="color: blue;">for</span> %j {%eqlist}<br /> series gdp_{%j}<br /> forcs.add gdp_{%j}<br /> <span style="color: blue;">next</span><br /> <br /> <span style="color: green;">'estimate/nowcast loop</span><br /> <span style="color: blue;">for</span> !i=0 <span style="color: blue;">to</span> 7<br /> <span style="color: green;">'estimate</span><br /> smpl @first 2017q4+!i <br /> equation eq_simple.ls gdp_gr c gdp_gr(-1)<br /> equation eq_agg.ls gdp_gr c gdp_gr(-1) agg_pmi<br /> equation eq_pdl.midas(fixedlag=12) gdp_gr c gdp_gr(-1) @ monthly\pmicmpemu(-1)<br /> equation eq_umid.midas(midwgt=umidas, fixedlag=12) gdp_gr c gdp_gr(-1) @ monthly\pmicmpemu(-1)<br /> equation eq_gets.midas(fixedlag=12, 
midwgt=autogets) gdp_gr c gdp_gr(-1) @ monthly\pmicmpemu(-1)<br /> equation eq_getsis.midas(fixedlag=12, midwgt=autogets, iis) gdp_gr c gdp_gr(-1) @ monthly\pmicmpemu(-1)<br /> <br /> <span style="color: green;">'nowcast</span><br /> smpl 2018q1+!i 2018q1+!i<br /> <span style="color: blue;">for</span> %j {%eqlist}<br /> {%j}.forecast temp<br /> gdp_{%j} = temp<br /> d temp<br /> <span style="color: blue;">next</span><br /> <span style="color: blue;">next</span><br /> </code></pre> Once we have the six nowcast series of eight periods each, we can use EViews' built-in forecast evaluation engine to compare the nowcasts, by opening up the series containing the true value (GDP_GR) and clicking on View->Forecast Evaluation, and then giving the names of the nowcast series. The results of this evaluation are:<br /><br /> <!-- :::::::::: FIGURE 11 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/midas_gets/images/evaluation.png"><img height="auto" src="http://www.eviews.com/blog/midas_gets/images/evaluation.png" title="MIDAS Evaluation" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 11: MIDAS Evaluation</small><br /> <small>(Click to expand)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 11 :::::::::: --> From the evaluation statistics, we see that the MIDAS-GETS nowcast, <b class="wfobj">GDP_EQ_GETSIS</b>, performs very well, with the indicator saturation version giving the lowest RMSE, MAE and SMAPE.
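For reference, all three reported statistics are simple functions of the nowcast errors. A minimal sketch with hypothetical numbers (the SMAPE formula below is the common symmetric variant on a 0 to 200 percent scale; EViews' exact definition is assumed rather than verified):

```python
import numpy as np

def rmse(actual, forecast):
    """Root mean squared error."""
    e = np.asarray(forecast, float) - np.asarray(actual, float)
    return float(np.sqrt(np.mean(e ** 2)))

def mae(actual, forecast):
    """Mean absolute error."""
    e = np.asarray(forecast, float) - np.asarray(actual, float)
    return float(np.mean(np.abs(e)))

def smape(actual, forecast):
    """Symmetric mean absolute percentage error (0-200 scale)."""
    a = np.asarray(actual, float)
    f = np.asarray(forecast, float)
    return float(100.0 * np.mean(2.0 * np.abs(f - a) / (np.abs(a) + np.abs(f))))

# Hypothetical eight-quarter evaluation sample of growth rates.
actual  = [0.4, 0.4, 0.2, 0.3, 0.4, 0.2, -0.1, 0.3]
nowcast = [0.5, 0.3, 0.2, 0.4, 0.3, 0.3, -0.2, 0.2]
print(rmse(actual, nowcast), mae(actual, nowcast), smape(actual, nowcast))
```

Ranking competing nowcast series by statistics like these is precisely the comparison shown in Figure 11.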
The non-indicator version, <b class="wfobj">GDP_EQ_GETS</b>, also performs better than the other traditional MIDAS methods.<br /><br /><br /> </span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com0tag:blogger.com,1999:blog-6883247404678549489.post-46714369548145411542020-12-02T08:06:00.001-08:002020-12-09T13:18:38.709-08:00Wavelet Analysis: Part II (Applications in EViews)<style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { border: 1px solid black; } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } .seccol { } .subseccol { color: #fa5e5e } .wf { } .wfobj { } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { rbrace: ['{\\left(#1\\right)}', 1], cbrace: ['{\\left\\{#1\\right\\}}', 1], sbrace: ['{\\left[#1\\right]}', 1], bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1], series: ['{\\left\\{#1_{#2}\\right\\}_{#2=#3}^{#4}}', 4], xsum: ['{\\sum_{#1=#2}^{#3}{#4}}', 4], var: ['{\\operatorname\{var\}}'], sign: ['{\\operatorname\{sign\}}'], diag: ['{\\operatorname\{diag\}}'], med: ['{\\operatorname\{median\}}'], vec: ['{\\operatorname\{vec\}}'], tr: ['{\\operatorname\{tr\}}'] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"> </script> <span style="font-family: "verdana" sans-serif"> This is the second of two entries devoted to wavelets. <a href='http://blog.eviews.com/2020/11/wavelet-analysis-part-i-theoretical.html'>Part I</a> was devoted to theoretical underpinnings. 
Here, we demonstrate the use and application of these principles to empirical exercises using the wavelet engine released with EViews 12. <a name='more'></a><br /><br /> <h3 class="seccol">Table of Contents</h3> <ol> <li><a href="#sec1">Introduction</a> <li><a href="#sec2">Wavelet Transforms</a> <ul> <li><a href="#sec2.1">Example 1: Wavelet Transforms as Informal Tests for (Non-)Stationarity</a> <li><a href="#sec2.2">Example 2: MRA as Seasonal Adjustment</a> <li><a href="#sec2.3">Example 3: DWT vs. MODWT</a> </ul> <li><a href="#sec3">Variance Decomposition</a> <ul> <li><a href="#sec3.1">Example: MODWT Unbiased Variance Decomposition</a> </ul> <li><a href="#sec4">Wavelet Thresholding</a> <ul> <li><a href="#sec4.1">Example: Thresholding as Signal Extraction</a> </ul> <li><a href="#sec5">Outlier Detection</a> <ul> <li><a href="#sec5.1">Example: Bilen and Huzurbazar (2002) Outlier Detection</a> </ul> <li><a href="#sec6">Conclusion</a> <li><a href="#sec7">Files</a> <li><a href="#sec8">References</a> </ol><br /> <h3 class="seccol", id="sec1">Introduction to Wavelets</h3> The new EViews 12 release has introduced several new statistical and econometric procedures. Among them is an engine for wavelet analysis. This is a complement to the existing battery of techniques in EViews used to analyze and isolate features which characterize a time series. While there are undoubtedly numerous applications of wavelets, such as regression, unit root testing, fractional integration order estimation, and bootstrapping (wavestrapping), here we highlight the new EViews wavelet engine. In particular, we focus on four of the most popular and frequently used areas of wavelet analysis: <ul> <li>Transforms</li> <li>Variance decomposition</li> <li>Thresholding</li> <li>Outlier detection</li> </ul><br /><br /> <h3 class="seccol", id="sec2">Wavelet Transforms</h3> The first step in wavelet analysis is usually a wavelet transform of a time series of interest. This is similar in spirit to a Fourier transform.
The time series is decomposed into its constituent spectral (frequency) features on a scale-by-scale basis. Recall that the idea of scale in wavelet analysis is akin to frequency in Fourier analysis. This is nothing more than a re-expression of the behaviour of the time series in the time domain in terms of its behaviour in the frequency domain. This allows us to see which scales (frequencies) dominate in terms of activity.<br /><br /> <h4 class="subseccol", id="sec2.1">Example 1: Wavelet Transforms as Informal Tests for (Non-)Stationarity</h4> Many important and routine tasks in time series analysis require classifying data as stationary or non-stationary. Any of the unit root tests available in EViews are designed to formally address such classifications. Nevertheless, wavelet transforms such as the discrete wavelet transform (DWT) or the maximum overlap discrete wavelet transform (MODWT) can also be used for a similar purpose. While formal wavelet-based unit root tests are available in the literature, here we focus on demonstrating how wavelets can be used as an exploratory tool for stationarity determination <i>in lieu</i> of a formal test.<br /><br /> Recall from the theoretical discussion of Mallat's algorithm in <a href='http://blog.eviews.com/2020/11/wavelet-analysis-part-i-theoretical.html'>Part I</a> that discrete wavelet transforms partition the frequency range into finer and finer blocks. For instance, at the first scale, the frequency range is split into two equal parts. The first, lower frequency part, is captured by the scaling coefficients and corresponds to the traditional (Fourier) frequency range $ \sbrace{0,\, \pi} $. The second, higher frequency part, is captured by the wavelet coefficients and corresponds to the traditional frequency range $ \sbrace{\pi,\, 2\pi} $.
At the second stage, the lower frequency from the previous scale, namely the frequency region roughly corresponding to $ \sbrace{0,\, \pi} $ in the traditional Fourier context, is again split into two equal portions. Accordingly, the wavelet coefficients at scale 2 would roughly correspond to the traditional frequency region $ \sbrace{\frac{\pi}{2},\, \pi} $, whereas the scaling coefficients would roughly correspond to the traditional frequency region $ \sbrace{0,\, \frac{\pi}{2}} $, and so on.<br /><br /> This decomposition affords the ability to identify which features of the original time series data are dominant at which scale. In particular, if the spectra (read wavelet/scaling coefficient magnitudes) at a given scale are high, this would indicate that those coefficients are registering behaviours in the underlying data which dominate at said scale and frequency region. For instance, in the traditional Fourier context, if a series has very pronounced spectra near the frequency zero, this indicates that observations of that time series are very persistent (die off slowly). Naturally, one would classify such a series as non-stationary, possibly exhibiting a unit root. Alternatively, if a series has very pronounced spectra at higher frequencies, this indicates that the time series is driven by dynamics that frequently appear and disappear. In other words, the time series is driven by transient features and one would classify the time series as stationary. The analogue of this analysis in the context of wavelet analysis would proceed as follows.<br /><br /> At the first scale, if wavelet spectra dominate scaling spectra, the underlying series is dominated by higher frequency (transitory) forces and the series is most likely stationary. At scale two, if the scaling spectra dominate the wavelet spectra from the first and second scales, this indicates that lower frequency forces dominate higher frequency dynamics, providing evidence of non-stationarity. 
Naturally, this scale-based analysis carries on until the final decomposition scale.<br /><br /> To demonstrate the dynamics outlined above, we'll consider Canadian real exchange rate data extracted from the dataset in Pesaran (2007). This is a quarterly time series running from 1973Q1 to 1998Q4. The data can be found in <a href="http://www.eviews.com/blog/wavelets/workfiles/wavelets.wf1"><b class="wf">WAVELETS.WF1</b></a>. The series we're interested in is <b class="wfobj">CANADA_RER</b>. We'll demonstrate with a discrete wavelet transform (DWT) and the Haar wavelet filter. To facilitate the discussion to follow, we will consider the transformation only up to the first scale.<br /><br /> To perform the transform, proceed in the following steps: <ol> <li>Double click on <b class="wfobj">CANADA_RER</b> to open the series window.</li> <li>Click on <b>View/Wavelet Analysis/Transforms...</b></li> <li>From the <b>Max scale</b> dropdown, select <b>1</b>.</li> <li>Click on <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURES 2a and 2b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 2a :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex1_1.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex1_1.png" title="Canadian RER: Discrete Wavelet Transform Part 1" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 2b :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex1_2.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex1_2.png" title="Canadian RER: Discrete Wavelet Transform Part 2" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2a: Canadian RER: Discrete Wavelet Transform Part 1</small> </center> </td> <td class="nb"> <center> <small>Figure 2b: Canadian RER: Discrete Wavelet Transform Part 2</small> </center> </td> </tr> </table> <br /> </center> <!-- ::::::::::
FIGURES 2a and 2b :::::::::: --> The output is a spool object with the spool tree listing the summary, original series, as well as wavelet and scaling coefficients for each scale (in this case just 1). The first of these is a summary of the wavelet transformation performed. Note here that since the number of available observations is 104, a dyadic adjustment using the series mean was applied to achieve dyadic length.<br /><br /> The first plot in the output is a plot of the original series, along with any padded values if a dyadic adjustment was applied. The last two plots are respectively the wavelet and scaling coefficients. Recall that at the first scale, the wavelet decomposition effectively splits the frequency spectrum into two equal portions: the low and high frequency portions, respectively. Recall further that the low frequency portion is associated with the scaling coefficients $ \mathbf{V} $ whereas the high frequency portion is associated with the wavelet coefficients $ \mathbf{W} $.<br /><br /> Evidently, the spectra characterizing the wavelet coefficients are significantly less pronounced than those characterizing the scaling coefficients. This is an indication that the Canadian real exchange rate series is possibly non-stationary. Furthermore, observe that the wavelet plot has two dashed red lines. These represent the $ \pm 1 $ standard deviation of the coefficients at that scale. This is particularly useful in visualizing which wavelet coefficients should be shrunk to zero (are insignificant) in wavelet shrinkage applications. (We will return to this later when we discuss wavelet thresholding outright.) Recall that coefficients exceeding some threshold bound (in this case the standard deviation) ought to be retained, while the remaining coefficients are shrunk to zero. From this we see that the majority of wavelet coefficients at scale 1 can be discarded.
This is further evidence that high frequency forces in the <b class="wfobj">CANADA_RER</b> series are not very pronounced.<br /><br /> To verify this intuition, we can perform a quick ADF unit root test on <b class="wfobj">CANADA_RER</b>. To do so, from the open <b class="wfobj">CANADA_RER</b> series window, proceed as follows: <ol> <li>Click on <b>View/Unit Root Tests/Standard Unit Root Test...</b></li> <li>Click on <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURE 3 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/wavelets/images/canada_rer_ur.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/canada_rer_ur.png" title="Canadian RER: Unit Root Test" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3: Canadian RER Unit Root Test</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 3 :::::::::: --> Our intuition is indeed correct. From the unit root test output it is clear that the p-value associated with the ADF unit root test is 0.7643 -- too high to reject the null hypothesis of a unit root at any meaningful significance level.<br /><br /> While the wavelet decomposition is not a formal test, it is certainly a great way of identifying which scales (read frequencies) dominate the underlying series behaviour. Naturally, this analysis is not limited to the first scale. To see this, we will repeat the exercise above using the maximum overlap discrete wavelet transform (MODWT) with the Daubechies (daublet) filter of length 6. We will also perform the transform up to the maximum scale possible, and also indicate which, and how many, wavelet coefficients are affected by the boundary.
(See <a href='http://blog.eviews.com/2020/11/wavelet-analysis-part-i-theoretical.html'>Part I</a> for a discussion of boundary conditions.)<br /><br /> From the open <b class="wfobj">CANADA_RER</b> series window, we proceed in the following steps: <ol> <li>Click on <b>View/Wavelet Analysis/Transforms...</b></li> <li>Change the <b>Decomposition</b> dropdown to <b>Overlap transform - MODWT</b>.</li> <li>Change the <b>Class</b> dropdown to <b>Daubechies</b>.</li> <li>From the <b>Length</b> dropdown select <b>6</b>.</li> <li>Click on <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURES 4a, 4b, and 4c :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 4a :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex2_1.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex2_1.png" title="Canadian RER: MODWT Part 1" width="240" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 4b :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex2_2.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex2_2.png" title="Canadian RER: MODWT Part 2" width="240" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 4c :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex2_3.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex2_3.png" title="Canadian RER: MODWT Part 3" width="240" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 4a: Canadian RER: MODWT Part 1</small> </center> </td> <td class="nb"> <center> <small>Figure 4b: Canadian RER: MODWT Part 2</small> </center> </td> <td class="nb"> <center> <small>Figure 4c: Canadian RER: MODWT Part 3</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 4a, 4b, and 4c :::::::::: --> As before, the output is a spool object with wavelet and scaling coefficients as individual spool 
elements. Since the MODWT is not an orthonormal transform and since it uses all of the available observations, wavelet and scaling coefficients are of input series length and do not require length adjustments. Notice the significantly more pronounced ''wave'' behaviour across wavelet coefficients and scales. This is a consequence of the fact that the MODWT is not an orthonormal transform and is significantly more redundant than the DWT counterpart. In other words, patterns retain their momentum as they evolve.<br /><br /> Analogous to the DWT, the MODWT partitions the frequency range into finer and finer blocks. At the first scale, we see that only a few wavelet coefficients exhibit significant spikes (i.e. exceed the threshold bounds). At scales two and three, it is evident that transient features persist, but beyond that they don't seem to contribute much. By contrast, the scaling coefficients at the final scale (scale 6) are roughly twice as large (0.20) as the largest wavelet spectrum (0.10) which manifests at scales 1 and 2. These are all indications that lower frequency forces dominate those at higher frequencies and that the underlying series is most likely non-stationary.<br /><br /> Finally, notice that for each scale, those coefficients affected by the boundary are displayed in red, and their count reported in the legends. A vertical dashed black line shows the region up to which the boundary conditions persist. Boundary coefficients are an important consequence of longer filters and higher scales. Evidently, as the scale is increased, boundary coefficients eventually consume the entire set of coefficients. Moreover, since the MODWT is a redundant transform, the number of boundary coefficients will always be greater than those in the orthonormal DWT.
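The rate at which boundary coefficients accumulate can be made concrete. The sketch below uses the standard result that a MODWT filter built from a base filter of length $L$ has effective length $L_j = (2^j - 1)(L - 1) + 1$ at scale $j$; the capped counting rule is our simplifying assumption, not necessarily EViews' exact bookkeeping. For a Daubechies length-6 filter on 104 observations (the <b class="wfobj">CANADA_RER</b> case), the boundary region swallows the full sample by scale 6:

```python
def modwt_filter_length(L, j):
    """Effective length of a scale-j MODWT filter built from a base filter of length L."""
    return (2**j - 1) * (L - 1) + 1

N, L = 104, 6  # CANADA_RER sample size, Daubechies length-6 filter
for j in range(1, 7):
    Lj = modwt_filter_length(L, j)
    # assumed rule: coefficients whose filter window wraps the series edge
    boundary = min(Lj - 1, N)
    print(f"scale {j}: filter length {Lj}, boundary coefficients {boundary}")
```

At scale 6 the effective filter is longer than the series itself, so every coefficient is boundary-affected, mirroring the behaviour described above.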
As before, the $ \pm 1 $ standard deviation bounds are available for reference.<br /><br /> <h4 class="subseccol", id="sec2.2">Example 2: MRA as Seasonal Adjustment</h4> It's worth noting that multiresolution analysis (MRA) is often used as an intermediate step toward some final inferential procedure. For instance, if the objective is to run a unit root test on some series, we may wish to do so on the true signal, having discarded the noise, in order to get a more reliable test. Similarly, we may wish to run regressions on series which have been <i>smoothed</i>. Discarding noise from regressors may prevent clouding of inferential conclusions. This is the idea behind most existing smoothing techniques in the literature.<br /><br /> In fact, wavelets are very well adapted to isolating many different kinds of trends and patterns, whether seasonal, non-stationary, non-linear, etc. Here we demonstrate their potential using an artificial dataset with a quarterly seasonality. In particular, we generate 128 random normal variates and excite every first quarter with a shock. These modified normal variates are then fed as innovations into a stationary autoregressive (AR) process.
This is achieved with a few commands in the command window or an EViews program as follows: <pre><code><br /> rndseed 128 <span style="color: green;">'set the random seed</span><br /> wfcreate q 1989 2020 <span style="color: green;">'make quarterly workfile with 128 quarters</span><br /><br /> series eps = 8*(@quarter=1) + @rnorm <span style="color: green;">'create random normal innovations with each first quarter having mean 8</span><br /> series x <span style="color: green;">'create a series x</span><br /> x(1) = @rnorm <span style="color: green;">'set the first observation to a random normal value</span><br /><br /> smpl 1989q2 @last <span style="color: green;">'start the sample at the 2nd quarter</span><br /> x = 0.75*x(-1) + eps <span style="color: green;">'generate an AR process using eps as innovations</span><br /><br /> smpl @all <span style="color: green;">'reset the sample to the full workfile range</span><br /> </code></pre> To truly appreciate the idea behind MRA, one ought to set the maximum decomposition level to a lower value. This is because the smooth series extracts the ''signal'' from the original series for all scales beyond the maximum decomposition level, whereas the ''noise'' portion of the original series is decomposed on a scale-by-scale basis for all scales up to the maximum decomposition level. We now perform a MODWT MRA on the <b class="wfobj">X</b> series using a Daubechies filter of length 4 and a maximum decomposition level of 2, as follows: <ol> <li>Double click on <b class="wfobj">X</b> to open the series.</li> <li>Click on <b>View/Wavelet Analysis/Transforms...</b></li> <li>Change the <b>Decomposition</b> dropdown to <b>Overlap multires.
- MODWT MRA</b>.</li> <li>Set the <b>Max scale</b> textbox to <b>2</b>.</li> <li>Change the <b>Class</b> dropdown to <b>Daubechies</b>.</li> <li>Click on <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURES 6a and 6b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 6a :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex4_1.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex4_1.png" title="Quarterly Seasonality: MODWT MRA Part 1" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 6b :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex4_2.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex4_2.png" title="Quarterly Seasonality: MODWT MRA Part 2" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 6a: Quarterly Seasonality: MODWT MRA Part 1</small> </center> </td> <td class="nb"> <center> <small>Figure 6b: Quarterly Seasonality: MODWT MRA Part 2</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 6a and 6b :::::::::: --> The output is again a spool object with smooth and detail series as individual spool elements. The first plot is that of the smooth series at the maximum decomposition level overlaying the original series for context. Any observations affected by boundary coefficients will be reported in red and their number reported in the legend. Furthermore, since observations affected by the boundary will be split between the beginning and end of original series observations, two dashed vertical lines are provided at each decomposition scale. These isolate the areas which partition the total set of observations into those affected by the boundary, and those which are not.<br /><br /> It is clear from the smooth series that seasonal patterns have been dropped from the underlying trend approximation of the original data. 
This is precisely what we want, and it is the idea behind other well-known seasonal adjustment techniques such as TRAMO/SEATS, X-12, X-13, STL Decompositions, etc., all of which can also be performed in EViews for comparison. In fact, the figure below plots our MRA smooth series against the STL decomposition trend series performed on the same data.<br /><br /> <!-- :::::::::: FIGURE 7 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex4_3.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex4_3.png" title="MODWT MRA Smooth vs STL Trend" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 7: MODWT MRA Smooth vs. STL Trend</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 7 :::::::::: --> The two series are undoubtedly very similar, as they should be!<br /><br /> The figure above also suggests that the STL seasonal series should be very similar to the details from our MODWT MRA decomposition. Before demonstrating this, we remind readers that whereas the STL decomposition produces a single series estimate of the seasonal pattern, wavelet MRA procedures decompose noise (in this case seasonal patterns) on a scale-by-scale basis. Accordingly, at scale 1, the MRA detail series captures oscillations with periods of 2 to 4 quarters. At scale 2, the MRA detail series captures oscillations with periods of 4 to 8 quarters, and so on. In general, for each scale $ j $, the detail series capture patterns with periods between $ 2^{j} $ and $ 2^{j+1} $ units, whereas the smooth series captures patterns with periods longer than $ 2^{j+1} $ units.<br /><br /> Finally, turning to the comparison of seasonal variation estimates between the MRA and STL, we need to sum all detail series to compound their effect and produce a single series estimate of noise.
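The compounding step relies on the additive structure of the MRA, $ x_t = S_{J,t} + \sum_{j=1}^{J} D_{j,t} $. The Python sketch below (a toy Haar DWT MRA on synthetic data, not the MODWT MRA used above; all names are hypothetical) verifies that summing the detail series and adding the smooth recovers the original series exactly:

```python
import math

R2 = math.sqrt(2)

def analyze(x):
    """One Haar analysis step: series -> (scaling, wavelet) coefficients."""
    s = [(x[2*i] + x[2*i + 1]) / R2 for i in range(len(x) // 2)]
    w = [(x[2*i + 1] - x[2*i]) / R2 for i in range(len(x) // 2)]
    return s, w

def synthesize(s, w):
    """One Haar synthesis step: (scaling, wavelet) coefficients -> series."""
    x = []
    for si, wi in zip(s, w):
        x.append((si - wi) / R2)
        x.append((si + wi) / R2)
    return x

def haar_mra(x, J):
    """Return (smooth, [detail_1, ..., detail_J]) of a dyadic-length series."""
    wavelets, v = [], list(x)
    for _ in range(J):
        v, w = analyze(v)
        wavelets.append(w)
    details = []
    for j, w in enumerate(wavelets):
        d = synthesize([0.0] * len(w), w)  # rebuild detail j from its coefficients alone
        for _ in range(j):                 # upsample back to the original level
            d = synthesize(d, [0.0] * len(d))
        details.append(d)
    smooth = v
    for _ in range(J):
        smooth = synthesize(smooth, [0.0] * len(smooth))
    return smooth, details

x = [float(t % 4) + 0.1 * t for t in range(16)]  # crude quarterly pattern plus trend
smooth, details = haar_mra(x, 2)
noise = [sum(d[t] for d in details) for t in range(len(x))]
print(max(abs(smooth[t] + noise[t] - x[t]) for t in range(len(x))))
```

The reconstruction error is at floating-point rounding level; exactness here is a consequence of the orthonormal transform, and the same additivity holds by construction for the MODWT MRA.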
We can then compare this with the single series estimate of seasonality from the STL decomposition.<br /><br /> <!-- :::::::::: FIGURE 8 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex4_4.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex4_4.png" title="MODWT MRA Details vs STL Seasonality" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 8: MODWT MRA Details vs. STL Seasonality</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 8 :::::::::: --> As expected, the series are nearly identical.<br /><br /> To demonstrate this in the context of non-artificial data, we'll run a MODWT MRA on the Canadian real exchange rate data using a Least Asymmetric filter of length 12 and a maximum decomposition scale of 3.<br /><br /> <!-- :::::::::: FIGURES 5a and 5b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 5a :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex3_1.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex3_1.png" title="Canadian RER: MODWT Multiresolution Analysis Part 1" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 5b :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex3_2.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex3_2.png" title="Canadian RER: MODWT Multiresolution Analysis Part 2" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5a: Canadian RER: MODWT Multiresolution Analysis Part 1</small> </center> </td> <td class="nb"> <center> <small>Figure 5b: Canadian RER: MODWT Multiresolution Analysis Part 2</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 5a and 5b :::::::::: --> Recall that the main use for MRA is the separation of the true ''signal''
of the underlying series from its noise, at a given decomposition level. Here, the ''Smooths 3'' series is the signal approximation and from the plot seems to follow the contours of the original data. The remaining three series - ''Details 3'', ''Details 2'', and ''Details 1'' - approximate the noise at their scales. Clearly, at the first scale, noise is rather negligible. This is an indication that the majority of the signal is in the lower frequency range. As we move to the second scale, the noise becomes more prominent, but still relatively negligible. Again, this confirms that the true signal is in a frequency range lower still, and so on. More importantly, this is indicative that the dynamics driving the noise are not particularly transitory. Accordingly, this would rule out traditional seasonality as a force driving the noise, but would not necessarily preclude the existence of non-stationary seasonality such as seasonal unit roots.<br /><br /> <h4 class="subseccol", id="sec2.3">Example 3: DWT vs. MODWT</h4> We have already mentioned that the primary difference between the DWT and MODWT is redundancy. The DWT is an orthonormal decomposition whereas the MODWT is not. This is certainly an advantage of the DWT over its MODWT counterpart since it guarantees that at each scale, the decomposition captures only those features which characterize that scale, and that scale alone. Nevertheless, the DWT requires input series to be of dyadic length, whereas the MODWT does not. This is an advantage of the MODWT since information is never dropped or added to derive the transform. Nevertheless, the MODWT has an additional advantage over the DWT and it has to do with spectral-time alignment - any pronounced observations in the time domain register as spikes in the wavelet domain at the same time spot. This is unlike the DWT where this alignment fails to hold. Formally, it is said that the MODWT is associated with a <b>zero-phase</b> filter, whereas the DWT is not.
In practice, this means that outlying characteristics (spikes) in the DWT MRA will not align with outlying features of the original time series, whereas they will in the case of the MODWT MRA.<br /><br /> To demonstrate this difference, we will generate a time series of length 128 and fill it with random normal observations. We will then introduce a large outlying observation at observation 64. We will then perform a DWT MRA and a MODWT MRA decomposition of the same data using a Daubechies filter of length 4 and study the differences. We will consider only the first scale since the remaining scales do little to further the intuition.<br /><br /> We can begin by creating our artificial data by typing in the following set of commands in the command window: <pre><code><br /> wfcreate u 128<br /> series x = @rnorm<br /> x(64) = 40<br /> </code></pre> These commands create a workfile of length 128, and a series <b class="wfobj">X</b> filled with random normal variates. The 64th observation is then set to 40 - more than 10 times as large as observations in the top 1% of the standard Gaussian distribution.<br /><br /> We then generate a DWT MRA and a MODWT MRA transform of the same series.
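The alignment difference is easy to see with the Haar filter, whose scale-1 coefficients have simple closed forms. In the Python sketch below (a toy Haar analogue of the experiment, not the Daubechies length-4 filter used in EViews; all names are hypothetical), the MODWT coefficient spike lands at the outlier's own time index, while the DWT spike lives in decimated coordinates:

```python
import random

random.seed(1)
x = [random.gauss(0, 1) for _ in range(128)]
x[63] = 40.0  # large outlier at observation 64 (1-indexed)

# Scale-1 Haar DWT: decimated, so the output has half the length of x
w_dwt = [(x[2*i + 1] - x[2*i]) / 2**0.5 for i in range(len(x) // 2)]

# Scale-1 Haar MODWT: undecimated and circular, full-length output
w_modwt = [(x[t] - x[t - 1]) / 2 for t in range(len(x))]

peak_dwt = max(range(len(w_dwt)), key=lambda i: abs(w_dwt[i]))
peak_modwt = max(range(len(w_modwt)), key=lambda t: abs(w_modwt[t]))
print(peak_dwt, peak_modwt)  # the DWT peak index must be doubled to map back to time
```

The MODWT peak sits at (or immediately after) the outlier's time index, whereas the DWT peak sits at half that position because of downsampling, which is the misalignment the MRA plots display.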
The output is summarized in the plots below.<br /><br /> <!-- :::::::::: FIGURES 9a and 9b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 9a :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex5_1.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex5_1.png" title="Outlying Observation: DWT MRA" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 9b :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/transform_ex5_2.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/transform_ex5_2.png" title="Outlying Observation: MODWT MRA" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 9a: Outlying Observation: DWT MRA</small> </center> </td> <td class="nb"> <center> <small>Figure 9b: Outlying Observation: MODWT MRA</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 9a and 9b :::::::::: --> Evidently, the peak of the ''shark fin'' pattern in the DWT MRA smooth series does not align with the outlying observation that generated it in the original data. In other words, whereas the outlying observation is at time $ t = 64 $, the peak of the smooth series occurs at time $ t = 63 $. This is in contrast to the MODWT MRA smooth series which clearly aligns its peak with the outlying observation in the original data.<br /><br /><br /> <h3 class="seccol", id="sec3">Variance Decomposition</h3> Another traditional application of wavelets is variance decomposition. Just as wavelet transforms can decompose a series' signal across scales, they can also decompose a series' variance across scales. In particular, this is a decomposition of the amount of original variation attributed to a given scale. Naturally, the conclusions derived above on transience would hold here as well.
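Because the DWT is orthonormal, a series' total energy (and hence, after demeaning, its sample variance) splits exactly across scales. A quick numerical check in Python (toy simulated data with a three-scale Haar DWT; hypothetical names, not the EViews routine):

```python
import math
import random

def haar_analyze(x):
    """One Haar analysis step: series -> (scaling, wavelet) coefficients."""
    s = [(x[2*i] + x[2*i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    w = [(x[2*i + 1] - x[2*i]) / math.sqrt(2) for i in range(len(x) // 2)]
    return s, w

random.seed(2)
raw = [random.gauss(0, 1) for _ in range(64)]
mean = sum(raw) / len(raw)
x = [v - mean for v in raw]  # demean so energy shares are variance shares

energies, v = [], list(x)
for j in range(1, 4):  # three-scale DWT
    v, w = haar_analyze(v)
    energies.append(sum(c * c for c in w))  # energy attributed to scale j
energies.append(sum(c * c for c in v))      # remainder held by the scaling coefficients

total = sum(c * c for c in x)
print([round(e / total, 3) for e in energies])  # proportions summing to one
```

The proportions are the wavelet analogue of the <b>Rel. Proport.</b> column in the EViews spectrum table discussed below; for persistent data the final (scaling) share dominates.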
For instance, if the contribution to overall variation is largest at scale 1, this would indicate that it is transitory forces which contribute most to overall variation. The opposite is true if higher scales are associated with larger contributions to overall variation.<br /><br /> <h4 class="subseccol", id="sec3.1">Example: MODWT Unbiased Variance Decomposition</h4> To demonstrate the procedure, we will use Japanese real exchange rate data from 1973Q1 to 1988Q4, again extracted from the Pesaran (2007) dataset. The series of interest is called <b class="wfobj">JAPAN_RER</b>. We will produce a scale-by-scale decomposition of variance contributions using the MODWT with a Daubechies filter of length 4. Furthermore, we'll produce 95% confidence intervals using the asymptotic Chi-squared distribution with a band-pass estimate for the EDOF. The band-pass EDOF is preferred here since the sample size is less than 128 and the asymptotic approximation to the EDOF requires a sample size of at least 128 observations for decent results.<br /><br /> From the open series window, proceed in the following steps: <ol> <li>Click on <b>View/Wavelet Analysis/Variance Decomposition...</b></li> <li>Change the <b>CI type</b> dropdown to <b>Asymp. Band-Limited</b>.</li> <li>From the <b>Decomposition</b> dropdown select <b>Overlap transform - MODWT</b>.</li> <li>Set the <b>Class</b> dropdown to <b>Daubechies</b>.</li> <li>Click on <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURES 11a and 11b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 11a :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/vardecomp_ex1_1.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/vardecomp_ex1_1.png" title="Japanese RER: MODWT Variance Decomp.
Part 1" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 11b :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/vardecomp_ex1_2.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/vardecomp_ex1_2.png" title="Japanese RER: MODWT Variance Decomp. Part 2" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 11a: Japanese RER: MODWT Variance Decomp. Part 1</small> </center> </td> <td class="nb"> <center> <small>Figure 11b: Japanese RER: MODWT Variance Decomp. Part 2</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 11a and 11b :::::::::: --> The output is a spool object with the spool tree listing the summary, spectrum table, variance distribution across-scales, confidence intervals (CIs) across scales, and the cumulative variance and CIs. The spectrum table lists the contribution to overall variance by wavelet coefficients at each scale. In particular, the column titled <b>Variance</b> shows the variance contributed to the total at a given scale. Columns titled <b>Rel. Proport.</b> and <b>Cum. Proport.</b> display, respectively, the proportion of overall variance contributed at a given scale and its cumulative total. Lastly, in case CIs are produced, the last two columns display, respectively, the lower and upper confidence interval values at a given scale.<br /><br /> The first plot is a histogram of variances at each given scale. It is clear that the majority of variation in the <b class="wfobj">JAPAN_RER</b> series comes from higher scales, or lower frequencies. This is indicative of persistent behaviour in the original data, and possibly evidence of a unit root. A quick unit root test on the series will confirm this intuition.
The plot below summarizes the output of a unit root test on <b class="wfobj">JAPAN_RER</b>.<br /><br /> <!-- :::::::::: FIGURE 12 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/wavelets/images/japan_rer_ur.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/japan_rer_ur.png" title="Japanese RER: Unit Root Test" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 12: Japanese RER Unit Root Test</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 12 :::::::::: --> Returning to the wavelet variance decomposition output, following the distribution plot is a plot of the variance values along with their 95% confidence intervals at each scale. Lastly, the final plot displays variances and CIs accumulated across scales.<br /><br /><br /> <h3 class="seccol", id="sec4">Wavelet Thresholding</h3> A particularly important aspect of empirical work is discerning useful data from noise. In other words, if an observed time series is obscured by the presence of unwanted noise, it is critical to obtain an estimate of this noise and filter it from the observed data in order to retain the useful information, or the signal. Traditionally, this filtration and signal extraction were achieved using Fourier transforms or a number of previously mentioned routines such as the STL decomposition. While the former is typically better suited to stationary data, the latter can accommodate non-stationarities, non-linearities, and seasonalities of arbitrary type. This makes STL an attractive tool in this space and similar (but ultimately different) in function to wavelet thresholding.
The following example explores these nuances.<br /><br /> <h4 class="subseccol", id="sec4.1">Example: Thresholding as Signal Extraction</h4> Given a series of observed data, recall that STL decomposition produces three curves: <ul> <li>Trend</li> <li>Seasonality</li> <li>Remainder</li> </ul><br /> The last of these is obtained by subtracting the first two curves from the original data. As an additional byproduct, STL also produces a seasonally adjusted version of the original data, which is derived by subtracting the seasonality curve from the original data.<br /><br /> In contrast, recall from the theoretical discussion in <a href='http://blog.eviews.com/2020/11/wavelet-analysis-part-i-theoretical.html'>Part I</a> of this series that the principle governing wavelet-based signal extraction, otherwise known as <b>wavelet thresholding</b> or <b>wavelet shrinkage</b>, is to <i>shrink</i> any wavelet coefficients not exceeding some <b>threshold</b> to zero and then exploit the MRA to synthesize the signal of interest using the modified wavelet coefficients. This produces two curves: <ul> <li>Signal</li> <li>Residual</li> </ul> where the latter is just the original data minus the signal estimate.<br /><br /> Because wavelet thresholding treats any insignificant transient features as noise, it is very likely that any residual cyclicality would be treated as noise and driven to zero. In this regard, the extracted signal, while perhaps free of cyclical dynamics, would really be so only by technicality, and not by intention. This is in contrast to STL, which derives an explicit estimate of seasonal features and then removes those from the original data to derive the seasonally adjusted curve. Nevertheless, in many instances, the STL seasonally adjusted curve may behave quite similarly to the signal extracted via wavelet thresholding. To demonstrate this, we'll use French real exchange rate data from 1973Q1 to 1988Q4 extracted from the Pesaran (2007) dataset.
The series of interest is called <b class="wfobj">FRANCE_RER</b>. We'll start by performing MODWT thresholding using a Least Asymmetric filter of length 12 and a maximum decomposition level of 1.<br /><br /> Double click on the <b class="wfobj">FRANCE_RER</b> series to open its window and proceed as follows: <ol> <li>Click on <b>View/Wavelet Analysis/Thresholding (Denoising)...</b></li> <li>Change the <b>Decomposition</b> dropdown to <b>Overlap transform - MODWT</b>.</li> <li>Set the <b>Max scale</b> to <b>1</b>.</li> <li>Change the <b>Class</b> dropdown to <b>Least Asymmetric</b>.</li> <li>Set the <b>Length</b> dropdown to <b>12</b>.</li> <li>Click on <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURE 14 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/wavelets/images/threshold_ex1_1.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/threshold_ex1_1.png" title="French RER: MODWT Thresholding" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 14: French RER: MODWT Thresholding</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 14 :::::::::: --> The output is a spool object with the spool tree listing the summary, denoised function, and noise. The table is a summary of the thresholding procedure performed. The first plot is the de-noised function (signal) superimposed over the original series for context. The second plot is the noise process extracted from the original series.<br /><br /> Next, let's derive the STL decomposition of the same data.
The plots below superimpose the wavelet signal estimate on top of the STL seasonally adjusted curve, as well as the wavelet thresholded noise on top of the STL remainder series.<br /><br /> <!-- :::::::::: FIGURES 15a and 15b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 15a :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/threshold_ex1_2.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/threshold_ex1_2.png" title="French RER: STL Seas. Adj. vs. Wavelet Tresh. Signal" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 15b :::::::::: --> <center> <a href="http://www.eviews.com/blog/wavelets/images/threshold_ex1_3.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/threshold_ex1_3.png" title="French RER: STL Remainder vs. Wavelet Tresh. Noise" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 15a: French RER: STL Seas. Adj. vs. Wavelet Thresh. Signal</small> </center> </td> <td class="nb"> <center> <small>Figure 15b: French RER: STL Remainder vs. Wavelet Thresh. Noise</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 15a and 15b :::::::::: --> Clearly, the STL seasonally adjusted series is very similar to the wavelet signal curve. However, this is really only because the cyclical components in the underlying data are negligible. This can be confirmed by looking at the magnitude of the STL seasonality curve. Nevertheless, a close inspection of the STL remainder and wavelet threshold noise series reveals noticeable differences. It is these differences that drive any differences between the STL seasonal adjustment and wavelet threshold signal curves.<br /><br /><br /> <h3 class="seccol", id="sec5">Outlier Detection</h3> A particularly important and useful application of wavelets is <b>outlier detection</b>.
While the subject matter has received some attention over the years, starting with Greenblatt (1996), we focus here on a rather simple and appealing contribution by Bilen and Huzurbazar (2002). The appeal of their approach is that it doesn't require model estimation, is not restricted to processes generated via ARIMA, and works in the presence of both additive and innovational outliers. The approach does assume that wavelet coefficients are approximately independent and identically normal variates. This is a rather weak assumption since the independence assumption (the more difficult of the two to satisfy) is typically guaranteed using the DWT. While EViews offers the ability to perform this procedure using a MODWT, the procedure is generally better suited to the orthonormal transform.<br /><br /> Bilen and Huzurbazar (2002) also suggest that Haar is the preferred filter here. This is because it yields coefficients large in magnitude in the presence of jumps or outliers. They also suggest that the transformation be carried out only at the first scale. Nevertheless, EViews does offer the ability to stray from these suggestions.<br /><br /> The overall procedure works on the principle of thresholding, and the authors suggest the use of the universal threshold. The idea here is that extreme (outlying) values will register as noticeable spikes in the spectrum. As such, those values would be candidates for outlying observations.
In particular, if $ m_{j} $ denotes the number of wavelet coefficients at scale $ \lambda_{j} $, the entire algorithm is summarized (and generalized) as follows: <ol> <li>Apply a wavelet transform to the original data up to some scale $ J \leq M $.</li><br /> <li>Specify a threshold value $ \eta $.</li><br /> <li>For each $ j = 1, \ldots, J $:</li><br /> <ol> <li>Find the set of indices $ S = \cbrace{s_{1}, \ldots, s_{m_{j}}} $ such that $ |W_{i, j}| > \eta $ for $ i = 1, \ldots, m_{j} $.</li><br /> <li>Find the exact location of the outlier among the original observations. For instance, if $ s_{i} $ is an index associated with an outlier:</li><br /> <ul> <li> If the wavelet transform is the DWT, the original observation associated with that outlier is either $ 2^{j}s_{i} $ or $ (2^{j}s_{i} - 1) $. To discern between the two, let $ \tilde{\mu} $ denote the mean of the original observations, excluding those located at $ 2^{j}s_{i} $ and $ (2^{j}s_{i} - 1) $. That is: $$ \tilde{\mu} = \frac{1}{T-2}\sum_{t \neq 2^{j}s_{i}\, ,\, (2^{j}s_{i} - 1)}{y_{t}} $$ If $ |y_{2^{j}s_{i}} - \tilde{\mu}| > |y_{2^{j}s_{i} - 1} - \tilde{\mu}| $, the location of the outlier is $ 2^{j}s_{i} $; otherwise, the location of the outlier is $ (2^{j}s_{i} - 1) $. </li><br /> <li>If the wavelet transform is the MODWT, the outlier is associated with observation $ i $.</li> </ul> </ol> </ol><br /> <h4 class="subseccol", id="sec5.1">Example: Bilen and Huzurbazar (2002) Outlier Detection</h4> To demonstrate outlier detection, data is obtained from the <b>US Geological Survey</b> website <a href='https://www.usgs.gov/'>https://www.usgs.gov/</a>. As discussed in Bilen and Huzurbazar (2002), data collected in this database comes from many different sources and is generally notorious for input errors. Here we focus on a monthly dataset, collected at irregular intervals from May 1987 to June 2020, measuring water conductance at the Green River near Greendale, UT.
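For concreteness, the algorithm above can be sketched in Python for the level-1 Haar DWT case. This is an illustration, not the EViews implementation: it assumes the usual universal threshold $ \hat{\sigma}\sqrt{2\ln T} $ with a median-absolute-deviation scale estimate $ \hat{\sigma} = \med |W| / 0.6745 $, and uses 0-based indexing rather than the 1-based convention of the text.

```python
import math
from statistics import median

def haar_dwt_level1(y):
    """Level-1 Haar DWT wavelet coefficients (pairwise differences)."""
    s = 1.0 / math.sqrt(2.0)
    return [s * (y[2 * t + 1] - y[2 * t]) for t in range(len(y) // 2)]

def detect_outliers(y):
    """Outlier detection in the spirit of Bilen and Huzurbazar (2002):
    threshold the level-1 coefficients, then map each flagged coefficient
    back to one of its two candidate observations. Returns 0-based indices."""
    w = haar_dwt_level1(y)
    sigma = median(abs(c) for c in w) / 0.6745       # MAD-based scale estimate
    eta = sigma * math.sqrt(2.0 * math.log(len(y)))  # universal threshold (assumed form)
    out = []
    for i, c in enumerate(w):
        if abs(c) > eta:
            a, b = 2 * i, 2 * i + 1                  # the two candidate observations
            rest = [v for t, v in enumerate(y) if t not in (a, b)]
            mu = sum(rest) / len(rest)               # mean excluding both candidates
            out.append(a if abs(y[a] - mu) > abs(y[b] - mu) else b)
    return out

series = [600.0, 610.0, 605.0, 7.4, 598.0, 602.0, 604.0, 599.0]
print(detect_outliers(series))  # → [3]
```

The toy series mimics the example below: one implausibly small reading amid typical values, which the thresholded coefficient correctly localizes.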
The dataset is identified by site number 09234500.<br /><br /> A quick summary of the series indicates that there is a large drop from typical values (500 to 800 units) in September 1999. The value recorded at this date is roughly 7.4 units. This is an unusually large drop and is almost certainly an outlying observation.<br /><br /> In an attempt to identify the aforementioned outlier, and perhaps uncover others, we apply the wavelet outlier detection method described above. We stick with the defaults suggested in the paper and use a DWT transform with a Haar filter, a universal threshold, a mean median absolute deviation estimator for the wavelet coefficient variance, and a maximum decomposition scale set to unity.<br /><br /> To proceed, either download the data from the source, or open the tab <b>Outliers</b> in the workfile provided. The series we're interested in is <b class="wfobj">WATER_CONDUCTANCE</b>. Next, open the series window and proceed as follows: <ol> <li>Click on <b>View/Wavelet Analysis/Outlier Detection...</b></li> <li>Set the <b>Max scale</b> dropdown to <b>1</b>.</li> <li>Under the <b>Threshold</b> group, set the <b>Method</b> dropdown to <b>Hard</b>.</li> <li>Under the <b>Wavelet coefficient variance</b> group, set the <b>Method</b> dropdown to <b>Mean Med. Abs.
Dev.</b>.</li> <li>Click on <b>OK</b>.</li> </ol><br /> <!-- :::::::::: FIGURE 16 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/wavelets/images/outliers_ex1_1.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/outliers_ex1_1.png" title="Water Conductance: Outlier Detection" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 16: Water Conductance: Outlier Detection</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 16 :::::::::: --> The output is a spool object with the spool tree listing the summary, outlier table, and outlier graphs for each scale (in this case just one). The first of these is a summary of the outlier detection procedure performed. Next is a table listing the exact location of a detected outlier along with its value and absolute deviation from the series mean and median, respectively. The plot that follows is that of the original series with red dots identifying outlying observations along with a dotted vertical line at said locations for easier identification.<br /><br /> Evidently, the large outlying observation in September 1999 is accurately identified. In addition, there are three other possible outlying observations identified in September 1988, January 1992, and June 2020.<br /><br /><br /> <h3 class="seccol", id="sec6">Conclusion</h3> In the first entry of our series on wavelets, we provided a theoretical overview of the most important aspects of wavelet analysis.
Here we demonstrated how these principles are applied to real and artificial data using the new EViews 12 wavelet engine.<br /><br /><br /> <hr /> <h3 class="seccol", id="sec7">Files</h3> <ul> <li><a href="http://www.eviews.com/blog/wavelets/workfiles/wavelets.wf1"><b class="wf">WAVELETS.WF1</b></a></li> <li><a href="http://www.eviews.com/blog/wavelets/workfiles/wavelets.prg"><b class="wf">WAVELETS.PRG</b></a></li> </ul><br /><br /> <hr /> <h3 class="seccol", id="sec8">References</h3> <ol class="bib2xhtml"> <li id="bilen-2002" class="entry"> Bilen C and Huzurbazar S (2002), <i>"Wavelet-based detection of outliers in time series"</i>, Journal of Computational and Graphical Statistics, Vol. 11(2), pp. 311-327. Taylor & Francis. </li> <li id="greenblatt-1996" class="entry"> Greenblatt SA (1996), <i>"Wavelets in econometrics"</i>, In Computational Economic Systems, pp. 139-160. Springer. </li> <li id="pesaran-2007" class="entry"> Pesaran MH (2007), <i>"A simple panel unit root test in the presence of cross-section dependence"</i>, Journal of Applied Econometrics, Vol. 22(2), pp. 265-312. Wiley Online Library.
</li> </ol></span><br /><br /> <hr /> <h2>Wavelet Analysis: Part I (Theoretical Background)</h2> <i>Posted November 30, 2020</i><br /><br /> <span style="font-family: "verdana" sans-serif"> This is the first of two entries devoted to wavelets. Here, we summarize the most important theoretical principles underlying wavelet analysis. This entry should serve as a detailed background reference when using the new wavelet features released in EViews 12.
In <a href='http://blog.eviews.com/2020/12/wavelet-analysis-part-ii-applications.html'>Part II</a> we will apply these principles and demonstrate how they are used with the new EViews 12 wavelet engine. <a name='more'></a><br /><br /> <h3 class="seccol">Table of Contents</h3> <ol> <li><a href="#sec1">Introduction to Wavelets</a> <li><a href="#sec2">Wavelet Transforms</a> <ul> <li><a href="#sec2.1">Discrete Wavelet Filters</a> <li><a href="#sec2.2">Mallat's Pyramid Algorithm</a> <li><a href="#sec2.3">Boundary Conditions</a> <li><a href="#sec2.4">Variance Decomposition</a> <li><a href="#sec2.5">Multiresolution Analysis</a> </ul> <li><a href="#sec3">Practical Considerations</a> <ul> <li><a href="#sec3.1">Choice of Wavelet Filter</a> <li><a href="#sec3.2">Handling Boundary Conditions</a> <li><a href="#sec3.3">Adjusting Non-Dyadic Time Series Lengths</a> </ul> <li><a href="#sec4">Wavelet Thresholding</a> <ul> <li><a href="#sec4.1">Thresholding Rule</a> <li><a href="#sec4.2">Optimal Threshold</a> <li><a href="#sec4.3">Wavelet Coefficient Variance</a> <li><a href="#sec4.4">Thresholding Implementation</a> </ul> <li><a href="#sec5">Conclusion</a> <li><a href="#sec6">References</a> </ol><br /> <h3 class="seccol", id="sec1">Introduction to Wavelets</h3> Most economic time series are characterized by time-varying features such as non-stationarity, volatility, seasonality, and structural discontinuities. Wavelet analysis is a natural framework for analyzing these phenomena without imposing any simplifying assumptions such as stationarity. In particular, wavelet filters can decompose and reconstruct a time series (as well as its correlation structure) across timescales so that constituent elements at one scale are uncorrelated with those at another. This is clearly useful in isolating features which materialize only at certain timescales.<br /><br /> Wavelet analysis is also, in many respects, like Fourier spectral analysis.
Both methods can represent a time series signal in a different space by re-expressing the signal as a linear combination of basis functions. In the context of Fourier analysis, these basis functions are sines and cosines. While these basis functions approximate global variation well, they are poorly adapted to capturing local variation, otherwise known as time-variation in time series analysis. To see this, observe that trigonometric basis functions are sinusoids of the form: $$ R\cos\left(2\pi(\omega t + \phi)\right) $$ where $ R $ is the <b>amplitude</b>, $ \omega $ is the <b>frequency</b> (in cycles per unit time) with associated <b>period</b> $ \frac{1}{\omega} $ (in units of time), and $ \phi $ is the <b>phase</b>. Accordingly, if the time variable $ t $ is shifted and scaled to $ u = \frac{t - a}{b} $, the associated sinusoid becomes: $$ R\cos\left(2\pi(\omega^{\star} u + \phi^{\star})\right) $$ where $ \omega^{\star} = \omega b $ and $ \phi^{\star} = \phi + \omega a $.<br /><br /> Evidently, the amplitude $ R $ is invariant to shifts in location and scale. Furthermore, notice that if $ b > 1 $, the frequency $ \omega^{\star} $ increases, but time $ u $ decreases, and vice versa. Accordingly, frequency information is gained when time information is lost, and vice versa.<br /><br /> Ultimately, trigonometric functions are ideally adapted to stationary processes characterized by impulses which wane with time, but are otherwise poorly adapted to discontinuous, non-linear, and non-stationary processes whose impulses persist and evolve with time. To surmount this fixed time-frequency relationship, a new set of basis functions is needed.<br /><br /> In contrast to Fourier transforms, wavelet transforms rely on a reference basis function called the <b>mother wavelet</b>. The latter is stretched (scaled) and shifted across time to capture time-dependent features. Thus, the wavelet basis functions are localized both in scale and time.
In this sense, the wavelet basis function scale is the analogue of frequency in Fourier transforms. The fact that the wavelet basis function is also shifted (translated) across time implies that wavelet basis functions are similar in spirit to performing a Fourier transform on a moving and overlapping window of subsets of the entire time series signal.<br /><br /> In particular, the mother wavelet function $ \psi(t) $ is any function satisfying: $$ \int_{-\infty}^{\infty} \psi(t) dt = 0 \qquad\qquad \int_{-\infty}^{\infty} \psi(t)^{2} dt = 1 $$ In other words, wavelets are functions that have mean zero and unit energy. Here, the term <i>energy</i> originates from the signal processing literature and is formalized as $ \int_{-\infty}^{\infty} |f(t)|^{2} dt$ for some function $ f(t) $. In fact, the concept is interchangeable with the idea of <b>variance</b> for non-complex functions.<br /><br /> From the mother wavelet, the wavelet basis functions are now derived as: $$ \psi_{a,b}(t) = \frac{1}{\sqrt{b}}\psi\left(\frac{t - a}{b}\right) $$ where $ a $ is the <b>location constant</b>, whereas $ b $ is the <b>scaling factor</b> which corresponds to the notion of frequency in Fourier analysis. Observe further that the analogue of the amplitude $ R $ in Fourier analysis, here captured by the term $ \frac{1}{\sqrt{b}} $, is in fact a function of the scale $ b $. Accordingly, wavelet basis functions will adapt to scale-dependent phenomena much better than their trigonometric counterparts.<br /><br /> Since wavelet basis functions are <i>de facto</i> location and scale transformations of a single function, they are also an ideal tool for <b>multiresolution analysis</b> (MRA) - the ability to analyze a signal at different frequencies with varying resolutions. In fact, MRA is in some sense the inverse of the wavelet transform. It can derive representations of the original time-series data, using only those features which are characteristic at a given timescale.
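The two defining conditions (mean zero, unit energy) are easy to verify numerically for the classic Haar mother wavelet, taken here as $ +1 $ on $ [0, \frac{1}{2}) $ and $ -1 $ on $ [\frac{1}{2}, 1) $. The following illustrative Python check approximates both integrals with midpoint Riemann sums:

```python
def psi(t):
    """A standard textbook form of the Haar mother wavelet."""
    if 0.0 <= t < 0.5:
        return 1.0
    if 0.5 <= t < 1.0:
        return -1.0
    return 0.0

# Midpoint Riemann sums over the support [0, 1)
n = 100_000
zero_mean = sum(psi((k + 0.5) / n) for k in range(n)) / n         # integral of psi
unit_energy = sum(psi((k + 0.5) / n) ** 2 for k in range(n)) / n  # integral of psi^2
print(zero_mean, unit_energy)  # 0.0 1.0
```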
For instance, a highly noisy but persistent time series can be decomposed into a portion which represents only the noise (features captured at high frequency), and a portion which represents only the persistent signal (features captured at low frequencies). Thus, moving along the time domain, MRA allows one to zoom to a desired level of detail such that high (low) frequencies yield good (poor) time resolutions and poor (good) frequency resolutions. Since economic time series often exhibit multiscale features, wavelet techniques can effectively decompose these series into constituent processes associated with different timescales.<br /><br /><br /><br /> <h3 class="seccol", id="sec2">Wavelet Transforms</h3> In the context of continuous functions, the <b>continuous wavelet transform</b> (CWT) of a time series $ y(t) $ is defined as: $$ W(a, b) = \int_{-\infty}^{\infty} y(t)\psi_{a,b}(t) \,dt $$ Moreover, the inverse transformation to reconstruct the original process is given as: $$ y(t) = \int_{-\infty}^{\infty} \int_{0}^{\infty} W(a,b)\psi_{a,b}(t) \,da \,db $$ See Percival and Walden (2000) for a detailed discussion.<br /><br /> Since continuous functions are rarely observed, the CWT is empirically rarely exploited and a discretized analogue known as the <b>discrete wavelet transform</b> (DWT) is used. In its most basic form, the series length, $ T = 2^{M} $ for $ M \geq 0 $, is assumed <b>dyadic</b> (a power of 2), and the DWT manifests as a collection of CWT <i>slices</i> at nodes $ (a, b) \equiv (a_{k}, b_{j}) $ such that $ a_{k} = 2^{j}k $ and $ b_{j} = 2^{j} $ where $ j = 1, \ldots, M $. In other words, the discrete wavelet basis functions assume the form: $$ \psi_{k,j}(t) = 2^{-j/2}\psi\left( 2^{-j}t - k \right) $$ Unlike the CWT, which is highly redundant in both location and scale, the DWT can be designed as an orthonormal transformation.
If the location discretization is restricted to the index $ k = 1, \ldots, 2^{-j}T $, at each scale $ \lambda_{j} = 2^{j - 1} $, half the available observations are lost in exchange for <b>orthonormality</b>. This is the classical DWT framework. Alternatively, if the location index is restricted to the full set of available observations with $ k = 1, \ldots, T $, the discretized transform is no longer orthonormal, but does not suffer from observation loss. The latter framework is typically referred to as the <b>maximal overlap discrete wavelet transform</b> (MODWT), and sometimes as the <b>non-decimated</b> DWT. Since the DWT is formally characterized by wavelet filters, we devote some time to those next.<br /><br /> <h4 class="subseccol", id="sec2.1">Discrete Wavelet Filters</h4> Formally, the DWT is characterized via $h = \rbrace{h_{0}, \ldots, h_{L-1}}$ and $g = \rbrace{ g_{0}, \ldots, g_{L-1} }$ -- the wavelet (high pass) and scaling (low pass) filters of length $L$, respectively, for some $ L \geq 1 $. Recall that the low and high pass filters are defined in the context of <b>frequency response functions</b>, otherwise known as <b>transfer functions</b>. The latter are Fourier transforms of impulse response functions. Whereas the impulse response function describes, in the time domain, the evolution (response) of a time series signal following a given stimulus (impulse), the transfer function describes that same response in the frequency domain. In this regard, when the magnitude of the transfer function, otherwise known as the <b>gain function</b>, is large at low frequencies and small at high frequencies, the filter associated with that transfer function is said to be a <b>low-pass filter</b>.
Otherwise, when the gain function is small at low frequencies but large at high frequencies, the transfer function is associated with a <b>high-pass</b> filter.<br /><br /> Wavelet filters play a role similar to traditional time series filters, which are used to extract features (e.g. trends, seasonalities, business cycles, noise, etc.). They are designed to capture low and high frequencies, and have a particular length. This length governs how much of the original series information is used to extract low and high frequency phenomena. This is very similar to the role of the autoregressive (AR) order in traditional time series models, where higher AR orders imply more historical observations influence the present.<br /><br /> The simplest and shortest wavelet filter is of length $ L = 2 $ and is called the <b>Haar</b> wavelet. Formally, it is characterized by its high-pass filter definition: \begin{align*} h_{l} = \begin{cases} \frac{1}{\sqrt{2}} \quad \text{if} \quad l = 0\\ \frac{-1}{\sqrt{2}} \quad \text{if} \quad l = 1 \end{cases} \end{align*} This is a sequence of rescaled rectangular functions and is therefore ideally suited to analyzing signals with sudden and discontinuous changes. In this regard, it is ideally suited for outlier detection. Unfortunately, this filter is typically too simple for most other applications.<br /><br /> To help mitigate the limitations of the Haar filter, Daubechies (1992) introduced a family of filters (known as <b>daublets</b>) of even length that are indexed by the polynomial degree they are able to capture -- rather, by the number of vanishing moments. Thus, the Haar filter, which is of length 2, can only capture constants and linear functions. The Daubechies wavelet filter of length 4 can capture everything from a constant to a cubic function, and so on. Accordingly, higher filter lengths are associated with higher smoothness.
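As a quick illustration of the low-pass/high-pass distinction, the squared gain of the Haar filters can be computed directly from the transfer function. The sketch below assumes the standard Haar scaling filter $ g = (1/\sqrt{2}, 1/\sqrt{2}) $ alongside the high-pass filter $ h $ defined above:

```python
import cmath
import math

s = 1.0 / math.sqrt(2.0)
h = [s, -s]  # Haar wavelet (high-pass) filter, as defined above
g = [s, s]   # Haar scaling (low-pass) filter (standard companion; assumed here)

def gain(filt, f):
    """Squared gain |sum_l filt[l] * exp(-i*2*pi*f*l)|^2 at frequency f in [0, 1/2]."""
    H = sum(c * cmath.exp(-2j * math.pi * f * l) for l, c in enumerate(filt))
    return abs(H) ** 2

print(gain(h, 0.0), gain(h, 0.5))  # ~0 at low frequency, ~2 at high: high-pass
print(gain(g, 0.0), gain(g, 0.5))  # ~2 at low frequency, ~0 at high: low-pass
```

The wavelet filter annihilates constants (zero gain at frequency zero) while the scaling filter passes them, which is precisely the low-pass/high-pass split described in the text.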
Unlike the Haar filter which has a closed form solution in the time domain, the Daubechies family of wavelet filters have a closed form solution only in the frequency domain.<br /><br /> Unfortunately, Daubechies filters are typically not symmetric. If a more symmetric version of the daublet filters is required, then the class known as <b>least asymmetric</b>, or <b>symmlets</b>, is used. The latter define a family of wavelet filters which are as close to symmetric as possible.<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/wavelets/images/wavelet_haar.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/wavelet_haar.png" title="Haar Wavelet" width="540" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1: Haar Wavelet</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 1 :::::::::: --> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/wavelets/images/wavelet_d8.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/wavelet_d8.png" title="Daublet (L=8) Wavelet" width="540" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2: Daubechies - Daublet (L=8) Wavelet</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 2 :::::::::: --> <!-- :::::::::: FIGURE 3 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/wavelets/images/wavelet_la8.png"><img height="auto" src="http://www.eviews.com/blog/wavelets/images/wavelet_la8.png" title="Symmlet (L=8) Wavelet" width="540" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3: Least Asymmetric - Symmlet (L=8) Wavelet</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 3 :::::::::: --> <h4 class="subseccol", id="sec2.2">Mallat's Pyramid Algorithm</h4> In 
practice, DWT coefficients are derived through the <b>pyramid algorithm</b> of Mallat (1989). In case of the classical DWT with $T=2^{M}$, let $\mathbf{y} = \series{y}{t}{1}{T}$ and define $\mathbf{W} = \sbrace{\mathbf{W}_{1}, \ldots, \mathbf{W}_{M}, \mathbf{V}_{M}}^{\top}$ as the matrix of DWT coefficients. Here, $\mathbf{W}_{j}$ is a vector of wavelet coefficients of length $T/2^{j}$ and is associated with changes on a scale of length $\lambda_{j} = 2^{j-1}$. Moreover, $\mathbf{V}_{M}$ is a vector of scaling coefficients of length $T/2^{M}$ and is associated with averages on a scale of length $\lambda_{M} = 2^{M-1}$. $\mathbf{W}$ now follows from $\mathbf{W} = \mathcal{W}\mathbf{y}$ where $\mathcal{W}$ is some $T\times T$ orthonormal matrix generating the DWT coefficients. The algorithm can now be formalized as follows.<br /><br /> If $\mathbf{W}_{j} = \rbrace{W_{1,j} \ldots W_{T/2^{j},j}}^{\top}$ and $\mathbf{V}_{j} = \rbrace{V_{1,j} \ldots V_{T/2^{j},j}}^{\top}$, the $j^{th}$ iteration of the algorithm convolves an input signal with filters $h$ and $g$ respectively to derive the $j^{th}$ level DWT matrix $\sbrace{\mathbf{W}_{1}, \ldots \mathbf{W}_{j}, \mathbf{V}_{j}}^{\top}$. Explicitly, the convolution is formalized as: \begin{align*} W_{t,1} &= \xsum{l}{0}{L-1}{h_{l}y_{2t-l\hspace{-5pt}\mod T}} && V_{t,1} = \xsum{l}{0}{L-1}{g_{l} y_{2t-l\hspace{-5pt}\mod T}} && j=1\\ W_{t,j} &= \xsum{l}{0}{L-1}{h_{l} V_{2t-l\hspace{-5pt}\mod T,j-1}} && V_{t,j} = \xsum{l}{0}{L-1}{g_{l} V_{2t-l\hspace{-5pt}\mod T,j-1}} && j=2,\ldots,M \end{align*} where $t=1,\ldots,T/2^{j}$. In particular, each iteration therefore convolves the scaling coefficients from the preceding iteration, namely $V_{t,j-1}$, with both the high and low pass filters, and the input signal in the first iteration is $y_{t}$.
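The convolutions above translate almost line-for-line into code. The following is an illustrative Python sketch (0-based indexing, Haar filters), not the EViews implementation; the circular mod-$T$ indexing is retained so the same loop would extend to longer filters:

```python
import math

def pyramid(y, h, g):
    """Mallat's pyramid algorithm (DWT): repeatedly convolve the current
    scaling coefficients with the high-pass (h) and low-pass (g) filters,
    with circular (mod T) indexing as in the formulas above."""
    v, Ws = list(y), []
    while len(v) > 1:
        T = len(v)
        W = [sum(h[l] * v[(2 * t + 1 - l) % T] for l in range(len(h)))
             for t in range(T // 2)]
        V = [sum(g[l] * v[(2 * t + 1 - l) % T] for l in range(len(g)))
             for t in range(T // 2)]
        Ws.append(W)
        v = V
    return Ws, v  # wavelet coefficients W_1..W_M, scaling coefficients V_M

s = 1.0 / math.sqrt(2.0)
h, g = [s, -s], [s, s]  # Haar high- and low-pass filters
y = [2.0, 4.0, 1.0, 3.0, 5.0, 7.0, 6.0, 8.0]
Ws, V_M = pyramid(y, h, g)
print([len(W) for W in Ws], len(V_M))  # [4, 2, 1] 1
```

Note how the coefficient counts halve at each iteration, as the text describes, and the single surviving scaling coefficient is proportional to the sample mean.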
The entire algorithm continues until the $M^{th}$ iteration, although it can be stopped earlier.<br /><br /> In effect, at each scale, the DWT algorithm partitions the frequency spectrum into equal subsets -- the low and high frequencies. At the first scale, low-frequency phenomena of the original signal $ \mathbf{y} $ are captured by $ \mathbf{V}_{1} $, whereas high frequency phenomena are captured by $ \mathbf{W}_{1} $. At scale 2, the same procedure is performed not on the original time series signal, but on the low-frequency components $ \mathbf{V}_{1} $. This in turn generates $ \mathbf{V}_{2} $, which is in a sense those phenomena that would be captured in the first quarter of the frequency spectrum, as well as $ \mathbf{W}_{2} $ -- the high-frequency components at scale 2, or those phenomena that would be captured in the second quarter of the frequency range. This partitioning continues at finer and finer levels as we increase scale. In this regard, increasing scale can isolate increasingly more persistent (lower frequency) features of the original time-series signal, with the wavelet coefficients $ \mathbf{W}_{j} $ capturing the remaining, cumulated, "noisy" features.<br /><br /> <h4 class="subseccol", id="sec2.3">Boundary Conditions</h4> It's important to note that both the DWT and the MODWT make use of <b>circular filtering</b>. When a filtering operation reaches the beginning or end of an input series, otherwise known as the <b>boundaries</b>, the filter treats the input time series as periodic with period $ T $. In other words, we assume that $ y_{T-1}, y_{T-2}, \ldots $ are useful surrogates for unobserved values $ y_{-1}, y_{-2}, \ldots $. Those wavelet coefficients which are affected are also known as <b>boundary coefficients</b>. Note that the number of boundary coefficients only depends on the filter length $ L $ and is independent of the input series length $ T $. Furthermore, the number of boundary coefficients increases with filter length $ L $.
In particular, the formulas for the number of boundary coefficients for the DWT and MODWT, respectively, are given by: \begin{align*} \kappa_{\text{DWT}, j} &\equiv L_{j}^{\prime}\\ \kappa_{\text{MODWT}, j} &\equiv \min \cbrace{L_{j}, T} \end{align*} where $ L_{j}^{\prime} = \left\lceil (L - 2)\rbrace{1 - \frac{1}{2^{j}}} \right\rceil $ and $ L_{j} = (L - 1)(2^{j} - 1) $.<br /><br /> Furthermore, both DWT and MODWT boundary coefficients will appear at the beginning of $ \mathbf{W}_{j} $ and $ \mathbf{V}_{j} $. Refer to Percival and Walden (2000) for further details.<br /><br /> <h4 class="subseccol", id="sec2.4">Variance Decomposition</h4> The orthonormality of the DWT generating matrix $\mathcal{W}$ has important implications. First, $\mathcal{W}^{\top}\mathcal{W} = I_{T}$, the identity matrix of dimension $T$. More importantly, $\norm{\mathbf{y}}^{2} = \norm{\mathbf{W}}^{2}$. To see this, recall that $\mathbf{y} = \mathcal{W}^{\top}\mathbf{W}$ and $\norm{\mathbf{y}}^{2} = \mathbf{y}^{\top}\mathbf{y}$. The DWT is therefore an energy (variance) preserving transformation. Coupled with this preservation of energy is also the decomposition of energy on a scale-by-scale basis. The latter formalizes as: \begin{align} \norm{\mathbf{y}}^{2} = \xsum{j}{1}{M}{\norm{\mathbf{W}_{j}}^{2}} + \norm{\mathbf{V}_{M}}^{2} \label{eq2.5.1} \end{align} where $\norm{\mathbf{W}_{j}}^{2} = \xsum{t}{1}{T/2^{j}}{W^{2}_{t,j}}$ and $\norm{\mathbf{V}_{M}}^{2} = \xsum{t}{1}{T/2^{M}}{V^{2}_{t,M}}$. Thus, $\norm{\mathbf{W}_{j}}^{2}$ quantifies the energy of $ y_{t} $ accounted for at scale $\lambda_{j}$. This decomposition is known as the <b>wavelet power spectrum</b> (WPS) and is arguably the most insightful of the properties of the DWT.<br /><br /> The WPS bears resemblance to the <b>spectral density function</b> (SDF) used in Fourier analysis.
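The scale-by-scale energy decomposition above is straightforward to verify for a full Haar DWT. The following illustrative Python sketch (not EViews code) computes the WPS on a toy series:

```python
import math

def haar_pyramid(y):
    """Full Haar DWT via successive pairwise differences and averages."""
    s = 1.0 / math.sqrt(2.0)
    v, Ws = list(y), []
    while len(v) > 1:
        Ws.append([s * (v[2 * t + 1] - v[2 * t]) for t in range(len(v) // 2)])
        v = [s * (v[2 * t + 1] + v[2 * t]) for t in range(len(v) // 2)]
    return Ws, v

y = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
Ws, V = haar_pyramid(y)

energy = sum(t * t for t in y)                      # ||y||^2
wps = [sum(w * w for w in W) for W in Ws]           # ||W_j||^2: the WPS
print(abs(energy - (sum(wps) + V[0] ** 2)) < 1e-9)  # True: energy is preserved
```

Note also that, because the final Haar scaling coefficient is $ \sqrt{T} $ times the sample mean, dividing the summed wavelet energies by $ T $ recovers the (population) variance of the series.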
Whereas the SDF decomposes the variance of an input series across frequencies, in wavelet analysis, the variance of an input series is decomposed across scales $ \lambda_{j} $. One of the advantages of the WPS over the SDF is that the latter requires an estimate of the input series mean, whereas the former does not. In particular, note that the total variance in $ \mathbf{y} $ can be decomposed as: $$ \xsum{j}{1}{\infty}{\nu^{2}(\lambda_{j})} = \var(\mathbf{y}) $$ where $ \nu^{2}(\lambda_{j}) $ is the contribution to $ \var(\mathbf{y}) $ due to scale $ \lambda_{j} $ and is estimated as: $$ \hat{\nu}^{2}(\lambda_{j}) \equiv \frac{1}{T} \xsum{t}{1}{T}{W_{t,j}^{2}} $$ Note that $ \hat{\nu}^{2}(\lambda_{j}) $ is the energy of $ y_{t} $ at scale $ \lambda_{j} $ divided by the number of observations. Unfortunately, this estimator is biased due to the presence of boundary coefficients. To derive an unbiased estimate, boundary coefficients should be dropped from consideration. Accordingly, an unbiased estimate of the variance contributed at scale $ \lambda_{j} $ is given by: $$ \tilde{\nu}^{2}(\lambda_{j}) \equiv \frac{1}{M_{j}} \xsum{t}{\kappa_{j} + 1}{T}{W_{t,j}^{2}}$$ where $ M_{j} = T - \kappa_{j}$ and $ \kappa_{j} \equiv L_{j}^{\prime} $ when wavelet coefficients are derived using the DWT, whereas $ \kappa_{j} \equiv L_{j} $ when wavelet coefficients derive from the MODWT.<br /><br /> It is also possible to derive confidence intervals for the contribution to the overall variance at each scale.
In particular, dealing with unbiased estimators $ \tilde{\nu}^{2}(\lambda_{j}) $ and a level of significance $ \alpha \in (0,1) $, a confidence interval for $ \nu^{2}(\lambda_{j}) $ with coverage $ 1 - 2\alpha $ is given by: \begin{align*} \sbrace{\tilde{\nu}^{2}(\lambda_{j}) - \Phi^{-1}(1 - \alpha) \rbrace{\frac{2A_{j}}{M_{j}}}^{1/2} \quad ,\quad \tilde{\nu}^{2}(\lambda_{j}) + \Phi^{-1}(1 - \alpha) \rbrace{\frac{2A_{j}}{M_{j}}}^{1/2}} \end{align*} Above, $ A_{j} $ is the integral of the squared spectral density function of the wavelet coefficients $ \mathbf{W_{j}} $, excluding any boundary coefficients. As shown in Percival and Walden (2000), $ A_{j} $ can be estimated from the sample autocovariances of $ \mathbf{W_{j}} $, excluding any boundary coefficients. In other words, defining $$ \hat{s}_{j,\tau} = \frac{1}{M_{j}}\xsum{t}{\kappa_{j} + 1}{T - |\tau|}{W_{t, j}W_{t + |\tau|, j}} \, \quad 0 \leq |\tau| \leq M_{j} - 1 $$ the estimator is given by $$ \hat{A}_{j} = \frac{\hat{s}_{j,0}^{2}}{2} + \xsum{\tau}{1}{M_{j} - 1}{\hat{s}_{j,\tau}^{2}} $$ Unfortunately, as argued in Priestley (1981), there is no condition that prevents the lower bound of the confidence interval above from becoming negative. Accordingly, Percival and Walden (2000) suggest the approximation: $$ \frac{\eta \tilde{\nu}^{2}(\lambda_{j})}{\nu^{2}(\lambda_{j})} \stackrel{d}{=} \chi^{2}_{\eta} $$ where $ \eta $ is known as the <b>equivalent degrees of freedom</b> (EDOF) and is formalized as: $$ \eta = \frac{2 E\rbrace{\tilde{\nu}^{2}(\lambda_{j})}^{2}}{\var \rbrace{\tilde{\nu}^{2}(\lambda_{j})}} $$ The confidence interval of interest with coverage $ 1 - 2\alpha $ can now be stated as: \begin{align*} \sbrace{\frac{\eta \tilde{\nu}^{2}(\lambda_{j})}{Q_{\eta}(1 - \alpha)} \,,\, \frac{\eta \tilde{\nu}^{2}(\lambda_{j})}{Q_{\eta}(\alpha)}} \end{align*} where $ Q_{\eta}(\alpha) $ denotes the $ \alpha $-quantile of the $ \chi^{2}_{\eta} $ distribution.<br /><br /> Remaining is the issue of EDOF estimation.
Percival and Walden (2000) offer two suggestions: \begin{align*} \eta_{1} \equiv \frac{M_{j}\tilde{\nu}^{4}(\lambda_{j})}{\hat{A}_{j}}\\ \eta_{2} \equiv \max \cbrace{2^{-j}M_{j} \, , \, 1} \end{align*} The first estimate above relies on large sample theory and in practice requires a sample of at least $ T = 128 $ to yield a decent approximation. The second assumes that the SDF of the wavelet coefficients at scale $ \lambda_{j} $ has a band-pass form. See Percival and Walden (2000) for details.<br /><br /> <h4 class="subseccol", id="sec2.4">Multiresolution Analysis</h4> Similar to Fourier, spline, and linear approximations, a principal feature of the DWT is the ability to approximate an input series as a function of wavelet basis functions. In wavelet theory this is known as <b>multiresolution analysis</b> (MRA) and refers to the approximation of an input series at each scale (and up to all scales) $ \lambda_{j} $.<br /><br /> To formalize matters, recall that $ \mathbf{W} = \mathcal{W}\mathbf{y} $ and partition the rows of $ \mathcal{W} $ commensurate with the row partition of $ \mathbf{W} $ into $ \mathbf{W}_{1}, \ldots, \mathbf{W}_{M} $ and $ \mathbf{V}_{M} $. In other words, let $ \mathcal{W} = \sbrace{\mathcal{W}_{1}, \ldots, \mathcal{W}_{M}, \mathcal{V}_{M}}^{\top} $, where $ \mathcal{W}_{j} $ has dimension $ 2^{-j}T \times T $ and $ \mathcal{V}_{M} $ has dimension $ 2^{-M}T \times T $. Then, note that for any $ m \in \cbrace{1, \ldots, M} $: \begin{align*} \mathbf{y} &= \mathcal{W}^{\top}\mathbf{W}\\ &= \xsum{j}{1}{m}{\mathcal{W}^{\top}_{j}\mathbf{W}_{j}} + \mathcal{V}^{\top}_{m}\mathbf{V}_{m}\\ &= \xsum{j}{1}{m}{\mathcal{D}_{j}} + \mathcal{S}_{m} \end{align*} where $ \mathcal{D}_{j} = \mathcal{W}^{\top}_{j} \mathbf{W}_{j} $ and $ \mathcal{S}_{m} = \mathcal{V}^{\top}_{m} \mathbf{V}_{m} $ are $ T- $ dimensional vectors, respectively called the $ j^{\text{th}} $ level <b>detail</b> and $ m^{\text{th}} $ level <b>smooth</b> series.
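To make the $ m = 1 $ case of the identity $ \mathbf{y} = \mathcal{D}_{1} + \mathcal{S}_{1} $ concrete, here is a minimal Python sketch using the Haar filter. The filter choice, function names, and NumPy implementation are illustrative assumptions, not EViews functionality.

```python
import numpy as np

def haar_dwt_level1(y):
    """One stage of the orthonormal Haar DWT: wavelet coefficients W
    (high-pass: changes) and scaling coefficients V (low-pass: averages)
    for an input of dyadic length."""
    a, b = y[0::2], y[1::2]
    W = (b - a) / np.sqrt(2.0)
    V = (b + a) / np.sqrt(2.0)
    return W, V

def haar_mra_level1(W, V):
    """Synthesize the level-1 detail and smooth series, i.e. the m = 1
    case of the identity y = D_1 + S_1."""
    D = np.empty(2 * len(W))
    S = np.empty(2 * len(V))
    D[0::2] = -W / np.sqrt(2.0)
    D[1::2] = W / np.sqrt(2.0)
    S[0::2] = V / np.sqrt(2.0)
    S[1::2] = V / np.sqrt(2.0)
    return D, S

y = np.array([2.0, 4.0, -1.0, 3.0, 0.0, 0.0, 5.0, 1.0])
W, V = haar_dwt_level1(y)
D1, S1 = haar_mra_level1(W, V)
# the input is recovered exactly (y = D1 + S1), and since the transform
# is orthonormal, sum(W**2) + sum(V**2) equals sum(y**2)
```

The detail series captures the pairwise changes in the input, while the smooth series carries the pairwise averages, mirroring the high-pass/low-pass split described above.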
Furthermore, since the high-pass (low-pass) wavelet coefficients are associated with changes (averages) at scale $ \lambda_{j} $, the detail and smooth series are associated with changes and averages at scale $ \lambda_{j} $, respectively, in the input series $ \mathbf{y} $.<br /><br /> The MRA is typically used to derive approximations for the original series using its lower and upper frequency components. Since upper frequency components are associated with transient features and are captured by the wavelet coefficients, the detail series will in fact extract those features of the original series which are typically associated with ``noise''. Alternatively, since lower frequency components are associated with perpetual features and are captured by the scaling coefficients, the smooth series will in fact extract those features of the original series which are typically associated with the ``signal''.<br /><br /> It's worth noting that because wavelet filtering can result in boundary coefficients, the detail and smooth series will have observations affected by them. These affected observations are given as: \begin{align*} \text{DWT} &\quad t = \begin{cases} 1, \ldots, 2^{j}L_{j}^{\prime} &\quad \text{lower portion}\\ T - \rbrace{L_{j} + 1 - 2^{j}} + 1, \ldots, T &\quad \text{upper portion} \end{cases}\\ \\ \text{MODWT} &\quad t = \begin{cases} 1, \ldots, L_{j} &\quad \text{lower portion}\\ T - L_{j} + 1, \ldots, T &\quad \text{upper portion} \end{cases} \end{align*} <br /><br /> <h3 class="seccol", id="sec3">Practical Considerations</h3> The exposition above introduces basic theory underlying wavelet analysis. Nevertheless, there are several practical (empirical) considerations which should be addressed.
We focus here on three in particular: <ul> <li>Wavelet filter selection</li> <li>Handling boundary conditions</li> <li>Non-dyadic series length adjustments</li> </ul><br /> <h4 class="subseccol", id="sec3.1">Choice of Wavelet Filter</h4> The type of wavelet filter is typically chosen to mimic the data to which it is applied. Shorter filters don't approximate the ideal band pass filter well, but longer ones do. On the other hand, if the data derives from piecewise constant functions, the Haar wavelet or other shorter wavelets may be more appropriate. Alternatively, if the underlying data is smooth, longer filters may be more appropriate. In this regard, it's important to note that longer filters expose more coefficients to boundary condition effects than shorter ones. Accordingly, the rule of thumb strategy is to use the filter with the smallest length that gives reasonable results. Furthermore, since the MODWT is not orthogonal and its wavelet coefficients are correlated, wavelet filter choice is not as vital as in the case of the orthogonal DWT. Nevertheless, if alignment to time is important (i.e. zero phase filters), the least asymmetric family of filters may be a good choice.<br /><br /> <h4 class="subseccol", id="sec3.2">Handling Boundary Conditions</h4> As previously mentioned, wavelet filters exhibit boundary conditions due to circular recycling of observations. Although this may be an appropriate assumption for some series such as those naturally exhibiting cyclical effects, it is not appropriate in all circumstances. In this regard, another popular approach is to reflect the original series to generate a series of length $ 2T $. In other words, wavelet filtering proceeds on observations $ y_{1}, \ldots, y_{T}, y_{T}, y_{T-1}, \ldots, y_{1} $. 
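A minimal sketch of this reflection extension follows; the helper name is illustrative and not an EViews function.

```python
import numpy as np

def reflect_extend(y):
    """Extend a length-T series to length 2T by reflection, so that
    circular wavelet filtering of the extended series joins y_T to y_T
    and y_1 to y_1 smoothly instead of wrapping y_T around to y_1."""
    y = np.asarray(y)
    return np.concatenate([y, y[::-1]])

y = np.array([1.0, 2.0, 3.0, 4.0])
ext = reflect_extend(y)   # -> [1, 2, 3, 4, 4, 3, 2, 1]
```

Filtering the extended series and retaining the first $ T $ coefficients avoids the artificial jump that circular (periodic) treatment can introduce at the boundary.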
In either case, any proper wavelet analysis ought, at the very least, to quantify how many wavelet coefficients are affected by boundary conditions.<br /><br /> <h4 class="subseccol", id="sec3.3">Adjusting Non-dyadic Length Time Series</h4> Recall that the DWT requires an input series of dyadic length. Naturally, this condition is rarely satisfied in practice. In this regard, there are two broad strategies. Either shorten the input series to dyadic length at the expense of losing observations, or ``pad'' the input series with observations to achieve dyadic length. In the context of the latter strategy, although the choice of padding values is ultimately arbitrary, there are three popular choices, none of which has proven superior: <ul> <li>Pad with zeros</li> <li>Pad with mean</li> <li>Pad with median</li> </ul><br /><br /> <h3 class="seccol", id="sec4">Wavelet Thresholding</h3> A key objective in any empirical work is to discriminate noise from useful information. In this regard, suppose the observed time series is $ y_{t} = x_{t} + \epsilon_{t} $, where $ x_{t} $ is an unknown signal of interest obscured by the presence of unwanted noise $ \epsilon_{t} $. Traditionally, signal discernment was achieved using discrete Fourier transforms. Naturally, this assumes that any signal is an infinite superposition of sinusoidal functions; a strong assumption in empirical econometrics, where most data exhibits unit roots, jumps, kinks, and various other non-linearities.<br /><br /> The principle behind wavelet-based signal extraction, otherwise known as <b>wavelet shrinkage</b>, is to <i>shrink</i> any wavelet coefficients not exceeding some <b>threshold</b> to zero and then exploit the MRA to synthesize the signal of interest using the modified wavelet coefficients.
In other words, only those wavelet coefficients associated with very pronounced spectra are retained, with the additional benefit of deriving a very sparse wavelet matrix.<br /><br /> To formalize the idea, let $ \mathbf{x} = \series{x}{t}{1}{T} $ and $ \mathbf{\epsilon} = \series{\epsilon}{t}{1}{T} $. Next, recall that the DWT can be represented as a $ T\times T $ orthonormal matrix $ \mathcal{W} $, yielding: $$ \mathbf{z} \equiv \mathcal{W}\mathbf{y} = \mathcal{W}\mathbf{x} + \mathcal{W}\mathbf{\epsilon} $$ where $ \mathcal{W}\mathbf{\epsilon} \sim N(0, \sigma^{2}_{\epsilon}) $. The idea now is to shrink any coefficients not surpassing a threshold to zero.<br /><br /> <h4 class="subseccol", id="sec4.1">Thresholding Rule</h4> While there are several thresholding rules, by far, the two most popular are: <ul> <li><b>Hard Thresholding Rule</b> (``kill/keep'' strategy), formalized as: $$ \delta_{\eta}^{H}(x) = \begin{cases} x \quad \text{if } |x| > \eta\\ 0 \quad \text{otherwise} \end{cases} $$ </li> <li> <b>Soft Thresholding Rule</b>, formalized as: $$ \delta_{\eta}^{S}(x) = \sign(x)\max\cbrace{0 \,,\, |x| - \eta} $$ </li> </ul> where $ \eta $ is the threshold limit.<br /><br /> <h4 class="subseccol", id="sec4.2">Optimal Threshold</h4> The threshold value $ \eta $ is key to wavelet shrinkage. In particular, optimal thresholding is achieved when $ \eta = \sigma_{\epsilon} $ where $ \sigma_{\epsilon} $ is the standard deviation of the noise process $ \mathbf{\epsilon} $. In this regard, several threshold strategies have emerged over the years. <ul> <li> <b>Universal Threshold</b>, proposed in Donoho and Johnstone (1994) and formalized as: $$ \eta^{\text{U}} = \hat{\sigma}_{\epsilon} \sqrt{2\log(T)} $$ where $ \hat{\sigma}_{\epsilon} $ is estimated using wavelet coefficients only at scale $ \lambda_{1} $, regardless of what scale is under consideration.
When this threshold rule is coupled with soft thresholding, the combination is commonly referred to as <b>VisuShrink</b>.<br /><br /> </li> <li> <b>Adaptive Universal Threshold</b> is identical to the universal threshold above, but estimates $ \hat{\sigma}_{\epsilon} $ using those wavelet coefficients associated with the scale under consideration. In other words: $$ \eta^{\text{AU}} = \hat{\sigma}_{\epsilon, j} \sqrt{2\log(T)} $$ where $ \hat{\sigma}_{\epsilon, j} $ is the standard deviation of the wavelet coefficients at scale $ \lambda_{j} $.<br /><br /> </li> <li> <b>Minimax Estimation</b>, proposed in Donoho and Johnstone (1994), is formalized as the solution to: $$ \inf_{\hat{\mathbf{x}}}\sup_{\mathbf{x}} R(\hat{\mathbf{x}}, \mathbf{x}) $$ where $ R(\hat{\mathbf{x}}, \mathbf{x}) $ denotes the risk (expected loss) of the estimator $ \hat{\mathbf{x}} $. Unfortunately, a closed-form solution is not available, although tabulated values exist. Furthermore, when this threshold is coupled with soft thresholding, the combination is commonly referred to as <b>RiskShrink</b>.<br /><br /> </li> <li> <b>Stein's Unbiased Risk Estimate</b> (SURE), formalized as the solution to: $$ \min_{\hat{\mathbf{\mu}}} \norm{\mathbf{\mu} - \hat{\mathbf{\mu}}}^{2} $$ where $ \mathbf{\mu} = (\mu_{1}, \ldots, \mu_{s})^{\top} $ and $ \mu_{k} $ is the mean of some variable of interest $ q_{k} \sim N(\mu_{k}, 1) $, for $ k = 1, \ldots, s $. In the framework of wavelet coefficients, $ q_{k} $ would represent the standardized wavelet coefficients at a given scale.<br /><br /> Furthermore, while the optimal threshold $ \eta $ based on this rule depends on the thresholding rule used, the solution may not be unique and so the SURE threshold value is the minimum such $ \eta $. In the case of the soft thresholding rule, the solution was proposed in Donoho and Johnstone (1994).
Alternatively, for the hard thresholding rule, the solution was proposed in Jansen (2010).<br /><br /> </li> <li> <b>False Discovery Rate</b> (FDR), proposed in Abramovich and Benjamini (1995), determines the threshold value through a multiple hypotheses testing problem. The procedure is summarized in the following algorithm:<br /><br /> <ol> <li> For each $ W_{t,j} \in \mathbf{W}_{j} $ consider the hypothesis $ H_{t,j}: W_{t,j} = 0 $ and its associated two-sided $ p- $value: $$ p_{t,j} = 2\rbrace{1 - \Phi\rbrace{\frac{|W_{t,j}|}{\sigma_{\epsilon, j}}}} $$ where, as before, $ \sigma_{\epsilon, j} $ is the standard deviation of the wavelet coefficients at scale $ \lambda_{j} $ and $ \Phi(\cdot) $ is the standard Gaussian CDF.<br /><br /> </li> <li> Sort the $ p_{t,j} $ in ascending order so that: $$ p_{(1)} \leq p_{(2)} \leq \ldots \leq p_{(m_{j})} $$ where $ m_{j} $ denotes the cardinality (number of elements) in $ \mathbf{W}_{j} $. For instance, when $ \mathbf{W}_{j} $ are derived from a DWT, then $ m_{j} = T/2^{j} $.<br /><br /> </li> <li> Let $ \alpha $ define the significance level of the hypothesis tests and let $ i^{\star} $ denote the largest $ i \in \cbrace{1, \ldots, m_{j}} $ such that $ p_{(i)} \leq (\frac{i}{m_{j}})\alpha $. For this $ i^{\star} $, the quantity: $$ \eta^{\text{FDR}}_{j} = \sigma_{\epsilon, j}\Phi^{-1}\rbrace{1 - \frac{p_{(i^{\star})}}{2}} $$ is the optimal threshold for wavelet coefficients at scale $ \lambda_{j} $.<br /> </li> </ol> </li> </ul> For further details, see Donoho, Johnstone, et al. (1998), Gençay, Selçuk, and Whitcher (2001), and Percival and Walden (2000).<br /><br /> <h4 class="subseccol", id="sec4.3">Wavelet Coefficient Variance</h4> Before summarizing the entire threshold procedure, there remains the issue of how to estimate the variance of the wavelet coefficients, $ \sigma^{2}_{\epsilon} $.
If the assumption is that the observed data $ \mathbf{y} $ is obscured by some noise process $ \mathbf{\epsilon} $, the usual estimator of variance will exhibit extreme sensitivity to outlying observations. Accordingly, let $ \mu_{j} $ and $ \zeta_{j} $ denote the mean and median, respectively, of the wavelet coefficients $ \mathbf{W}_{j} $ at scale $ \lambda_{j} $, and let $ m_{j} $ denote its cardinality (total number of coefficients at said scale). Then, several common estimators have been proposed in the literature: <ul> <li> <b>Mean Absolute Deviation</b> formalized as: $$ \hat{\sigma}_{\epsilon, j} = \frac{1}{m_{j}}\xsum{i}{1}{m_{j}}{|W_{i, j} -\mu_{j}|} $$<br /><br /> </li> <li> <b>Median Absolute Deviation</b> formalized as: $$ \hat{\sigma}_{\epsilon, j} = \med\rbrace{|W_{1, j} -\zeta_{j}|, \ldots, |W_{m_{j}, j} -\zeta_{j}|} $$<br /><br /> </li> <li> <b>Mean Median Absolute Deviation</b> formalized as: $$ \hat{\sigma}_{\epsilon, j} = \frac{1}{m_{j}}\xsum{i}{1}{m_{j}}{|W_{i, j} -\zeta_{j}|} $$<br /><br /> </li> <li> <b>Median (Gaussian)</b> formalized as: $$ \hat{\sigma}_{\epsilon, j} = \frac{\med\rbrace{|W_{1, j}|, \ldots, |W_{m_{j}, j}|}}{0.6745} $$ where $ 0.6745 \approx \Phi^{-1}(0.75) $ is the third quartile of the standard Gaussian distribution.<br /><br /> </li> </ul> <h4 class="subseccol", id="sec4.4">Thresholding Implementation</h4> The previous sections were devoted to describing thresholding rules and optimal threshold values. Here the focus is on summarizing thresholding implementations.<br /><br /> Effectively all wavelet thresholding procedures follow the algorithm below: <ol> <li> Compute a wavelet transformation of the original data up to some scale $ J^{\star} < J $.
In other words, derive a partial wavelet transform and obtain the wavelet and scaling coefficients $ \mathbf{W}_{1}, \ldots, \mathbf{W}_{J^{\star}}, \mathbf{V}_{J^{\star}} $.<br /><br /> </li> <li> Select an optimal threshold $ \eta $ from one of the methods discussed earlier.<br /><br /> </li> <li> Threshold the coefficients at each scale $ \lambda_{j} $ for $ j \in \cbrace{1, \ldots, J^{\star}} $ using the threshold value selected in Step 2 and some thresholding rule (hard or soft). This will generate a set of modified (thresholded) wavelet coefficients $ \mathbf{W}^{\text{(T)}}_{1}, \ldots, \mathbf{W}^{\text{(T)}}_{J^{\star}} $. Observe that scaling coefficients $ \mathbf{V}_{J^{\star}} $ are <b>not</b> thresholded.<br /><br /> </li> <li> Use MRA with the thresholded coefficients to reconstruct the signal (original data) as follows: \begin{align*} \hat{\mathbf{y}} &= \xsum{j}{1}{J^{\star}}{\mathcal{W}^{\top}_{j}\mathbf{W}^{\text{(T)}}_{j}} + \mathcal{V}^{\top}_{J^{\star}}\mathbf{V}_{J^{\star}}\\ &= \xsum{j}{1}{J^{\star}}{\mathcal{D}^{\text{(T)}}_{j}} + \mathcal{S}_{J^{\star}} \end{align*}<br /><br /> </li> </ol> <h3 class="seccol", id="sec5">Conclusion</h3> In this first entry of our series on wavelets, we provided a theoretical overview of the most important aspects of wavelet analysis. In <a href='http://blog.eviews.com/2020/12/wavelet-analysis-part-ii-applications.html'>Part II</a>, we will see how to apply these concepts by using the new wavelet features released with EViews 12.<br /><br /><br /> <hr /> <h3 class="seccol", id="sec6">References</h3> <ol class="bib2xhtml"> <li id="abramovich-1995"> Abramovich F and Benjamini Y (1995), <i>"Thresholding of wavelet coefficients as multiple hypotheses testing procedure"</i>, In Wavelets and Statistics, pp. 5-14. Springer. </li> <li id="daubechies-1992"> Daubechies I (1992), <i>"Ten lectures on wavelets"</i>, CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM. 
</li> <li id="donoho-1994"> Donoho DL and Johnstone IM (1994), <i>"Ideal spatial adaptation by wavelet shrinkage"</i>, Biometrika. Vol. 81(3), pp. 425-455. Oxford University Press. </li> <li id="donoho-1995"> Donoho DL and Johnstone IM (1995), <i>"Adapting to unknown smoothness via wavelet shrinkage"</i>, Journal of the American Statistical Association. Vol. 90(432), pp. 1200-1224. Taylor & Francis Group. </li> <li id="donoho-1998"> Donoho DL, Johnstone IM and others (1998), <i>"Minimax estimation via wavelet shrinkage"</i>, The Annals of Statistics. Vol. 26(3), pp. 879-921. Institute of Mathematical Statistics. </li> <li id="gencay-2001"> Gençay R, Selçuk F and Whitcher BJ (2001), <i>"An introduction to wavelets and other filtering methods in finance and economics"</i> Academic Press. </li> <li id="jansen-2010"> Jansen M (2010), <i>"Minimum risk methods in the estimation of unknown sparsity"</i>, Technical report. </li> <li id="mallat-1989"> Mallat S (1989), <i>"A theory for multiresolution signal decomposition: The wavelet representation"</i>, IEEE Transactions on Pattern Analysis and Machine Intelligence. Vol. 11(7), pp. 674-693. </li> <li id="percival-2000"> Percival D and Walden A (2000), <i>"Wavelet methods for time series analysis"</i> Vol. 4. Cambridge University Press. </li> <li id="priestley-1981"> Priestley MB (1981), <i>"Spectral analysis and time series"</i> Academic Press. 
</li> </ol></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com1tag:blogger.com,1999:blog-6883247404678549489.post-6385533183570778612020-07-16T09:48:00.003-07:002020-07-17T07:32:26.402-07:00Time Series Methods for Modelling the Spread of Epidemics<style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { border: 1px solid black; } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"> </script> <span style="font-family: "verdana" sans-serif"> <i>Authors and guest post by Eren Ocakverdi</i><br /><br /> This blog piece intends to introduce two new add-ins (i.e. <a href='http://www.eviews.com/Addins/seirmodel.aipz'>SEIRMODEL</a> and <a href='http://www.eviews.com/Addins/tsepigrowth.aipz'>TSEPIGROWTH</a>) to EViews users’ toolbox and help close the gap between epidemiological models and time series methods from a practitioner’s point of view. 
<a name='more'></a><br /><br /> <h3>Table of Contents</h3> <ol> <li><a href="#sec1">Introduction</a> <li><a href="#sec2">Susceptible-Exposed-Infected-Recovered (SEIR) model</a> <li><a href="#sec3">Observational Models</a> <li><a href="#sec4">Application to COVID-19 Data from Turkey</a> <li><a href="#sec5">Files</a> <li><a href="#sec6">References</a> </ol><br /> <h3 id="sec1">Introduction</h3> In mathematical epidemiology, the spread of infectious diseases is usually described through compartmental models rather than observational time series models, since the analytical derivation of their dynamics is quite straightforward. These are merely structural models that divide the population into several states and then define the equations that govern the transition behavior from one state to another. In other words, <i>state space</i> models.<br /><br /> <h3 id="sec2">Susceptible-Exposed-Infected-Recovered (SEIR) model</h3> I have written an add-in (<a href='http://www.eviews.com/Addins/seirmodel.aipz'>SEIRMODEL</a>) for interested EViews users who want to carry out their own analyses and gain basic insights into the systemic nature of an epidemic. The add-in implements a deterministic version of the SEIR model, which does not take into account vital dynamics like birth and death. Still, it offers a simplified framework for those who are not familiar with these concepts.<br /><br /> In order to run simulations, users need to provide required inputs (e.g.
population size, calibration parameters, initial conditions etc.), details of which can be found in the documentation file that comes with the add-in:<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/tsepigrowth/seir_dialog.png"><img height="auto" src="http://www.eviews.com/blog/tsepigrowth/seir_dialog.png" title="SEIR Add-In Dialog" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1: SEIR Add-In Dialog</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 1 :::::::::: --> The default output is a chart showing the evolution of compartments/states during the spread of the epidemic. You can also save these series for further analysis.<br/><br/> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/tsepigrowth/seir_output.png"><img height="auto" src="http://www.eviews.com/blog/tsepigrowth/seir_output.png" title="SEIR Add-In: Output" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2: SEIR Add-In Output</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 2 :::::::::: --> <h3 id="sec3">Observational Models</h3> Structural modelling of epidemics becomes increasingly complex when the heterogeneity in the population, mobility issues, interactions, etc. are considered in the computations. Functions fitted to observed data for calibration purposes are mostly nonlinear, which can further complicate the estimation process. Harvey and Kattuman (2020) recently proposed useful observational time series methods, particularly for generalized logistic and Gompertz growth curves.
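To fix ideas, the two families of curves can be sketched as follows. This parameterization is one common convention and purely an illustration; the exact specifications used by the add-in follow Harvey and Kattuman (2020).

```python
import numpy as np

def gompertz(t, K, c, gamma):
    """Gompertz growth curve: cumulative cases saturate at K; gamma
    governs the growth rate (illustrative parameterization)."""
    return K * np.exp(-c * np.exp(-gamma * t))

def logistic(t, K, c, gamma):
    """Simple logistic growth curve with the same saturation level K."""
    return K / (1.0 + c * np.exp(-gamma * t))

t = np.arange(200.0)
g = gompertz(t, K=100.0, c=5.0, gamma=0.1)
l = logistic(t, K=100.0, c=5.0, gamma=0.1)
# both curves increase monotonically toward the saturation level K = 100
```

The key qualitative difference is symmetry: the logistic curve is symmetric about its inflection point (at $ K/2 $), while the Gompertz curve is asymmetric, inflecting at $ K/e $ and approaching saturation more slowly.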
I have written an add-in (<a href='http://www.eviews.com/Addins/tsepigrowth.aipz'>TSEPIGROWTH</a>) that implements the methods outlined in the paper.<br/><br/> Suppose we wanted to fit these nonlinear curves to the number of infected individuals from the simulation of our earlier SEIR model:<br /><br /> <!-- :::::::::: FIGURES 3a and 3b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 3a :::::::::: --> <center> <a href="http://www.eviews.com/blog/tsepigrowth/seir_logistic.png"><img height="auto" src="http://www.eviews.com/blog/tsepigrowth/seir_logistic.png" title="SEIR: Generalized Logistic Fit" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 3b :::::::::: --> <center> <a href="http://www.eviews.com/blog/tsepigrowth/seir_gompertz.png"><img height="auto" src="http://www.eviews.com/blog/tsepigrowth/seir_gompertz.png" title="SEIR: Gompertz Growth Curve Fit" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3a: SEIR: Generalized Logistic Fit</small> </center> </td> <td class="nb"> <center> <small>Figure 3b: SEIR: Gompertz Growth Curve Fit</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 3a and 3b :::::::::: --> Above, c(4) denotes the growth rate parameter. At this point I would also suggest that EViews users try the <a href="http://www.eviews.com/Addins/GBASS.aipz">GBASS</a> add-in, which incorporates the generalized BASS model developed for modelling how new products (or new viruses for that matter!)
get adopted into a population.<br /><br /> If we wanted to take the other avenue offered by Harvey and Kattuman (2020) and estimate these parameters via observational methods, then we could simply run the add-in:<br /><br /> <!-- :::::::::: FIGURE 4 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/tsepigrowth/tsepigrowth_dialog.png"><img height="auto" src="http://www.eviews.com/blog/tsepigrowth/tsepigrowth_dialog.png" title="TSEPIGROWTH Add-In: Dialog" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 4: TSEPIGROWTH Add-In Dialog</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 4 :::::::::: --> Output from the state space specification of these models is as follows:<br /><br /> <!-- :::::::::: FIGURES 5a and 5b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 3a :::::::::: --> <center> <a href="http://www.eviews.com/blog/tsepigrowth/tsepigrowth_logistic_ss.png"><img height="auto" src="http://www.eviews.com/blog/tsepigrowth/tsepigrowth_logistic_ss.png" title="TSEPIGROWTH: Generalized Logistic SS Model" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 3b :::::::::: --> <center> <a href="http://www.eviews.com/blog/tsepigrowth/tsepigrowth_gompertz_ss.png"><img height="auto" src="http://www.eviews.com/blog/tsepigrowth/tsepigrowth_gompertz_ss.png" title="TSEPIGROWTH: Gompertz Growth Curve SS Model" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5a: TSEPIGROWTH: Generalized Logistic SS Model</small> </center> </td> <td class="nb"> <center> <small>Figure 5b: TSEPIGROWTH: Gompertz Growth Curve SS Model</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 5a and 5b :::::::::: --> Here, the final value of the state variable <i>CHANGE</i> corresponds to the growth rate parameter and is more or less close to that of the fitted nonlinear curves.<br/><br/> <h3
id="sec4">Application to COVID-19 Data From Turkey</h3> Examples above may be important or useful from a pedagogical point of view, but we need to try these models on actual data to gain more insight from a practical perspective. Naturally, COVID-19 data would be the most recent and most appropriate place to start. Users can visit the <a href='http://blog.eviews.com/2020/03/mapping-covid-19.html'>previous blog post</a> to learn how to fetch COVID-19 data from various sources. Here, I’ll use another data source provided by the WHO.<br /><br /> First, we fit a Gompertz curve to the level and make forecasts until the end of year. Next, we do the same exercise with the observational counterparts of the Gompertz model that focus on estimation of the growth rate.<br /><br /> The chart below visually compares the fitted values of growth:<br /><br /> <!-- :::::::::: FIGURE 6 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/tsepigrowth/grfit.png"><img height="auto" src="http://www.eviews.com/blog/tsepigrowth/grfit.png" title="Gompertz Fit Curves" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 6: Gompertz Fit Curves</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 6 :::::::::: --> The next plot displays the forecasted values for the level: <!-- :::::::::: FIGURE 7 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/tsepigrowth/grfcast.png"><img height="auto" src="http://www.eviews.com/blog/tsepigrowth/grfcast.png" title="Gompertz Forecast Curves" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 7: Gompertz Forecast Curves</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 7 :::::::::: --> These forecasts indicate different saturation levels, of which the nonlinear curve is the lowest. 
This is mainly because the inflection point of the fitted nonlinear curve implies levelling off at an earlier date. The first observational model has a deterministic trend, but performs better since it focuses on the growth rate. There is an obvious change in trend at the beginning of June, when Turkey announced the first phase of COVID-19 restriction easing, marking the start of the normalization process. Observational models allow us to model this change explicitly as a slope intervention: <!-- :::::::::: FIGURE 8 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/tsepigrowth/policyss.png"><img height="auto" src="http://www.eviews.com/blog/tsepigrowth/policyss.png" title="Policy Intervention SS Model" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 8: Policy Intervention SS Model</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 8 :::::::::: --> The coefficient <i>C(3)</i> verifies that the growth rate has risen significantly as of June. The dynamic version of the observational Gompertz model fits a flexible trend to the data, so it adapts to changes in growth rates without any need to model the intervention explicitly. It also allows analysis of the policy/intervention impact from a counterfactual perspective. The plot below compares the out-of-sample forecasts of the dynamic model before and after the normalization period. The shift in the forecasted level of total cases is obvious!
<!-- :::::::::: FIGURE 9 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/tsepigrowth/policygrfcast.png"><img height="auto" src="http://www.eviews.com/blog/tsepigrowth/policygrfcast.png" title="Policy Intervention Out of Sample Forecast" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 9: Policy Intervention Out of Sample Forecast</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 9 :::::::::: --> <h3 id="sec5">Files</h3> <ul> <li><a href="http://www.eviews.com/blog/tsepigrowth/tsepigrowth_blog.prg">tsepigrowth_blog.prg</a> </ul> <br /><br /> <hr /> <h3 id="sec6">References</h3> <ol class="bib2xhtml"> <li><a name="harvey-2020"></a>Harvey, A. C. and Kattuman, P.: Time Series Models Based on Growth Curves with Applications to Forecasting Coronavirus <cite>Covid Economics: Vetted and Real-Time Papers</cite>, 24(1) 126–157, 2020. </li> </ol></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com3tag:blogger.com,1999:blog-6883247404678549489.post-33436183732368941762020-04-01T06:44:00.000-07:002020-04-01T06:44:12.331-07:00Mapping COVID-19: Follow-up<style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { border: 1px solid black; } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" 
src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"> </script> <span style="font-family: "verdana" sans-serif"> As a follow up to our <a href="http://blog.eviews.com/2020/03/mapping-covid-19.html">previous blog entry</a> describing how to import Covid-19 data into EViews and produce some maps/graphs of the data, this post will produce a couple more graphs similar to ones we've seen become popular across social media in recent days. <a name='more'></a><br /><br /> <h3>Table of Contents</h3> <ol> <li><a href="#sec1">Deaths Since First Death</a> <li><a href="#sec2">One Week Difference</a> </ol><br /> <h3 id="sec1">Deaths Since First Death</h3> The first is a graph showing the 3 day moving average of the number of deaths per day since the first death was recorded in a country, for countries with a current number of deaths greater than 160:<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/3dma.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/3dma.png" title="3-Day moving average" width="480" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1: 3-Day moving average</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 1 :::::::::: --> The graph shows that for most countries the growth rate of deaths (approximated by using log-scaling) is increasing, but at a slower rate. 
The code to produce this graph, including importing the death data from Johns Hopkins is:<br /><br /> <pre style="overflow:auto"><br /> <font color="green">'import the death data from Johns Hopkins</font><br /> %url = "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv"<br /> <br /> <font color="green">'load up the url as a new page</font><br /> pageload(page=temp) {%url}<br /> <br /> <font color="green">'stack the page into a 2d panel</font><br /> pagestack(page=stack) _? @ *? * <br /> <br /> <font color="green">'do some renaming and make the date series</font><br /> rename country_region country <br /> rename province_state province<br /> rename _ deaths<br /> series date = @dateval(var01, "MM_DD_YYYY")<br /> <br /> <font color="green">'structure the page </font><br /> pagestruct province country @date(date)<br /> <br /> <font color="green">'delete the original page</font><br /> pagedelete temp<br /> <br /> <font color="green">'create the panel page</font><br /> pagecreate(id, page=panel) country @date @srcpage stack<br /> <br /> <font color="green">'copy the deaths series to the panel page</font><br /> copy(c=sum) stack\deaths * @src @date country @dest @date country<br /> pagedelete stack<br /> <br /> <font color="green">'contract the page to only include countries with greater than 160 deaths</font><br /> pagecontract if @maxsby(deaths,country)>160<br /> <br /> <font color="green">'create a series containing the number of days since the first death was recorded in each country. 
This series is equal to 0 if the number of deaths on a date is equal to the minimum number of deaths for that country (nearly always 0, but for China, the data starts after the first recorded death), and then counts up by one for dates after the minimum.</font><br /> series days = @recode(deaths=@minsby(deaths,country), 0, days(-1)+1)<br /> <br /> <font color="green">'contract the page so that days before the second recorded death in each country are removed</font><br /> pagecontract if days>0<br /> <br /> <font color="green">'restructure the page to be based on this day count rather than actual dates</font><br /> pagestruct(freq=u) @date(days) country<br /> <br /> <font color="green">'set sample to be first 45 days</font><br /> smpl 1 45<br /> <br /> <font color="green">'make a graph of the 3 day moving average of deaths</font><br /> freeze(d_graph) @movav(log(deaths),3).line(m, panel=c)<br /> d_graph.addtext(t, just(c)) Deaths Since First Death\n(3 day moving average, log scale)<br /> d_graph.addtext(br) Days<br /> d_graph.addtext(l) log(deaths)<br /> d_graph.legend columns(5)<br /> d_graph.legend position(-0.6,3.72)<br /> show d_graph<br /> </pre> <h3 id="sec2">One Week Difference</h3> The second graph is an interesting approach plotting the one-week difference in the number of new confirmed cases of COVID-19 against the total number of confirmed cases for each country, with both shown using log-scales. 
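The vertical axis of this plot – new confirmed cases over the past week – is simply a 7-period difference of the cumulative series (confirmed - confirmed(-7) in EViews notation). A minimal pandas sketch of the same transformation, using a hypothetical cumulative series:

```python
import pandas as pd

# Hypothetical cumulative confirmed-case counts for one country, one value per day.
confirmed = pd.Series([1, 2, 4, 7, 11, 16, 22, 29, 37, 46, 56, 67, 79, 92])

# New cases over the past week: the 7-day difference of the cumulative series.
new_past_week = confirmed.diff(7)
```

The first seven observations are missing, since a full week of history is needed before the difference is defined.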
We have only included countries with more than 140 deaths, and have highlighted just three countries – China, South Korea and the US.<br /><br /> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/weekdiff.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/weekdiff.png" title="One week difference" width="480" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2: One week difference</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 2 :::::::::: --> The code to generate this graph is:<br /><br /> <pre style="overflow:auto"><br /> <font color="green">'names of the three topics/files</font><br /> %topics = "confirmed deaths recovered"<br /><br /> <font color="green">'loop through the topics</font><br /> for %topic {%topics}<br /> <br /> <font color="green">'build the url by taking the base url and then adding the topic in the middle</font><br /> %url = "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_" + %topic + "_global.csv"<br /> <br /> <font color="green">'load up the url as a new page</font><br /> pageload(page=temp) {%url}<br /> <br /> <font color="green">'stack the page into a 2d panel</font><br /> pagestack(page=stack_{%topic}) _? @ *? 
*<br /> <br /> <font color="green">'do some renaming and make the date series</font><br /> rename country_region country <br /> rename province_state province<br /> rename _ {%topic}<br /> series date = @dateval(var01, "MM_DD_YYYY")<br /> <br /> <font color="green">'structure the page</font><br /> pagestruct province country @date(date)<br /> <br /> <font color="green">'delete the original page</font><br /> pagedelete temp<br /> next<br /> <br /> <font color="green">'create the panel page</font><br /> pagecreate(id, page=panel) country @date @srcpage stack_{%topic}<br /> <br /> <font color="green">'loop through the topics copying each from the 2D panel</font><br /> for %topic {%topics}<br /> copy(c=sum) stack_{%topic}\{%topic} * @src @date country @dest @date country<br /> pagedelete stack_{%topic}<br /> next<br /> <br /> <font color="green">'contract the page to only include countries with more than 140 deaths</font><br /> pagecontract if @maxsby(deaths, country)>140<br /> <br /> <font color="green">'make a group, called DATA, containing confirmed cases and the one week difference in confirmed cases</font><br /> group data confirmed confirmed-confirmed(-7)<br /> <br /> <font color="green">'set the sample to remove periods with fewer than 50 cases</font><br /> smpl if confirmed > 50<br /> <br /> <font color="green">'produce a panel plot of confirmed against 7 day difference in confirmed</font><br /> freeze(c_graph) data.xyline(panel=c)<br /> <br /> <font color="green">' Add titles</font><br /> c_graph.addtext(t) "COVID-19: New vs. 
Total Cases\n(Countries with >140 deaths)"<br /> c_graph.addtext(bc, just(c)) "Total Confirmed Cases\n(log scale)"<br /> c_graph.addtext(l, just(c))"New Confirmed Cases (in the past week)\n(log scale)"<br /> c_graph.setelem(1) legend("")<br /> <br /> <font color="green">' Adjust axis to use logs</font><br /> c_graph.axis(b) log<br /> c_graph.axis(l) log<br /> <br /> <font color="green">' Adjust lines - remove lines after this if you want to show all countries</font><br /> c_graph.legend -display<br /> for !i = 1 to @rows(@uniquevals(country))<br /> c_graph.setelem(!i) linewidth(.75) linecolor(@rgb(192,192,192))<br /> next<br /><br /> c_graph.setelem(8) linecolor(@rgb(128,64,0))<br /> c_graph.setelem(3) linecolor(@rgb(0,64,128))<br /> c_graph.setelem(15) linecolor(@rgb(0,128,0))<br /> <br /> <font color="green">'add some text</font><br /> c_graph.addtext(3.29, 1.92, font(Calibri,10)) "S. Korea"<br /> c_graph.addtext(4.87, 2.35, font(Calibri,10)) "China"<br /> c_graph.addtext(5.31, 0.23, font(Calibri,10)) "United States"<br /> <br /> show c_graph<br /> </pre></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com8tag:blogger.com,1999:blog-6883247404678549489.post-11407133675307775742020-03-30T17:28:00.001-07:002020-04-01T07:55:06.046-07:00Mapping COVID-19<style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { border: 1px solid black; } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1] } } }); 
</script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"> </script> <span style="font-family: "verdana" sans-serif"> With the world currently experiencing the Covid-19 crisis, many of our users are working remotely (aside: for details on how to use EViews at home, visit our <a href="http://www.eviews.com/covid">Covid licensing page</a>) anxious to follow data on how the virus is spreading across parts of the world. There are many sources of information on Covid-19, and we thought we’d demonstrate how to fetch some of these sources directly into EViews, and then display some graphics of the data. (Please visit our <a href="http://blog.eviews.com/2020/04/mapping-covid-19-follow-up.html">follow up post</a> for a few more graph examples). <a name='more'></a><br /><br /> <h3>Table of Contents</h3> <ol> <li><a href="#sec1">Johns Hopkins Data</a> <li><a href="#sec2">European Centre for Disease Prevention and Control Data</a> <li><a href="#sec3">New York Times US County Data</a> <li><a href="#sec4">Sneak Peaks</a> </ol><br /> <h3 id="sec1">Johns Hopkins Data</h3> To begin we'll retrieve data from the Covid-19 Time Series collection from <a href="https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data/csse_covid_19_time_series">Johns Hopkins Whiting School of Engineering Center for Systems Science and Engineering</a>. These data are organized into three csv files, one containing confirmed cases, on containing deaths, and one recoveries at both country and state/province levels. Each file is organized such that the first column contains state/province name (where applicable), the second column the country name, the third and fourth contain average latitude and longitude, and then the remaining columns containing daily values.<br /><br /> There are a number of different approaches that could be used to import these data into an EViews workfile. 
We’ll demonstrate an approach that will stack the data into a single panel workfile. We’ll start with importing the confirmed cases data. EViews is able to directly open CSV files over the web using the <b>File->Open->Foreign Data as Workfile</b> menu item:<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/jhopenpath.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/jhopenpath.png" title="JH open path" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1: JH open path</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 1 :::::::::: --> Which results in the following workfile:<br /><br /> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/jhwf.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/jhwf.png" title="JH workfile" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2: JH workfile</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 2 :::::::::: --> Each day of data has been imported into its own series, with the name of the series being the date. 
There are also series containing the country/region name and the province/state name, as well as latitude and longitude.<br /><br /> To create a panel, we’ll want to stack these date series into a single series, which we can do simply with the <b>Proc->Reshape Current Page->Stack in New Page…</b><br /><br /> <!-- :::::::::: FIGURE 3 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/jhstackdialog.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/jhstackdialog.png" title="JH stack data dialog" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3: JH stack data dialog</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 3 :::::::::: --> Since all of the series we wish to stack have a similar naming structure – they all start with an “_” we can instruct EViews to stack using “_?” as the identifier, where ? is a wildcard. This results in the following stacked workfile page:<br /><br /> <!-- :::::::::: FIGURE 4 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/jhstackwf.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/jhstackwf.png" title="JH stack data workfile" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 4: JH stack data workfile</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 4 :::::::::: --> Which is close to what we want, we simply need to tidy up some of the variable names, and instruct EViews to structure the page as a true panel. 
The date information has been imported into the alpha series VAR01, which we can convert into a true date series with:<br /><br /> <pre style="overflow:auto"><br /> series date = @dateval(var01, "MM_DD_YYYY")<br /> </pre> The actual cases data is stored in the series currently named "_", which we can rename to something more meaningful with:<br /><br /> <pre style="overflow:auto"><br /> rename _ cases<br /> </pre> And then finally we can structure the page as a panel by clicking on <b>Proc->Structure/Resize Current Page</b>, selecting Dated Panel as the structure type and filling in the cross-section and date information:<br /><br /> <!-- :::::::::: FIGURE 5 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/jhstructuredialog.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/jhstructuredialog.png" title="JH workfile restructure" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5: JH workfile restructure</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 5 :::::::::: --> When asked if we wish to remove blank values, we select no. We now have a 2-dimensional panel, with two sets of cross-sectional identifiers – one for province/state and the other for country:<br /><br /> <!-- :::::::::: FIGURE 6 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/jh3dpanel.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/jh3dpanel.png" title="JH 2D Panel" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 6: JH 2D Panel</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 6 :::::::::: --> If we want to sum up the state level data to create a traditional panel with just country and time, we can do so by creating a new panel page based upon the indices of this page.
Click on the <b>New Page</b> tab at the bottom of the workfile and select <b>Specify by Identifier Series</b>. In the resulting dialog we enter the country series as the cross-section identifier we wish to keep:<br /><br /> <!-- :::::::::: FIGURE 6 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/jhpagebyid.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/jhpagebyid.png" title="JH page by ID" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 6: JH page by ID</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 6 :::::::::: --> Which results in a panel. We can then copy the cases series from our 2D panel page to the new panel page with standard copy and paste, but ensuring to change the Contraction method to Sum in the Paste Special dialog:<br /><br /> <!-- :::::::::: FIGURE 7 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/jhpastedialog.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/jhpastedialog.png" title="JH paste dialog" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 7: JH paste dialog</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 7 :::::::::: --> <!-- :::::::::: FIGURE 8 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/jhpanelwf.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/jhpanelwf.png" title="JH panel workfile" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 8: JH panel workfile</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 8 :::::::::: --> With the data in a standard panel workfile, all of the standard EViews tools are now available. 
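The Sum contraction chosen in the Paste Special dialog is just a grouped aggregation: province-level observations are summed within each country/date pair. A pandas sketch of the same step, on hypothetical data:

```python
import pandas as pd

# Hypothetical 2D panel page: province-level observations identified by
# country, province and date.
stack = pd.DataFrame({
    "country":  ["US", "US", "US", "US", "CN", "CN"],
    "province": ["NY", "CA", "NY", "CA", "Hubei", "Hubei"],
    "date":     ["d1", "d1", "d2", "d2", "d1", "d2"],
    "cases":    [10, 5, 20, 8, 100, 120],
})

# Sum-contract away the province dimension, leaving a country/date panel.
panel = stack.groupby(["country", "date"], as_index=False)["cases"].sum()
```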
We can view a graph of the cases by country by opening the cases series, clicking on <b>View->Graph</b>, and then selecting <b>Individual cross sections</b> as the <b>Panel option</b>.<br /><br /> <!-- :::::::::: FIGURE 9 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/jhallcxgraph.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/jhallcxgraph.png" title="JH graph of all cross-sections" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 9: JH graph of all cross-sections</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 9 :::::::::: --> This graph may be a little unwieldy, so we can reduce the number of cross-sections down to, say, only countries that have, thus far, experienced more than 10,000 cases by using the smpl command:<br /><br /> <pre style="overflow:auto"><br /> smpl if @maxsby(cases, country_region)>10000<br /> </pre> <!-- :::::::::: FIGURE 9 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/jhmaxsbygraph.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/jhmaxsbygraph.png" title="JH cross-sections with more than 10000 cases" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 9: JH cross-sections with more than 10000 cases</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 9 :::::::::: --> Of course, all of this could have been done in an EViews program, and it could be automated to combine all three data files, ending up with a panel containing cases, deaths and recoveries. 
The following EViews code produces such a panel:<br /><br /> <pre style="overflow:auto"><br /> <font color="green">'close all existing workfiles</font><br /> close @wf<br /> <br /> <font color="green">'names of the three topics/files</font><br /> %topics = "confirmed deaths recovered"<br /> <br /> <font color="green">'loop through the topics</font><br /> for %topic {%topics}<br /> <font color="green">'build the url by taking the base url and then adding the topic in the middle</font><br /> %url = "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_" + %topic + "_global.csv"<br /> <br /> <font color="green">'load up the url as a new page</font><br /> pageload(page=temp) {%url}<br /> <br /> <font color="green">'stack the page into a 3d panel</font><br /> pagestack(page=stack_{%topic}) _? @ *? *<br /> <br /> <font color="green">'do some renaming and make the date series</font><br /> rename country_region country <br /> rename province_state province<br /> rename _ {%topic}<br /><br /> series date = @dateval(var01, "MM_DD_YYYY")<br /><br /> <font color="green">'structure the page</font><br /> pagestruct province country @date(date)<br /> <br /> <font color="green">'delete the original page</font><br /> pagedelete temp<br /><br /> <font color="green">'create the 2D panel page</font><br /> pagecreate(id, page=panel) country @date @srcpage stack_{%topic}<br /> next<br /> <br /> <font color="green">'loop through the topics copying each from the 3D panel into the 2D panel</font><br /> for %topic {%topics}<br /> copy(c=sum) stack_{%topic}\{%topic} * @src @date country @dest @date country<br /> pagedelete stack_{%topic}<br /> next<br /> </pre> <h3 id="sec2">European Centre for Disease Prevention and Control Data</h3> The second repository we'll use is data provided by the <a 
href="https://www.ecdc.europa.eu/en/publications-data/download-todays-data-geographic-distribution-covid-19-cases-worldwide">ECDC's Covid-19 Data site</a>. They provide easy-to-use data for each country, along with population data. Importing these data into EViews is trivial – you can open the XLSX file directly using the <b>File->Open->Foreign Data as Workfile</b> dialog and entering the URL to the XLSX in the <b>File name</b> box:<br /><br /> <!-- :::::::::: FIGURE 10 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/ecdcopenpath.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/ecdcopenpath.png" title="ECDC open path" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 10: ECDC open path</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 10 :::::::::: --> The resulting workfile will look like this:<br /><br /> <!-- :::::::::: FIGURE 11 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/ecdcwf.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/ecdcwf.png" title="ECDC workfile" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 11: ECDC workfile</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 11 :::::::::: --> All we need to do is structure it as a panel, which we can do by clicking on <b>Proc->Structure/Resize Current Page</b> and then entering the cross-section and date identifiers (we also choose to keep an unbalanced panel by unchecking the <b>Balance between starts & ends</b> box).<br /><br /> <!-- :::::::::: FIGURE 12 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/ecdcstructuredialog.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/ecdcstructuredialog.png" title="ECDC structure WF dialog"
width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 12: ECDC structure WF dialog</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 12 :::::::::: --> The result is an EViews panel workfile:<br /><br /> <!-- :::::::::: FIGURE 13 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/ecdcseries.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/ecdcseries.png" title="ECDC series" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 13: ECDC series</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 13 :::::::::: --> The data provided by ECDC contains the number of new cases and deaths each day. Most presentations of Covid-19 data have been with the total number of cases and deaths per country. We can create the totals with the <b>@cumsum</b> function, which will produce the cumulative sum, resetting to zero at the start of each cross-section.<br /><br /> <pre style="overflow:auto"><br /> series ccases = @cumsum(cases)<br /> series cdeaths = @cumsum(deaths)<br /> </pre> With this panel we can perform standard panel data analysis, or produce graphs (see the Johns Hopkins examples above). However, since the ECDC have included standard <a href="https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes">ISO country codes</a> for the countries, we can also tie the data to a geomap.<br /><br /> We found a simple <a href="http://thematicmapping.org/downloads/world_borders.php">shapefile</a> of the world <a href="http://thematicmapping.org/downloads/world_borders.php">online</a>, and downloaded it to our computer.
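The behaviour of @cumsum in a panel – a cumulative sum that restarts at each cross-section – corresponds to a grouped cumulative sum in pandas. A sketch with hypothetical daily counts:

```python
import pandas as pd

# Hypothetical panel of daily new cases for two countries.
df = pd.DataFrame({
    "iso3":  ["ITA", "ITA", "ITA", "ESP", "ESP", "ESP"],
    "cases": [2, 3, 5, 1, 4, 6],
})

# Cumulative total per country, restarting at the first row of each cross-section.
df["ccases"] = df.groupby("iso3")["cases"].cumsum()
```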
In EViews we then click on <b>Object->New Object->GeoMap</b> to create a new geomap, and then drag the <b>.prj</b> file we downloaded onto the geomap.<br /><br /> In the properties box that appears, we tie the countries defined in the shapefile to the identifiers in the workfile. Since the shapefile uses ISO codes, and we have those in the <b>countriesandterritories</b> series, we can use those to map the workfile to the shapefile:<br /><br /> <!-- :::::::::: FIGURE 14 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/geomapprops.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/geomapprops.png" title="Geomap properties" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 14: Geomap properties</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 14 :::::::::: --> Which results in the following global geomap:<br /><br /> <!-- :::::::::: FIGURE 15 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/geomapglobal.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/geomapglobal.png" title="Global geomap" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 15: Global geomap</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 15 :::::::::: --> We can use the <b>Label:</b> dropdown to remove the country labels to give a clearer view of the map (note this feature is a recent addition; you may need to update your copy of EViews to see the <b>None</b> option).<br /><br /> To add some color information to the map we click on <b>Properties</b> and then the <b>Color</b> tab.
We'll add two custom color settings – a gradient fill to show differences in the number of cases, and a single solid color for countries with a large number of cases:<br /><br /> <!-- :::::::::: FIGURES 16a and 16b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 16a :::::::::: --> <center> <a href="http://www.eviews.com/blog/covid19/images/ecdcgeomaprange.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/ecdcgeomaprange.png" title="ECDC geomap color range" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 16b :::::::::: --> <center> <a href="http://www.eviews.com/blog/covid19/images/ecdcgeomapthresh.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/ecdcgeomapthresh.png" title="ECDC geomap color threshold" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 16a: ECDC geomap color range</small> </center> </td> <td class="nb"> <center> <small>Figure 16b: ECDC geomap color threshold</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 16a and 16b :::::::::: --> And then entering <b>ccases</b> as the coloring series.
This results in a map:<br /><br /> <!-- :::::::::: FIGURE 17 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/ecdcgeomap.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/ecdcgeomap.png" title="ECDC geomap" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 17: ECDC geomap</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 17 :::::::::: --> Again, this could all be done programmatically with the following program (note the ranges for coloring will need to be changed as the virus becomes more widespread):<br /><br /> <pre style="overflow:auto"><br /> <font color="green">'download data</font><br /> wfopen https://www.ecdc.europa.eu/sites/default/files/documents/COVID-19-geographic-disbtribution-worldwide.xlsx<br /> rename countryterritorycode iso3<br /> pagecontract if iso3<>""<br /> pagestruct(bal=m) iso3 @date(daterep)<br /> <br /> <font color="green">'make cumulative data</font><br /> series ccases = @cumsum(cases)<br /> series cdeaths = @cumsum(deaths)<br /> <br /> <font color="green">'make geomap for cases</font><br /> geomap cases_map<br /> cases_map.load ".\World Map\TM_WORLD_BORDERS_SIMPL-0.3.prj"<br /> cases_map.link iso3 iso3<br /> cases_map.options -legend<br /> cases_map.setlabel none<br /> cases_map.setfillcolor(t=custom) mapser(ccases) naclr(@RGB(255,255,255)) range(lim(0,12000,cboth), rangeclr(@grad(@RGB(255,255,255),@RGB(0,0,255))), outclr(@trans,@trans), name("Range")) thresh(12000, below(@trans), above(@RGB(0,0,255)), name("Threshold"))<br /> <br /> <font color="green">'make geomaps for deaths</font><br /> geomap deaths_map<br /> deaths_map.load ".\World Map\TM_WORLD_BORDERS_SIMPL-0.3.prj"<br /> deaths_map.link iso3 iso3<br /> deaths_map.options -legend<br /> deaths_map.setlabel none<br /> deaths_map.setfillcolor(t=custom) mapser(cdeaths) naclr(@RGB(255,255,255)) range(lim(1,500,cboth), 
rangeclr(@grad(@RGB(255,128,128),@RGB(128,64,64))), outclr(@trans,@trans), name("Range")) thresh(500,cleft,below(@trans),above(@RGB(128,0,0)),name("Threshold")) <br /> </pre> <h3 id="sec3">New York Times US County Data</h3> The final data repository we will look at is the <a href="https://github.com/nytimes/covid-19-data/blob/master/us-counties.csv">New York Times</a> data for the United States at county level. These data are also trivial to import into EViews, you can again just enter the URL for the CSV file to open it. Rather than walking through the UI steps, we'll simply post the two lines of code required to import and structure as a panel:<br /><br /> <pre style="overflow:auto"><br /> <font color="green">'retrieve data from NY Times github</font><br /> wfopen(page=covid) https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-counties.csv<br /> <br /> <font color="green">'structure as a panel based on date and FIPS ID</font><br /> pagestruct(dropna) fips @date(date)<br /> </pre> Note that the New York Times have conveniently provided the <a href="https://en.wikipedia.org/wiki/FIPS_county_code">FIPS code</a> for each county, which means we can also produce some geomaps. 
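The two-line import above amounts to dropping rows without a FIPS code and then indexing the data by county and date. A pandas sketch of the same structuring step, using a few hypothetical rows laid out like the NYT us-counties.csv file:

```python
import pandas as pd

# A few hypothetical rows in the layout of the NYT us-counties.csv file.
raw = pd.DataFrame({
    "date":   ["2020-03-01", "2020-03-02", "2020-03-01"],
    "county": ["King", "King", "Cook"],
    "state":  ["Washington", "Washington", "Illinois"],
    "fips":   ["53033", "53033", "17031"],
    "cases":  [7, 10, 3],
    "deaths": [1, 1, 0],
})

# Drop rows with no FIPS code, then index the panel by county ID and date
# (the pandas analogue of pagestruct(dropna) fips @date(date)).
panel = (
    raw.dropna(subset=["fips"])
       .assign(date=lambda d: pd.to_datetime(d["date"]))
       .set_index(["fips", "date"])
       .sort_index()
)
```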
We've downloaded a US county map from the <a href="https://dataverse.tdl.org/dataset.xhtml?persistentId=doi:10.18738/T8/CPTP8C">Texas Data Repository</a>, and then linked the <b>FIPS</b> series in the workfile with the <b>FIPS_BEA</b> attribute of the map:<br /><br /> <!-- :::::::::: FIGURE 17 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/images/geomapfipsprops.png"><img height="auto" src="http://www.eviews.com/blog/covid19/images/geomapfipsprops.png" title="Geomap FIPS properties" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 17: Geomap FIPS properties</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 17 :::::::::: --> The full code to produce such a map is:<br /><br /> <pre style="overflow:auto"><br /> <font color="green">'retrieve data from NY Times github</font><br /> wfopen(page=covid) https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-counties.csv<br /> <br /> <font color="green">'structure as a panel based on date and FIPS ID</font><br /> pagestruct(dropna) fips @date(date)<br /> <br /> <font color="green">'set displaynames for use in geomaps</font><br /> cases.displayname Confirmed Cases<br /> deaths.displayname Deaths<br /> <br /> <font color="green">'make geomap</font><br /> geomap cases_map<br /> cases_map.load ".\Us County Map\CountiesBEA.prj"<br /> cases_map.link fips_bea fips<br /> cases_map.options -legend<br /> cases_map.setlabel none<br /> cases_map.setfillcolor(t=custom) mapser(cases) naclr(@RGB(255,255,255)) range(lim(1,200,cboth), rangeclr(@grad(@RGB(204,204,255),@RGB(0,0,255))), outclr(@trans,@trans), name("Range")) thresh(200, below(@trans), above(@RGB(0,0,255)), name("Threshold")) <br /> </pre> <h3 id="sec4">Sneak Peeks</h3> One of the features our engineering team have been working on for the next major release of EViews is the ability to produce animated graphs and geomaps (the keen-eyed amongst you may 
have noticed the <b>Animate</b> button on a few of our screenshots). Whilst this feature is still a little way from release, the Covid-19 data gave us an interesting set of test cases, and we thought we'd share some of the results.<br /><br /> <!-- :::::::::: ANIMATION 1 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/animations/cases_map.gif"><img height="auto" src="http://www.eviews.com/blog/covid19/animations/cases_map.gif" title="US counties cases evolution (wait for it...)" width="680" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Animation 1: US counties cases evolution</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: ANIMATION 1 :::::::::: --> <!-- :::::::::: ANIMATION 2 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/covid19/animations/cases_map.gif"> <video width="680" controls> <source src= "http://www.eviews.com/blog/covid19/animations/graph01.mp4" type="video/mp4" title="Confirmed cases"> </video> </a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Animation 2: Confirmed cases</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: ANIMATION 2 :::::::::: --></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com9tag:blogger.com,1999:blog-6883247404678549489.post-86065470278518103992020-02-25T07:58:00.001-08:002020-03-04T09:27:00.165-08:00Beveridge-Nelson Filter<style> table { border: 0px solid black; border-collapse: separate; border-spacing: 10px; } td { border: 1px solid black; } .nb { border: 0px solid black; } .step { counter-reset: section; list-style-type: none; } .step li::before { counter-increment: section; content: "Step "counter(section) ": "; } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: {
autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"> </script> <span style="font-family: "verdana" sans-serif"> <i>A guest post by Benjamin Wong (Monash University) and Davaajargal Luvsannyam (The Bank of Mongolia)</i><br /><br /> Analysis of macroeconomic time series often involves decomposing a series into trend and cycle components. In this blog post, we describe the Kamber, Morley, and Wong (2018) Beveridge-Nelson (BN) filter and the associated EViews add-in. <a name='more'></a><br /><br /> <h3>Table of Contents</h3> <ol> <li><a href="#sec1">Introduction</a> <li><a href="#sec2">The BN Decomposition</a> <li><a href="#sec3">The BN Filter</a> <li><a href="#sec4">Why Use the BN Filter</a> <li><a href="#sec5">BN Filter Implementation</a> <li><a href="#sec6">Conclusion</a> <li><a href="#sec7">Files</a> <li><a href="#sec8">References</a> </ol><br /> <h3 id="sec1">Introduction</h3> In this blog entry, we will discuss the Beveridge-Nelson (BN) filter - the Kamber, Morley, and Wong (2018) modification of the well-known Beveridge and Nelson (1981) decomposition. In particular, we will discuss the application of both procedures to estimating the <i>output gap</i>, which the US Bureau of Economic Analysis (BEA) and the Congressional Budget Office (CBO) define as the proportional deviation of the real actual <i>gross domestic product</i> (GDP) from the real potential GDP.<br /><br /> The analysis to follow will use quarterly data from the post-World War II period 1947Q1 to 2019Q3 and will be downloaded from the FRED database. In this regard, we begin by creating a new quarterly workfile as follows: <ol> <li>From the main EViews window, click on <b>File/New/Workfile...</b>.
<li>Under <b>Frequency</b> select <b>Quarterly</b>. <li>Set the <b>Start date</b> to <i>1947Q1</i> and set the <b>End date</b> to <i>2019Q3</i>. <li>Hit <b>OK</b>. </ol> Next, we fetch the GDP data as follows: <ol> <li>From the main EViews window, click on <b>File/Open/Database...</b>. <li>From the <b>Database/File Type</b> dropdown, select <b>FRED Database</b>. <li>Hit <b>OK</b>. <li>From the FRED database window, click on the <b>Browse</b> button. <li>Next, click on <b>All Series Search</b> and in the <b>Search for</b> box, type <i>GDPC1</i>. (This is the real actual seasonally adjusted GDP) <li>Drag the series over to the workfile to make it available for analysis. <li>Again, in the <b>Search for</b> box, type <i>GDPPOT</i>. (This is the real potential seasonally unadjusted GDP estimated by the CBO) <li>Drag the series over to the workfile to make it available for analysis. <li>Close the FRED windows as they are no longer needed. </ol> <!-- :::::::::: FIGURES 1a and 1b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 1a :::::::::: --> <center> <a href="http://www.eviews.com/blog/bnfilter/fredbrowse.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/fredbrowse.png" title="FRED Browse" width="180" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 1b :::::::::: --> <center> <a href="http://www.eviews.com/blog/bnfilter/fredsearch.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/fredsearch.png" title="FRED Search" width="180" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 1a: FRED Browse </small> </center> </td> <td class="nb"> <center> <small>Figure 1b: FRED Search</small> </center> </td> </tr> </table> <br /> </center> Next, rename the series <b>GDPC1</b> to <b>GDP</b> by issuing the following command: <pre><br /> rename gdpc1 gdp<br /> </pre> We now show how to obtain the implied estimate of the output gap from the CBO to provide
the user with some perspective. In particular, the CBO implied estimate of the output gap is defined using the formula: $$ CBOGAP = 100\left(\frac{GDP - GDPPOT}{GDPPOT}\right) $$ For reference, we will create this series in EViews and call it <b>CBOGAP</b>. This is done by issuing the following command: <pre><br /> series cbogap = 100*(gdp-gdppot)/gdppot<br /> </pre> We also plot <b>CBOGAP</b> below: <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/bnfilter/gap.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/gap.png" title=" CBO implied estimate of the output gap" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 2: CBO implied estimate of the output gap</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 2 :::::::::: --> <h3 id="sec2">BN Decomposition</h3> Recall here that for any time series $ y_{t} $, the BN decomposition determines a trend process $ \tau_{t} $ and a cycle process $ c_{t} $, such that $ y_{t} = \tau_{t} + c_{t} $. In this regard, the trend component $ \tau_{t} $ is the deviation of the long-horizon conditional forecast of $ y_{t} $ from its deterministic drift $ \mu $. In other words: $$ \tau_{t} = \lim_{h\rightarrow \infty} E_{t}\left(y_{t+h} - h\mu\right) \quad \text{where} \quad \mu = E(\Delta y_{t}) $$ On the other hand, the cyclical component is the deviation of the underlying process from its long-horizon forecast. Intuitively, when $ y_{t} $ represents the GDP of some economy, the cycle process $ c_{t} = y_{t} - \tau_{t}$ is interpreted as the <i>output gap</i>.<br /><br /> In practice, in order to capture the autocovariance structure of $ \Delta y_{t} $, the BN decomposition starts by fitting an autoregressive moving-average (ARMA) model to $ \Delta y_{t} $ and then proceeds to derive $ \tau_{t} $ and $ c_{t} $.
For instance, when the model of choice is AR(1), the BN decomposition derives from the following steps:<br /><br /> <ol class="step"> <li>Fit an AR(1) model to $ \Delta y_{t} $: $$ \Delta y_{t} = \widehat{\alpha} + \widehat{\phi}\Delta y_{t-1} + \widehat{\epsilon}_{t} $$ <li> Estimate the deterministic drift as the unconditional mean process: $$ \widehat{\mu} = \frac{\widehat{\alpha}}{1 - \widehat{\phi}} $$ <li> Estimate the BN trend process: $$ \widehat{\tau}_{t} = \left(y_{t} + \left(\frac{\widehat{\phi}}{1 - \widehat{\phi}}\right) \Delta y_{t}\right) - \left(\frac{\widehat{\phi}}{1 - \widehat{\phi}}\right) \widehat{\mu}$$ <li> Estimate the BN cycle component: $$ \widehat{c}_{t} = y_{t} - \widehat{\tau}_{t} $$ </ol><br /> As an illustrative example, consider the BN decomposition of US quarterly real GDP. To conform with the Kamber, Morley, and Wong (2018) paper, we will also transform the raw US real GDP to 100 times its logarithm. In this regard, we generate a new EViews series object <b>LOGGDP</b> by issuing the following command: <pre><br /> series loggdp = 100 * log(gdp)<br /> </pre> Finally, following the four steps outlined earlier, we derive the BN decomposition in EViews as follows: <pre><br /> series dy = d(loggdp)<br /> equation ar1.ls dy c dy(-1) 'Step 1<br /> scalar mu = c(1)/(1-c(2)) 'Step 2<br /> series bntrend = loggdp + (dy - mu)*c(2)/(1 - c(2)) 'Step 3<br /> series bncycle = loggdp - bntrend 'Step 4<br /> </pre> The BN trend and cycle series are displayed in Figures 3a and 3b below.<br /><br /> <!-- :::::::::: FIGURES 3a and 3b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 3a :::::::::: --> <center> <a href="http://www.eviews.com/blog/bnfilter/bntrend.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/bntrend.png" title="BN Trend" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 3b :::::::::: --> <center> <a href="http://www.eviews.com/blog/bnfilter/bncycle.png"><img height="auto"
src="http://www.eviews.com/blog/bnfilter/bncycle.png" title="BN Cycle" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 3a: BN Trend</small> </center> </td> <td class="nb"> <center> <small>Figure 3b: BN Cycle</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 3a and 3b :::::::::: --> To see how the BN decomposition estimate of the output gap compares to the CBO implied estimate of the output gap, we plot both series on the same graph.<br /><br /> <!-- :::::::::: FIGURE 4 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/bnfilter/bncvsgap.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/bncvsgap.png" title="BN Cycle vs CBO implied output gap estimate" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 4: BN Cycle vs CBO implied output gap estimate</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 4 :::::::::: --> Evidently, the BN cycle series lacks persistence (very noisy), lacks amplitude (low variance), and in general, does not exhibit the characteristics found in the CBO implied estimate of the output gap, <b>CBOGAP</b>.<br /><br /> <h3 id="sec3">The BN Filter</h3> First, to explain why the BN estimate of the output gap lacks the persistence of its true counterpart, recall the formula for the BN cycle component for an AR(1) model: $$ c_{t} = y_{t} - \tau_{t} = -\frac{\phi}{1-\phi}(\Delta y_{t} - \mu)$$ Clearly, when $ \phi $ is small, $ \Delta y_{t} $ is not very persistent. Since $ c_{t} $ is only as persistent as $ \Delta y_{t} $, the cycle component itself lacks the persistence one expects of the true output gap series.<br /><br /> Next, to explain why $ c_{t} $ lacks the expected amplitude, define the signal-to-noise ratio $ \delta $ for any time series as the ratio of the variance of trend shocks to the overall forecast error variance.
In other words: $$ \delta \equiv \frac{\sigma^{2}_{\Delta \tau}}{\sigma^{2}_{\epsilon}} = \psi(1)^{2} $$ which follows since $ \Delta\tau_{t} = \psi(1)\epsilon_{t} $ and $ \psi(1) = \lim_{h\rightarrow \infty} \frac{\partial y_{t+h}}{\partial \epsilon_{t}} $. Intuitively, $ \psi(1) $ is the <i>long-run multiplier</i> that captures the permanent effect of the forecast error on the long-horizon conditional expectation of $ y_{t} $. Quite generally, as demonstrated in Kamber, Morley, and Wong (2018), for any AR(p) model: \begin{align} \Delta y_{t} = c + \sum_{k=1}^{p}\phi_{k}\Delta y_{t-k} + \epsilon_{t} \label{eq1} \end{align} the signal-to-noise ratio is given by the relation \begin{align} \delta = \frac{1}{(1-\phi(1))^{2}} \quad \text{where} \quad \phi(1) = \phi_{1} + \ldots + \phi_{p}\label{eq2} \end{align} In particular, when the forecasting model is AR(1), as was the case in the BN decomposition above, the signal-to-noise ratio is simply $ \delta = \frac{1}{(1-\phi)^{2}} $ and in the case of the US GDP growth process, it is $ \delta = \frac{1}{(1-0.36)^{2}} = 2.44$. In other words, the BN trend shocks exhibit higher volatility than quarter-to-quarter forecast errors and the signal-to-noise ratio is therefore relatively high. In fact, in the case of a freely estimated AR$ (p) $ model of output growth, $ 0 < \phi(1) < 1 $, which implies that $ \delta > 1 $. In other words, the trend will be more volatile than the cycle, which is at odds with the expectation that cycle shocks (the output gap) explain the majority of the systematic forecast variance.<br /><br /> To correct for the aforementioned shortcomings of the BN decomposition, Kamber, Morley, and Wong (2018) exploit the relationship between the signal-to-noise ratio and the AR coefficients in equation \eqref{eq2}.
In particular, they note that equation \eqref{eq2} implies that: \begin{align} \phi(1) = 1 - \frac{1}{\sqrt{\delta}} \end{align} In this regard, the idea underlying the BN filter is to fix the signal-to-noise ratio at a specific value, say $ \delta = \bar{\delta} $. Subsequently, the BN decomposition is derived from an AR model, the AR coefficients of which are forced to sum to $ \bar{\phi}(1) \equiv 1 - \frac{1}{\sqrt{\bar{\delta}}} $. In other words, the BN decomposition is derived while imposing a particular signal-to-noise ratio.<br /><br /> It is important to note here that estimation of the BN decomposition under a particular signal-to-noise ratio restriction is in fact straightforward and does not require complicated non-linear routines. To see this, observe that equation \eqref{eq1} can be rewritten as: \begin{align} \Delta y_{t} = c + \rho \Delta y_{t-1} + \sum_{k=1}^{p-1}\phi^{\star}_{k}\Delta^{2} y_{t-k} + \epsilon_{t} \label{eq3} \end{align} where $ \rho = \phi_{1} + \ldots + \phi_{p} $ and $ \phi^{\star}_{k} = -\left(\phi_{k+1} + \ldots + \phi_{p}\right) $. Then, imposing the restriction $ \rho = \bar{\rho} \equiv \bar{\phi}(1) $ reduces the regression in \eqref{eq3} to: \begin{align} \Delta y_{t} - \bar{\rho} \Delta y_{t-1} = c + \sum_{k=1}^{p-1}\phi^{\star}_{k}\Delta^{2} y_{t-k} + \epsilon_{t} \label{eq4} \end{align} In other words, $ \bar{\rho}\Delta y_{t-1} $ is brought to the left-hand side and the regressand in the regression \eqref{eq4} becomes $ \Delta \bar{y}_{t} \equiv \Delta y_{t} - \bar{\rho} \Delta y_{t-1} $.<br /><br /> <h3 id="sec4">Why Use the BN Filter?</h3> Before we demonstrate the BN Filter add-in, we quickly outline two reasons why the BN filter might be a reasonable approach, particularly when estimating the output gap. <ol> <li>When analyzing GDP growth, standard ARMA model selection often favours low-order AR variants, which, as discussed earlier, produce high signal-to-noise ratios.
<li>Unlike alternative low signal-to-noise ratio procedures such as deterministic quadratic detrending, the Hodrick-Prescott (HP) filter, and the bandpass (BP) filter, which often require a large number of estimation revisions (as new data come in) and are typically unreliable in out-of-sample forecasts (see Orphanides and Van Norden (2002)), Kamber, Morley and Wong (2018) argue that the BN filter exhibits better out-of-sample performance and generally requires fewer estimation revisions to match observable data characteristics.<br /><br /> </ol> To drive home this latter point, we demonstrate the impact of ex-post estimation of the output gap using the HP filter. In particular, we will first estimate the output gap (the cycle component) of the <b>LOGGDP</b> series for the period 1947Q1 to 2008Q3 and call it <b>HPCYCLE</b>, and then again for the period 1947Q1 to 2019Q3 and call it <b>HPCYCLE_EXPOST</b>.<br /><br /> To estimate the HP filter cycle component for the period 1947Q1 to 2008Q3, we first set the sample accordingly by issuing the command: <pre><br /> smpl @first 2008Q3<br /> </pre> Next, we estimate the HP filter cycle series as follows: <ol> <li>From the workfile, double click on the series <b>LOGGDP</b> to open the series. <li>In the series window, click on <b>Proc/Hodrick-Prescott Filter...</b> <li>In the <b>Cycle series</b> text box, type <i>hpcycle</i>. <li>Hit <b>OK</b>. </ol> <!-- :::::::::: FIGURE 5 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/bnfilter/hpfilter.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/hpfilter.png" title="HP Filter" width="180" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5: HP Filter</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 5 :::::::::: --> The steps are repeated for the sample period 1947Q1 to 2019Q3.
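The dialog steps above can also be scripted. Below is a minimal command sketch (the series names here are our own, and we assume EViews' default quarterly smoothing parameter, $ \lambda = 1600 $; the <b>hpf</b> proc writes the HP trend, from which the cycle follows by subtraction):<br /><br /> <pre style="overflow:auto"><br /> <font color="green">'HP cycle for 1947Q1 to 2008Q3</font><br /> smpl @first 2008Q3<br /> loggdp.hpf(lambda=1600) hptrend<br /> series hpcycle = loggdp - hptrend<br /> <br /> <font color="green">'ex-post HP cycle for the full sample to 2019Q3</font><br /> smpl @first 2019Q3<br /> loggdp.hpf(lambda=1600) hptrend_expost<br /> series hpcycle_expost = loggdp - hptrend_expost<br /> </pre>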
A plot of both cycle series on the same graph is presented below.<br /><br /> <!-- :::::::::: FIGURE 6 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/bnfilter/hpcycleexpost.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/hpcycleexpost.png" title="HP Cycle vs HP Cycle Ex Post" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 6: HP Cycle vs HP Cycle Ex Post</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 6 :::::::::: --> Evidently, the ex-post HP filter estimation of the output gap diverges from its shorter-period counterpart starting from 2006Q1. It is precisely this drawback that we will see is not nearly as pronounced in BN filter estimates.<br /><br /> <h3 id="sec5">BN Filter Implementation</h3> To implement the BN Filter, we need to download and install the add-in from the EViews website. The add-in can be found at <a href="https://www.eviews.com/Addins/BNFilter.aipz">https://www.eviews.com/Addins/BNFilter.aipz</a>. We can also do this from inside EViews itself: <ol> <li>From the main EViews window, click on <b>Add-ins/Download Add-ins...</b> <li>Click on the BNFilter add-in. <li>Click on <b>Install</b>. </ol> <!-- :::::::::: FIGURE 5 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/bnfilter/addin.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/addin.png" title="Install Add-in" width="180" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 5: Install Add-in</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 5 :::::::::: --> Finally, we will demonstrate how to apply the BN Filter add-in using an AR(12) model. To do so, proceed as follows: <ol> <li>From the workfile window, double click on <b>LOGGDP</b> to open the spreadsheet view of the series.
<li>To access the BN filter dialog, click on <b>Proc/Add-ins/BN Filter</b> <li>Stick with the defaults and hit <b>OK</b>. </ol><br /> <!-- :::::::::: FIGURE 6 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/bnfilter/bnfilter.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/bnfilter.png" title="BN Filter Dialog" width="180" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 6: BN Filter Dialog</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 6 :::::::::: --> The signal-to-noise ratio, while not specified above, is chosen using the Kamber, Morley, and Wong (2018) automatic selection procedure, which balances the trade-off between fit and amplitude. Typically, the signal-to-noise ratio for the US using such a procedure is about 0.25, which implies that permanent (trend) shocks account for about a quarter of the forecast error variance of US GDP. Below, we show the BN Filter cycle series both alone and in comparison to the CBO implied estimate of the output gap <b>CBOGAP</b>.<br /><br /> <!-- :::::::::: FIGURES 7a and 7b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 7a :::::::::: --> <center> <a href="http://www.eviews.com/blog/bnfilter/bnfcycle.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/bnfcycle.png" title="BN Filter Cycle" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 7b :::::::::: --> <center> <a href="http://www.eviews.com/blog/bnfilter/bnfcvsgap.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/bnfcvsgap.png" title="BN Filter Cycle vs.
CBO implied output gap estimate" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 7a: BN Filter Cycle</small> </center> </td> <td class="nb"> <center> <small>Figure 7b: BN Filter Cycle vs CBO implied output gap estimate</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 7a and 7b :::::::::: --> We also plot a comparison of the BN Filter cycle series with the HP filtered cycle.<br /><br /> <!-- :::::::::: FIGURE 8 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/bnfilter/bnfcvshpc.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/bnfcvshpc.png" title="BN Filter Cycle vs HP Filter Cycle" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 8: BN Filter Cycle vs HP Filter Cycle</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 8 :::::::::: --> As we can see, the BN filter estimate of the US output gap using an AR(12) model resembles what we would get for an output gap that has a low signal-to-noise ratio. The amplitude is reasonably large, we see business cycles, and the troughs line up with the recessions dated by the NBER. The amplitude of the output gap estimated using the BN Filter is comparable to that of the cycle obtained by the HP filter, as well as the implied estimate of the CBO, which is unlike what we see in Figure 4.<br /><br /> The BN filter add-in also allows users to incorporate knowledge about structural breaks. In particular, we will use 2006Q1 as a structural break, which is consistent with the date found by a Bai and Perron (2003) test, as used by Kamber, Morley and Wong (2018), and with independent work by Eo and Morley (2019). The following steps demonstrate the outcome: <ol> <li>From the workfile window, double click on <b>LOGGDP</b> to open the spreadsheet view of the series.
<li>To access the BN filter dialog, click on <b>Proc/Add-ins/BN Filter</b> <li>Select the <b>Structural Break</b> box. <li>In the <b>Date of structural break</b> text box, enter <i>2006Q1</i>. <li>Hit <b>OK</b>. </ol><br /> <!-- :::::::::: FIGURE 9 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/bnfilter/bnfcyclesb.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/bnfcyclesb.png" title="BN Filter Cycle (Structural Break)" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 9: BN Filter Cycle (Structural Break)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 9 :::::::::: --> Now we see a more positive output gap post-2006 as the structural break accounts for the fact that the average GDP growth rate has fallen.<br /><br /> Suppose, however, that we were ignorant of the actual date of the break. This might be the case in practice, as it could take a decade or more before one could empirically identify a structural break date. In this case, a possible option is to use a rolling window for the average growth rate. In this example, we use a backward window of 40 quarters to compute the average growth rate. The idea is that if there were breaks, they would be reflected in this window. To do so, we proceed as follows: <ol> <li>From the workfile window, double click on <b>LOGGDP</b> to open the spreadsheet view of the series. <li>To access the BN filter dialog, click on <b>Proc/Add-ins/BN Filter</b> <li>Select the <b>Dynamic mean adjustment</b> box. <li>Hit <b>OK</b>.
</ol><br /> <!-- :::::::::: FIGURES 10a and 10b :::::::::: --> <center> <table> <tr> <td> <!-- :::::::::: FIGURE 10a :::::::::: --> <center> <a href="http://www.eviews.com/blog/bnfilter/bnfcycledma.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/bnfcycledma.png" title="BN Filter Cycle (Dynamic Mean Adjustment)" width="360" /></a><br /> </center> </td> <td> <!-- :::::::::: FIGURE 10b :::::::::: --> <center> <a href="http://www.eviews.com/blog/bnfilter/bnfcsbvsedma.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/bnfcsbvsedma.png" title="BN Filter Cycle (Known vs Unknown Structural Break)" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 10a: BN Filter Cycle (Dynamic Mean Adjustment)</small> </center> </td> <td class="nb"> <center> <small>Figure 10b: BN Filter Cycle (Known vs Unknown Structural Break)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURES 10a and 10b :::::::::: --> Evidently, the estimated output gap looks similar to the one estimated with an explicit structural break in 2006Q1. In general, this suggests that using a backward window to adjust for the mean growth rate might be a useful real-time strategy for dealing with breaks.<br /><br /> Users are not constrained to the automatic option, which balances the trade-off between fit and amplitude. The BN filter add-in also allows users to specify a desired signal-to-noise ratio. For instance, the following example compares the difference in setting the signal-to-noise ratio $ \delta $ to 0.05 (which implies 5% of the variance is permanent), against the default of 0.25, which we obtained earlier by leaving $ \delta $ unspecified and letting the automatic procedure balance the trade-off between fit and amplitude. To do so, we proceed as follows: <ol> <li>From the workfile window, double click on <b>LOGGDP</b> to open the spreadsheet view of the series.
<li>To access the BN filter dialog, click on <b>Proc/Add-ins/BN Filter</b> <li>Rather than relying on the automatic selection, specify a fixed signal-to-noise ratio of <i>0.05</i> in the dialog. <li>Hit <b>OK</b>. </ol><br /> The plot below summarizes the exercise.<br /><br /> <!-- :::::::::: FIGURE 11 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/bnfilter/bnfc25vs5.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/bnfc25vs5.png" title="BN Filter Cycle (delta = 0.25 vs delta = 0.05)" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 11: BN Filter Cycle (delta = 0.25 vs delta = 0.05)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 11 :::::::::: --> Unsurprisingly, specifying $ \delta = 0.05 $ results in an output gap with a larger amplitude than the default, as the new specification implies that a smaller proportion of the forecast error variance is attributed to the trend, and so a larger proportion is attributed to the cycle, leading to a larger-amplitude cycle.<br /><br /> Finally, we come back to the issue of revision. As we mentioned earlier, the BN filter should produce output gaps that are less revised as long as the AR forecasting model is stable, especially when compared to the heavily revised HP Filter. Here, we show the output gap estimated using the BN filter with data up to 2008Q3, and one ex-post up to 2019Q3.
Clearly, the output gap is hardly revised, which addresses a key critique of Orphanides and Van Norden (2002).<br /><br /> <!-- :::::::::: FIGURE 12 :::::::::: --> <center> <table> <tr> <td> <center> <a href="http://www.eviews.com/blog/bnfilter/bnfcexpost.png"><img height="auto" src="http://www.eviews.com/blog/bnfilter/bnfcexpost.png" title="BN Filter Cycle (Ex-Post)" width="360" /></a><br /> </center> </td> </tr> <tr> <td class="nb"> <center> <small>Figure 12: BN Filter Cycle (Ex-Post)</small> </center> </td> </tr> </table> <br /> </center> <!-- :::::::::: FIGURE 12 :::::::::: --> <h3 id="sec6">Conclusion</h3> In this blog post, we have outlined the BN filter add-in associated with the work of Kamber, Morley and Wong (2018). In general, we hope the ease of using the add-in, together with some of the useful properties of the BN Filter, will encourage practitioners to explore using the procedure in their work.<br /><br /> <h3 id="sec7">Files</h3> <ul> <li><a href="http://www.eviews.com/blog/bnfilter/bnfilter_blog.prg">bnfilter_blog.prg</a> </ul> <br /><br /> <hr /> <h3 id="sec8">References</h3> <ol class="bib2xhtml"> <li><a name="bai-2003"></a>Bai J. and Perron P.: Computation and analysis of multiple structural change models, <cite>Journal of Applied Econometrics</cite>, 18(1) 1–22, 2003. </li> <li><a name="beveridge-1981"></a>Beveridge S. and Nelson C. R.: A new approach to decomposition of economic time series into permanent and transitory components with particular attention to measurement of the business cycle, <cite>Journal of Monetary Economics</cite>, 7(2) 151–174, 1981. </li> <li><a name="eo-2019"></a>Eo Y. and Morley J.: Why has the US economy stagnated since the Great Recession?, <cite>University of Sydney Working Papers 2017-14</cite>, 2019.
</li> <li><a name="kamber-2018"></a>Kamber G., Morley J., and Wong B.: Intuitive and reliable estimates of the output gap from a Beveridge-Nelson filter, <cite>The Review of Economics and Statistics</cite>, 100(3) 550–566, 2018. </li> <li><a name="orphanides-2002"></a>Orphanides A. and Van Norden S.: The unreliability of output-gap estimates in real time, <cite>The Review of Economics and Statistics</cite>, 84(4) 569–583, 2002. </li> <li><a name="watson-1986"></a>Watson M.: Univariate detrending methods with stochastic trends, <cite>Journal of Monetary Economics</cite>, 18(1) 49–75, 1986. </li> </ol></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com22tag:blogger.com,1999:blog-6883247404678549489.post-68080614447683089002019-12-04T09:39:00.000-08:002019-12-04T09:39:38.953-08:00Sign and Zero Restricted VAR Add-In<style> table, th, td { border: 1px solid black; border-collapse: collapse; } th { padding: 5px; text-align: center; } td { padding: 5px; text-align: left; } </style> <script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"></script> <span style="font-family: "verdana" sans-serif"><i>A guest post by Davaajargal Luvsannyam and Ulziikhutag Munkhtsetseg</i><br /><br /> In our previous <a href="http://blog.eviews.com/2019/10/sign-restricted-var-add-in.html">blog entry</a>, we discussed the sign restricted VAR (SRVAR) add-in for EViews.
Here, we will discuss imposing additional zero restrictions on the impact period of the impulse response function (IRF) using the ARW and SRVAR add-ins in tandem.<a name='more'></a><br /><br /> <h3>Table of Contents</h3><ol> <li><a href="#sec1">Introduction</a> <li><a href="#sec2">Orthogonal Reduced-Form Parameterization</a> <li><a href="#sec3">ARW Algorithms</a> <li><a href="#sec4">ARW EViews Add-in</a> <li><a href="#sec5">Conclusion</a> <li><a href="#sec6">References</a></ol><br /> <h3 id="sec1">Introduction</h3> Note that it is certainly possible to impose both sign and exclusion restrictions. For example, Mountford and Uhlig (2009) are motivated by the idea that fiscal policy shocks are identified as orthogonal to both monetary policy and business cycle shocks, and use a penalty function approach (PFA) to impose zero restrictions. (For details on the PFA, please see our <a href="http://blog.eviews.com/2019/10/sign-restricted-var-add-in.html">SRVAR blog entry</a>.) They also consider anticipated government revenue shocks, in which government revenue is restricted to rise one year following some impulse. Furthermore, Beaudry, Nam, and Wang (2011) estimate a structural VAR model including total factor productivity, stock prices, real consumption, the real federal funds rate, and hours worked. They use the PFA to show that a positive optimism shock causes an increase in both consumption and hours worked. Recently, Arias, Rubio-Ramirez, and Waggoner (2018), henceforth ARW, developed algorithms to independently draw from a family of conjugate posterior distributions over the structural parameterization when sign and zero restrictions are used to identify SVARs. They also showed the dangers of using the PFA when implementing sign and zero restrictions together to identify structural VARs (SVARs).<br /><br /> <h3 id="sec2">Orthogonal Reduced-Form Parameterization</h3> ARW focus on two SVAR parameterizations.
In addition to the classical structural parameterization, they show that SVARs can also be written as a product of the reduced-form parameters and a set of orthogonal matrices. This is called the <i>orthogonal reduced-form parameterization</i>, henceforth, ORF. The algorithms ARW propose draw from a conjugate posterior distribution over the ORF and then transform said draws into the structural parameterization. In particular, they use the normal-inverse-Wishart distribution as the conjugate prior distribution, and develop a change of variable theory that characterizes the induced family of densities over the structural parameterization. This theory shows that a uniform-normal-inverse-Wishart density over the ORF parameterization induces a normal-generalized-normal density over the structural parameterization.<br /><br /> To motivate their contribution, ARW first use the change of variable theory to show that existing algorithms for SVARs identified only by sign restrictions operate, conditional on the sign restrictions, on independent draws from the normal-generalized-normal distribution over the structural parameterization. These algorithms independently draw from the uniform-normal-inverse-Wishart distribution over the ORF parameterization and only accept draws that satisfy the sign restrictions.<br /><br /> Next, ARW generalize these algorithms to also consider zero restrictions. The key to this generalization is that, conditional on the reduced-form parameters, the class of zero restrictions on the structural parameters maps to linear restrictions on the orthogonal matrices. The resulting generalization independently draws from the normal-inverse-Wishart distribution over the reduced-form parameters and from the set of orthogonal matrices such that the zero restrictions hold.
Importantly, they show that, conditional on the zero restrictions, this generalization does not induce a distribution over the structural parameterization from the family of normal-generalized-normal distributions. They therefore derive the induced distribution and construct an importance sampler that, conditional on the sign and zero restrictions, independently draws from normal-generalized-normal distributions over the structural parameterization.<br /><br /> To formalize these ideas, consider the SVAR with the general form: \begin{align} Y_t^{\prime} A_{0} = \sum_{i=1}^{p} Y_{t-i}^{\prime}A_{i} + c + \epsilon_t^{\prime}, \quad t=1, \ldots, T \label{eq1} \end{align} where $ Y_t $ is an $ n\times 1 $ vector of endogenous variables, $ A_i $ are parameter matrices of size $ n\times n $ with $ A_{0} $ invertible, $ c $ is a $ 1\times n $ vector of parameters, $ \epsilon_t $ is an $ n\times 1 $ vector of exogenous structural shocks, $ p $ is the lag length, and $ T $ is the sample size.<br /><br /> We can also summarize equation \eqref{eq1} as follows: \begin{align} Y_{t}^{\prime}A_{0} = X_{t}^{\prime}A_{+} + \epsilon_{t}^{\prime} \label{eq2} \end{align} where $ A_{+}^{\prime} = \left[A_{1}^{\prime}, \ldots, A_{p}^{\prime}, c^{\prime}\right]$ and $ X_{t}^{\prime} = \left[Y_{t-1}^{\prime}, \ldots, Y_{t-p}^{\prime}, 1\right] $.<br /><br /> The reduced form can now be written as: \begin{align} Y_{t}^{\prime} = X_{t}^{\prime}B + u_{t}^{\prime} \label{eq3} \end{align} where $ B = A_{+}A_{0}^{-1}, u_{t}^{\prime} = \epsilon_{t}^{\prime}A_{0}^{-1} $, and $ E(u_{t}u_{t}^{\prime}) = \Sigma = \left(A_{0}A_{0}^{\prime}\right)^{-1} $.
Naturally, $ B $ and $ \Sigma $ are the reduced-form parameters.<br /><br /> We can further write equation \eqref{eq3} as the orthogonal reduced-form parameterization \begin{align} Y_{t}^{\prime} = X_{t}^{\prime}B + \epsilon_{t}^{\prime}Q^{\prime}h(\Sigma) \label{eq4} \end{align} where the $ n\times n $ matrix $ h(\Sigma) $ is the Cholesky decomposition of the covariance matrix $ \Sigma $, and $ Q $ is an $ n\times n $ orthogonal matrix.<br /><br /> Given equations \eqref{eq2} and \eqref{eq4}, in addition to the Cholesky decomposition $ h $, we can define a mapping between $ \left(A_{0}, A_{+}\right) $ and $ (B, \Sigma, Q) $ by: \begin{align} f_{h}\left(A_{0}, A_{+}\right) = \left(A_{+}A_{0}^{-1}, \left(A_{0}A_{0}^{\prime}\right)^{-1}, h\left(\left(A_{0}A_{0}^{\prime}\right)^{-1}\right)A_{0}\right) \label{eq5} \end{align} where the first element of the triad on the right corresponds to $ B $, the second to $ \Sigma $, and the third to $ Q $.<br /><br /> Note further that the function $ f_{h} $ is invertible with inverse defined by: \begin{align} f_{h}^{-1} (B,\Sigma, Q) = \left(h(\Sigma)^{-1}Q, Bh(\Sigma)^{-1}Q\right) \label{eq6} \end{align} where the first term on the right corresponds to $ A_{0} $ and the second to $ A_{+} $.<br /><br /> Thus, the ORF parameterization makes clear how the structural parameters depend on the reduced-form parameters and orthogonal matrices.<br /><br /> <h3 id="sec3">ARW Algorithms</h3> Although ARW propose three different algorithms, the most important is in fact the third. The latter draws from a distribution over the ORF parameterization conditional on the sign and zero restrictions and then transforms the draws into the structural parameterization.
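As a quick numerical sanity check of equations \eqref{eq5} and \eqref{eq6}, the following illustrative sketch (NumPy code, not part of the add-in; dimensions are arbitrary) verifies that $ Q $ is orthogonal and that $ f_{h}^{-1} $ recovers $ \left(A_{0}, A_{+}\right) $ exactly, under the convention that $ h(\Sigma) $ is the upper triangular Cholesky factor with $ h(\Sigma)^{\prime}h(\Sigma) = \Sigma $:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 3, 2                      # variables and lags
m = n * p + 1                    # rows of A_plus: p lag matrices plus the constant

A0 = rng.standard_normal((n, n)) # invertible with probability one
Ap = rng.standard_normal((m, n))

# f_h: structural parameters -> (B, Sigma, Q), equation (5)
B = Ap @ np.linalg.inv(A0)
Sigma = np.linalg.inv(A0 @ A0.T)
h = np.linalg.cholesky(Sigma).T  # upper triangular, so that h' h = Sigma
Q = h @ A0

# Q is orthogonal, and f_h^{-1} in equation (6) recovers (A0, A_plus)
print(np.allclose(Q @ Q.T, np.eye(n)))             # True
print(np.allclose(np.linalg.solve(h, Q), A0))      # True: A0 = h^{-1} Q
print(np.allclose(B @ np.linalg.solve(h, Q), Ap))  # True: A_plus = B h^{-1} Q
```

The round trip works because $ QQ^{\prime} = h\Sigma^{-1}h^{\prime} = h\,(h^{\prime}h)^{-1}h^{\prime} = I_{n} $ under this Cholesky convention.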
Since Algorithm 3 also depends on Algorithm 2, we present the latter here and refer readers to the supplementary materials of ARW (2018) for further details.<br /><br /> <h4>Algorithm 2</h4> Let $ Z_j $ define the zero restriction matrix on the $ j^{\text{th}} $ structural shock, and let $ z_{j} $ denote the number of zero restrictions associated with the $ j^{\text{th}} $ structural shock. Then: <ol> <li>Draw $ (B, \Sigma) $ independently from the normal-inverse-Wishart distribution. <li>For $ j \in \{1, \ldots, n\} $ draw $ X_{j} \in \mathbf{R}^{n+1-j-z_{j}} $ independently from a standard normal distribution and set $ W_{j} = X_{j} / ||X_{j}||$. <li>Define $ Q = [q_{1}, \ldots, q_{n}] $ recursively as $ q_{j} = K_{j}W_{j} $ for any matrix $ K_{j} $ whose columns form an orthonormal basis for the null space of the $ (j-1+z_{j})\times n $ matrix \begin{align} M_{j} = \left[q_{1}, \ldots, q_{j-1},\left(Z_{j}F\left(f_{h}^{-1}(B, \Sigma, I_{n})\right)\right)^{\prime}\right]^{\prime} \end{align} <li>Set $ (A_{0},A_{+}) = f_{h}^{-1}(B,\Sigma,Q) $.<br /><br /></ol> <h4>Algorithm 3</h4> Let $ \mathcal{Z} $ denote the set of all structural parameters that satisfy the zero restrictions, and define $ v_{(g \circ f_{h})|\mathcal{Z}} $ as the volume element. Then: <ol> <li>Use Algorithm 2 to independently draw $ (A_{0}, A_{+}) $. <li>If $ (A_{0}, A_{+}) $ satisfies the sign restrictions, set its importance weight to $$ \frac{|\det(A_{0})|^{-(2n+m+1)}}{v_{(g \circ f_{h})|\mathcal{Z}}(A_{0}, A_{+})} $$ otherwise, set its importance weight to zero. <li>Return to Step 1 until the required number of draws has been obtained. <li>Re-sample with replacement using the importance weights.<br /><br /></ol> <h3 id="sec4">ARW EViews Add-in</h3> Now we turn to the implementation of the ARW add-in. First, we need to download and install the add-in from the EViews website. The latter can be found at <a href="https://www.eviews.com/Addins/arw.aipz">https://www.eviews.com/Addins/arw.aipz</a>.
We can also do this from inside EViews itself. In particular, after opening EViews, click on <b>Add-ins</b> from the main menu, and click on <b>Download Add-ins...</b>. From here, locate the <i>ARW</i> add-in and click on <b>Install</b>.<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <a href="http://www.eviews.com/blog/arw/addin_download.png"><img height="auto" src="http://www.eviews.com/blog/arw/addin_download.png" title="Add-ins Download" width="360" /></a><br /> <small>Figure 1: Add-in installation</small><br /><br /> </center><!-- :::::::::: FIGURE 1 :::::::::: --> After installing, we open the data file named <i>data.WF1</i>, which can be found in the installation folder, typically located in <b>[Windows User Folder]/Documents/EViews Addins/ARW</b>.<br /><br /> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <a href="http://www.eviews.com/blog/arw/workfile.png"><img height="auto" src="http://www.eviews.com/blog/arw/workfile.png" title="ARW (2018) Data" width="360" /></a><br /> <small>Figure 2: ARW (2018) Data</small><br /><br /> </center><!-- :::::::::: FIGURE 2 :::::::::: --> We now replicate Figure 1 and Table 3 from ARW. We can of course do this in EViews as follows.<br /><br /> <ol> <li>Click on the <b>Add-ins</b> menu item in the main EViews menu, and click on <b>Sign restricted VAR</b>. <li>Under <b>Endogenous variables</b> enter <i>tfp stock cons ffr hour</i>. <li>Check the <b>Include constant</b> option. <li>Under <b>Number of lags</b>, enter <i>4</i>. <li>In the <b>Sign restriction vector</b> textbox enter <i>+2</i>. <li>Under <b>Sign restriction method</b> check <i>Penalty</i>. <li>In the <b>Number of horizons</b> textbox enter <i>40</i>.</li> <li>In the <b>Zero restriction</b> textbox enter <i>tfp</i>. <li>Check the <b>Variance decomposition</b> box.
<li>Hit <b>OK</b>.<br /><br /> </ol> <!-- :::::::::: FIGURE 3 :::::::::: --> <center> <a href="http://www.eviews.com/blog/arw/pfa.png"><img height="auto" src="http://www.eviews.com/blog/arw/pfa.png" title="SRVAR Add-in (PFA)" width="360" /></a><br /> <small>Figure 3: SRVAR Add-in (PFA)</small><br /><br /> </center><!-- :::::::::: FIGURE 3 :::::::::: --> The steps above produce the following output (Panel A of Figure 1 of ARW):<br /><br /> <!-- :::::::::: FIGURE 4 :::::::::: --><center> <a href="http://www.eviews.com/blog/arw/panela.png"><img height="auto" src="http://www.eviews.com/blog/arw/panela.png" title="PFA Output" width="360" /></a><br /> <small>Figure 4: PFA Output</small><br /><br /> </center><!-- :::::::::: FIGURE 4 :::::::::: --> Next, we invoke the ARW add-in and proceed with the ARW Algorithm 3.<br /><br /> <ol> <li>Click on the <b>Add-ins</b> menu item in the main EViews menu, and click on <b>Sign and zero restricted VAR</b>. <li>Under <b>Endogenous variables</b> enter <i>tfp stock cons ffr hour</i>. <li>Check the <b>Include constant</b> option. <li>Under <b>Number of lags</b>, enter <i>4</i>. <li>In the <b>Sign restriction vector</b> textbox enter <i>+stock</i>. <li>In the <b>Zero restrictions</b> textbox enter <i>tfp</i>. <li>Under <b>Number of steps</b> enter <i>40</i>. <li>Check the <b>Variance decomposition</b> box.
<li>Hit <b>OK</b>.<br /><br /> </ol> <!-- :::::::::: FIGURE 5 :::::::::: --> <center> <a href="http://www.eviews.com/blog/arw/isampler.png"><img height="auto" src="http://www.eviews.com/blog/arw/isampler.png" title="ARW Add-in (Importance Sampler)" width="360" /></a><br /> <small>Figure 5: ARW Add-in (Importance Sampler)</small><br /><br /> </center><!-- :::::::::: FIGURE 5 :::::::::: --> The steps above produce the following output (Panel B of Figure 1 of ARW):<br /><br /> <!-- :::::::::: FIGURE 6 :::::::::: --> <center> <a href="http://www.eviews.com/blog/arw/panelb.png"><img height="auto" src="http://www.eviews.com/blog/arw/panelb.png" title="Importance Sampler Output" width="360" /></a><br /> <small>Figure 6: Importance Sampler Output</small><br /><br /> </center><!-- :::::::::: FIGURE 6 :::::::::: --> Figures 4 and 6 above illustrate the IRFs obtained using the PFA and importance sampler methods, respectively. In the case of the former, we can see the IRFs with probability bands for adjusted TFP, stock prices, consumption, the real interest rate, and hours worked under the PFA. Examining the confidence bands around the IRFs allows us to conclude that optimism shocks boost consumption and hours worked, as the corresponding IRFs do not contain a zero for at least 20 quarters.<br /><br /> Alternatively, the IRFs of the same variables obtained using the importance sampler yield a different result. For consumption and hours worked, the confidence bands are wider and contain zero. Furthermore, the corresponding point-wise median IRFs are closer to zero compared to those obtained using the PFA. This shows that the PFA exaggerates the effects of optimism shocks on stock prices, consumption, and hours worked, by generating much narrower confidence bands and larger point-wise median IRFs.
In this regard, as pointed out by Uhlig (2005), we can see that the PFA imposes additional identification restrictions when implementing sign and zero restrictions.<br /><br /> To further summarize the results, we present the table below, which reports the forecast error variance decompositions underlying the figures above (the bold entries are point-wise medians, flanked by the 16th and 84th percentiles of the 68 percent equal-tailed probability intervals).<br /><br /> <center> <table style="width:100%"> <tr> <th></th> <th colspan="3">Penalty Function Approach</th> <th colspan="3">Importance Sampler</th> </tr> <tr> <th></th> <th>16%</th> <th>Median</th> <th>84%</th> <th>16%</th> <th>Median</th> <th>84%</th> </tr> <tr> <td>Adjusted TFP</td> <td>0.07</td> <td><b>0.17</b></td> <td>0.29</td> <td>0.03</td> <td><b>0.11</b></td> <td>0.23</td> </tr> <tr> <td>Stock Prices</td> <td>0.54</td> <td><b>0.72</b></td> <td>0.84</td> <td>0.05</td> <td><b>0.29</b></td> <td>0.57</td> </tr> <tr> <td>Consumption</td> <td>0.13</td> <td><b>0.27</b></td> <td>0.43</td> <td>0.03</td> <td><b>0.17</b></td> <td>0.50</td> </tr> <tr> <td>Real Interest Rate</td> <td>0.07</td> <td><b>0.14</b></td> <td>0.23</td> <td>0.08</td> <td><b>0.20</b></td> <td>0.39</td> </tr> <tr> <td>Hours Worked</td> <td>0.20</td> <td><b>0.31</b></td> <td>0.45</td> <td>0.04</td> <td><b>0.18</b></td> <td>0.56</td> </tr> </table> <small>Table I: Forecast Error Variance Decomposition (FEVD)</small><br /><br /></center> Table I shows the contribution of optimism shocks to the Forecast Error Variance Decomposition (FEVD) using the PFA and the importance sampler for the chosen horizon of 40 periods and 68 percent equal-tailed probability intervals. Under the PFA, the share of the FEVD of consumption and hours worked attributable to optimism shocks is 27 and 31 percent, respectively. However, the contribution of optimism shocks to the FEVD of stock prices is 72 percent under the PFA in contrast to 29 percent using the importance sampler.
It should be noted that for most variables, when using the importance sampler, optimism shocks contribute less to the FEVD, and the probability intervals for the FEVD are broader than those obtained under the PFA.<br /><br /> <h3 id="sec5">Conclusion</h3> In this blog entry we presented the ARW add-in for EViews. The add-in is based on the work of ARW (2018) and generates impulse response functions using an importance sampler that accommodates both sign and zero restrictions in the VAR model.<br /><br /> <hr /><h3 id="sec6">References</h3> <ol class="bib2xhtml"> <li><a name="arias-2018"></a>Arias J., Rubio-Ramirez J., and Waggoner D.: Inference Based on SVARs Identified with Sign and Zero Restrictions: Theory and Applications <cite>Econometrica</cite>, 86:685–720, 2018. </li> <li><a name="beaudry-2011"></a>Beaudry P., Nam D., and Wang J.: Do mood swings drive business cycles and is it rational? <cite>NBER Working Paper 17651</cite>, 2011. </li> <li><a name="mountford-2009"></a>Mountford A. and Uhlig H.: What are the effects of fiscal policy shocks? <cite>Journal of Applied Econometrics</cite>, 24:960–992, 2009. </li> <li><a name="uhlig-2005"></a>Uhlig H.: What are the effects of monetary policy on output? Results from an agnostic identification procedure. <cite>Journal of Monetary Economics</cite>, 52(2):381–419, 2005.
</li> </ol></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com4tag:blogger.com,1999:blog-6883247404678549489.post-44432315972472118622019-11-06T10:23:00.000-08:002019-11-06T13:02:01.360-08:00Dealing with the log of zero in regression models<script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"></script> <span style="font-family: "verdana" sans-serif"><i>Author and guest post by Eren Ocakverdi</i><br /><br /> The title of this blog piece is a verbatim excerpt from the Bellego and Pape (2019) paper suggested by Professor David E. Giles in his <a href="https://davegiles.blogspot.com/2019/10/october-reading.html">October reading list</a>. (Editor's note: Professor Giles has recently announced the end of his blog - it is a fantastic resource and will be missed!). The topic is immediately familiar to practitioners who occasionally encounter the difficulty in applied work. 
In this regard, it is reassuring that the frustration is being addressed and that there is indeed an ongoing quest for the <i>silver bullet</i>.<a name='more'></a><br /><br /> <h3>Table of Contents</h3><ol> <li><a href="#sec1">Introduction</a> <li><a href="#sec2">A Novel Approach</a> <li><a href="#sec3">Files</a> <li><a href="#sec4">References</a></ol><br /> <h3 id="sec1">Introduction</h3> Consider the following data generating process where the dependent variable may contain zeros: $$ \log(y_i) = \alpha + x_i^\prime \beta + \epsilon_i \quad \text{with} \quad E(\epsilon_i)=0 $$ The most common remedy to the <i>logarithm of zero value</i> problem among practitioners is to add a common (observation-independent) positive constant to the dependent variable. In other words, to work with the model: $$ \log(y_i + \Delta) = \alpha + x_i^\prime \beta + \omega_i $$ where $ \Delta $ is the corrective constant.<br /><br /> In the aforementioned paper, the authors use Monte Carlo simulations to demonstrate that the bias incurred by this correction is not necessarily negligible for small values of $ \Delta $, and in fact, may be substantial.<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --><center> <a href="http://www.eviews.com/blog/log_of_zero/bias.png"><img height="auto" src="http://www.eviews.com/blog/log_of_zero/bias.png" title="Add-ins Download" width="360" /></a><br /> <small>Figure 1: Estimation bias as a function of $ \Delta $ </small><br /><br /></center><!-- :::::::::: FIGURE 1 :::::::::: --> In order to handle the zeros in model variables, the paper offers a new (complementary) solution that: <ol> <li>Does not generate computational bias by arbitrary normalization. </li> <li>Does not generate correlation between the error term and regressors.
</li> <li>Does not require the deletion of observations.</li> <li>Does not require the estimation of a supplementary parameter.</li> <li>Does not require the addition of a discretionary constant.</li><br /><br /></ol> <h3 id="sec2">A Novel Approach</h3> Bellego and Pape (2019) suggest that instead of adding a common positive constant $ \Delta $, one ought to add some optimal, observation-dependent positive value $ \Delta_{i} $. This strategy results in the following model, which is estimated via GMM: $$ \log(y_i + \Delta_{i}) = \alpha + x_i^\prime \beta + \eta_{i} $$ where $ \Delta_i = \exp(x_i^\prime \beta) $ and $ \eta_i = \log(1 + \exp(\alpha + \epsilon_i)) $.<br /><br /> Since the details can be found in the original paper, here I’d like to replicate the simulation exercise in which the authors illustrate their method and make a comparison with other approaches. (The tables below can be replicated in EViews by running the program file <i>loglinear.prg</i>.)<br /><br /> <!-- :::::::::: FIGURE 2 :::::::::: --><center> <a href="http://www.eviews.com/blog/log_of_zero/table1.png"><img height="auto" src="http://www.eviews.com/blog/log_of_zero/table1.png" title="Add-ins Download" width="360" /></a><br /> <small>Figure 2: Output of OLS estimation (with $ \Delta = 1 $)</small><br /><br /></center><!-- :::::::::: FIGURE 2 :::::::::: --> <!-- :::::::::: FIGURE 3 :::::::::: --><center> <a href="http://www.eviews.com/blog/log_of_zero/table2.png"><img height="auto" src="http://www.eviews.com/blog/log_of_zero/table2.png" title="Add-ins Download" width="360" /></a><br /> <small>Figure 3: Output of Poisson Pseudo Maximum Likelihood (PPML) estimation</small><br /><br /></center><!-- :::::::::: FIGURE 3 :::::::::: --> <!-- :::::::::: FIGURE 4 :::::::::: --><center> <a href="http://www.eviews.com/blog/log_of_zero/table3.png"><img height="auto" src="http://www.eviews.com/blog/log_of_zero/table3.png" title="Add-ins Download" width="360" /></a><br /> <small>Figure 4: Output of
proposed solution (GMM estimation)</small><br /><br /></center><!-- :::::::::: FIGURE 4 :::::::::: --> Simulation results show that both the PPML and the GMM solutions provide correct estimates (i.e. $ \alpha = 0 $ , $ \beta_{1} = \beta_{2} = 1 $), whereas OLS results are biased due to adding a common constant to all data points. Although $ \alpha $ is not identified in the proposed solution, the authors suggest OLS estimation to obtain the coefficient:<br /><br /> <!-- :::::::::: FIGURE 5 :::::::::: --><center> <a href="http://www.eviews.com/blog/log_of_zero/table4.png"><img height="auto" src="http://www.eviews.com/blog/log_of_zero/table4.png" title="Add-ins Download" width="360" /></a><br /> <small>Figure 5: OLS estimation of alpha parameter: $ \log(\exp(\eta_i)-1)=\alpha+\epsilon_i $</small><br /><br /></center><!-- :::::::::: FIGURE 5 :::::::::: --> When zeros are observed in both the dependent and independent variables, the authors suggest a functional coefficient model of the form: $$ \log(y_i) = \alpha + \mathbb{1}_{x_i > 0}\times\log(x_i)\beta_{x_i>0}+\mathbb{1}_{x_i=0}\times\beta_{x_i=0}+\epsilon_i $$ Again, a simulation exercise is carried out to compare the estimated coefficients with different methods. 
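As an aside, the core message of these simulations — that PPML handles zeros in the dependent variable without any $ \Delta $ correction, while OLS on $ \log(y + \Delta) $ is biased — can be sketched outside EViews in a few lines. The snippet below is an illustrative Python sketch with a simple Poisson data generating process (an assumption of mine, not the authors' simulation design or the programs above); PPML is fitted by a hand-rolled Newton–Raphson on the Poisson likelihood:

```python
import numpy as np

rng = np.random.default_rng(12345)
n = 20000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([0.0, 1.0])
y = rng.poisson(np.exp(X @ beta_true)).astype(float)  # zeros occur naturally

# Common fix: OLS on log(y + Delta) with Delta = 1 -- biased
b_ols = np.linalg.lstsq(X, np.log(y + 1.0), rcond=None)[0]

# PPML: Newton-Raphson on the Poisson likelihood, warm-started at the OLS fit
b = b_ols.copy()
for _ in range(100):
    mu = np.exp(X @ b)
    step = np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (y - mu))
    b += step
    if np.abs(step).max() < 1e-10:
        break

print(b_ols.round(2))  # slope well below the true value of 1
print(b.round(2))      # approximately (0, 1)
```

With this DGP the attenuation of the OLS slope is substantial, mirroring the bias pattern in Figure 1 above.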
(The tables below can be reproduced in EViews by running the program <i>loglog.prg</i>.)<br /><br /> <!-- :::::::::: FIGURE 6 :::::::::: --><center> <a href="http://www.eviews.com/blog/log_of_zero/table5.png"><img height="auto" src="http://www.eviews.com/blog/log_of_zero/table5.png" title="Add-ins Download" width="360" /></a><br /> <small>Figure 6: OLS estimation</small><br /><br /></center><!-- :::::::::: FIGURE 6 :::::::::: --> <!-- :::::::::: FIGURE 7 :::::::::: --><center> <a href="http://www.eviews.com/blog/log_of_zero/table6.png"><img height="auto" src="http://www.eviews.com/blog/log_of_zero/table6.png" title="Add-ins Download" width="360" /></a><br /> <small>Figure 7: PPML estimation</small><br /><br /></center><!-- :::::::::: FIGURE 7 :::::::::: --> <!-- :::::::::: FIGURE 8 :::::::::: --><center> <a href="http://www.eviews.com/blog/log_of_zero/table7.png"><img height="auto" src="http://www.eviews.com/blog/log_of_zero/table7.png" title="Add-ins Download" width="360" /></a><br /> <small>Figure 8: GMM estimation</small><br /><br /></center><!-- :::::::::: FIGURE 8 :::::::::: --> Simulation results show that the suggested (flexible) formulation of the $ \beta $ coefficients works well for all estimation methods ($ \alpha=0 $ and $ \beta = 1.5 $).<br /><br /> <hr /><h3 id="sec3">Files</h3> <ol> <li><a href="http://www.eviews.com/blog/log_of_zero/deltasimul.prg">deltasimul.prg</a> <li><a href="http://www.eviews.com/blog/log_of_zero/loglinear.prg">loglinear.prg</a> <li><a href="http://www.eviews.com/blog/log_of_zero/loglog.prg">loglog.prg</a></ol><br /> <hr /><h3 id="sec4">References</h3> <ol class="bib2xhtml"> <!-- Authors: Bellego and Paper (2019) --><li><a name="bellego_pape-2019"></a>Bellego, C. and L-D. Pape. Dealing with the log of zero in regression models. 
<cite>CREST: Working Paper</cite>, No. 2019-13, 2019.</li></ol></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com1tag:blogger.com,1999:blog-6883247404678549489.post-48467357268436160422019-10-14T13:50:00.001-07:002019-12-03T12:39:35.078-08:00Sign Restricted VAR Add-In<script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"></script> <span style="font-family: "verdana" sans-serif"><i>Authors and guest post by Davaajargal Luvsannyam and Ulziikhutag Munkhtsetseg</i><br /><br /> Nowadays, sign restricted VARs (SRVARs) are becoming popular and can be considered an indispensable tool for macroeconomic analysis. They have been used in macroeconomic policy analysis, in investigating the sources of business cycle fluctuations, and in providing a benchmark against which modern dynamic macroeconomic theories are evaluated. Traditional structural VARs are identified with exclusion restrictions, which are sometimes difficult to justify by economic theory.
In contrast, SRVARs can easily identify structural shocks since in many cases, economic theory only offers guidance on the sign of structural impulse responses on impact.<a name='more'></a><br /><br /> <h3>Table of Contents</h3><ol> <li><a href="#sec1">Introduction</a> <li><a href="#sec2">Bayesian Inference of SRVARs</a> <li><a href="#sec3">Recovering Structural Shocks from an SRVAR</a> <li><a href="#sec4">SRVAR EViews Add-in</a> <li><a href="#sec5">Conclusion</a> <li><a href="#sec6">References</a></ol><br /> <h3 id="sec1">Introduction</h3> Following the seminal work of Uhlig (2005), the uniform-normal-inverse-Wishart posterior over the orthogonal reduced-form parameterization has been dominant for SRVARs. Recently, Arias, Rubio-Ramirez and Waggoner (2018), henceforth ARW, developed algorithms to independently draw from a family of conjugate posterior distributions over the structural parameterization when sign and zero restrictions are used to identify SRVARs. In particular, they show the dangers of using penalty function approaches (PFA) when implementing sign and zero restrictions to identify structural VARs (SVARs). In this blog, we describe the SRVAR add-in based on Uhlig (2005).<br /><br /> The main difference between a classic VAR and a sign restricted VAR is interpretation. For traditional structural VARs (SVARs), there is a unique point estimate of the structural impulse response function. Because sign restrictions represent inequality restrictions, sign restricted VARs are only set identified. In other words, the data are potentially consistent with a wide range of structural models that are all admissible in that they satisfy the identifying restrictions.<br /><br /> There have been both frequentist and Bayesian approaches to summarizing estimates of the admissible set of sign-identified structural VAR models. However, the most common approach for sign restricted VARs is based on Bayesian methods of inference.
For example, Uhlig (2005) used a Bayesian approach, which is computationally simple and provides a clean way of drawing error bands for impulse responses.<br /><br /> <h3 id="sec2">Bayesian Inference of SRVARs</h3> A typical VAR model is summarized by \begin{align} Y_t = B_1 Y_{t-1} + B_2 Y_{t-2} + \cdots + B_l Y_{t-l} + u_t, \quad t=1, \ldots, T \label{eq1} \end{align} where $ Y_t $ is an $ m\times 1 $ vector of data, $ B_i $ are coefficient matrices of size $ m\times m $, and $ u_t $ is the one-step ahead prediction error with variance-covariance matrix $ \mathbf{\Sigma} $. An intercept and a time trend are also sometimes added to \eqref{eq1}.<br /><br /> Next, stack the system in \eqref{eq1} as follows: \begin{align} \mathbf{Y} = \mathbf{XB} + \mathbf{u} \label{eq2} \end{align} where $ \mathbf{Y} = [Y_{1}, \ldots, Y_{T}]^{\prime} $, $ \mathbf{X} = [X_{1}, \ldots, X_{T}]^{\prime} $ and $ X_{t} = [Y_{t-1}^{\prime}, \ldots, Y_{t-l}^{\prime}] $, $ \mathbf{u} = [u_{1}, \ldots, u_{T}]^{\prime} $, and $ \mathbf{B} = [B_{1}, \ldots, B_{l}]^{\prime} $. It is also assumed that the $ u_{t} $'s are independent and normally distributed with covariance matrix $ \mathbf{\Sigma} $.<br /><br /> Model \eqref{eq2} is typically estimated using maximum likelihood (ML) estimation.
In particular, the ML estimates of $ \left(\mathbf{B}, \mathbf{\Sigma}\right) $ are given by: \begin{align} \widehat{\mathbf{B}} &= \left(\mathbf{X}^{\prime}\mathbf{X}\right)^{-1}\mathbf{X}^{\prime}\mathbf{Y} \label{eq3} \\ \widehat{\mathbf{\Sigma}} &= \frac{1}{T}\left(\mathbf{Y} - \mathbf{X}\widehat{\mathbf{B}}\right)^{\prime}\left(\mathbf{Y} - \mathbf{X}\widehat{\mathbf{B}}\right) \label{eq4} \end{align} Next, note that a Normal-Wishart distribution for $ \left(\mathbf{B}, \mathbf{\Sigma}\right) $ centered around $ \left(\bar{\mathbf{B}}, \mathbf{S}\right) $ is characterized by the mean coefficient matrix $ \bar{\mathbf{B}} $, a positive definite mean covariance matrix $ \mathbf{S} $, an additional positive definite matrix $ \mathbf{N} $ of size $ ml \times ml $, and a degrees-of-freedom parameter $ v \geq 0 $. In this regard, Uhlig (2005) takes the prior and posterior for $ \left(\mathbf{B}, \mathbf{\Sigma}\right) $ to belong to the Normal-Wishart family: $ \mathbf{\Sigma}^{-1} $ follows the Wishart distribution $ W\left(\mathbf{S}^{-1} / v, v\right) $ with $ E\left(\mathbf{\Sigma}^{-1}\right) = \mathbf{S}^{-1} $, whereas the columnwise vectorized form of the coefficient matrix, $ vec\left(\mathbf{B}\right) $, conditional on $ \mathbf{\Sigma} $, is assumed to follow the Normal distribution $ \mathcal{N}\left(vec\left(\bar{\mathbf{B}}\right), \mathbf{\Sigma} \bigotimes \mathbf{N}^{-1}\right) $.<br /><br /> Furthermore, Proposition A.1 in Uhlig (1994) shows that if the prior is characterized by the set of parameters $ \left(\bar{\mathbf{B}}_{0}, \mathbf{S}_{0}, \mathbf{N}_{0}, v_{0}\right) $, the posterior is then parameterized by the set $ \left(\bar{\mathbf{B}}_{T}, \mathbf{S}_{T}, \mathbf{N}_{T}, v_{T}\right) $ where: \begin{align} v_{T} &= T + v_{0} \label{eq5} \\ \mathbf{N}_{T} &= \mathbf{N}_{0} + \mathbf{X}^{\prime}\mathbf{X} \label{eq6} \\ \bar{\mathbf{B}}_{T} &= \mathbf{N}_{T}^{-1} \left(\mathbf{N}_{0}\bar{\mathbf{B}}_{0} + \mathbf{X}^{\prime}\mathbf{X}\widehat{\mathbf{B}}\right) \label{eq7} \\ \mathbf{S}_{T} &=
\frac{v_{0}}{v_{T}}\mathbf{S}_{0} + \frac{T}{v_{T}}\widehat{\mathbf{\Sigma}} + \frac{1}{v_{T}}\left(\widehat{\mathbf{B}} - \bar{\mathbf{B}}_{0}\right)^{\prime}\mathbf{N}_{0}\mathbf{N}_{T}^{-1}\left(\widehat{\mathbf{B}} - \bar{\mathbf{B}}_{0}\right) \label{eq8} \end{align} For instance, in the case of a flat prior with $ \bar{\mathbf{B}}_{0} $ and $ \mathbf{S}_{0} $ arbitrary and $ \mathbf{N}_{0} = v_{0} = 0 $, Uhlig (2005) shows that $ \bar{\mathbf{B}}_{T} = \widehat{\mathbf{B}}, \mathbf{S}_{T} = \widehat{\mathbf{\Sigma}}, \mathbf{N}_{T} = \mathbf{X}^{\prime}\mathbf{X}, $ and $ v_{T} = T $.<br /><br /> <h3 id="sec3">Recovering Structural Shocks from an SRVAR</h3> Here we consider two approaches to recovering the structural shocks from an SRVAR. The first is based on what's known as the <b>rejection method</b>. In particular, the latter consists of the following algorithmic steps: <ol> <li>Run an unrestricted VAR in order to get $ \widehat{\mathbf{B}} $ and $ \widehat{\mathbf{\Sigma}} $. </li> <li>Randomly draw $ \bar{\mathbf{B}}_{T} $ and $ \mathbf{S}_{T} $ from the posterior distributions. </li> <li>Extract the orthogonal innovations from the model using a Cholesky decomposition.</li> <li>Calculate the resulting impulse responses from Step 3.</li> <li>Randomly draw an orthogonal impulse vector $ \mathbf{\alpha} $.</li> <li>Multiply the responses from Step 4 by $ \mathbf{\alpha} $ and check if they match the imposed signs.</li> <li>If yes, keep the response. If not, drop the draw.</li></ol> Note here that a draw $ \mathbf{\alpha} $ from an $ m $-dimensional unit sphere is easily obtained by drawing $ \widetilde{\mathbf{\alpha}} $ from an $ m $-dimensional standard normal distribution and then normalizing its length to unity. In other words, $ \mathbf{\alpha} = \widetilde{\mathbf{\alpha}} / ||\widetilde{\mathbf{\alpha}}||$.<br /><br /> The second approach, proposed in Uhlig (2005), is called the <b>penalty function method</b>.
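Before turning to that approach, the sign-check loop in Steps 5-7 of the rejection method is simple enough to sketch in a few lines. The snippet below is illustrative Python, not the add-in's code: random numbers stand in for the Step 4 impulse responses, and the sign pattern and restricted horizons are hypothetical choices of mine:

```python
import numpy as np

rng = np.random.default_rng(2019)
m, K = 5, 6          # number of variables, IRF horizon
K_restr = 2          # horizons on which the signs are imposed

# Stand-in for Step 4: irf[k, j, i] is the response of variable j at step k to
# the i-th Cholesky-orthogonalized innovation (random placeholder values here).
irf = 0.1 * rng.standard_normal((K, m, m))

# Hypothetical restrictions: variables 1 and 2 must respond negatively.
signs = np.array([0, -1, -1, 0, 0])

kept = []
for _ in range(1000):
    a = rng.standard_normal(m)
    a /= np.linalg.norm(a)            # Step 5: uniform draw on the unit sphere
    resp = irf @ a                    # Step 6: (K, m) responses to the impulse vector
    ok = all(np.all(s * resp[:K_restr, j] >= 0)
             for j, s in enumerate(signs) if s != 0)
    if ok:
        kept.append(resp)             # Step 7: keep draws matching the signs

print(len(kept), "of 1000 draws accepted")
```

Every retained response satisfies the imposed signs by construction; the add-in wraps this check around the full posterior simulation of Steps 1-4.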
In particular, the latter proposes the minimization of a penalty function given by: \begin{align} b(x) = \begin{cases} x &\quad \text{if } x \leq 0\\ 100 x &\quad \text{if } x > 0 \end{cases} \end{align} which penalizes positive responses in linear proportion and rewards negative responses in linear proportion, albeit at a slope 100 times smaller than the penalty applied on the positive side.<br /><br /> The steps involved in this algorithm can be summarized as follows: <ol> <li>Run an unrestricted VAR in order to get $ \widehat{\mathbf{B}} $ and $ \widehat{\mathbf{\Sigma}} $. </li> <li>Randomly draw $ \bar{\mathbf{B}}_{T} $ and $ \mathbf{S}_{T} $ from the posterior distributions. </li> <li>Extract the orthogonal innovations from the model using a Cholesky decomposition.</li> <li>Calculate the resulting impulse responses from Step 3.</li> <li>Minimize the penalty function with respect to an orthogonal impulse vector $ \mathbf{\alpha} $.</li> <li>Multiply the responses from Step 4 by $ \mathbf{\alpha}. $</li> </ol> Now, let $ r_{(j, \mathbf{\alpha})}(k) $ denote the response of variable $ j $ at step $ k $ to the impulse vector $ \mathbf{\alpha} $. Then the underlying minimization problem can be written as follows: \begin{align} \min_{\mathbf{\alpha}} \mathbf{\Psi}(\mathbf{\alpha}) = \sum_{j \in J}\sum_{k \in K}b\left(l_{j}\frac{r_{(j, \mathbf{\alpha})}(k)}{\sigma_{j}}\right) \end{align} To treat the signs equally, let $ l_j=-1 $ if the sign restriction is positive and $ l_j=1 $ if the sign restriction is negative. The variables are scaled by the standard errors, $ \sigma_{j} $, of their first differences. The impulse vector $ \mathbf{\alpha} $ on the unit sphere in $ n $-space is parameterized by randomly drawing an $ (n-1) $-dimensional vector from a standard Normal distribution and mapping the draw onto the unit sphere using a stereographic projection.<br /><br /> <h3 id="sec4">SRVAR EViews Add-in</h3> Now we turn to the implementation of the SRVAR add-in.
First, we need to download and install the add-in from the EViews website. It can be found at <a href="https://www.eviews.com/Addins/srvar.aipz">https://www.eviews.com/Addins/srvar.aipz</a>. We can also do this from inside EViews itself. In particular, after opening EViews, click on <b>Add-ins</b> from the main menu, and click on <b>Download Add-ins...</b>. From here, locate the <i>srvar</i> add-in and click on <b>Install</b>.<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <a href="http://www.eviews.com/blog/srvar/addin_download.png"><img height="auto" src="http://www.eviews.com/blog/srvar/addin_download.png" title="Add-ins Download" width="360" /></a><br /> <small>Figure 1: Add-ins Download</small><br /><br /> </center><!-- :::::::::: FIGURE 1 :::::::::: --> After installing, we import the data file named <i>uhligdata1.xls</i>, which can be found in the installation folder, typically located in <b>[Windows User Folder]/Documents/EViews Addins/srvar</b>.<br /><br /> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <a href="http://www.eviews.com/blog/srvar/workfile.png"><img height="auto" src="http://www.eviews.com/blog/srvar/workfile.png" title="Uhlig (2005) Data" width="360" /></a><br /> <small>Figure 2: Uhlig (2005) Data</small><br /><br /> </center><!-- :::::::::: FIGURE 2 :::::::::: --> Next, we take 100 times the logarithm of the series <b>gdpc1</b> (real GDP), <b>gdpdef</b> (GDP price deflator), <b>cprindex</b> (commodity price index), <b>totresns</b> (total reserves), and <b>bognonbr</b> (non-borrowed reserves). To do this, we can issue the following EViews commands:<br /><br /> <PRE><br />series gdpc1 = @log(gdpc1)*100.0<br />series gdpdef = @log(gdpdef)*100.0<br />series cprindex = @log(cprindex)*100.0<br />series totresns = @log(totresns)*100.0<br />series bognonbr = @log(bognonbr)*100.0<br /></PRE> We now replicate Figures 5, 6, and 14 from Uhlig (2005).
In particular, using the aforementioned variables, Uhlig (2005) first estimates a VAR with 12 lags and no constant or trend. We can of course do this in EViews as follows:<br /><br /> <ol> <li>Click on <b>Quick/Estimate VAR...</b> to open the VAR estimation window.</li> <li>In the VAR estimation window, under <b>Endogenous variables</b>, enter <i>gdpc1 gdpdef cprindex fedfunds bognonbr totresns</i>.</li> <li>Under <b>Lag Intervals for Endogenous</b> enter <i>1 12</i></li> <li>Under the <b>Exogenous variables</b>, remove the <i>c</i> to remove the constant.</li> <li>Hit OK</li> </ol> <!-- :::::::::: FIGURE 3 :::::::::: --> <center> <a href="http://www.eviews.com/blog/srvar/basic_var.png"><img height="auto" src="http://www.eviews.com/blog/srvar/basic_var.png" title="VAR Estimation Window" width="360" /></a><br /> <small>Figure 3: VAR Estimation Window</small><br /><br /> </center><!-- :::::::::: FIGURE 3 :::::::::: --> <!-- :::::::::: FIGURE 4 :::::::::: --><center> <a href="http://www.eviews.com/blog/srvar/basic_var_results.png"><img height="auto" src="http://www.eviews.com/blog/srvar/basic_var_results.png" title="VAR Estimation Results" width="360" /></a><br /> <small>Figure 4: VAR Estimation Results</small><br /><br /> </center><!-- :::::::::: FIGURE 4 :::::::::: --> Next, we obtain the 60 period-ahead impulse response function using asymptotic standard error bands and <b>fedfunds</b> as the impulse.
We can do this as follows:<br /><br /> <ol> <li>From the VAR estimation window, click on <b>View/Impulse Response...</b> to open the impulse response estimation window.</li> <li>Under <b>Display Format</b>, click <b>Multiple Graphs</b>.</li> <li>Under <b>Response Standard Errors</b>, click on <b>Analytic (asymptotic)</b></li> <li>Under <b>Impulses</b>, enter <i>fedfunds</i>.</li> <li>Under <b>Responses</b> enter <i>gdpc1 gdpdef cprindex bognonbr totresns</i></li> <li>Under <b>Periods</b>, enter <i>60</i></li> <li>Hit OK</li> </ol> <!-- :::::::::: FIGURE 5 :::::::::: --> <center> <a href="http://www.eviews.com/blog/srvar/basic_irf.png"><img height="auto" src="http://www.eviews.com/blog/srvar/basic_irf.png" title="IRF Estimation Window" width="360" /></a><br /> <small>Figure 5: IRF Estimation Window</small><br /><br /> </center><!-- :::::::::: FIGURE 5 :::::::::: --> Finally, Figure 5 of Uhlig (2005) is replicated below: <!-- :::::::::: FIGURE 6 :::::::::: --> <center> <a href="http://www.eviews.com/blog/srvar/basic_irf_graphs.png"><img height="auto" src="http://www.eviews.com/blog/srvar/basic_irf_graphs.png" title="IRF Graphs" width="360" /></a><br /> <small>Figure 6: IRF Graphs</small><br /><br /> </center><!-- :::::::::: FIGURE 6 :::::::::: --> The price puzzle pointed out by Sims (1992) is clearly visible in the graphs above. In particular, the GDP deflator increases after a contractionary monetary policy shock. By contrast, the sign restricted identification approach (shown in Figures 7 and 8 below) avoids the price puzzle by construction.<br /><br /> To demonstrate how sign restricted VARs avoid the price puzzle, we now make use of the SRVAR add-in. In this regard, we first create the sign restriction vector. In particular, Uhlig (2005) suggests that the impulse responses be positive on the 4th variable <b>fedfunds</b>, and negative on the 2nd variable <b>gdpdef</b>, the 3rd variable <b>cprindex</b>, and the 5th variable <b>bognonbr</b>.
Thus, we create the sign restriction vector by issuing the following command: <PRE><br />vector rest = @fill(+4, -2, -3, -5)<br /></PRE> Finally, we invoke the SRVAR add-in and proceed with the rejection method as the SRVAR impulse response algorithm. We do this by clicking on the <b>Add-ins</b> menu in the main EViews menu, and clicking on <b>Sign restricted VAR</b>. This opens the SRVAR add-in window. There, we enter the following details:<br /><br /> <ol> <li>Under <b>Endogenous variables</b> enter <i>gdpc1 gdpdef cprindex fedfunds bognonbr totresns</i>.</li> <li>Click on <b>Include constant</b> to remove the checkmark.</li> <li>Under <b>Number of lags</b>, enter <i>12</i>.</li> <li>In the <b>Sign restriction vector</b> textbox enter <i>+4, -2, -3, -5</i>.</li> <li>In the <b>Number of horizons</b> enter <i>60</i></li> <li>For the <b>Maximum number of restrictions</b> enter <i>6</i></li> <li>Hit OK</li> </ol> The steps above produce a graph of sign restricted VAR impulse responses which correspond to Figure 6 in Uhlig (2005). <!-- :::::::::: FIGURE 7 :::::::::: --><center> <a href="http://www.eviews.com/blog/srvar/srvar_irf_graphs.png"><img height="auto" src="http://www.eviews.com/blog/srvar/srvar_irf_graphs.png" title="SRVAR Impulse Responses (Rejection Method)" width="360" /></a><br /> <small>Figure 7: SRVAR Impulse Responses (Rejection Method)</small><br /><br /></center><!-- :::::::::: FIGURE 7 :::::::::: --> From the SRVAR impulse response graph, it is readily seen that there is no price puzzle by construction. However, the impulse response of real GDP is within a ±0.2% interval around zero.
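The restriction vector encodes the imposed sign in the sign of each entry and the variable's position in its magnitude. A small Python helper (ours, purely illustrative) makes the convention explicit:

```python
def parse_restrictions(rest):
    # Decode entries such as +4 ("positive response of the 4th variable")
    # or -2 ("negative response of the 2nd variable") into a map from
    # 0-based variable index to imposed sign.
    return {abs(int(r)) - 1: (1 if r > 0 else -1) for r in rest}

print(parse_restrictions([+4, -2, -3, -5]))
# {3: 1, 1: -1, 2: -1, 4: -1}
```

Under this encoding, variables 1 (gdpc1) and 6 (totresns) are left unrestricted, matching the agnostic treatment of real GDP above.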
Alternatively, if we use the SRVAR penalty function algorithm, the analogous figure is presented below: <!-- :::::::::: FIGURE 8 :::::::::: --><center> <a href="http://www.eviews.com/blog/srvar/srvar_irf_graphs_penalty.png"><img height="auto" src="http://www.eviews.com/blog/srvar/srvar_irf_graphs_penalty.png" title="SRVAR Impulse Responses (Penalty Function Method)" width="360" /></a><br /> <small>Figure 8: SRVAR Impulse Responses (Penalty Function Method)</small><br /><br /></center><!-- :::::::::: FIGURE 8 :::::::::: --> <h3 id="sec5">Conclusion</h3> In this blog entry we presented the sign restricted VAR add-in for EViews. The add-in is based on the work of Uhlig (2005) and generates impulse response curves based on Bayesian inference which accommodate sign restrictions in the VAR model. In the next blog, we will describe the implementation of the ARW add-in which will show how to impose zero restrictions on the impact period of the impulse response function.<br /><br /> <hr /><h3 id="sec6">References</h3> <ol class="bib2xhtml"> <!-- Authors: Uhlig Harald --><li><a name="uhlig-1994"></a>Uhlig, Harald. What macroeconomists should know about unit roots: a Bayesian perspective. <cite>Econometric Theory</cite>, 10:645–671, 1994.</li> <!-- Authors: Uhlig Harald --><li><a name="uhlig-2005"></a>Uhlig, Harald. What are the effects of monetary policy on output? Results from an agnostic identification procedure.
<cite>Journal of Monetary Economics</cite>, 52(2):381–419, 2005.</li> </ol></span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com18tag:blogger.com,1999:blog-6883247404678549489.post-50858909833227118062019-07-17T13:20:00.001-07:002019-07-17T13:20:09.515-07:00Pyeviews update: now compatible with Python 3<span style="font-family: "verdana" , sans-serif;">If you’re a user of both EViews and Python, then you may already be aware of pyeviews (if not, take a look at our original blog post <a href="http://blog.eviews.com/2016/03/pyeviews-python-eviews.html" target="_blank">here</a> or our whitepaper <a href="http://www.eviews.com/download/whitepapers/pyeviews.pdf" target="_blank">here</a>). </span><br /><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;">Pyeviews has been updated and is now compatible with Python 3. We’ve also added support for numpy structured arrays and several additional time series frequencies. 
</span><br /><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;">You can get these updates through pip:</span><br /><br /><span style="font-family: "courier new" , "courier" , monospace;">pip install pyeviews</span><br /><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;">Through the conda-forge channel in Anaconda:</span><br /><br /><span style="font-family: "courier new" , "courier" , monospace;">conda install pyeviews -c conda-forge</span><br /><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;">Or by typing:</span><br /><br /><span style="font-family: "courier new" , "courier" , monospace;">python setup.py install</span><br /><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;">in your installation directory.</span><br /><span style="font-family: "verdana" , sans-serif;"><br /></span><span style="font-family: "verdana" , sans-serif;"><br /></span><br />IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com0tag:blogger.com,1999:blog-6883247404678549489.post-22104609406423388342019-06-26T13:04:00.000-07:002019-06-27T09:54:36.913-07:00Bayesian VAR Prior Comparison<script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], ubar: ['{\\mkern 0.5mu\\underline{\\mkern-0.5mu#1\\mkern-0.5mu}\\mkern 0.5mu}', 1], undrln: ['{\\rlap{{\\hspace{-1pt}}\\underline{\\hphantom{H}}}{#1^{#4}\\vphantom{\\beta}}_{\\hspace{#3}\\vphantom{\\underline{}}_{#2}}}', 4], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" 
src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"></script> <span style="font-family: "verdana" sans-serif"> EViews 11 introduces a completely new Bayesian VAR engine that replaces the one from previous versions of EViews. The new engine offers two major new priors, the Independent Normal-Wishart and the Giannone, Lenza and Primiceri priors, which complement the previously implemented Minnesota/Litterman, Normal-Flat, Normal-Wishart and Sims-Zha priors. The priors were also enhanced with new options for forming the underlying covariance matrices that make up essential components of the prior.<a name='more'></a><br /><br /> The covariance matrices that form the prior specification are generally formed by specifying a matrix alongside a number of hyper-parameters which define any non-zero elements of the matrix. The hyper-parameters themselves are either selected by the researcher, or taken from an initial error covariance estimate. Sensitivity of the posterior distribution to the choice of hyper-parameter is a well-researched topic, with practitioners often selecting many different hyper-parameter values to check that their analysis does not change based solely on (an often arbitrary) choice of parameter.
However, this sensitivity analysis is restricted to the parameters selected by the researcher, with often only passing thought given to those estimated by an initial covariance estimate.<br /><br /> Since EViews 11 offers a number of choices for estimating the initial covariance, we thought it would be interesting to perform a comparison of forecast accuracy both across prior types, and across choices of initial covariance estimate.<br /><br /> <h3>Table of Contents</h3><ol> <li><a href="#sec1">Prior Technical Details</a> <li><a href="#sec2">Estimating a Bayesian VAR in EViews</a> <li><a href="#sec3">Data and Models</a> <li><a href="#sec4">Results</a> <li><a href="#sec5">Conclusions</a></ol><br /> <h3 id="sec1">Prior Technical Details</h3> We will not provide in-depth details of each prior type here, leaving such details to the <a href="http://www.eviews.com/help/helpintro.html#page/content%2FbVAR-Bayesian_VAR_Models.html%23">EViews documentation</a> and its <a href="http://www.eviews.com/help/content/bVAR-References.html#">references</a>. However we will provide a summary with enough details to demonstrate how an initial covariance matrix influences each prior type. 
We will also, for the sake of notational convenience, ignore exogenous variables and the constant in our discussion.<br /><br /> First we write the VAR as: $$y_t = \sum_{j=1}^p\Pi_jy_{t-j}+\epsilon_t$$ where <ul> <li><h4></h4>$y_t = (y_{1t},y_{2t}, \ldots, y_{Mt})'$ is an $M$-vector of endogenous variables <li><h4></h4>$\Pi_j$ are $M\times M$ matrices of lag coefficients <li><h4></h4>$\epsilon_t$ is an $M$-vector of errors where we assume $\epsilon_t\sim N(0,\Sigma)$<br /><br /></ul> If we define $x_t=(y_{t-1}', \ldots, y_{t-p}')'$, stack variables to form, for example, $Y = (y_1, \ldots, y_T)'$, and let $y=vec(Y')$, the multivariate normal assumption on $\epsilon_t$ gives us: $$(y\mid \beta)\sim N((X\otimes I_M)\beta, I_T\otimes \Sigma)$$ Bayesian estimation of VAR models then centers around the derivation of posterior distributions of $\beta$ and $\Sigma$ based upon the above multivariate distribution, and prior distributional assumptions on $\beta$ and $\Sigma$.<br /><br /> To demonstrate how each prior relies on an initial estimate of $\Sigma$, for the priors other than Litterman, we only need to consider the component of each prior relating to the distribution of $\beta$, and in particular its covariance. <ol> <li><h4><i>Litterman/Minnesota Prior</i></h4> $$\beta \sim N\left(\undrln{\beta}{Mn}{2.25pt}{}, \undrln{V}{Mn}{2.25pt}{}\right)$$ $\undrln{V}{Mn}{2.25pt}{}$ is assumed to be a diagonal matrix.
The diagonal elements corresponding to endogenous variables, $i,j$ at lag $l$ are specified by: $$\undrln{V}{Mn, i,j}{-4.5pt}{l} = \begin{cases} \left(\frac{\lambda_1}{l^{\lambda_3}}\right)^2 &\text{for } i = j\\ \left(\frac{\lambda_1 \lambda_2 \sigma_i}{l^{\lambda_3} \sigma_j}\right)^2 &\text{for } i \neq j \end{cases} $$ where $\lambda_1$, $\lambda_2$ and $\lambda_3$ are hyper-parameters chosen by the researcher, and $\sigma_i$ is the square root of the corresponding $(i,i)^{\text{th}}$ element of an initial estimate of $\Sigma$.<br /><br /> The Litterman/Minnesota prior also assumes that $\Sigma$ is fixed, forming no prior on $\Sigma$, just using the initial estimate as given.<br /><br /> <li><h4><i>Normal-Flat and Normal-Wishart</i></h4> $$\beta\mid\Sigma\sim N\left(\undrln{\beta}{N}{2.25pt}{}, \undrln{H}{N}{0pt}{}\otimes\Sigma\right)$$ where $\undrln{H}{N}{0pt}{} = c_3I_M$ and $c_3$ is a chosen hyper-parameter. As such, the Normal-Flat and Normal-Wishart priors do not rely on an initial estimate of the error covariance at all.<br /><br /> <li><h4><i>Independent Normal-Wishart</i></h4> $$\beta\sim N\left(\undrln{\beta}{INW}{2.25pt}{}, \undrln{H}{INW}{0pt}{}\otimes\Sigma\right)$$ where, again, $\undrln{H}{INW}{0pt}{} = c_3I_M$ and $c_3$ is a chosen hyper-parameter. Thus, like the Normal-Flat and Normal-Wishart priors the prior matrices do not depend upon an initial $\Sigma$ estimate. However, the Independent Normal-Wishart requires an MCMC chain to derive the posterior distributions, and the MCMC chain does require an initial estimate for $\Sigma$ to start the chain (although, hopefully, the impact of this starting estimate should be minimal).<br /><br /> <li><h4><i>Sims-Zha</i></h4> $$\beta\mid\beta_0\sim N\left(\undrln{\beta}{SZ}{2.25pt}{}, \undrln{H}{SZ}{0pt}{}\otimes\Sigma\right)$$ $\undrln{H}{SZ}{0pt}{}$ is assumed to be a diagonal matrix. 
The diagonal elements corresponding to endogenous variables, $i,j$ at lag $l$ are specified by: $$\undrln{H}{SZ, i, j}{-4.5pt}{l} = \left(\frac{\lambda_0\lambda_1}{\sigma_j l^{\lambda_3}}\right)^2 \text{for } i = j$$ where $\lambda_0$, $\lambda_1$ and $\lambda_3$ are hyper-parameters chosen by the researcher, and $\sigma_j$ is the square root of the corresponding $(j,j)^{\text{th}}$ element of an initial estimate of $\Sigma$.<br /><br /> <li><h4><i>Giannone, Lenza and Primiceri</i></h4> $$\beta\mid\beta_0\sim N(\undrln{\beta}{GLP}{2.25pt}{}, \undrln{H}{GLP}{0pt}{}\otimes\Sigma)$$ $\undrln{H}{GLP}{0pt}{}$ is assumed to be a diagonal matrix. The diagonal elements corresponding to endogenous variables, $i,j$ at lag $l$ are specified by: $$\undrln{H}{GLP,i,j}{-4.5pt}{l} = \left(\frac{\lambda_1}{\phi_j l^{\lambda_3}}\right)^2 \text{for } i = j$$ where $\lambda_1$, $\lambda_3$ and $\phi_j$ are hyper-parameters of the prior.<br /><br /> GLP's method revolves around using optimization techniques to select the optimal hyper-parameter values. However, it is possible to optimize only a subset of the hyper-parameters and manually set the others. $\phi_j$ is often set, rather than optimized, as $\phi_j = \sigma_j$, the square root of the corresponding $(j,j)^{\text{th}}$ element of an initial estimate of $\Sigma$. Even when $\phi_j$ is optimized rather than set, an initial estimate is used as the starting point of the optimizer.<br /><br /></ol> Of these priors, only the normal-flat and normal-Wishart priors do not rely on an initial estimate of $\Sigma$ at all. For the remaining priors, consequently, the method used for the initial estimate might have a large impact on the final results.<br /><br /> Different implementations of Bayesian VAR estimation use different methods to calculate the initial $\Sigma$. Some of these methods are:<br /><br /> <ul> <li><h4></h4>A classical VAR model. <li><h4></h4>A classical VAR model with the off-diagonal elements replaced with zero.
<li><h4></h4>A univariate AR(p) model for each endogenous variable (forcing $\Sigma$ to be diagonal). <li><h4></h4>A univariate AR(1) model for each endogenous variable (forcing $\Sigma$ to be diagonal).<br /><br /></ul> With each of these methods, there is also the decision as to whether to degree-of-freedom adjust the final estimate (and if so, by what factor), and whether to include any exogenous variables from the Bayesian VAR in the calculation of the classical VAR or univariate AR models.<br /><br /> Bayesian VAR priors can be complemented with the addition of dummy-observation priors to increase the predictive power of the model. There are two specific priors - the sum-of-coefficients prior that adds additional observations to the start of the data to account for any unit root issues, and the dummy-initial-observation prior which adds additional observations to account for cointegration.<br /><br /> With the addition of extra observations to the data used in the Bayesian prior, there is also a choice to be made as to whether those additional observations are also included in any initial covariance estimation.<br /><br /> <h3 id="sec2">Estimating a Bayesian VAR in EViews</h3> Estimating VARs in EViews is straightforward: you simply select the variables you want in your VAR, right-click, select <i>Open As VAR</i> and then fill in the details of the VAR, including the estimation sample and the number of lags.
For Bayesian VARs the only additional steps that need to be taken are changing the VAR type to Bayesian, and then filling in the details of the prior you want to use and any hyper-parameter specification.<br /><br /> For full details on how to estimate a Bayesian VAR in EViews, refer to the <a href="http://www.eviews.com/help/content/bVAR-Estimating_a_Bayesian_VAR_in_EViews.html#">documentation</a>, and <a href="http://www.eviews.com/help/content/bVAR-Examples.html#">examples</a>.<br /><br /> However we’ve also provided a simple video demonstration of both importing the data used in this blog post, and estimating and forecasting the normal-Wishart prior.<br /><br /> <center><iframe width="640" height="540" src="http://www.eviews.com/blog/bvar/video/video_player.html?embedIFrameId=embeddedSmartPlayerInstance" webkitallowfullscreen=""></iframe><br /><br /></center> <h3 id="sec3">Data and Models</h3> To evaluate the forecasting performance of the priors under different initial covariance estimation methods, we'll perform an experiment closely following that performed in Giannone, Lenza and Primiceri (GLP). Notably, we use the Stock and Watson (2008) data set which includes data on 149 quarterly US macroeconomic variables between 1959Q1 and 2008Q4. <br /><br /> Following GLP we produce forecasts from the BVARs recursively for two forecast lengths (1 quarter and 1 year), starting with data from 1959 to 1974, then increasing the estimation sample by one quarter at a time, to give 128 different estimations.<br /><br /> We perform two sets of experiments, each representing a different sized VAR:<br /><br /> <ul> <li><h4></h4>SMALL containing just three variables - GDP, the GDP deflator and the federal funds rate. 
<li><h4></h4>MEDIUM containing seven variables - adding consumption, investment, hours and wages.<br /><br /></ul> Each of these VARs is estimated with five lags using a classical VAR and 39 different combinations of prior and initial covariance options:<br /><br /> <!-- :::::::::: TABLE 0 :::::::::: --><center> <a href="http://www.eviews.com/blog/bvar/table0.png"><img height="auto" src="http://www.eviews.com/blog/bvar/table0.png" title="Models Overview" width="600" /></a><br /><br /></center><!-- :::::::::: TABLE 0 :::::::::: --> After each BVAR estimation, Bayesian sampling of the forecast period is performed - drawing from the full posterior distributions for the Litterman, Normal-flat, Normal-Wishart and Sims-Zha priors, and running MCMC draws for the Independent normal-Wishart and GLP priors. The mean of the draws is used as a point estimate, and the root mean square error (RMSE) is calculated. Each forecast draw uses 100,000 iterations. With 39*128=4,992 forecasts for each of the two VAR sizes, that is a total of nearly 1 billion draws!<br /><br /> <h3 id="sec4">Results</h3> The following tables show the average RMSE of each of the four sets of forecasts.
Click on a table to enlarge the image.<br /><br /> <!-- :::::::::: TABLE 1 :::::::::: --><center> <a href="http://www.eviews.com/blog/bvar/table1.png"><img height="auto" src="http://www.eviews.com/blog/bvar/table1.png" title="Three variable VAR one quarter GDP forecast RMSE" width="720" /></a><br /><br /></center><!-- :::::::::: TABLE 1 :::::::::: --> <!-- :::::::::: TABLE 2 :::::::::: --><center> <a href="http://www.eviews.com/blog/bvar/table2.png"><img height="auto" src="http://www.eviews.com/blog/bvar/table2.png" title="Three variable VAR one year GDP forecast RMSE" width="720" /></a><br /><br /></center><!-- :::::::::: TABLE 2 :::::::::: --> <!-- :::::::::: TABLE 3 :::::::::: --><center> <a href="http://www.eviews.com/blog/bvar/table3.png"><img height="auto" src="http://www.eviews.com/blog/bvar/table3.png" title="Five variable VAR one quarter GDP forecast RMSE" width="720" /></a><br /><br /></center><!-- :::::::::: TABLE 3 :::::::::: --> <!-- :::::::::: TABLE 4 :::::::::: --><center> <a href="http://www.eviews.com/blog/bvar/table4.png"><img height="auto" src="http://www.eviews.com/blog/bvar/table4.png" title="Five variable VAR one year GDP forecast RMSE" width="720" /></a><br /><br /></center><!-- :::::::::: TABLE 4 :::::::::: --> <h3 id="sec5">Conclusions</h3>For the three variable one-quarter ahead experiment, it is clear that the GLP prior is more effective than the other prior types, although the Litterman prior is relatively close in accuracy. 
In terms of which covariance method performs best, there is no clear winner, with the differences between covariance choice only having a large impact on the Litterman and GLP priors.<br /><br /> The choice of whether to include dummy observation priors, and if so whether to include them in the covariance calculation, appears to severely impact only the GLP prior.<br /><br /> The overall winner, at least in terms of RMSE, was the GLP prior with a diagonal VAR used for initial covariance choice without dummy observations.<br /><br /> A similar story is told for the three variable one-year ahead experiment; however, this time the Litterman prior is the clear winner. Again there is not much difference between covariance choices and dummy observation choices. Notably, although Litterman does best on average across the options, the single most accurate combination used the Normal-flat prior.<br /><br /> Expanding to the MEDIUM VARs, the one-quarter ahead experiment is not as clear-cut as the three variable equivalent. Across covariance options, it is a toss-up between Litterman and GLP. The choice of covariance has a bigger impact, with the Univariate AR(5) option looking best.<br /><br /> For the first time, optimizing $\phi$ in the GLP prior has a positive impact, with the version including dummy observations being the overall most accurate option combination.<br /><br /> The final experiment is similar: there is no clear-cut winner in terms of prior choice, although Litterman might just edge out GLP. Choice of covariance again has an impact, with again a univariate AR(5) looking best.<br /><br /> Across all the experiments it is difficult to give an overall winner.
The original Litterman and GLP priors are ahead of the others, but knowing which covariance choice to select or whether to include dummy observations is more ambiguous.<br /><br /> One absolutely clear result is, however, that no matter which combination of prior and options are selected, the Bayesian VAR will vastly outperform a classical VAR.<br /><br /> Finally, it is worth mentioning that these results are, with the obvious exception of the GLP prior, for a fixed set of hyper-parameters, and the conclusions may differ if attention is given to simultaneously finding the best set of hyper-parameters and covariance choice. </span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com1tag:blogger.com,1999:blog-6883247404678549489.post-24918477283176795732019-05-13T09:34:00.000-07:002019-05-14T11:00:17.335-07:00Functional Coefficient Estimation: Part I (Nonparametric Estimation)<script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"></script> <span style="font-family: "verdana" sans-serif"> Recently, EViews 11 introduced several new nonparametric techniques. One of those features is the ability to estimate functional coefficient models. To help familiarize users with this important technique, we're launching a multi-part blog series on nonparametric estimation, with a particular focus on the theoretical and practical aspects of functional coefficient estimation. 
Before delving into the subject matter however, in this Part I of the series, we give a brief and gentle introduction to some of the most important principles underlying nonparametric estimation, and illustrate them using EViews programs.<a name='more'></a><br /><br /> <h3>Table of Contents</h3><ol> <li><a href="#sec1">Nonparametric Estimation</a> <li><a href="#sec2">Global Methods</a> <ol type="i"> <li><a href="#sec2.1">Optimal Sieve Length</a> <li><a href="#sec2.2">Critiques</a> </ol> <li><a href="#sec3">Local Methods</a> <ol type="i"> <li><a href="#sec3.1">Localized Kernel Regression</a> <li><a href="#sec3.2">Bandwidth Selection</a> </ol> <li><a href="#sec4">Conclusion</a> <li><a href="#sec5">Files</a> <li><a href="#sec6">References</a></ol><br /> <h3 id="sec1">Nonparametric Estimation</h3> Traditional least squares regression is parametric in nature. It confines relationships between the dependent variable $ Y_{t} $ and independent variables (regressors) $ X_{1,t}, X_{2,t}, \ldots $ to be, in expectation, linear in the parameter space. 
For instance, if the true data generating process (DGP) for $ Y_{t} $ derives from $ p $ regressors, the least squares regression model postulates that: $$ m(x_{1}, \ldots, x_{p}) \equiv E(Y_t | X_{1,t} = x_{1}, \ldots, X_{p,t} = x_{p}) = \beta_0 + \sum_{k=1}^{p}{\beta_k x_{k}} $$ Since this relationship holds only in expectation, a statistically equivalent form of this statement is: \begin{align} Y_t &= m\left(X_{1,t}, \ldots, X_{p,t}\right) + \epsilon_{t} \nonumber \\ &=\beta_0 + \sum_{k=1}^{p}{\beta_k X_{k,t}} + \epsilon_t \label{eq.1.1} \end{align} where the error term $ \epsilon_{t} $ has mean zero, and parameter estimates are solutions to the minimization problem: $$ \arg\!\min_{\hspace{-1em}\beta_{0}, \ldots, \beta_{p}} E\left(Y_{t} - \beta_0 - \sum_{k=1}^{p}{\beta_k X_{k,t}}\right)^{2} $$ While this framework is obviously very appealing and intuitive, and is typically sufficient for most applications, inference is rendered unreliable when the true but unknown DGP is in fact non-linear.<br /><br /> On the other hand, nonparametric modelling prefers to remain agnostic about functional forms. Relationships are, in expectation, simply functionals $ m(\cdot) $, and if the true DGP for $ Y_{t} $ is a function of $ p $ regressors, then: $$ Y_t = m\left(X_{1,t}, \ldots, X_{p,t}\right) + \epsilon_{t} $$ Here, estimators of $ m(\cdot) $ can generally be cast as minimization problems of the form: \begin{align} \arg\!\min_{\hspace{-1em} m\in \mathcal{M}} E\left(Y_{t} - m\left(X_{1,t}, \ldots, X_{p,t}\right)\right)^{2} \label{eq.1.2} \end{align} where $ \mathcal{M} $ is now a function space. In this regard, a nonparametric estimator can be thought of as a solution to a search problem over functions as opposed to parameters.<br /><br /> The problem in \eqref{eq.1.2}, however, is infeasible. It turns out that the function space is effectively uncountable.
In fact, even if it were countable, solutions would be unidentified since different functions in $ \mathcal{M} $ can map to the same range. Accordingly, general practice is to reduce $ \mathcal{M} $ to a lower dimensional countable space and optimize over it. This typically implies a reduction of the problem to a parametric framework so that the problem in \eqref{eq.1.2} is cast into: \begin{align} \arg\!\min_{\hspace{-1em} h\in \mathcal{H}} E\left(Y_{t} - h\left(X_{1,t}, \ldots, X_{p,t}; \mathbf{\Theta} \right)\right)^{2} \label{eq.1.3} \end{align} where $ h(\cdot; \mathbf{\Theta}) \in \mathcal{H} $ is a function with associated parameters $ \mathbf{\Theta} \in \mathbf{R}^{q} $ and $ \mathcal{H} $ is a function space which is <i>dense</i> in $ \mathcal{M} $; formally, $ h^{\star} \in \mathcal{H} \rightarrow m^{\star} \in \mathcal{M} $ where $ \rightarrow $ denotes asymptotic convergence. This means that any feasible estimate $ h^{\star} $ must become arbitrarily close to the infeasible estimate $ m^{\star} $ as the space $ \mathcal{H} $ grows to asymptotic equivalence with $ \mathcal{M} $. In this regard, nonparametric estimators are typically classified into either <i>global</i> or <i>local</i> kinds.<br /><br /> <h3 id="sec2">Global Methods</h3> Global estimators, generally synonymous with the class of <i>sieve</i> estimators introduced by Grenander (1981), approximate arbitrary functions by simpler functions which are uniformly dense in the target space $ \mathcal{M} $. A particularly important class of such estimators are <i>linear sieves</i> which are constructed as linear combinations of popular basis functions. The latter include <i>Bernstein polynomials</i>, <i>Chebychev polynomials</i>, <i>Hermite polynomials</i>, <i>Fourier series</i>, <i>polynomial splines</i>, <i>B-splines</i>, and <i>wavelets</i>.
Formally, when the function $ m(\cdot) $ is univariate, linear sieves assume the following general structure: \begin{align} \mathcal{H}_{J} = \left\{h \in \mathcal{M}: h(x; \mathbf{\Theta}) = \sum_{j=1}^{J}\theta_{j}f_{j}(x)\right\} \label{eq.1.4} \end{align} where $ \theta_{j} \in \mathbf{\Theta} $, $ f_{j}(\cdot) $ is one of the aforementioned basis functions, and $ J \rightarrow \infty$.<br /><br /> For instance, if the sieve exploits the <i>Stone-Weierstrass Approximation Theorem</i>, which states that any continuous function on a compact interval can be uniformly approximated on that interval by a polynomial to any degree of accuracy, then $ f_{j}(x) = x^{j-1} $. In particular, if the unknown function of interest is $ m(x) $, then choosing to approximate the latter with a polynomial of degree $ J = J^{\star} < \infty $ (some integer), reduces the problem in \eqref{eq.1.4} to: $$ \arg\!\min_{\hspace{-1em} m\in \mathcal{M}} E\left(Y_{t} - \theta_{0} - \sum_{j=1}^{J^{\star}}\theta_{j}X_{t}^{j} \right)^{2} $$ where $ Y_{t} $ are the values we observe from the theoretical function $ m(x) $, and $ X_{t} $ is the regressor we're using to estimate it. Usual least squares now yields $ \widehat{\theta}_{j} $ for $ j=1,\ldots, J^{\star} $.
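To make the least squares step concrete, here is a minimal Python sketch of the power-series sieve fit (the blog's own demonstrations use the EViews programs listed in the Files section; the quadratic test signal and all parameter values below are invented for illustration):

```python
import numpy as np

def poly_sieve_fit(y, x, degree):
    """Least squares fit of the power-series sieve theta_0 + theta_1 x + ... + theta_J x^J."""
    # Columns of the design matrix are the basis functions 1, x, x^2, ..., x^degree
    X = np.vander(x, degree + 1, increasing=True)
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta

def poly_sieve_predict(theta, x):
    """Evaluate the fitted sieve on a grid of points."""
    return np.vander(x, len(theta), increasing=True) @ theta

# Illustrative data: a quadratic signal observed with noise
rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 200)
y = 1.0 + 2.0 * x - 0.5 * x**2 + 0.1 * rng.standard_normal(x.size)

theta = poly_sieve_fit(y, x, degree=2)
fitted = poly_sieve_predict(theta, x)
```

The sieve is just a linear regression on the constructed basis, so `np.vander` builds the design matrix and ordinary least squares recovers the coefficients $ \widehat{\theta}_{j} $.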
Furthermore, $ m(x) $ can be approximated as $$ m(x) \approx \widehat{\theta}_{0} + \sum_{j=1}^{J^{\star}}\widehat{\theta}_{j}x^{j} $$ where $ x $ is evaluated either on a grid over $ [a,b] $ of arbitrary fineness, or on the original regressor values so that $ x \equiv X_{t} $.<br /><br /> To demonstrate the procedure, define the true but unknown function $ m(x) $ as: \begin{align} m(x) = \sin(x)\cos(\frac{1}{x}) + \log\left(x + \sqrt{x^2+1}\right) \quad x \in [-6,6]\label{eq.1.5} \end{align} Furthermore, generate observable data from $ m(x) $ as $ Y_{t} = m(x) + 0.5\epsilon_{t} $ and generate the regressor data as $ X_{t} = x - 0.5 + \eta_{t} $ where $ \epsilon_{t} $ and $ \eta_{t} $ are mutually independent respectively standard normal and standard uniform random variables. Estimation is now summarized for polynomial degrees 1, 5, and 15, respectively.<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --><center> <a href="http://www.eviews.com/blog/funcoef/polysieveplot.jpeg"><img height="auto" src="http://www.eviews.com/blog/funcoef/polysieveplot.jpeg" title="Polynomial Sieve Estimation" width="360" /></a><br /> <small>Figure 1: Polynomial Sieve Estimation</small><br /><br /></center><!-- :::::::::: FIGURE 1 :::::::::: --> Alternatively, if the sieve exploits Hermite polynomials, one can construct the <i>Gaussian sieve</i> which reduces the problem in \eqref{eq.1.4} to: $$ \arg\!\min_{\hspace{-1em} m\in \mathcal{M}} E\left(Y_{t} - \theta_{0} - \sum_{j=1}^{J^{\star}}\theta_{j}\phi(X_{t})H_{j}(X_{t}) \right)^{2} $$ where $ \phi(\cdot) $ is the standard normal density and $ H_{j}(\cdot) $ are Hermite polynomials of degree $ j $.
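The Gaussian sieve can be sketched the same way. The Python snippet below (an illustration, not the blog's EViews code) builds the basis functions $ \phi(x)H_{j}(x) $ from the probabilists' Hermite recurrence $ He_{j+1}(x) = x\,He_{j}(x) - j\,He_{j-1}(x) $ and fits them by least squares to noisy observations of the curve in \eqref{eq.1.5}; for simplicity the regressor is taken as the grid itself:

```python
import numpy as np
from math import pi

def hermite_basis(x, J):
    """Probabilists' Hermite polynomials He_0, ..., He_{J-1}, each scaled by the
    standard normal pdf phi(x), stacked as columns of a design matrix."""
    phi = np.exp(-0.5 * x**2) / np.sqrt(2 * pi)
    H = [np.ones_like(x), x]
    for j in range(1, J):
        H.append(x * H[j] - j * H[j - 1])      # He_{j+1} = x He_j - j He_{j-1}
    return np.column_stack([phi * h for h in H[:J]])

def gaussian_sieve_fit(y, x, J):
    """Least squares fit of the Gaussian sieve (with an intercept)."""
    B = np.column_stack([np.ones_like(x), hermite_basis(x, J)])
    theta, *_ = np.linalg.lstsq(B, y, rcond=None)
    return theta, B @ theta

# The true curve from the post, observed with noise (the grid avoids x = 0,
# where cos(1/x) is undefined)
x = np.linspace(-6, 6, 500)
m = np.sin(x) * np.cos(1.0 / x) + np.log(x + np.sqrt(x**2 + 1))
rng = np.random.default_rng(1)
y = m + 0.5 * rng.standard_normal(x.size)

theta, fitted = gaussian_sieve_fit(y, x, J=10)
```

Because $ \phi(x) $ decays quickly in the tails, this basis concentrates its flexibility near the origin, which is consistent with the rule of thumb above about matching the sieve to the expected shape of $ m(\cdot) $.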
The figure below demonstrates the procedure using sieve lengths 1, 3, and 10, respectively.<br /><br /> <!-- :::::::::: FIGURE 2 :::::::::: --><center> <a href="http://www.eviews.com/blog/funcoef/gausssieveplot.jpeg"><img height="auto" src="http://www.eviews.com/blog/funcoef/gausssieveplot.jpeg" title="Gaussian Sieve Estimation" width="360" /></a><br /> <small>Figure 2: Gaussian Sieve Estimation</small><br /><br /></center><!-- :::::::::: FIGURE 2 :::::::::: --> Clearly, both sieve estimators produce very similar fits. So how does one select an <i>optimal</i> sieve? There really isn't a prescription for such optimization. Each sieve has its advantages and disadvantages, but the general rule of thumb is to choose a sieve that most closely resembles the function of interest $ m(\cdot) $. For instance, if the function is polynomial, then using a polynomial sieve is probably best. Alternatively, if the function is expected to be smooth and concentrated around its mean, a Gaussian sieve will work well. On the other hand, the question of optimal sieve length lends itself to more concrete advice.<br /><br /> <h4 id="sec2.1">Optimal Sieve Length</h4> Given the examples explored above, it is evident that sieve length plays a major role in fitting accuracy. For instance, estimation with a low sieve length resulted in severe underfitting, while a higher sieve length resulted in better fit. The question of course is whether an optimal length can be determined.<br /><br /> Li et al. (1987) studied three well-known procedures, all of which are based on the mean squared forecast error of the estimated function over a search grid $ \mathcal{J} \equiv \left\{J_{min},\ldots, J_{max}\right\} $, and all of which are asymptotically equivalent.
In particular, let $ J^{\star} $ denote the optimal sieve length and consider:<br /><br /> <ol> <li>$ C_{p} $ method due to Mallows (1973): $$ J^{\star} = \arg\!\min_{J \in \mathcal{J}} \frac{1}{T}\sum_{t=1}^{T}\left(Y_{t} - \widehat{m}(X_{t})\right)^{2} + 2\widehat{\sigma}^{2}\frac{J}{T} $$ where $ \widehat{\sigma}^{2} = \frac{1}{T}\sum_{t=1}^{T}\left(Y_{t} - \widehat{m}(X_{t})\right)^{2}$ <li>Generalized cross-validation method due to Craven and Wahba (1979): $$ J^{\star} = \arg\!\min_{J \in \mathcal{J}} \frac{1}{(1 - (J/T))^{2}T}\sum_{t=1}^{T}\left(Y_{t} - \widehat{m}(X_{t})\right)^{2} $$ <li>Leave-one-out cross validation method due to Stone (1974): $$ J^{\star} = \arg\!\min_{J \in \mathcal{J}} \frac{1}{T}\sum_{t=1}^{T}\left(Y_{t} - \widehat{m}_{\setminus t}(X_{t})\right)^{2} $$ where the subscript notation $ \setminus t $ indicates estimation after dropping observation $ t $. </ol> Here we discuss the algorithm for the last of the three procedures. In particular, with the search grid $ \mathcal{J} $ defined as before, iterate the following steps over $ J \in \mathcal{J} $: <ol> <li>For each observation $ t^{\star} \in \left\{1, \ldots, T \right\} $: <ol type="i"> <li>Solve the optimization problem in \eqref{eq.1.4} using data from the pair $ (Y_{t}, X_{t})_{t \neq t^{\star}} $, and derive the estimated model as follows: $$ \widehat{m}_{J,\setminus t^{\star}}(x) \equiv \widehat{\theta}_{_{J,\setminus t^{\star}}0} + \sum_{j=1}^{J}\widehat{\theta}_{_{J,\setminus t^{\star}}j}f_{j}(x)$$ where the subscript $ J,\setminus t^{\star} $ indicates that parameters are estimated using sieve length $ J $, after dropping observation $ t^{\star} $.
<li>Derive the forecast error for the dropped observation as follows: $$ e_{_{J}t^{\star}} \equiv Y_{t^{\star}} - \widehat{m}_{J,\setminus t^{\star}}(X_{t^{\star}}) $$ </ol> <li>Derive the cross-validation mean squared error for sieve length $ J $ as follows: $$ MSE_{J} = \frac{1}{T}\sum_{t=1}^{T} e_{_{J}t}^{2} $$ <li>Determine the optimal sieve length $ J^{\star} $ as the sieve length that minimizes $ MSE_{J} $ across $ \mathcal{J} $. In other words $$ J^{\star} = \arg\!\min_{J\in\mathcal{J}} MSE_{J} $$ </ol> In words, the algorithm moves across the sieve search grid $ \mathcal{J} $ and computes an out-of-sample forecast error for each observation. The optimal sieve length is that which minimizes the average mean squared error across the search grid. We demonstrate the selection criteria and accompanying estimation when using a grid search from 1 to 15.<br /><br /> <!-- :::::::::: FIGURE 3 :::::::::: --><center> <a href="http://www.eviews.com/blog/funcoef/optest.jpeg"><img height="auto" src="http://www.eviews.com/blog/funcoef/optest.jpeg" title="Sieve Regression with Optimized Sieve Length Selection" width="720" /></a><br /> <small>Figure 3: Sieve Regression with Optimized Sieve Length Selection</small><br /><br /></center><!-- :::::::::: FIGURE 3 :::::::::: --> Evidently, both the polynomial and Gaussian sieve models ought to use a sieve length of 15.<br /><br /> <h4 id="sec2.2">Critiques</h4> While global nonparametric estimators are easy to work with, they exhibit several well recognized drawbacks. First, they leave little room for fine-tuning estimation. For instance, in the case of polynomial sieves, the polynomial degree is not continuous. In other words, if estimation underfits when sieve length is $ J $, but overfits when sieve length is $ J+1 $, then there is no polynomial degree $ J < J^{\star} < J+1 $.<br /><br /> Second, global estimators are often subject to infeasibility since regressor values may not be sufficiently small.
This is because increased sieve lengths can result in the entries of the regressor covariance matrix becoming extremely large. In turn, this can render the covariance matrix nearly singular and its inversion numerically unstable, and by extension, render estimation infeasible. In other words, at some point, increasing the polynomial degree further does not lead to estimate improvements.<br /><br /> Lastly, it is worth pointing out that global estimators fit curves by smoothing (averaging) over the entire domain. As such, they can have difficulties handling observations with strong influences such as outliers and regime switches. This is due to the fact that outlying observations will be averaged with the rest of the data, resulting in a curve that significantly under- or over-fits these observations. To illustrate this point, consider a modification of equation \eqref{eq.1.5} with outliers when $ -1 < x \leq 1 $: \begin{align} m(x) = \begin{cases} \sin(x)\cos(\frac{1}{x}) + \log\left(x + \sqrt{x^2+1}\right) & \text{if } x\in [-6,-1]\\ \sin(x)\cos(\frac{1}{x}) + \log\left(x + \sqrt{x^2+1}\right) + 4 & \text{if } x \in (-1,1]\\ \sin(x)\cos(\frac{1}{x}) + \log\left(x + \sqrt{x^2+1}\right) - 2 & \text{if } x \in (1,6] \end{cases}\label{eq.1.6} \end{align} We generate $ Y_{t} $ and $ X_{t} $ as before, and estimate this model using both polynomial and Gaussian sieves based on cross-validated sieve length selection.<br /><br /> <!-- :::::::::: FIGURE 4 :::::::::: --><center> <a href="http://www.eviews.com/blog/funcoef/optestoutliers.jpeg"><img height="auto" src="http://www.eviews.com/blog/funcoef/optestoutliers.jpeg" title="Sieve Regression with Optimized Sieve Length Selection and Outliers" width="720" /></a><br /> <small>Figure 4: Sieve Regression with Optimized Sieve Length Selection and Outliers</small><br /><br /></center><!-- :::::::::: FIGURE 4 :::::::::: --> Clearly, both procedures have a difficult time handling jumps in the domain region $ -1 < x \leq 1 $.
Nevertheless, it is evident that the Gaussian sieve does significantly better than polynomial regression. This is further corroborated by the leave-one-out cross-validation MSE values which indicate that the Gaussian sieve minimum MSE is roughly one-sixth of the polynomial sieve minimum MSE.<br /><br /> It turns out that a number of these shortcomings can be mitigated by averaging locally instead of globally. In this regard, we turn to the idea of <i>local estimation</i> next.<br /><br /> <h3 id="sec3">Local Methods</h3> The general idea behind local nonparametric estimators is <i>local averaging</i>. The procedure partitions the functional variable $ x $ into <i>bins</i> of a particular size, and estimates $ m(x) $ as a linear interpolation of the average values of the dependent variable at the middle of each bin. We demonstrate the procedure when $ m(x) $ is the function in \eqref{eq.1.5}.<br /><br /> In particular, define $ Y_{t} $ as before, but let $ X_{t} = x $. In other words, we consider deterministic regressors. We will relax the latter assumption later, but this is momentarily more instructive as it leads to contiguous partitions of the explanatory variable $ X_{t} $. Finally, define the bins as quantiles of $ x $ and consider the procedure with bin partitions equal to 2, 5, 15, and 30, respectively.<br /><br /> <!-- :::::::::: FIGURE 5 :::::::::: --><center> <a href="http://www.eviews.com/blog/funcoef/quantplot.jpeg"><img height="auto" src="http://www.eviews.com/blog/funcoef/quantplot.jpeg" title="Local Averaging with Quantiles" width="720" /></a><br /> <small>Figure 5: Local Averaging with Quantiles</small><br /><br /></center><!-- :::::::::: FIGURE 5 :::::::::: --> Clearly, when the number of bins is 2, the estimate is a straight line and severely underfits the objective function. Nevertheless, as the number of bins increases, so does the accuracy of the estimate.
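The quantile-bin averaging just described can be sketched in a few lines of Python (an illustration with an invented noisy sine curve rather than the blog's $ m(x) $, which uses the EViews program locavg.prg):

```python
import numpy as np

def local_average_fit(y, x, n_bins):
    """Estimate m(x) by averaging y within quantile bins of x; returns bin
    centers and bin means, which can then be linearly interpolated."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    # Assign each observation to a bin using the interior edges
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
    centers = np.array([x[idx == b].mean() for b in range(n_bins)])
    means = np.array([y[idx == b].mean() for b in range(n_bins)])
    return centers, means

# Illustrative data: a noisy sine curve
rng = np.random.default_rng(3)
x = np.sort(rng.uniform(-3, 3, 600))
y = np.sin(x) + 0.3 * rng.standard_normal(x.size)

centers, means = local_average_fit(y, x, n_bins=15)
fit = np.interp(x, centers, means)   # linear interpolation between bin means
```

Re-running the last two lines with a smaller `n_bins` reproduces the underfitting seen in the two-bin panel: the interpolated curve collapses toward a straight line.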
Indeed, local estimation here is shown to be significantly more accurate than global estimation used earlier on the same function $ m(x) $. This is of course a consequence of local averaging which performs piecemeal smoothing on only those observations restricted to each bin. Naturally, high leverage observations and outliers are better accommodated as they are averaged only with those observations in the immediate vicinity which also fall in the same bin. In fact, we can demonstrate this using the function $ m(x) $ in \eqref{eq.1.6}.<br /><br /> <!-- :::::::::: FIGURE 6 :::::::::: --><center> <a href="http://www.eviews.com/blog/funcoef/quantplotoutliers.jpeg"><img height="auto" src="http://www.eviews.com/blog/funcoef/quantplotoutliers.jpeg" title="Local Averaging with Quantiles and Outliers" width="720" /></a><br /> <small>Figure 6: Local Averaging with Quantiles and Outliers</small><br /><br /></center><!-- :::::::::: FIGURE 6 :::::::::: --> Evidently, increasing the number of bins leads to increasingly better adaptation to the presence of outlying observations.<br /><br /> It's worth pointing out here that unlike sieve estimation which can suffer from infeasibility with increased sieve length, in local estimation, there is in principle no limit to how finely we wish to define the bin width. Nevertheless, as is evident from the visuals, while increasing the number of bins will reduce bias, it will also introduce variance. In other words, smoothness is sacrificed in exchange for accuracy. This is of course the <i>bias-variance tradeoff</i> and is precisely the mechanism by which fine-tuning the estimator is possible.<br /><br /> <h4 id="sec3.1">Localized Kernel Regression</h4> The idea of local averaging can be extended to accommodate various bin types and sizes. The most popular approaches leverage information about the points at which estimates of $ m(x) $ are desired.
For instance, if estimates of $ m(x) $ are desired at a set of points $ \left(x_{1}, \ldots, x_{J} \right) $, then the estimate $ \widehat{m}(x_{j}) $ can be the average of $ Y_{t} $ for each point $ X_{t} $ in some <i>neighborhood</i> of $ x_{j} $ for $ j=1,\ldots, J $. In other words, bins are defined as neighborhoods centered around the points $ x_{j} $, with the size of the neighborhood determined by some distance metric. Then, to gain control over the bias-variance tradeoff, neighborhood size can be exploited with a penalization scheme. In particular, penalization introduces a weight function which disadvantages those $ X_{t} $ that are too far from $ x_{j} $ in any direction. In other words, those $ X_{t} $ close to $ x_{j} $ (in the neighborhood) are assigned larger weights, whereas those $ X_{t} $ far from $ x_{j} $ (outside the neighborhood) are weighed down.<br /><br /> Formally, when the function $ m(\cdot) $ is univariate, local kernel estimators solve optimization problems of the form: \begin{align} \arg\!\min_{\hspace{-1em} \beta_{0}} E\left(Y_{t} - \beta_{0}\right)^{2}K_{h}\left(X_{t} - x_{j}\right) \quad \forall j \in \left\{1, \ldots, J\right\}\label{eq.1.7} \end{align} Here we use the traditional notation $ K_{h}(X_{t} - x_{j}) \equiv K\left(\frac{|X_{t} - x_{j}|}{h}\right) $ where $ K(\cdot) $ is a distributional weight function, otherwise known as a <i>kernel</i>, $ |\cdot| $ denotes a distance metric (typically Euclidean), $ h $ denotes the size of the local neighbourhood (bin), otherwise known as a <i>bandwidth</i>, and $ \beta_{0} \equiv \beta_{0}(x_{j}) $ due to its dependence on the evaluation point $ x_{j} $.<br /><br /> To gain further insight, it is easiest to think of $ K(\cdot) $ as a probability density function with support on $ [-1,1] $. 
For instance, consider the famous <i>Epanechnikov</i> kernel: $$ K(u) = \frac{3}{4}\left(1 - u^{2}\right) \quad \text{for} \quad |u| \leq 1 $$ or the <i>cosine</i> kernel specified by: $$ K(u) = \frac{\pi}{4}\cos(\frac{\pi}{2}u) \quad \text{for} \quad |u| \leq 1 $$ <table> <tbody> <tr> <td> <!-- :::::::::: FIGURE 7A :::::::::: --> <center> <a href="http://www.eviews.com/blog/funcoef/epankern.jpeg"><img height="auto" src="http://www.eviews.com/blog/funcoef/epankern.jpeg" title="Epanechnikov Kernel" width="360" /></a><br /> </center> <!-- :::::::::: FIGURE 7A :::::::::: --> </td> <td> <!-- :::::::::: FIGURE 7B :::::::::: --> <center> <a href="http://www.eviews.com/blog/funcoef/coskern.jpeg"><img height="auto" src="http://www.eviews.com/blog/funcoef/coskern.jpeg" title="Cosine Kernel" width="360" /></a><br /> </center> <!-- :::::::::: FIGURE 7B :::::::::: --> </td> </tr> <tr> <td> <center> <small>Figure 7A: Epanechnikov Kernel</small><br /><br /> </center> </td> <td> <center> <small>Figure 7B: Cosine Kernel</small><br /><br /> </center> </td> </tr> </tbody></table> Now, if $ |X_{t} - x| > h $, it is clear that $ K(\cdot) = 0 $. In other words, if the distance between $ X_{t} $ and $ x $ is larger than the bandwidth (neighborhood size), then $ X_{t} $ lies outside the neighborhood and its importance will be weighed down to zero. Alternatively, if $ |X_{t} - x| = 0 $, then $ X_{t} = x $ and $ X_{t} $ will be assigned the highest weight, which in the case of the Epanechnikov and cosine kernels, is $ 0.75 $ and $ \pi/4 \approx 0.785 $, respectively.<br /><br /> To demonstrate the mechanics, consider a kernel estimator based on $ k- $nearest neighbouring points, or the weighted $ k-NN $ estimator. In particular, this estimator defines the neighbourhood as all points $ X_{t} $ whose distance to an evaluation point $ x_{j} $ is no greater than the distance of the $ k^{\text{th}} $ nearest point $ X_{t} $ to the same evaluation point $ x_{j} $.
When used in the optimization problem \eqref{eq.1.7}, the resulting estimator is also sometimes referred to as <i>LOWESS</i> - LOcally WEighted Scatterplot Smoothing.<br /><br /> The algorithm used in the demonstration is relatively simple. First, define $ k^{\star} $ as the number of neighbouring points to be considered and define a grid $ \mathcal{X} \equiv \{x_{1}, \ldots, x_{J}\} $ of points at which an estimate of $ m(\cdot) $ is desired. Next, define a kernel function $ K(\cdot) $. Finally, for each $ j \in \{1, \ldots, J\} $, execute the following: <ol> <li>For each $ t \in \{1,\ldots, T\} $, compute $ d_{t} = |X_{t} - x_{j}| $ -- the Euclidean distance between $ X_{t} $ and $ x_{j} $. <li>Order the $ d_{t} $ in ascending order to form the ordered set $ \{d_{(1)} \leq d_{(2)} \leq \ldots \leq d_{(T)}\} $. <li>Set the bandwidth as $ h = d_{(k^{\star})} $. <li>For each $ t \in \{1,\ldots, T\} $, compute a weight $ w_{t} \equiv K_{h}(X_{t} - x_{j}) $. <li>Solve the optimization problem: $$ \arg\!\min_{\hspace{-1em} \beta_{0}} E\left(Y_{t} - \beta_{0}\right)^{2}w_{t} $$ to derive the parameter estimate: $$ \widehat{m}(x_{j}) \equiv \widehat{\beta}_{0}(x_{j}) = \frac{\sum_{t=1}^{T}w_{t}Y_{t}}{\sum_{t=1}^{T}w_{t}} $$ </ol> An estimate of $ m(x) $ along the domain $ \mathcal{X} $ is now the linear interpolation of the points $ \{\widehat{\beta}_{0}(x_{1}), \ldots, \widehat{\beta}_{0}(x_{J})\} $.<br /><br /> For instance, suppose $ m(x) $ is the curve defined in \eqref{eq.1.6}, the evaluation grid $ \mathcal{X} $ consists of points in the interval $ [-6,6] $, and $ K(\cdot) $ is the Epanechnikov kernel. Furthermore, suppose $ Y_{t} = m(x) + 0.5\epsilon_{t} $ and $ X_{t} = x - 0.5 + \eta_{t} $. Notice that we're back to treating the regressor as a stochastic variable.
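The steps above can be sketched as follows (an illustrative Python version; the blog's own demonstration uses the EViews program knnreg.prg, and the noisy sine data and choice $ k = 40 $ below are invented for the example):

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel with support on [-1, 1]."""
    return np.where(np.abs(u) <= 1, 0.75 * (1.0 - u**2), 0.0)

def knn_kernel_estimate(y, x_obs, x_grid, k):
    """Weighted k-NN (LOWESS-style) regression: at each evaluation point the
    bandwidth is the distance to the k-th nearest observation."""
    fitted = np.empty(len(x_grid))
    for j, xj in enumerate(x_grid):
        d = np.abs(x_obs - xj)                  # step 1: distances to x_j
        h = np.sort(d)[k - 1]                   # steps 2-3: bandwidth = k-th smallest
        w = epanechnikov(d / h)                 # step 4: kernel weights
        fitted[j] = np.sum(w * y) / np.sum(w)   # step 5: locally weighted average
    return fitted

# Illustrative data: a noisy sine curve rather than the blog's m(x)
rng = np.random.default_rng(4)
x_obs = np.sort(rng.uniform(-3, 3, 500))
y = np.sin(x_obs) + 0.3 * rng.standard_normal(500)

x_grid = np.linspace(-2.5, 2.5, 50)
fitted = knn_kernel_estimate(y, x_obs, x_grid, k=40)
```

Note that the observation exactly at distance $ h $ receives weight zero under the Epanechnikov kernel, so effectively the $ k-1 $ strictly nearer points drive each local average.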
Then, the $ k-NN $ estimator of $ m(\cdot) $ with 15, 40, 100, and 200 nearest neighbour points, respectively, is illustrated below.<br /><br /> <!-- :::::::::: FIGURE 8 :::::::::: --><center> <a href="http://www.eviews.com/blog/funcoef/knnreg.jpeg"><img height="auto" src="http://www.eviews.com/blog/funcoef/knnreg.jpeg" title="k-NN Regression" width="720" /></a><br /> <small>Figure 8: k-NN Regression</small><br /><br /></center><!-- :::::::::: FIGURE 8 :::::::::: --> Clearly, the estimator can be very adaptive to the nuances of outlying points but can suffer from both underfitting and overfitting. In this regard, observe that the number of neighbouring points is directly proportional to neighbourhood (bandwidth) size. In other words, as the number of neighbouring points increases, the bandwidth increases. This is evidenced by a very volatile estimator when the number of neighbouring points is 15, and a significantly smoother estimator when the number of neighbouring points is 200. Therefore, there must be some optimal middle ground between undersmoothing and oversmoothing. In general, notice that apart from the lower zero bound, the bandwidth is not bounded above. Thus, there is an extensive range of bandwidth possibilities. So how does one define what constitutes an optimal bandwidth?<br /><br /> <h4 id="sec3.2">Bandwidth Selection</h4> While we will cover optimal bandwidth selection in greater detail in Part II of this series, it is not difficult to draw similarities between the role of bandwidth size in local estimation and sieve length in global methods. In fact, similar methods for optimal bandwidth selection exist in the context of local kernel regression, and analogous to sieve methods, are also typically grid searches.
In this regard, in order to avoid complicated theoretical discourse, consider momentarily the optimization problem in \eqref{eq.1.7}.<br /><br /> It is not difficult to demonstrate that the estimator $ \widehat{\beta}_{0}(x) $ satisfies: \begin{align*} \widehat{\beta}_{0}(x) &= \frac{T^{-1}\sum_{t=1}^{T}K_{h}\left(X_{t} - x\right)Y_{t}}{T^{-1}\sum_{t=1}^{T}K_{h}\left(X_{t} - x\right)}\\ &=\frac{1}{T}\sum_{t=1}^{T}\left(\frac{K_{h}\left(X_{t} - x\right)}{T^{-1}\sum_{i=1}^{T}K_{h}\left(X_{i} - x\right)}\right)Y_{t} \end{align*} Accordingly, if $ h\rightarrow 0 $, then $ \frac{K_{h}\left(X_{t} - x\right)}{T^{-1}\sum_{i=1}^{T}K_{h}\left(X_{i} - x\right)} \rightarrow T $ and is only defined on $ x = X_{t} $. In other words, as the bandwidth approaches zero, $ \widehat{\beta}_{0}(x) \equiv \widehat{\beta}_{0}(X_{t}) \rightarrow Y_{t} $, and the estimator is effectively an interpolation of the data. Naturally, this estimator has very small bias since it picks up every data point in $ Y_{t} $, but also has very large variance for the same reason.<br /><br /> Alternatively, should $ h \rightarrow \infty $, then $ \frac{K_{h}\left(X_{t} - x\right)}{T^{-1}\sum_{i=1}^{T}K_{h}\left(X_{i} - x\right)} \rightarrow 1 $ for all values of $ x $, and $ \widehat{\beta}_{0}(x) \rightarrow T^{-1}\sum_{t=1}^{T}Y_{t} $. That is, $ \widehat{\beta}_{0}(x) $ is a constant function equal to the mean of $ Y_{t} $, and therefore has zero variance, but suffers from very large modelling bias since it picks up only those points equal to the average.<br /><br /> Between these two extremes is an entire spectrum of models $ \left\{\mathcal{M}_{h} : h \in \left(0, \infty\right) \right\} $ ranging from the most complex $ \mathcal{M}_{0} $, to the least complex $ \mathcal{M}_{\infty} $. In other words, the bandwidth parameter $ h $ governs model complexity. 
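These two limiting cases are easy to verify numerically. The sketch below (illustrative Python, using a Gaussian kernel so that every weight stays strictly positive; the data are invented) computes $ \widehat{\beta}_{0}(x) $ at a tiny and at a huge bandwidth:

```python
import numpy as np

def nw_estimate(y, x_obs, x_eval, h):
    """Nadaraya-Watson estimator beta0_hat(x) with a Gaussian kernel."""
    # Kernel weights K((x_eval_i - x_obs_t) / h), one row per evaluation point
    u = (x_eval[:, None] - x_obs[None, :]) / h
    K = np.exp(-0.5 * u**2)
    return (K @ y) / K.sum(axis=1)

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 50)
y = x + 0.1 * rng.standard_normal(50)

tiny_h = nw_estimate(y, x, x, h=1e-4)   # h -> 0: interpolates the data points
huge_h = nw_estimate(y, x, x, h=1e4)    # h -> infinity: flat at the sample mean
```

With `h=1e-4` each evaluation point effectively receives weight only from its own observation, reproducing $ Y_{t} $ exactly (zero bias, maximal variance); with `h=1e4` every observation receives essentially equal weight and the estimate collapses to the sample mean (zero variance, maximal bias).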
Thus, the optimal bandwidth selection problem selects an $ h^{\star} $ to generate a model $ \mathcal{M}_{h^{\star}} $ best suited for the data under consideration. In other words, it reduces to the classical bias-variance tradeoff.<br /><br /> To demonstrate certain principles, we close this section by returning to the leave-one-out cross-validation procedure discussed earlier. As a matter of fact, the algorithm also applies to local kernel regression, and we demonstrate it in the context of $ k-NN $ regression, also discussed earlier.<br /><br /> In particular, define a search grid $ \mathcal{K} \equiv \{k_{min}, \ldots, k_{max}\} $ of the number of neighbouring points, select a kernel function $ K(\cdot) $, and iterate the following steps over $ k \in \mathcal{K} $: <ol> <li>For each observation $ t^{\star} \in \left\{1, \ldots, T \right\} $: <ol type="i"> <li>For each $ t \neq t^{\star} \in \{1,\ldots, T\} $, compute $ d_{t \neq t^{\star}} = |X_{t} - X_{t^{\star}}| $. <li>Order the $ d_{t \neq t^{\star}} $ in ascending order to form the ordered set $ \{d_{t \neq t^{\star} (1)} \leq d_{t \neq t^{\star} (2)} \leq \ldots \leq d_{t \neq t^{\star} (T-1)}\} $. <li>Set the bandwidth as $ h_{\setminus t^{\star}} = d_{t \neq t^{\star} (k)} $. <li>For each $ t \neq t^{\star} \in \{1,\ldots, T\} $ , compute a weight $ w_{_{\setminus t^{\star}}t} \equiv K_{h_{\setminus t^{\star}}}(X_{t} - X_{t^{\star}}) $. <li>Solve the optimization problem: $$ \arg\!\min_{\hspace{-1em} \beta_{0}} E\left(Y_{t} - \beta_{0}\right)^{2}w_{_{\setminus t^{\star}}t} $$ to derive the parameter estimate: $$ \widehat{m}_{k,\setminus t^{\star}}(X_{t^{\star}}) \equiv \widehat{\beta}_{_{k,\setminus t^{\star}}0}(X_{t^{\star}}) = \frac{\sum_{t\neq t^{\star}}^{T}w_{_{\setminus t^{\star}}t}Y_{t}}{\sum_{t\neq t^{\star}}^{T}w_{_{\setminus t^{\star}}t}} $$ where we use the subscript $ k,\setminus t^{\star} $ to denote explicit dependence on the number of neighbouring points $ k $ and the dropped observation $ t^{\star} $.
<li>Derive the forecast error for the dropped observation as follows: $$ e_{_{k}t^{\star}} \equiv Y_{t^{\star}} - \widehat{m}_{k,\setminus t^{\star}}(X_{t^{\star}}) $$ </ol> <li>Derive the cross-validation mean squared error when using $ k $ nearest neighbouring points: $$ MSE_{k} = \frac{1}{T}\sum_{t=1}^{T} e_{_{k}t}^{2} $$ <li>Determine the optimal number of neighbouring points $ k^{\star} $ as the value of $ k $ that minimizes $ MSE_{k} $ across $ \mathcal{K} $. In other words $$ k^{\star} = \arg\!\min_{k\in\mathcal{K}} MSE_{k} $$ </ol> We close this section and blog entry with an illustration of the procedure. In particular, we again consider the function in \eqref{eq.1.6}, and use the cosine kernel to search for the optimal number of neighbouring points over the search grid $ \mathcal{K} \equiv \{40, \ldots, 80\} $.<br /><br /> <!-- :::::::::: FIGURE 9 :::::::::: --><center> <a href="http://www.eviews.com/blog/funcoef/knnregopt.jpeg"><img height="auto" src="http://www.eviews.com/blog/funcoef/knnregopt.jpeg" title="k-NN Regression with Optimal k" width="360" /></a><br /> <small>Figure 9: k-NN Regression with Optimized k</small><br /><br /></center><!-- :::::::::: FIGURE 9 :::::::::: --> <h3 id="sec4">Conclusion</h3> Given the recent introduction of functional coefficient estimation in EViews 11, our aim in this multi-part blog series is to complement this feature release with a theoretical and practical overview. As a first step in this regard, we've dedicated this Part I of the series to gently introducing readers to the principles of nonparametric estimation, and illustrated them using EViews programs. In particular, we've covered principles of sieve and kernel estimation, as well as optimal sieve length and bandwidth selection.
In Part II, we'll extend the principles discussed here and cover the theory underlying functional coefficient estimation in greater detail.<br /><br /> <h3 id="sec5">Files</h3>The workfile and program files can be downloaded below.<br /><br /> <ul> <li> <a href="http://www.eviews.com/blog/funcoef/sievereg.prg">sievereg.prg</a> <li> <a href="http://www.eviews.com/blog/funcoef/locavg.prg">locavg.prg</a> <li> <a href="http://www.eviews.com/blog/funcoef/knnreg.prg">knnreg.prg</a></ul><br /><br /> <hr /><h3 id="sec6">References</h3> <ol class="bib2xhtml"> <!-- Authors: Craven Peter and Wahba Grace --><li><a name="craven-1979"></a>Peter Craven and Grace Wahba. Estimating the correct degree of smoothing by the method of generalized cross-validation. <cite>Numerische Mathematik</cite>, 31:377–403, 1979.</li> <!-- Authors: Grenander Ulf --><li><a name="grenander-1981"></a>Ulf Grenander. Abstract inference. Technical report, 1981.</li> <!-- Authors: Li Ker Chau and others --><li><a name="li-1987"></a>Ker-Chau Li and others. Asymptotic optimality for C<sub>p</sub>, C<sub>L</sub>, cross-validation and generalized cross-validation: Discrete index set. <cite>The Annals of Statistics</cite>, 15(3):958–975, 1987.</li> <!-- Authors: Mallows Colin L --><li><a name="mallows-1973"></a>Colin L Mallows. Some comments on C<sub>p</sub>. <cite>Technometrics</cite>, 15(4):661–675, 1973.</li> <!-- Authors: Stone Mervyn --><li><a name="stone-1974"></a>Mervyn Stone. Cross-validation and multinomial prediction.
<cite>Biometrika</cite>, 61(3):509–515, 1974.</li> </ol></span>Generalized Autoregressive Score (GAS) Models: EViews Plays with Python<script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"></script> <span style="font-family: "verdana" sans-serif"> Starting with EViews 11, users can take advantage of communication between EViews and Python. This means that workflow can begin in EViews, switch over to Python, and be brought back into EViews seamlessly. To demonstrate this feature, we will use U.S. macroeconomic data on the unemployment rate to fit a GARCH model in EViews, transfer the data over and estimate a GAS model equivalent of the GARCH model in Python, transfer the data back to EViews, and compare the results.<br /><br /><a name='more'></a> <h3>Table of Contents</h3><ol> <li><a href="#sec1">GAS Models</a> <li><a href="#sec2">Example Description</a> <li><a href="#sec3">Preparatory Work</a> <li><a href="#sec4">Data Analysis in EViews</a> <li><a href="#sec5">Data Analysis in Python</a> <li><a href="#sec6">Back to EViews</a> <li><a href="#sec7">Files</a> <li><a href="#sec8">References</a></ol><br /> <h3 id="sec1">GAS Models</h3> Historically, time varying parameters have received an enormous amount of attention and the literature is saturated with numerous specifications and estimation techniques.
Nevertheless, many of these specifications are often difficult to estimate, such as the family of parameter-driven stochastic volatility models, of which GARCH is the canonical observation-driven counterpart. In this regard, Creal, Koopman, and Lucas (2013) and Harvey (2013) proposed a novel family of time-varying parametric models estimated using the familiar maximum likelihood framework with the score of the conditional density function driving the updating mechanism. The family has now come to be known as the <b>generalized autoregressive score</b> (GAS) family or model.<br /><br /> GAS models are agnostic as to the type of data under consideration as long as the score function and the Hessian are well defined. In particular, the model assumes an input vector of random variables at time $ t $, say $ \pmb{y}_{t} \in \mathbf{R}^{q} $, where $ q=1 $ if the setting is univariate. Furthermore, the model assumes a conditional distribution at time $ t $ specified as: $$ \pmb{y}_{t} | \pmb{y}_{1}, \ldots, \pmb{y}_{t-1} \sim p(\pmb{y}_{t}; \pmb{\theta}_{t}) $$ where $ \pmb{\theta}_{t} \equiv \pmb{\theta}_{t} (\pmb{y}_{1}, \ldots, \pmb{y}_{t-1}, \pmb{\xi}) \in \Theta \subset \mathbf{R}^{r}$ is a vector of time varying parameters which fully characterize $ p(\cdot) $ and are functions of past data and possibly time invariant parameters $ \pmb{\xi} $.<br /><br /> What distinguishes GAS models from the rest of the literature is that dynamics in $ \pmb{\theta}_{t} $ are driven by an autoregressive mechanism augmented with the score of the conditional distribution $ p(\cdot) $.
In particular, $$ \pmb{\theta}_{t+1} = \pmb{\omega} + \pmb{A}\pmb{s}_{t} + \pmb{B}\pmb{\theta}_{t} $$ where $ \pmb{\omega}, \pmb{A}, $ and $ \pmb{B} $ are matrix coefficients collected in $ \pmb{\xi} $, and $ \pmb{s}_{t} $ is a vector proportional to the score of $ p(\cdot) $: $$ \pmb{s}_{t} = \pmb{S}_{t}(\pmb{\theta}_{t}) \pmb{\nabla}_{t}(\pmb{y}_{t}, \pmb{\theta}_{t}) $$ Above, $ \pmb{S}_{t} $ is an $ r\times r $ positive definite scaling matrix known at time $ t $, and $$ \pmb{\nabla}_{t}(\pmb{y}_{t}, \pmb{\theta}_{t}) \equiv \frac{\partial \log p(\pmb{y}_{t}; \pmb{\theta}_{t})}{\partial \pmb{\theta}_{t}}$$ It turns out that different choices of $ \pmb{S}_{t} $ produce different GAS models. For instance, setting $ \pmb{S}_{t} $ to some power $ \gamma \geq 0 $ of the information matrix of $ \pmb{\theta}_{t} $ will change how the variance of $ \pmb{\nabla}_{t} $ impacts the model. In particular, consider: $$ \pmb{S}_{t} = \pmb{\mathcal{I}}_{t}(\pmb{\theta}_{t})^{-\gamma} $$ where $$ \pmb{\mathcal{I}}_{t}(\pmb{\theta}_{t}) = E_{t-1}\left\{ \pmb{\nabla}_{t}(\pmb{y}_{t}, \pmb{\theta}_{t}) \pmb{\nabla}_{t}(\pmb{y}_{t}, \pmb{\theta}_{t})^{\top} \right\} $$ Typical choices for $ \gamma $ are 0, 1/2, and 1. For instance, if $ \gamma=0 $, $ \pmb{S}_{t} = \pmb{I} $ and no scaling occurs. Alternatively, when $ \gamma = 1/2 $, the scaling results in $ Var_{t-1}(\pmb{s}_{t}) = \pmb{I} $; in other words, standardization occurs.<br /><br /> Regardless of the choice of $ \gamma $, $ \pmb{s}_{t} $ is a martingale difference with respect to the distribution $ p(\cdot) $, and $ E_{t-1}\left\{ \pmb{s}_{t} \right\} = 0 $ for all $ t $. This latter property further implies that $ \pmb{\theta}_{t} $ is in fact a stationary process with long-term mean value $ (\pmb{I} - \pmb{B})^{-1}\pmb{\omega} $, whenever the spectral radius of $ \pmb{B} $ is less than one.
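To make the role of $ \pmb{S}_{t} $ concrete, the sketch below (not part of the original post; illustrative values only) works out the scalar Gaussian volatility case with $ \theta_{t} = \sigma_{t}^{2} $, for which the score is $ (y_{t}^{2} - \theta_{t})/(2\theta_{t}^{2}) $ and the information is $ 1/(2\theta_{t}^{2}) $. Note how inverse-information scaling ($ \gamma = 1 $) collapses the scaled score to the simple prediction error $ y_{t}^{2} - \theta_{t} $:

```python
def score(y, theta):
    # d/d(theta) of log N(y; 0, theta): (y^2 - theta) / (2 theta^2)
    return (y**2 - theta) / (2.0 * theta**2)

def info(theta):
    # Fisher information of theta for N(0, theta): 1 / (2 theta^2)
    return 1.0 / (2.0 * theta**2)

def s_t(y, theta, gamma):
    # scaled score: s_t = I(theta)^(-gamma) * score
    return info(theta) ** (-gamma) * score(y, theta)

y, theta = 1.5, 2.0
print(s_t(y, theta, 0.0))  # gamma = 0: the raw score, 0.03125
print(s_t(y, theta, 1.0))  # gamma = 1: y^2 - theta = 0.25
```

With $ \gamma = 1 $ the updating step is driven by $ y_{t}^{2} - \theta_{t} $, which is exactly the mechanism that reproduces GARCH-type dynamics.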
Thus, $ \pmb{\omega} $ and $ \pmb{B} $ are respectively responsible for controlling the level and the persistence of $ \pmb{\theta}_{t} $, whereas $ \pmb{A} $ controls for the impact of $ \pmb{s}_{t} $. In other words, $ \pmb{s}_{t} $ denotes the direction of updating $ \pmb{\theta}_{t} $ to $ \pmb{\theta}_{t+1} $, acting as a steepest ascent step for improving the model's local fit.<br /><br /> With the above framework established, Creal, Koopman, and Lucas (2013) show that various choices for $ p(\cdot) $ and $ \pmb{S}_{t} $ lead to various GAS specifications, some of which reduce to very familiar and well established existing models. For instance, let $ y_{t} = \sigma_{t}\epsilon_{t} $, and suppose $ \epsilon_{t} $ is a Gaussian random variable with mean zero and unit variance. It is readily shown that setting $ S_{t} = \mathcal{I}_{t}^{-1} $ and $ \theta_{t} = \sigma_{t}^{2} $, the GAS updating equation reduces to: $$ \theta_{t+1} = \omega + A(y_{t}^{2} - \theta_{t}) + B\theta_{t} $$ which is equivalent to the standard GARCH(1,1) model $$ \sigma_{t+1}^{2} = \alpha + \beta y_{t}^{2} + \eta \sigma_{t}^{2} $$ where $ \alpha = \omega $, $ \beta = A $, and $ \eta = B - A $. There are, of course, a number of other examples and configurations, and we refer the reader to the original texts for more details.<br /><br /> <h3 id="sec2">Example Description</h3> Our objective here is to communicate between EViews and Python to estimate a GAS model in Python and compare the results back in EViews. In particular, we will work with U.S.
monthly civil unemployment rate, defined as the number of unemployed as a percentage of the labor force -- <i>Labor force data are restricted to people 16 years of age and older, who currently reside in 1 of the 50 states or the District of Columbia, who do not reside in institutions (e.g., penal and mental facilities, homes for the aged), and who are not on active duty in the Armed Forces.</i> (See the FRED database at <a href="https://fred.stlouisfed.org/series/UNRATE">https://fred.stlouisfed.org/series/UNRATE</a>) -- to which we will fit a GARCH(1,1) model using the traditional method as well as the GAS approach.<br /><br /> It is well known that unemployment rates are typically very volatile and persistent, particularly in contractionary economic cycles. This is because major firm decisions, such as workforce expansions and contractions, are often accompanied by large sunk costs (e.g. job advertisements, screening, training), and are usually irreversible in the immediate short term (e.g. wage frictions such as labour contracts and dismissal costs). Thus, in contractionary periods, firms typically prefer to defer hiring decisions until more favourable conditions return, resulting in strong unemployment persistence known as <i>spells</i>. On the other hand, these periods are often characterized by frequent labour force transitions and increased search activities, both of which contribute to unemployment volatility.<br /><br /> In light of the above, measuring the volatility of unemployment requires the use of econometric models which are designed to capture both volatility and persistence. While several such models exist in the literature, here we focus on perhaps the most well known such model proposed by Engle (1982) and Bollerslev (1986), the generalized autoregressive conditional heteroskedasticity (GARCH) model described earlier.
In particular, if we let $ y_{t} $ denote the monthly unemployment rate, we are interested in obtaining an estimate $ \widehat{\sigma}_{t} $ of $ \sigma_{t} $, at each point in time, effectively tracing the evolution of unemployment volatility for the period under consideration. Since the GAS model above reduces to the GARCH model when the conditional distribution $ p(\cdot) $ is Gaussian and the time varying parameter is the volatility of the process, we would like to compare the estimates from the GAS model to those generated by EViews' internal GARCH estimation. Note here that while EViews can estimate numerous (G)ARCH models, it cannot yet natively estimate GAS models. Accordingly, we will fit a GARCH model in EViews, transfer our data over to Python, and estimate a GAS model using the Python package <b>PyFlux</b>. We will then compare our findings.<br /><br /> <h3 id="sec3">Preparatory Work</h3> Before getting started, please make sure that you have Python 3 installed from <a href="https://www.python.org/downloads/release/python-368/">https://www.python.org/downloads/release/python-368/</a> on your system, and that you also have the following Python packages installed: <ol> <li>NumPy <li>Pandas <li>Matplotlib <li>Seaborn <li>PyFlux </ol> One (certainly not the only) way to install said packages is to open up a command prompt on your system and navigate to the directory where Python was installed; this is usually <code>C:\Users\USER_NAME\AppData\Local\Programs\Python\Python36_64</code> if you have a 64-bit version. From there, issue the following commands: <pre><br /> python -m pip install --upgrade pip<br /> python -m pip install PACKAGE_NAME<br /></pre> Next, make sure that the path to Python is specified in your EViews options. Specifically, in EViews, go to <b>Options/General Options...</b> and on the left tree select <b>External program interface</b> and ensure that <b>Home Path</b> is correctly pointing to the directory where Python is installed.
Usually, you will not have to touch this setting since EViews populates this field by searching your system for the install directory.<br /><br /> Finally, please note that as of writing, the analysis that follows was tested with Python version 3.6.8 and PyFlux version 0.4.15.<br /><br /> <h3 id="sec4">Data Analysis in EViews</h3> Turning to data analysis, in EViews, create a new monthly workfile. To do so, click on <b>File/New/Workfile</b>. Under <b>Frequency</b> select <b>Monthly</b>, and set the <b>Start date</b> to <b>2006M12</b> and the <b>End date</b> to <b>2013M12</b>, and hit <b>OK</b>. Next, fetch the unemployment rate data from the FRED database by clicking on <b>File/Open/Database...</b>. From here, select <b>FRED Database</b> from the <b>Database/File Type</b> dropdown, and hit <b>OK</b>. This opens the FRED database window. To get the series of interest from here, click on the <b>Browse</b> button. This opens a new window with a folder-like overview. Here, click on <b>All Series Search</b> and then type <b>UNRATE</b> in the <b>Search For</b> textbox. This will list a series called <i>Civilian Unemployment Rate (M,SA,%)</i>. Drag the series over to the workfile to make it available for analysis. This will fetch the series <b>UNRATE</b> from the FRED database and place it in the workfile. In particular, we are grabbing data from the period of December 2006 to December 2013 -- effectively the recessionary period characterized by the recent housing loan crisis in the United States. 
<table> <tbody> <tr> <td> <!-- :::::::::: FIGURE 1A :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-Q-8R_IidAy4/XL9_4pjYlVI/AAAAAAAAAwE/fIApBqd5JaM6BUKSTbtyWoVebZ9H3o-6gCLcBGAs/s1600/workfiledlg.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-Q-8R_IidAy4/XL9_4pjYlVI/AAAAAAAAAwE/fIApBqd5JaM6BUKSTbtyWoVebZ9H3o-6gCLcBGAs/s1600/workfiledlg.jpg" title="Workfile Dialog" width="320" /></a><br /> </center> <!-- :::::::::: FIGURE 1A :::::::::: --> </td> <td> <!-- :::::::::: FIGURE 1B :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-l_NxAegKlPA/XL9_1x9o3jI/AAAAAAAAAvk/Ti-KspaNYvcxFHOTFnf01N-fAQwYGr5kwCLcBGAs/s1600/dbasedlg.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-l_NxAegKlPA/XL9_1x9o3jI/AAAAAAAAAvk/Ti-KspaNYvcxFHOTFnf01N-fAQwYGr5kwCLcBGAs/s1600/dbasedlg.jpg" title="Database Dialog" width="320" /></a><br /> </center> <!-- :::::::::: FIGURE 1B :::::::::: --> </td> </tr> <tr> <td> <center> <small>Figure 1A: Workfile Dialog</small><br /><br /> </center> </td> <td> <center> <small>Figure 1B: Database Dialog</small><br /><br /> </center> </td> </tr> <tr> <td> <!-- :::::::::: FIGURE 1C :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-ZXhI23HsbNg/XL-dET3-9WI/AAAAAAAAAxQ/VO7Ei3YNsZo323-Lki9uF8X9gQomoK8-gCLcBGAs/s1600/fredqry.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-ZXhI23HsbNg/XL-dET3-9WI/AAAAAAAAAxQ/VO7Ei3YNsZo323-Lki9uF8X9gQomoK8-gCLcBGAs/s1600/fredqry.jpg" title="FRED Browse" width="320" /></a><br /> </center> <!-- :::::::::: FIGURE 1C :::::::::: --> </td> <td> <!-- :::::::::: FIGURE 1D :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-Oe-7rPF-3B4/XL-dBX3UPMI/AAAAAAAAAxM/CbHRLkPXNZUlu8Bk3NDpY_XJXJFh-x4IwCLcBGAs/s1600/fredqry2.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-Oe-7rPF-3B4/XL-dBX3UPMI/AAAAAAAAAxM/CbHRLkPXNZUlu8Bk3NDpY_XJXJFh-x4IwCLcBGAs/s1600/fredqry2.jpg" title="FRED Search" width="320" /></a><br /> </center> <!-- 
:::::::::: FIGURE 1D :::::::::: --> </td> </tr> <tr> <td> <center> <small>Figure 1C: FRED Browse</small><br /><br /> </center> </td> <td> <center> <small>Figure 1D: FRED Search</small><br /><br /> </center> </td> </tr> </tbody></table> Also, restrict the sample to the period from January 2007 to December 2013. Why we do this will become apparent later. To do so, issue the following command in EViews: <pre><br /> smpl 2007M01 @last<br /></pre> To see what the data looks like, double click on <b>UNRATE</b> in the workfile to open the series object. Next, click on <b>View/Graph...</b>. This will open a graph options window. We will stick with the defaults, so click on <b>OK</b>. The output is reproduced below.<br /><br /> <!-- :::::::::: FIGURE 2 :::::::::: --><center> <a href="https://lh3.googleusercontent.com/-uHw8WuakTR4/XL9_5hRhC0I/AAAAAAAAAwI/9fNMnaORB0s1VtiXiNExZk2F1lhJtaTogCLcBGAs/s1600/unrategrph.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-uHw8WuakTR4/XL9_5hRhC0I/AAAAAAAAAwI/9fNMnaORB0s1VtiXiNExZk2F1lhJtaTogCLcBGAs/s1600/unrategrph.jpg" title="Time Series Plot of UNRATE" width="320" /></a><br /> <small>Figure 2: Time Series Plot of UNRATE</small><br /><br /></center><!-- :::::::::: FIGURE 2 :::::::::: --> We will now estimate a basic GARCH model on <b>UNRATE</b>. To do this, click on <b>Quick/Estimate Equation...</b>, and under <b>Method</b> choose <b>ARCH - Autoregressive Conditional Heteroskedasticity</b>. In the <b>Mean Equation</b> text box type <b>UNRATE</b> and leave everything else as their default values. Click on <b>OK</b>.
<table> <tbody> <tr> <td> <!-- :::::::::: FIGURE 3A :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-hhVBxWIvrWQ/XL9_2nSKhHI/AAAAAAAAAvw/4XEXNVqQYKoSJsz-o07VYotr_chidFdBwCLcBGAs/s1600/garchdlg.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-hhVBxWIvrWQ/XL9_2nSKhHI/AAAAAAAAAvw/4XEXNVqQYKoSJsz-o07VYotr_chidFdBwCLcBGAs/s1600/garchdlg.jpg" title="GARCH Estimation Dialog" width="320" /></a><br /> </center> <!-- :::::::::: FIGURE 3A :::::::::: --> </td> <td> <!-- :::::::::: FIGURE 3B :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-qOWoCw0lzlY/XL9_3H7NTOI/AAAAAAAAAv8/s8OAJ2cGWBg2xYkfgrFI9QA9KHsQ51A6ACLcBGAs/s1600/garchoutput.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-qOWoCw0lzlY/XL9_3H7NTOI/AAAAAAAAAv8/s8OAJ2cGWBg2xYkfgrFI9QA9KHsQ51A6ACLcBGAs/s1600/garchoutput.jpg" title="GARCH Estimation Output" width="320" /></a><br /> </center> <!-- :::::::::: FIGURE 3B :::::::::: --> </td> </tr> <tr> <td> <center> <small>Figure 3A: GARCH Estimation Dialog</small><br /><br /> </center> </td> <td> <center> <small>Figure 3B: GARCH Estimation Output</small><br /><br /> </center> </td> </tr> </tbody></table> From the estimation output we can see that model parameters have the following estimates: <ol> <li>$ \alpha = 1.068302 $ <li>$ \beta = 1.236277 $ <li>$ \eta = -0.247753 $ </ol> We can also see the path of the volatility process by clicking on <b>View/Garch Graph/Conditional Variance</b>. This produces a plot of $ \widehat{\sigma}^{2}_{t} $. In fact, we will also create a series object from the data points used to produce the GARCH conditional variance. To do this, from the GARCH conditional variance window, click on <b>Proc/Make GARCH Variance Series...</b> and in the <b>Conditional Variance</b> textbox enter <b>EVGARCH</b> and hit <b>OK</b>. This produces a series object called <b>EVGARCH</b> and places it in the workfile. 
We will use it a bit later.<br /><br /> <table> <tbody> <tr> <td> <!-- :::::::::: FIGURE 4A :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-8DKMKkkml08/XL9_2yebDjI/AAAAAAAAAv0/JcgxOH_kCo0jlbZwBZS8CwfZsx_7wQguQCLcBGAs/s1600/garchcondvar.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-8DKMKkkml08/XL9_2yebDjI/AAAAAAAAAv0/JcgxOH_kCo0jlbZwBZS8CwfZsx_7wQguQCLcBGAs/s1600/garchcondvar.jpg" title="GARCH Conditional Variance of UNRATE" width="320" /></a><br /> </center> <!-- :::::::::: FIGURE 4A :::::::::: --> </td> <td> <!-- :::::::::: FIGURE 4B :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-MXO4qlsX3nY/XL9_2ixTAAI/AAAAAAAAAvs/9dCFnnP_0EEG6AA0QbFfG1ecA_BsNhiYgCLcBGAs/s1600/garchcondvardlg.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-MXO4qlsX3nY/XL9_2ixTAAI/AAAAAAAAAvs/9dCFnnP_0EEG6AA0QbFfG1ecA_BsNhiYgCLcBGAs/s1600/garchcondvardlg.jpg" title="GARCH Conditional Variance of Proc" width="320" /></a><br /> </center> <!-- :::::::::: FIGURE 4B :::::::::: --> </td> </tr> <tr> <td> <center> <small>Figure 4A: GARCH Conditional Variance of UNRATE</small><br /><br /> </center> </td> <td> <center> <small>Figure 4B: GARCH Conditional Variance Proc</small><br /><br /> </center> </td> </tr> </tbody></table> <h3 id="sec5">Data Analysis in Python</h3> To estimate the GAS equivalent of this model we must first transfer our data over to Python. To do so, issue the following command in EViews: <pre><br /> xopen(p)<br /></pre> This tells EViews to open an instance of Python within EViews and open up bi-directional communication. In fact you should see a new command window appear, titled <b>Log: Python Output</b>. Here you can issue commands into Python directly as if you had opened a Python instance at any command prompt. You can also send commands to Python using EViews command prompt. 
In fact, we will use the latter approach to import packages into our Python instance as follows: <pre><br /> xrun "import numpy as np"<br /> xrun "import pandas as pd"<br /> xrun "import pyflux as pf"<br /> xrun "import matplotlib.pyplot as plt"<br /></pre> For instance, the first command above tells EViews to issue the command <i>import numpy as np</i> in the open Python instance, thereby importing the NumPy package. In fact, all results will be echoed in the Python instance.<br /><br /> <!-- :::::::::: FIGURE 5 :::::::::: --><center> <a href="https://lh3.googleusercontent.com/-GCNeE9hdtPk/XL-dH7SByDI/AAAAAAAAAxU/R2vsxQiCvucN9mvTsxJcr2Gys_PH2YNvACLcBGAs/s1600/pythondlg.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-GCNeE9hdtPk/XL-dH7SByDI/AAAAAAAAAxU/R2vsxQiCvucN9mvTsxJcr2Gys_PH2YNvACLcBGAs/s1600/pythondlg.jpg" title="Python Output Log" width="320" /></a><br /> <small>Figure 5: Python Output Log</small><br /><br /></center><!-- :::::::::: FIGURE 5 :::::::::: --> Next, transfer the <b>UNRATE</b> series over to Python by issuing the following command in EViews: <pre><br /> xput(ptype=dataframe) unrate<br /></pre> The command above sends the series <b>UNRATE</b> to Python and transforms that data into a Pandas DataFrame object.<br /><br /> We now follow the PyFlux documentation and estimate the GAS model by issuing the following commands from EViews: <pre><br /> xrun "model = pf.GAS(ar=1, sc=1, data=unrate, family=pf.Normal())"<br /> xrun "fit = model.fit('MLE')"<br /> xrun "fit.summary()"<br /></pre> The first command above tells PyFlux to create a GAS model object that has one autoregressive and one scaling parameter, sets $ p(\cdot) $ to the Gaussian distribution, and uses the series <b>UNRATE</b> as $ y_{t} $. In other words, the autoregressive and scaling parameters respectively correspond to the coefficients $ B $ and $ A $ in the first section of this document.
The second command tells Python to create a variable <b>FIT</b> that holds the output of the GAS model estimated by maximum likelihood. We display the output of this estimation by invoking the third command. In particular, we have the following estimates: <ol> <li>$ \omega = 0.0027 $ <li>$ A = 1.2973 $ <li>$ B = 0.9994 $ </ol> In fact, we can also obtain a distributional plot of the autoregressive coefficient $ B $ across the period of estimation. To do this, invoke the following command within EViews: <pre><br /> xrun "model.plot_z([1], figsize=(15,5))"<br /></pre> The latter command tells Python to plot the distribution of the 2nd estimated coefficient (the AR coefficient) and to display a figure which is of size $ 15\times 5 $ inches. This is the distribution of the estimated coefficient $ B $ and is <b>not</b> the time path of the estimated coefficient. <br /><br /> <!-- :::::::::: FIGURE 6 :::::::::: --><center> <a href="https://lh3.googleusercontent.com/-jDmWcghD7Ac/XL9_3PK7ppI/AAAAAAAAAv4/qsfrDrSvkG4Ho9epb73PaOgKz3kCI4HrQCLcBGAs/s1600/pyar1.png"><img height="auto" src="https://lh3.googleusercontent.com/-jDmWcghD7Ac/XL9_3PK7ppI/AAAAAAAAAv4/qsfrDrSvkG4Ho9epb73PaOgKz3kCI4HrQCLcBGAs/s1600/pyar1.png" title="Python GAS Distribution of AR Parameter" width="320" /></a><br /> <small>Figure 6: Python GAS Distribution of AR Parameter</small><br /><br /></center><!-- :::::::::: FIGURE 6 :::::::::: --> While we can obtain a distribution of the estimated parameters, unfortunately, PyFlux does not offer a way to extract the time path as a Python data object. Thankfully, we can recreate it manually and easily as a series in EViews.<br /><br /> <h3 id="sec6">Back To EViews</h3> To recreate the time path of the GAS conditional variance, we first need to transfer the coefficients from the estimated GAS model back into EViews.
To do this, we invoke the following command in EViews: <pre><br /> xget(name=gascoefs, type=vector) fit.results.x[0:3]<br /></pre> This tells Python to send the first three estimated coefficients back to EViews, and saves the result as a vector called <b>GASCOEFS</b>.<br /><br /> Next, create a new series in the workfile called <b>GASGARCH</b> by issuing the following command in EViews: <pre><br /> series gasgarch<br /></pre> Also, since this is an autoregressive process, we need to set an initial value for <b>GASGARCH</b>. We do this by setting the December 2006 observation to 0.7 -- the default value EViews uses to initialize its internal GARCH estimation. To do so, type the following commands in EViews: <pre><br /> smpl 2006M12 2006M12<br /> gasgarch = 0.7<br /></pre> Next, we set the sample back to the period of interest and fill the values of <b>GASGARCH</b> using the GARCH formula with the coefficients from the GAS model. To do this, issue the following commands in EViews again: <pre><br /> smpl 2007M01 @last<br /> gasgarch = gascoefs(1) + gascoefs(3)*(unrate(-1)^2 - gasgarch(-1)) + gascoefs(2)*gasgarch(-1)<br /></pre> Finally, we plot the GARCH conditional variance path from the internal estimation, <b>EVGARCH</b>, along with the newly created series <b>GASGARCH</b>.
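As a cross-check outside of EViews, the same recursion can be reproduced in a few lines of NumPy. This is only a sketch: the assignment of the <b>GASCOEFS</b> entries to $ \omega $, $ B $, and $ A $ mirrors the EViews formula above and should be verified against your own <b>fit.results</b> output, and the short data series here is made up for illustration:

```python
import numpy as np

def gas_variance_path(y, omega, b, a, init=0.7):
    # Rebuild theta_t = omega + a*(y_{t-1}^2 - theta_{t-1}) + b*theta_{t-1},
    # mirroring the EViews recursion; `init` plays the role of the 2006M12 seed.
    theta = np.empty(len(y) + 1)
    theta[0] = init
    for t in range(len(y)):
        theta[t + 1] = omega + a * (y[t] ** 2 - theta[t]) + b * theta[t]
    return theta[1:]

# coefficient values taken from the estimates reported above; the b/a roles
# follow the EViews formula and are an assumption of this sketch
path = gas_variance_path([1.0, 2.0, 1.5], omega=0.0027, b=0.9994, a=1.2973)
```

Running the same loop on the actual <b>UNRATE</b> data reproduces the <b>GASGARCH</b> series created above.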
We can do this programmatically by issuing the following commands in EViews: <pre><br /> plot evgarch gasgarch<br /></pre> <!-- :::::::::: FIGURE 7 :::::::::: --><center> <a href="https://lh3.googleusercontent.com/-4S2DBUuONKQ/XL-DnGwwZSI/AAAAAAAAAw0/eDTMCzULzPI4fnjOvqIxLQ2ZojzxB5i5QCLcBGAs/s1600/garchgascompare.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-4S2DBUuONKQ/XL-DnGwwZSI/AAAAAAAAAw0/eDTMCzULzPI4fnjOvqIxLQ2ZojzxB5i5QCLcBGAs/s1600/garchgascompare.jpg" title="GARCH Conditional Variance Comparison with GAS" width="320" /></a><br /> <small>Figure 7: GARCH Conditional Variance Comparison with GAS</small><br /><br /></center><!-- :::::::::: FIGURE 7 :::::::::: --> It is clear that the two estimation techniques produce the same path despite having different estimates for the coefficients. Finally, note that while GARCH models are estimated using maximum likelihood procedures, estimation can be numerically unstable and may fail to converge. This often requires a re-specification of the convergence criterion and / or a change in starting values. These drawbacks are also an issue with GAS models.<br /><br /> <h3 id="sec7">Files</h3>The workfile and program files can be downloaded here.<br /><br /> <ul> <li> <a href="http://www.eviews.com/blog/pygas/pygas.WF1">pygas.WF1</a> <li> <a href="http://www.eviews.com/blog/pygas/pygas.prg">pygas.prg</a> </ul><br /><br /> <hr /><h3 id="sec8">References</h3> <table> <tr valign="top"> <td align="right" class="bibtexnumber"> <a name="bollerslev-1986">1</a> </td> <td class="bibtexitem"> Tim Bollerslev. Generalized autoregressive conditional heteroskedasticity. <em>Journal of Econometrics</em>, 31(3):307--327, 1986. [ <a href="references_bib.html#bollerslev-1986">bib</a> ] </td> </tr> <tr valign="top"> <td align="right" class="bibtexnumber"> <a name="creal-2013">2</a> </td> <td class="bibtexitem"> Drew Creal, Siem Jan Koopman, and André Lucas.
Generalized autoregressive score models with applications. <em>Journal of Applied Econometrics</em>, 28(5):777--795, 2013. [ <a href="references_bib.html#creal-2013">bib</a> ] </td> </tr> <tr valign="top"> <td align="right" class="bibtexnumber"> <a name="engle-1982">3</a> </td> <td class="bibtexitem"> Robert F Engle. Autoregressive conditional heteroscedasticity with estimates of the variance of united kingdom inflation. <em>Econometrica: Journal of the Econometric Society</em>, pages 987--1007, 1982. [ <a href="references_bib.html#engle-1982">bib</a> ] </td> </tr> <tr valign="top"> <td align="right" class="bibtexnumber"> <a name="harvey-2013">4</a> </td> <td class="bibtexitem"> Andrew C Harvey. <em>Dynamic models for volatility and heavy tails: with applications to financial and economic time series</em>, volume 52. Cambridge University Press, 2013. [ <a href="references_bib.html#harvey-2013">bib</a> ] </td> </tr> </table> </span>IHSEViewshttp://www.blogger.com/profile/04703437003033046408noreply@blogger.com0tag:blogger.com,1999:blog-6883247404678549489.post-86737930382696770012019-04-23T07:48:00.000-07:002019-04-23T07:48:17.503-07:00Seasonal Unit Root Tests<script type="text/x-mathjax-config"> MathJax.Hub.Config({ tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], }, TeX: { equationNumbers: { autoNumber: "AMS" }, extensions: ["AMSmath.js"], Macros: { lb: "{\\left(}", rb: "{\\right)}", bu: ['{\\underline{#1}}', 1], ba: ['{\\overline{#1}}', 1], norm: ['{\\lVert#1\\rVert}', 1] } } }); </script> <script async="" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS_CHTML" type="text/javascript"></script> <span style="font-family: "verdana" sans-serif"> <i>Author and guest post by Nicolas Ronderos</i><br /><br /> In this blog entry we will offer a brief discussion on some aspects of seasonal non-stationarity and discuss two popular seasonal unit root tests. 
In particular, we will cover the Hylleberg, Engle, Granger, and Yoo (1990) and Canova and Hansen (1995) tests and demonstrate practically using EViews how the latter can be used to detect the presence of seasonal unit roots in a US macroeconomic time series. All files used in this exercise can be downloaded at the end of the entry.<br /><br /><a name='more'></a> <h3>Deterministic vs Stochastic Seasonality</h3> When we talk about the concept of seasonality in time series, we usually refer to the idea of <i>"... systematic, although not necessarily regular, intra-year movement caused by changes of the weather, the calendar, and timing of decisions..."</i> (Hans Franses). Naturally, macroeconomic data observed with high periodicity (sampled more than once a year) usually exhibit this behavior.<br /><br /> Seasonality can be modelled in two ways: deterministically or stochastically. The former arises from systematic cycles such as calendar effects or climatic phenomena and can be removed from the data by seasonal adjustment procedures -- for instance, by including seasonal dummy variables. Formally, this implies deterministic seasonality evolves as:<br /><br /> $$ y_{t} = \mu + \sum_{s=1}^{S-1}\delta_{s}D_{s,t} + e_{t} $$ where $ S $ is the total number of period cycles, $ D_{s,t} $ are seasonal dummy variables which equal 1 in season $ s $ and 0 otherwise, and $ e_{t} $ are the usual innovations.
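Deterministic seasonality of this form is easy to simulate. The short sketch below (with illustrative values of $ \mu $ and $ \delta_{s} $) checks the defining property of the model: averaging within each season recovers the fixed seasonal pattern $ \mu + \delta_{s} $:

```python
import numpy as np

rng = np.random.default_rng(42)
S, n_years = 4, 200                        # quarterly data, illustrative sample
mu = 15.0
delta = np.array([-1.0, -4.0, -6.0, 0.0])  # delta_S normalized to zero

# seasonal dummies: one row per observation, one column per season
D = np.tile(np.eye(S), (n_years, 1))
y = mu + D @ delta + rng.standard_normal(S * n_years)

# the within-season sample means recover the deterministic pattern mu + delta_s
season_means = y.reshape(n_years, S).mean(axis=0)
print(np.round(season_means, 1))
```

Because the seasonal pattern is fixed, any amount of additional data simply averages away the noise around $ \mu + \delta_{s} $; this is precisely what fails under stochastic seasonality.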
For example, in the case of quarterly data $ (S=4) $, one could postulate that seasonality evolves as:<br /><br /> $$ y_{t} = 15 - D_{1,t} - 4D_{2,t} - 6D_{3,t} + e_{t}$$ The process is visualized below:<br /><br /> <!-- :::::::::: FIGURE 1 :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-Rcyh4rXs9xk/XL3zX9LeHbI/AAAAAAAAAtc/TNaYbumDBwko5GG2503X6x6NuwUJZLHSQCEwYBhgL/s1600/ds.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-Rcyh4rXs9xk/XL3zX9LeHbI/AAAAAAAAAtc/TNaYbumDBwko5GG2503X6x6NuwUJZLHSQCEwYBhgL/s1600/ds.jpg" title="Deterministic Seasonality" width="320" /></a><br /> <small>Figure 1: Deterministic Seasonality</small><br /><br /> </center> Notice here that the optimal $ h $-period ahead forecast of $ y_{t} $ in season $ s $, is given by:<br /><br /> $$ \widehat{y}_{S(t+h)-s} = \widehat{\mu} + \widehat{\delta}_{s} $$ where $ s = S-1, \ldots, 0 $. In other words, the optimal forecast of $ y_{t} $ in season $ s $ is the same at each future point in time for said season. It is precisely this property which formalizes the notion of systematic cyclicality.<br /><br /> On the other hand, stochastic seasonality describes nearly systematic cycles which evolve as seasonal ARMA$(p,q)$ processes of the form:<br /><br /> $$ (1 - \eta_{1}L^{S} - \eta_{2}L^{2S} - \ldots - \eta_{p}L^{pS})y_{t} = (1 + \xi_{1}L^{S} + \xi_{2}L^{2S} + \ldots + \xi_{q}L^{qS})e_{t}$$ where $ L $ denotes the usual lag operator. 
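The stochastic seasonal family is equally easy to simulate. The sketch below takes the seasonal AR(1) case with an illustrative $ \eta_{1} = 0.75 $ and $ S = 4 $, and checks its defining feature: observations are strongly correlated with their own season $ S $ periods back, but essentially uncorrelated with adjacent observations:

```python
import numpy as np

rng = np.random.default_rng(7)
S, n, eta = 4, 400, 0.75

# y_t = eta * y_{t-S} + e_t: S interleaved, mutually independent AR(1) chains
y = np.zeros(n + S)
for t in range(S, n + S):
    y[t] = eta * y[t - S] + rng.standard_normal()
y = y[S:]

def acorr(x, lag):
    # sample autocorrelation at the given lag
    return np.corrcoef(x[lag:], x[:-lag])[0, 1]

print(acorr(y, S))   # large: close to eta = 0.75
print(acorr(y, 1))   # near zero: adjacent seasons share no dynamics
```

The process is really $ S $ independent AR(1) chains spliced together, which is why forecasts within a season depend only on past values of that same season.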
In particular, when $ p = 1 $ and $ q = 0 $, the seasonal AR(1) model with $ \eta_{1} = 0.75 $ is visualized as follows:<br /><br /> <!-- :::::::::: FIGURE 2 :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-frISG5yc5Vs/XL3z2QMIM6I/AAAAAAAAAtk/VrmiJrXJp6oFRUAZpTcJBMk-F1dr7ilIACEwYBhgL/s1600/ss.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-frISG5yc5Vs/XL3z2QMIM6I/AAAAAAAAAtk/VrmiJrXJp6oFRUAZpTcJBMk-F1dr7ilIACEwYBhgL/s1600/ss.jpg" title="Stochastic Seasonality" width="320" /></a><br /> <small>Figure 2: Stochastic Seasonality</small><br /><br /> </center> Unlike the deterministic seasonal model, however, the $ h $-period ahead forecast of the stochastic seasonal model is not constant. In particular, for the seasonal AR(1) model, the forecast $ h $-periods ahead is given by:<br /><br /> $$ \widehat{y}_{S(t+h)-s} = \widehat{\eta}_{1}^{h}y_{St-s} $$ In other words, the forecast in any given season is a function of past data values, and is therefore considered to be <i>stochastic</i>.<br /><br /> So how does one identify whether a series exhibits deterministic or stochastic seasonality? One useful tool is the <i>periodogram</i> which produces a decomposition of the dominant frequencies (cycles) of a time series. As it turns out, there are at most $ S $ frequencies in a time series exhibiting $ S $ period cycles.
Formally, these are identified in conjugate pairs as follows:<br /><br /> $$ \omega \in \left\{0, \left(\frac{2\pi}{S}, 2\pi-\frac{2\pi}{S}\right), \left(\frac{4\pi}{S}, 2\pi-\frac{4\pi}{S}\right), \ldots, \pi \right\} $$ if $ S $ is even, and<br /><br /> $$ \omega \in \left\{0, \left(\frac{2\pi}{S}, 2\pi-\frac{2\pi}{S}\right), \left(\frac{4\pi}{S}, 2\pi-\frac{4\pi}{S}\right), \ldots, \left(\frac{2\pi\lfloor S/2 \rfloor}{S}, 2\pi-\frac{2\pi\lfloor S/2\rfloor}{S}\right) \right\} $$ if $ S $ is odd.<br /><br /> Thus, given a stationary time series with $ S $ period cycles, we expect the periodogram to exhibit peaks at the non-zero seasonal frequencies. In particular, we present the periodogram for deterministic and stochastic seasonal processes below:<br /><br /> <table> <tbody> <tr> <td> <!-- :::::::::: FIGURE 3A :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-nfQ9gfzbaV8/XL34GxM97eI/AAAAAAAAAt8/zwgbTwDu3MU8-tkF7OdUDvSBvA7j5bCUACEwYBhgL/s1600/dsprdgrm.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-nfQ9gfzbaV8/XL34GxM97eI/AAAAAAAAAt8/zwgbTwDu3MU8-tkF7OdUDvSBvA7j5bCUACEwYBhgL/s1600/dsprdgrm.jpg" title="Deterministic Seasonality Periodogram" width="320" /></a><br /> </center> <!-- :::::::::: FIGURE 3A :::::::::: --> </td> <td> <!-- :::::::::: FIGURE 3B :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-n6oWWlny_30/XL34Gg5AJHI/AAAAAAAAAt4/1hmGndwDR20hVcGhUrZbirTE_uEbAWpmwCEwYBhgL/s1600/ssprdgrm.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-n6oWWlny_30/XL34Gg5AJHI/AAAAAAAAAt4/1hmGndwDR20hVcGhUrZbirTE_uEbAWpmwCEwYBhgL/s1600/ssprdgrm.jpg" title="Stochastic Seasonality Periodogram" width="320" /></a><br /> </center> <!-- :::::::::: FIGURE 3B :::::::::: --> </td> </tr> <tr> <td> <center> <small>Figure 3A: Deterministic Seasonality Periodogram</small><br /><br /> </center> </td> <td> <center> <small>Figure 3B: Stochastic Seasonality Periodogram</small><br /><br /> </center> </td> </tr> </tbody> </table> We can see from
the periodograms that the spectrum of deterministic seasonal processes exhibits sharp peaks at the seasonal frequencies $ \omega $, whereas that of stochastic seasonal processes exhibits a window of sharp peaks centered around seasonal frequencies $ \omega $. In the case of stochastic seasonality, the fact that the spectrum spreads around principal frequencies and is not a single peak reaffirms the notion that cycles are stochastically distributed around said frequencies.<br /><br /> <h3>Seasonal Unit Roots</h3> A particularly important form of stochastic seasonality manifests in the form of unit roots at some or all of the frequencies $ \omega $. In particular, consider the following process:<br /><br /> $$ y_{t} = \eta y_{t-S} + e_{t} $$ and note that the characteristic equation associated with the process is defined as:<br /><br /> \begin{align} 1 - \eta z^{S} = 0 \quad \text{or} \quad z^{S} = 1/\eta \label{eq1} \end{align} Analogous to the case of classical unit root processes, when $ |\eta|=1 $ or $ |z| = 1^{1/S} = 1 $, $ y_{t} $ is in fact non-stationary. In contrast to the classical unit root case, however, $ y_{t} $ can possess not one, but up to $ S $ unique unit roots. To see this, note that any complex number $ z = a + ib $ can be written in polar form as:<br /><br /> $$ z = \sqrt{a^{2} + b^{2}}(\cos(\theta) + i\sin(\theta)) = r(\cos(\theta) + i\sin(\theta)) $$ where $ r = |z|$ is called the magnitude of $ z $, but is also the radius of the circle in polar coordinates. Accordingly, when $ |\eta | = 1 $ or $ |z|=1 $, $ z $ lies on a circle with radius $ r = 1 $. In other words, $ y_{t} $ is a unit root process. Next, recall Euler's formula:<br /><br /> $$ e^{ix} = \cos(x) + i \sin(x) $$ Clearly, any complex number $ z $ with magnitude $ r=1 $ satisfies Euler's formula. In other words, $ z = e^{i\theta} $.
Since Euler's formula also implies that:<br /><br /> $$ e^{2\pi i k} = 1 \quad \text{for} \quad k=0,1,2,\ldots$$ when $ \eta=1 $ or $ |z|=1 $, the characteristic equation \eqref{eq1} can be expressed as:<br /><br /> \begin{align*} z = e^{i\omega} &= 1^{1/S} \notag\\ &= (e^{2\pi i k})^{1/S}\notag\\ &= e^{\frac{2\pi i k}{S}} \end{align*} where the relations above evidently hold for all $ k=0,1,2,\ldots, S-1 $ since the solutions begin to cycle when $ k \geq S $. Now, taking logarithms of both sides, it is clear that:<br /><br /> \begin{align} \omega = \frac{2\pi k}{S} \quad \text{for} \quad k=0,1,2,\ldots, S-1 \label{eq2} \end{align} In other words, the characteristic equation \eqref{eq1} has $ S $ unique solutions identified by the $ S $ relationships in \eqref{eq2}. These solutions are equally spaced (by $ 2\pi/S $ radians) on the unit circle, with two real solutions associated with $ \omega = 0 $ and $ \omega = \pi $ when $ S $ is even, and the remaining complex solutions organized in harmonic conjugate pairs.<br /><br /> Thus, when we identify $ S $ with a temporal frequency, namely a week, month, quarter, and so on, the problem of identifying roots of the characteristic equation \eqref{eq1} extends the classical unit root literature, in which $ S=1 $ (or annual frequency), to that of identifying $ S > 1 $ possible roots on the unit circle.<br /><br /> In fact, like the classical unit-root literature in which unchecked unit roots are known to have severe inferential consequences, the presence of unit roots at seasonal frequencies can also give rise to similar inferential inaccuracies and concerns. Accordingly, identifying the presence of unit roots at one or more seasonal frequencies is the subject of the battery of tests known as <i>seasonal unit root tests</i>.<br /><br /> <h3>Seasonal Unit Root Tests</h3> Historically, the first test for a seasonal unit root was proposed by Dickey, Hasza and Fuller (1984) (DHF).
In its simplest form, the test is based on running the regression:<br /><br /> $$ (1-L^{S})y_{t} = \eta y_{t-S} + e_{t} $$ and testing the null hypothesis $ H_{0}: \eta = 0 $ against the one-sided alternative $ H_{A}: \eta < 0 $. The test is carried out using the familiar Student's-$ t $ statistic for the significance of $ \eta $, and analogous to the classic augmented Dickey-Fuller (ADF) test, exhibits a non-standard asymptotic distribution under the null. Nevertheless, the DHF test is very restrictive: it imposes the existence of a unit root at all $ S $ seasonal frequencies simultaneously, whereas in reality a process may exhibit a seasonal unit root at some seasonal frequencies but not others.<br /><br /> <h4>HEGY Seasonal Unit Root Test</h4> To correct for the shortcomings of the DHF test, Hylleberg, Engle, Granger and Yoo (1990) (HEGY) proposed a test for the determination of unit roots at each of the $ S $ seasonal frequencies individually, or collectively. In particular, following the notation in Smith and Taylor (1999), in its simplest form, the HEGY test is based on regressions of the form:<br /><br /> \begin{align*} (1-L^{S})y_{St-s} &= \mu + \pi_{0}L\left(1 + L + \ldots + L^{S-1}\right)y_{St-s}\\ &+ L\sum_{k=1}^{S^{\star}}\left( \pi_{k,1}\sum_{j=0}^{S-1}\cos\left((j+1)\frac{2\pi k}{S}\right)L^{j} - \pi_{k,2}\sum_{j=0}^{S-1}\sin\left((j+1)\frac{2\pi k}{S}\right)L^{j} \right)y_{St-s}\\ &+ \pi_{S/2}L\left(1 - L + L^{2} - \ldots - L^{S-1}\right)y_{St-s} + e_{t}\\ &\equiv \mu + \pi_{0}y_{St-s-1, 0} + \sum_{k=1}^{S^{\star}}\pi_{k,1}y_{St-s-1,k,1} + \sum_{k=1}^{S^{\star}}\pi_{k,2}y_{St-s-1,k,2} + \pi_{S/2}y_{St-s-1, S/2} +e_{t} \end{align*} where $ S^{\star} = (S/2) - 1 $ if $ S $ is even and $ S^{\star} = \lfloor S/2 \rfloor $ if $ S $ is odd, and as before, $ s = S-1, \ldots, 1, 0 $.<br /><br /> In particular, when the data is quarterly with $ S=4 $ and therefore $ S^{\star} = 1 $, then:<br /><br /> \begin{align*} y_{4t-s, 0} &=
(1+L+L^{2}+L^{3})y_{4t-s}\\ y_{4t-s, 1,1} &= -L(1-L^{2})y_{4t-s}\\ y_{4t-s, 1,2} &= -(1-L^{2})y_{4t-s}\\ y_{4t-s, 2} &= -(1-L+L^{2}-L^{3})y_{4t-s} \end{align*} Here, $ y_{4t-s, 0} $ is in fact the series $ y_{4t-s} $ filtered by the 0 frequency filter, $ y_{4t-s, 1,1} $ is the series $ y_{4t-s} $ filtered by the $ \pi/2 $ frequency filter, $ y_{4t-s, 1,2} $ is the series $ y_{4t-s} $ filtered by the $ 3\pi/2 $ frequency filter, and $ y_{4t-s, 2} $ is the series $ y_{4t-s} $ filtered by the $ \pi $ frequency filter.<br /><br /> To visualize the frequency filters, consider the spectral filter functions associated with each of the processes above. The latter are computed as $ |\phi(e^{i\theta})| $ where $ \phi(\cdot) $ is the lag polynomial applied to $ y_{St-s} $, and $ \theta \in [0, 2\pi) $. For instance, in the case of quarterly data, the 0 frequency filter is computed as $ |1 + e^{i\theta} + e^{i2\theta} + e^{i3\theta}| $, and so on.<br /><br /> <!-- :::::::::: FIGURE 4 :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-fdsyB7HGBTM/XL4p5kYpm5I/AAAAAAAAAuY/Wr2Pj5D42L4NV178cYYzKjRtq1afj7f7wCLcBGAs/s1600/filters.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-fdsyB7HGBTM/XL4p5kYpm5I/AAAAAAAAAuY/Wr2Pj5D42L4NV178cYYzKjRtq1afj7f7wCLcBGAs/s1600/filters.jpg" title="HEGY Seasonal Filters" width="320" /></a><br /> <small>Figure 4: HEGY Seasonal Filters</small><br /><br /> </center> Like the DHF test, the HEGY test also reduces to verifying parameter significance in the regression equation. Nevertheless, in contrast to DHF, HEGY tests can detect the isolated effect of each seasonal frequency independently.
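The behavior of these filters is easy to check numerically. The short Python sketch below (our own illustration; <code>gain</code> is a hypothetical helper, not part of EViews) evaluates $ |\phi(e^{i\theta})| $ for the three quarterly filters and confirms that each passes its target frequency while annihilating the others.

```python
import numpy as np

def gain(coeffs, theta):
    """|phi(e^{i*theta})| for the lag polynomial phi(L) = sum_j coeffs[j] * L**j."""
    z = np.exp(1j * theta)
    return abs(sum(c * z**j for j, c in enumerate(coeffs)))

zero_filter = [1, 1, 1, 1]    # (1 + L + L^2 + L^3): passes omega = 0
pi_filter = [1, -1, 1, -1]    # (1 - L + L^2 - L^3): passes omega = pi (up to sign)
harmonic = [0, -1, 0, 1]      # -L(1 - L^2): passes the pair (pi/2, 3*pi/2)

assert abs(gain(zero_filter, 0.0) - 4) < 1e-12   # peak at the 0 frequency
assert gain(zero_filter, np.pi) < 1e-12          # zero at pi ...
assert gain(zero_filter, np.pi / 2) < 1e-12      # ... and at pi/2
assert abs(gain(pi_filter, np.pi) - 4) < 1e-12
assert gain(pi_filter, 0.0) < 1e-12
assert abs(gain(harmonic, np.pi / 2) - 2) < 1e-12
assert gain(harmonic, 0.0) < 1e-12
```

The assertions mirror the shapes in Figure 4: each filter has non-zero gain only in a neighborhood of its own frequency.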
In the case of quarterly data, for instance, a $ t- $test on coefficient significance for $ \pi_{0} = 0 $ is in fact a test for a unit root at the $ \omega = 0 $ frequency, a $ t- $test on coefficient significance for $ \pi_{2} = 0 $ is a test for the presence of a unit root at the $ \omega = \pi $ frequency, and an $ F- $test for the joint parameter significance of $ \pi_{1,1} = 0 $ and $ \pi_{1,2} = 0 $ is in fact a joint test for the presence of a unit root at the harmonic conjugate pair of frequencies $ (\pi/2, 3\pi/2) $.<br /><br /> It should also be noted here that while we have focused on the simplest form, the HEGY test can accommodate various deterministic specifications in the form of seasonal dummies, constants, and trends. Moreover, in the presence of serial correlation in the innovation process, the HEGY test can also be augmented with lags of the dependent variable as additional regressors to the principal equation presented above, in order to mitigate the effect.<br /><br /> In fact, the HEGY test is very similar to the ADF test, which is effectively a unit root test at the 0-frequency alone. Whereas the latter proceeds as a regression of a differenced series against its lagged level, the former proceeds as a regression of a seasonally differenced series against the lagged levels at each of the constituent seasonal frequencies. In this regard, the HEGY test is considered an extension of the ADF test in the direction of non-zero frequencies. As such, it also suffers from the same shortcomings as the ADF test, and can exhibit low statistical power when the individual frequencies are in fact stationary, but exhibit near-unit root behaviour.<br /><br /> <h4>Canova-Hansen Seasonal Unit Root Test</h4> One response to the low power of ADF tests in the presence of near unit root stationarity was the test of Kwiatkowski, Phillips, Schmidt, and Shin (1992) (KPSS), which is in fact a test for stationarity at the 0-frequency alone.
The analogous development in the seasonal unit root literature was the test of Canova and Hansen (1995) (CH). Like the KPSS test, the CH test is also a test for stationarity but extends to non-zero seasonal frequencies.<br /><br /> The idea behind the CH test is to suppose that seasonality manifests in the process mean. In other words, given a process $ y_{t} $, if seasonal effects are present, then $ y_{t} $ will exhibit a seasonally dependent average. Traditionally, this is formalized using seasonal dummy variables as:<br /><br /> $$ y_{t} = \sum_{s=0}^{S-1}\delta_{s}D_{s,t} + e_{t} $$ Nevertheless, it is well known that an equivalent representation using discrete Fourier expansions exists in terms of sine and cosine functions. In particular,<br /><br /> $$ y_{t} = \sum_{k=0}^{\lfloor S/2 \rfloor}\left(\delta_{k,1}\cos\left(\frac{2\pi k t}{S}\right) + \delta_{k,2}\sin\left(\frac{2\pi k t}{S}\right)\right) + e_{t} $$ where the sine terms vanish at $ k=0 $ (and at $ k=S/2 $ when $ S $ is even), so that the representation again involves $ S $ free coefficients, and $ \delta_{k,1} $ and $ \delta_{k,2} $ are referred to as <i>spectral intercept</i> coefficients.
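The equivalence of the two representations can be checked by regression: both designs span the same $ S $-dimensional space of period-$ S $ sequences, so they yield identical fitted seasonal means. A small Python sketch (simulated quarterly data, our own variable names):

```python
import numpy as np

rng = np.random.default_rng(42)
S, T = 4, 200
t = np.arange(T)
# A quarterly series with a deterministic seasonal mean plus noise
y = np.tile([1.0, -0.5, 2.0, 0.3], T // S) + 0.1 * rng.standard_normal(T)

# Seasonal-dummy design: one indicator per season
D = np.zeros((T, S))
D[t, t % S] = 1.0

# Trigonometric design: intercept, the cos/sin pair at 2*pi/S, and cos(pi*t)
Z = np.column_stack([
    np.ones(T),
    np.cos(2 * np.pi * t / S), np.sin(2 * np.pi * t / S),
    np.cos(np.pi * t),
])

fit_D = D @ np.linalg.lstsq(D, y, rcond=None)[0]
fit_Z = Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
assert np.allclose(fit_D, fit_Z)   # identical fitted seasonal means
```

Whichever basis is used, the estimated seasonal pattern is the same; only the interpretation of the coefficients differs.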
In either case, the expression can be written in vector notation as follows:<br /><br /> \begin{align} y_{t} = \pmb{Z}_{t}^{\top}\pmb{\gamma}_{t} + e_{t} \label{eq3} \end{align} where $ \pmb{Z}_{t} = \left(1, \pmb{z}_{1,t}^{\top}, \ldots, \pmb{z}_{S^{\star},t}^{\top} \right) $ (or $ \pmb{Z}_{t} = \left(1, D_{1,t}, \ldots, D_{S-1,t}\right) $) and $ \pmb{\gamma}_{t} = \left(\gamma_{1,t}, \ldots, \gamma_{S,t}\right) $ is an $ S\times 1 $ vector of coefficients, and $ \pmb{z}_{k,t} = \left(\cos\left(\frac{2\pi k t}{S}\right), \sin\left(\frac{2\pi k t}{S}\right)\right) $ for $ k=1,\ldots, S^{\star} $, with the convention $ \pmb{z}_{S^{\star},t} \equiv \cos(\pi t) = (-1)^{t} $ when $ S $ is even.<br /><br /> Next, to distinguish between stationary and non-stationary seasonality, CH assume that the coefficient vector $ \pmb{\gamma}_{t} $ evolves as the following random walk:<br /><br /> \begin{align*} \pmb{\gamma}_{t} &= \pmb{\gamma}_{t-1} + u_{t}\\ u_{t} &\sim IID(\pmb{0}, \pmb{G})\\ \pmb{G} &= \text{diag}(\theta_{1}, \ldots, \theta_{S}) \end{align*} Observe that when $ \theta_{k} > 0 $, then $ \gamma_{k,t} $ follows a random walk. On the other hand, when $ \theta_{k} = 0 $, then $ \gamma_{k,t} = \gamma_{k, t-1} = \gamma_{k} $, a fixed constant for all $ t $. In other words, when $ \theta_{k} > 0 $, the process $ y_{t} $ exhibits a seasonal unit root at the harmonic frequency pair $ (\frac{2\pi k}{S}, 2\pi - \frac{2\pi k}{S}) $ for $ 1\leq k < \lfloor S/2 \rfloor $, and the frequency $ \frac{2\pi k}{S} $ if $ k=0 $ or $ k = \lfloor S/2 \rfloor $.
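To see the two regimes in action, the following Python sketch (purely illustrative, with our own parameter choices) simulates one series with fixed $ \pmb{\gamma} $ and one whose coefficients follow a random walk; the seasonal pattern of the former is stable across subsamples, while that of the latter drifts.

```python
import numpy as np

rng = np.random.default_rng(7)
S, T = 4, 4000
t = np.arange(T)
Z = np.column_stack([np.ones(T), np.cos(2 * np.pi * t / S),
                     np.sin(2 * np.pi * t / S), np.cos(np.pi * t)])
gamma0 = np.array([0.0, 1.0, -0.5, 0.3])

# theta_k = 0 for all k: fixed coefficients -> deterministic seasonality
y_det = Z @ gamma0 + rng.standard_normal(T)

# theta_k > 0 for all k: random-walk coefficients -> seasonal unit roots
gamma_rw = gamma0 + np.cumsum(0.05 * rng.standard_normal((T, S)), axis=0)
y_sto = (Z * gamma_rw).sum(axis=1) + rng.standard_normal(T)

# Compare seasonal means in the first and second halves of each sample
seas_mean = lambda x: x.reshape(-1, S).mean(axis=0)
drift_det = np.abs(seas_mean(y_det[:T // 2]) - seas_mean(y_det[T // 2:])).max()
drift_sto = np.abs(seas_mean(y_sto[:T // 2]) - seas_mean(y_sto[T // 2:])).max()
assert drift_sto > drift_det   # the stochastic pattern wanders; the fixed one does not
```

This drifting seasonal pattern is exactly what the CH test is designed to detect.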
In this regard, to test the null hypothesis that $ y_{t} $ exhibits at most deterministic seasonality at certain (possibly all) frequencies, against the alternative hypothesis that $ y_{t} $ exhibits a seasonal unit root at certain (possibly all) frequencies, define $ \pmb{A}_1 $ and $ \pmb{A}_2 $ as mutually orthogonal, full column-rank, $(S \times a_1)-$ and $(S \times a_2)$-matrices which respectively constitute $1 \leq a_1 \leq S$ and $a_2 = S - a_1$ sub-columns from the order-$S$ identity matrix $\pmb{I}_S$.<br /><br /> For instance, if one wishes to test whether a seasonal unit root exists at frequency $ \pi $, one would set $ \pmb{A}_{1} = (0,\ldots, 0,1)^{\top} $. Alternatively, if testing for a seasonal unit root at the frequency pair $ \left(\frac{2\pi}{S}, 2\pi - \frac{2\pi}{S}\right) $, then one would set:<br /><br /> $$ \pmb{A}_{1} = \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ \vdots & \vdots \\ 0 & 0 \end{bmatrix} $$ Note that one can further rewrite \eqref{eq3} as follows:<br /><br /> $$ y_{t} = \pmb{Z}_{t}^{\top}\pmb{A}_{1}\pmb{A}_{1}^{\top}\pmb{\gamma}_{t} + \pmb{Z}_{t}^{\top}\pmb{A}_{2}\pmb{A}_{2}^{\top}\pmb{\gamma}_{t} + e_{t} $$ Next, define $ \pmb{\Theta} = \left(\theta_{1}, \ldots, \theta_{S}\right)^{\top} $ and observe that the CH hypothesis battery reduces to:<br /><br /> \begin{align*} H_{0}&: \pmb{A}_{1}^{\top}\pmb{\Theta} = \pmb{0}\\ H_{A}&: \pmb{A}_{1}^{\top}\pmb{\Theta} > 0 \end{align*} where in addition to $ H_{0} $, it is implicitly maintained that $ H_{M}: \pmb{A}_{2}^{\top}\pmb{\Theta} = \pmb{0} $. In particular, notice that when both $ H_{0} $ and $ H_{M} $ hold, equation \eqref{eq3} reduces to:<br /><br /> \begin{align} y_{t} = \pmb{Z}_{t}^{\top}\pmb{\gamma} + e_{t} \label{eq4} \end{align} where $ \pmb{\gamma} $ is now constant across time. In other words, $ y_{t} $ exhibits at most deterministic (stationary) seasonality.
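As a concrete illustration (our own Python sketch, mirroring the notation above), for quarterly data the columns of $ \pmb{I}_{4} $ correspond, in order, to the 0 frequency, the harmonic pair $ (\pi/2, 3\pi/2) $, and the frequency $ \pi $:

```python
import numpy as np

I4 = np.eye(4)               # columns: frequency 0, the pair (pi/2, 3pi/2), pi
A1_pi = I4[:, [3]]           # test for a unit root at pi alone
A1_pair = I4[:, [1, 2]]      # joint test at the harmonic pair
A1_seasonal = I4[:, 1:]      # joint test at all seasonal (non-zero) frequencies
A2 = I4[:, [0]]              # complement: frequencies under the maintained H_M

# A1 and A2 are mutually orthogonal, full column-rank selections from I_4
assert np.allclose(A1_seasonal.T @ A2, 0)
assert np.linalg.matrix_rank(A1_seasonal) == 3
```

Each choice of $ \pmb{A}_{1} $ simply selects which spectral coefficients are placed under test; everything left in $ \pmb{A}_{2} $ is assumed stationary under the maintained hypothesis.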
In this regard, holding $ H_{M} $ implicitly true, Canova and Hansen (1995) propose a consistent test for $ H_{0} $ versus $ H_{A} $, using the statistic:<br /><br /> \begin{align*} \mathcal{L} = T^{-2} \text{tr}\left(\left(\pmb{A}_{1}^{\top}\widehat{\pmb{\Omega}}\pmb{A}_{1}\right)^{-1}\pmb{A}_{1}^{\top}\left(\sum_{t=1}^{T}\widehat{F}_{t}\widehat{F}_{t}^{\top}\right)\pmb{A}_{1}\right) \end{align*} where $ \text{tr}(\cdot) $ is the trace operator, $ \widehat{e}_{t} $ are the OLS residuals from regression \eqref{eq4}, $ \widehat{F}_{t} = \sum_{j=1}^{t} \widehat{e}_{j}\pmb{Z}_{j} $ is the partial-sum process of the weighted residuals, and $ \widehat{\pmb{\Omega}} $ is the HAC estimator<br /><br /> $$ \widehat{\pmb{\Omega}} = \sum_{j=-T+1}^{T-1}\kappa\left(\frac{j}{h}\right)\widehat{\pmb{\Gamma}}(j) $$ Above, $ \kappa(\cdot) $ is the kernel function, $ h $ is the bandwidth parameter, and $ \widehat{\pmb{\Gamma}}(j) $ is the autocovariance (at lag $ j $) estimator<br /><br /> $$ \widehat{\pmb{\Gamma}}(j) = T^{-1} \sum_{t=j+1}^{T} \left(\widehat{e}_{t}\pmb{Z}_{t}\right)\left(\widehat{e}_{t-j}\pmb{Z}_{t-j}\right)^{\top} $$ Naturally, we reject the null hypothesis when $ \mathcal{L} $ is larger than some critical value which depends on the rank of $ \pmb{A}_{1} $.<br /><br /> <h4>Unattended Unit Roots</h4> A well-known problem with the CH test concerns the issue of <i>unattended unit roots</i>. In particular, CH tests the null hypothesis $ H_{0} $ while imposing $ H_{M} $, where the latter lies in the complementary space to that generated by the former. In practice however, one does not know which spectral frequency exhibits a unit root. If one did know, the exercise of testing for their presence would be nonsensical. In this regard, if $ H_{0} $ is imposed but $ H_{M} $ is violated, then Taylor (2003) shows that the CH test is severely undersized. To overcome the shortcoming, Taylor (2003) suggests filtering the regression equation \eqref{eq3} to reduce the order of integration at all spectral frequencies identified in $ \pmb{A}_{2} $.
In particular, consider the filter:<br /><br /> $$ \nabla_{2} = \frac{1 - L^{S}}{\nabla_{1}} $$ where $ \nabla_{1} $ reduces, by one, the order of integration at each frequency identified in $ \pmb{A}_{1} $. For instance, if $ \pmb{A}_{1} $ identifies the 0-frequency, then $ \nabla_{1} = (1 - L) $ and $ \nabla_{2} = \frac{1-L^{S}}{1-L} = 1 + L + \ldots + L^{S-1} $. Alternatively, if $ \pmb{A}_{1} $ identifies the harmonic frequency pair $ \left(\frac{2\pi k}{S}, 2\pi - \frac{2\pi k}{S}\right) $, then $ \nabla_{1} = 1 - 2\cos\left(\frac{2\pi k}{S}\right)L + L^{2} $, and so on. Accordingly, if we assume $ \pmb{\gamma}_{t} = \pmb{\gamma}_{t-1} + u_{t} $, it is clear that $ \nabla_{2}y_{t} $ will not admit unit root behaviour at any of the frequencies identified in $ \pmb{A}_{2} $ and the maintained hypothesis $ H_{M} $ will hold. See Taylor (2003) and Busetti and Taylor (2003) for further details.<br /><br /> Furthermore, since $ \nabla_{2} $ acts only on frequencies identified in $ \pmb{A}_{2} $, it can also be formally shown that the regressors $ \nabla_{2}\pmb{Z}_{t}^{\top}\pmb{A}_{1}$ span a space identical to the space spanned by $ \pmb{Z}_{t}^{\top}\pmb{A}_{1}$. Accordingly, the strategy in Taylor (2003) is to run the regression:<br /><br /> \begin{align*} \nabla_{2}y_{t} &= \nabla_{2}\pmb{Z}_{t}^{\top}\pmb{A}_{1}\pmb{A}_{1}^{\top}\pmb{\gamma}_{t} + \nabla_{2}\pmb{Z}_{t}^{\top}\pmb{A}_{2}\pmb{A}_{2}^{\top}\pmb{\gamma}_{t} + \nabla_{2}e_{t} \\ &= \pmb{Z}_{t}^{\top}\pmb{A}_{1}\pmb{A}_{1}^{\top}\pmb{\gamma}_{t} + e_{t}^{\star} \end{align*} where $ e_{t}^{\star} = \nabla_{2}\pmb{Z}_{t}^{\top}\pmb{A}_{2}\pmb{A}_{2}^{\top}\pmb{\gamma}_{t} + \nabla_{2}e_{t} $. 
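The polynomial identities behind $ \nabla_{2} $ are easy to verify numerically. The Python sketch below (our own check, for quarterly data) divides $ 1 - L^{4} $ by the two example choices of $ \nabla_{1} $ discussed above:

```python
import numpy as np

# Coefficients in descending powers of L, as numpy's polynomial routines expect
one_minus_L4 = [-1, 0, 0, 0, 1]   # -L^4 + 1
one_minus_L = [-1, 1]             # -L + 1   (nabla_1 for the 0 frequency)
one_plus_L2 = [1, 0, 1]           # L^2 + 1  (nabla_1 for the pair (pi/2, 3pi/2))

q1, r1 = np.polydiv(one_minus_L4, one_minus_L)
assert np.allclose(r1, 0)
assert np.allclose(q1, [1, 1, 1, 1])   # nabla_2 = 1 + L + L^2 + L^3

q2, r2 = np.polydiv(one_minus_L4, one_plus_L2)
assert np.allclose(r2, 0)
assert np.allclose(q2, [-1, 0, 1])     # nabla_2 = 1 - L^2
```

In both cases the division is exact, confirming that $ \nabla_{2} $ removes precisely the roots not selected by $ \pmb{A}_{1} $.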
Naturally, the modified test statistic is now given by:<br /><br /> \begin{align*} \mathcal{L}^{\star} = T^{-2} \text{tr}\left(\left(\pmb{A}_{1}^{\top}\widehat{\pmb{\Omega}}^{\star}\pmb{A}_{1}\right)^{-1}\pmb{A}_{1}^{\top}\left(\sum_{t=1}^{T}\widehat{F}_{t}^{\star}\widehat{F}_{t}^{\star^{\top}}\right)\pmb{A}_{1}\right) \end{align*} where $ \widehat{F}_{t}^{\star} = \sum_{j=1}^{t} \widehat{e}_{j}^{\star}\pmb{Z}_{j} $ and $ \widehat{\pmb{\Omega}}^{\star} $ is computed analogously to $ \widehat{\pmb{\Omega}} $ upon replacing $ \widehat{e}_{t} $ with $ \widehat{e}_{t}^{\star} $.<br /><br /> <h3>Seasonal Unit Root Tests in EViews</h3> Starting with version 11 of EViews, a battery of tests aimed at diagnosing unit roots in the presence of seasonality is supported natively. These tests include the well-known Hylleberg, Engle, Granger, and Yoo (1990) (HEGY) test, as well as its Smith and Taylor (1999) likelihood ratio variant, the Canova and Hansen (1995) (CH) test, and the Taylor (2005) variance ratio test.<br /><br /> Here, we will apply the HEGY and CH tests to detect the presence of seasonal unit roots in quarterly U.S. government consumption expenditures and gross investment data running from 1947 to 2018. We have named the series object containing the data <b>USCONS</b>. The series can either be opened from the workfile associated with this blog entry, or fetched directly from the FRED database by issuing the following commands in the EViews command window:<br /><br /> <pre>
wfcreate q 1947q1 2018q4
fetch(d=fred) NA000333Q
rename NA000333Q uscons
</pre> We begin with a plot of the data. To do so, double click on <b>USCONS</b> in the workfile to open the series object. Next, click on <b>View/Graph...</b>. This will open a graph options window. We will stick with the defaults, so click on <b>OK</b>.
The output is reproduced below.<br /><br /> <!-- :::::::::: FIGURE 5 :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-GlpQtAZiLJY/XL4xMFbOTqI/AAAAAAAAAuw/J4eehkrtXBciIbp4EZlmBNtiba2-YA58QCLcBGAs/s1600/usconsgrph.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-GlpQtAZiLJY/XL4xMFbOTqI/AAAAAAAAAuw/J4eehkrtXBciIbp4EZlmBNtiba2-YA58QCLcBGAs/s1600/usconsgrph.jpg" title="Time Series Plot of USCONS" width="320" /></a><br /> <small>Figure 5: Time Series Plot of USCONS</small><br /><br /> </center> A visual analysis indicates that the data is trending with very prominent seasonal effects. To determine statistically whether these seasonal effects exhibit unit roots, we click on <b>View/Unit Root Tests/Seasonal Unit Root Tests...</b> to open the seasonal unit root test window.<br /><br /> <!-- :::::::::: FIGURE 6 :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-IOV0OdjWwG8/XL4xeK3zTvI/AAAAAAAAAu4/NyzUQuwil9YtlTojfaaVzTSRq5HoDzjlQCEwYBhgL/s1600/hegydlg.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-IOV0OdjWwG8/XL4xeK3zTvI/AAAAAAAAAu4/NyzUQuwil9YtlTojfaaVzTSRq5HoDzjlQCEwYBhgL/s1600/hegydlg.jpg" title="HEGY Test Dialog" width="320" /></a><br /> <small>Figure 6: HEGY Test Dialog</small><br /><br /> </center> We will start with the HEGY test, which is the default test. Here, EViews has already filled out the periodicity with 4 to match the cyclicality of the data. Nevertheless, if you wish to test the data under a different periodicity, you may manually adjust this to one of the following supported values: 2, 4, 5, 6, 7, 12. Since our data is trending, we will change the <b>Non-Seasonal deterministics</b> dropdown from <b>None</b> to <b>Intercept and trend</b> and leave the <b>Seasonal Deterministics</b> dropdown unchanged.<br /><br /> As discussed earlier, in the case of serially correlated errors, the HEGY test can be augmented by lags of the dependent variable added as additional regressors to the HEGY regression.
To determine the precise number of lags to add, EViews offers both automatic and manual methods. The default is automatic lag selection with the Akaike information criterion and a maximum of 12 lags. The details can of course be changed, or, if automatic selection is undesired, a <b>User Selected</b> value can be specified. We will stick with the defaults. Hit <b>OK</b>.<br /><br /> <!-- :::::::::: FIGURE 7 :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-tVC52Adt14g/XL4xkQhZMmI/AAAAAAAAAvQ/cJjgkQnFn4shqvETTxtULjPMQ4emhCGbQCEwYBhgL/s1600/hegytbl.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-tVC52Adt14g/XL4xkQhZMmI/AAAAAAAAAvQ/cJjgkQnFn4shqvETTxtULjPMQ4emhCGbQCEwYBhgL/s1600/hegytbl.jpg" title="HEGY Test Output" width="320" /></a><br /> <small>Figure 7: HEGY Test Output</small><br /><br /> </center> Looking at the output, EViews provides a table, the top portion of which summarizes the testing procedure, whereas the lower portion summarizes the regression output upon which the test is conducted. In particular, EViews computes the HEGY test statistic for each of the 0, harmonic pair, and $ \pi $ frequencies, in addition to a joint test for all seasonal frequencies -- that is, all frequencies other than 0 -- and a joint test for all frequencies including the frequency 0. As in traditional unit root tests, the null hypothesis postulates the existence of a unit root at the seasonal frequencies under consideration, and rejection of the null requires the absolute value of the test statistic to exceed the absolute value of a critical value associated with the limiting distribution. In this regard, EViews summarizes the 1%, 5%, and 10% critical values derived from simulation for sample sizes ranging from 20 to 480 in intervals of 20. To adjust for the actual sample size used in the HEGY regression, EViews also offers an interpolated version of the critical values.
Here, it is clear that we will not reject the null hypothesis at any of the individual or harmonic pair frequencies, nor at the two joint tests. The overwhelming conclusion is that <b>USCONS</b> exhibits a unit root at each of the quarterly spectral frequencies, individually and jointly.<br /><br /> Consider next the CH test applied to the same data. To bring up the CH test options, from the series object, once again click on <b>View/Unit Root Tests/Seasonal Unit Root Tests...</b> and under the <b>Test type</b> dropdown, select <b>Canova and Hansen</b>. As before, we will leave the <b>Periodicity</b> unchanged and will change the <b>Non-Seasonal Deterministics</b> to <b>Intercept and trend</b>. Note here that the original Canova and Hansen (1995) framework does not allow for the inclusion of deterministic trends. However, as noted in Busetti and Harvey (2003), we can relax "the conditions of CH by showing that the distribution is unaffected when a deterministic trend is included in the model".<br /><br /> <!-- :::::::::: FIGURE 8 :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-9HBvymMw6yc/XL4xkG00nHI/AAAAAAAAAvM/tghX_6dEDrk0l3j98eBozXn0XZlQ5Y5MACEwYBhgL/s1600/chdlg.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-9HBvymMw6yc/XL4xkG00nHI/AAAAAAAAAvM/tghX_6dEDrk0l3j98eBozXn0XZlQ5Y5MACEwYBhgL/s1600/chdlg.jpg" title="CH Test Dialog" width="320" /></a><br /> <small>Figure 8: CH Test Dialog</small><br /><br /> </center> Next, change the <b>Seasonal Deterministics</b> dropdown from <b>Seasonal dummies</b> to <b>Seasonal intercepts</b>. Notice that when we do this, the <b>Restriction selection</b> box changes to reflect that restrictions are no longer on seasonal dummies, but on seasonal intercepts. Note that we can multi-select which frequencies we would like to test. This is equivalent to specifying the entries of the matrix $ \pmb{A}_{1} $ we considered earlier.
If no restrictions are selected, which is the default, then EViews will test all available restrictions. Here we will not select anything.<br /><br /> We will also leave the <b>Include lag of dep. variable</b> option untouched. As noted in Canova and Hansen (1995), the inclusion of a lagged dependent variable in the CH regression "will reduce this serial correlation (we can think of this as a form of pre-whitening), yet not pose a danger of extracting a seasonal root". Finally, note the <b>HAC Options</b> button, which opens a set of options associated with how the long-run variance is computed and gives users the option to customize which kernel and bandwidths are used, and whether further residual whitening is desired. We stick with default values and simply click on <b>OK</b> to execute the test.<br /><br /> <!-- :::::::::: FIGURE 9 :::::::::: --> <center> <a href="https://lh3.googleusercontent.com/-rqSM5wpijrs/XL4xkB7TNYI/AAAAAAAAAvI/T5unGNvBNbIn-TuiEZ41RPWOZ0fS2ipsgCEwYBhgL/s1600/chtbl.jpg"><img height="auto" src="https://lh3.googleusercontent.com/-rqSM5wpijrs/XL4xkB7TNYI/AAAAAAAAAvI/T5unGNvBNbIn-TuiEZ41RPWOZ0fS2ipsgCEwYBhgL/s1600/chtbl.jpg" title="CH Test Output" width="320" /></a><br /> <small>Figure 9: CH Test Output</small><br /><br /> </center> Turning to the output, EViews divides the analysis into four sections. The first is a table summarizing the joint test for all elements in $ \pmb{A}_{1} $. In the example at hand, we have 3 restrictions -- 2 associated with the harmonic pair $ (\frac{\pi}{2}, \frac{3\pi}{2}) $, and one associated with the frequency $ \pi $. Since the null hypothesis is that no unit root exists at the specified frequencies and the test statistic 4.53631 is larger than any of the 1%, 5%, or 10% critical values, we conclude that the joint test rejects the null hypothesis.<br /><br /> The next table presents a detailed look at the harmonic pair test.
Although we did not explicitly ask for this test, EViews presents a breakdown of the requested joint test into its constituent restrictions. These are harmonic pair tests in which the restriction matrix $ \pmb{A}_{1} $ would be $ S\times 2 $. In this case, the test statistic for no seasonal unit root at the harmonic pair is 2.968384, which is clearly larger than any of the critical values associated with the limiting distribution. In other words, we reject the null and conclude that there is evidence of a unit root at the harmonic pair frequencies. Notice also that in addition to the CH test statistic, EViews offers an additional test statistic marked by an asterisk for differentiation. This is in fact the test statistic that corresponds to the Taylor (2003) version of the CH test, robustified to the possible violation of the maintained hypothesis $ H_{M} $ discussed earlier.<br /><br /> The table beneath the harmonic pair tests summarizes the CH tests corresponding to the individual breakdown of all frequencies under consideration. In other words, these are individual tests in which the restriction matrix $ \pmb{A}_{1} $ would be $ S\times 1 $. Since the frequency $ \pi $ was requested as part of the joint test, it is reported here. Clearly, with the test statistic equaling 3.842780, we reject the null hypothesis and conclude in favor of evidence supporting the existence of a unit root at the frequency $ \pi $. As before, note that below the test statistic associated with the $ \pi $ frequency is an additional statistic differentiated by an asterisk. This, as before, is the Taylor (2003) version of the CH test robustified to unattended unit roots.<br /><br /> Finally, the last table presents the CH regression. The residuals from this regression are used in the computation of the CH test statistics.<br /><br /> <h3>Conclusion</h3> In this entry we gave a brief introduction to the subject of seasonal unit root tests.
We highlighted the need to distinguish between deterministic and stochastic cyclicality and discussed several statistical methods designed to do so. Among these, our focus was on the HEGY test, which is effectively an extension of the ADF test in the direction of non-zero seasonal frequencies, and the CH test, which is the analogue of the KPSS test in the direction of non-zero seasonal frequencies. We also looked at some of the mathematical details which underlie these methods. Finally, we closed with a brief application of both tests to the U.S. government consumption expenditures and gross investment data, sampled quarterly from 1947 to 2018. Both tests overwhelmingly supported evidence of unit roots at both individual and joint frequencies.<br /><br /> <h3>Files</h3> The workfile and program files can be downloaded here.<br /><br /> <ul> <li> <a href="http://www.eviews.com/blog/seasuroot/seasuroot.WF1">seasuroot.WF1</a> <li> <a href="http://www.eviews.com/blog/seasuroot/seasuroot.prg">seasuroot.prg</a> </ul> <br /><br /> <hr /> <h3> References</h3><table> <tr valign="top"><td align="right" class="bibtexnumber"><a name="busetti-2003">1</a></td><td class="bibtexitem">Fabio Busetti and AM Robert Taylor. Testing against stochastic trend and seasonality in the presence of unattended breaks and unit roots. <em>Journal of Econometrics</em>, 117(1):21--53, 2003. [ <a href="references_bib.html#busetti-2003">bib</a> ] </td></tr> <tr valign="top"><td align="right" class="bibtexnumber"><a name="busetti-2003a">2</a></td><td class="bibtexitem">Fabio Busetti and Andrew Harvey. Seasonality tests. <em>Journal of Business & Economic Statistics</em>, 21(3):420--436, 2003. [ <a href="references_bib.html#busetti-2003a">bib</a> ] </td></tr> <tr valign="top"><td align="right" class="bibtexnumber"><a name="canova-1995">3</a></td><td class="bibtexitem">Fabio Canova and Bruce E Hansen. Are seasonal patterns constant over time? A test for seasonal stability.
<em>Journal of Business & Economic Statistics</em>, 13(3):237--252, 1995. [ <a href="references_bib.html#canova-1995">bib</a> ] </td></tr> <tr valign="top"><td align="right" class="bibtexnumber"><a name="hylleberg-1990">4</a></td><td class="bibtexitem">Svend Hylleberg, Robert F Engle, Clive WJ Granger, and Byung Sam Yoo. Seasonal integration and cointegration. <em>Journal of Econometrics</em>, 44(1-2):215--238, 1990. [ <a href="references_bib.html#hylleberg-1990">bib</a> ] </td></tr> <tr valign="top"><td align="right" class="bibtexnumber"><a name="kwiatkowski-1992">5</a></td><td class="bibtexitem">Denis Kwiatkowski, Peter CB Phillips, Peter Schmidt, and Yongcheol Shin. Testing the null hypothesis of stationarity against the alternative of a unit root: How sure are we that economic time series have a unit root? <em>Journal of Econometrics</em>, 54(1-3):159--178, 1992. [ <a href="references_bib.html#kwiatkowski-1992">bib</a> ] </td></tr> <tr valign="top"><td align="right" class="bibtexnumber"><a name="smith-1999">6</a></td><td class="bibtexitem">Richard J Smith and AM Robert Taylor. Likelihood ratio tests for seasonal unit roots. <em>Journal of Time Series Analysis</em>, 20(4):453--476, 1999. [ <a href="references_bib.html#smith-1999">bib</a> ] </td></tr> <tr valign="top"><td align="right" class="bibtexnumber"><a name="taylor-2003">7</a></td><td class="bibtexitem">AM Robert Taylor. Robust stationarity tests in seasonal time series processes. <em>Journal of Business & Economic Statistics</em>, 21(1):156--163, 2003. [ <a href="references_bib.html#taylor-2003">bib</a> ] </td></tr> <tr valign="top"><td align="right" class="bibtexnumber"><a name="taylor-2005">8</a></td><td class="bibtexitem">AM Robert Taylor. Variance ratio tests of the seasonal unit root hypothesis. <em>Journal of Econometrics</em>, 124(1):33--54, 2005.
[ <a href="references_bib.html#taylor-2005">bib</a> ] </td></tr></table></span>