Monday, October 15, 2018

Principal Component Analysis Part I (Theory)

Most students of econometrics are taught to appreciate the value of data. We are generally told that more data is better than less, and that throwing data away is almost taboo. While this is good practice when it concerns the number of observations per variable, it is not always recommended when it concerns the number of variables under consideration. In fact, as the number of variables increases, it becomes increasingly difficult to rank the importance (impact) of any given variable, and this can lead to problems ranging from basic overfitting to more serious issues such as multicollinearity or model invalidity. In this regard, selecting the smallest number of the most meaningful variables -- otherwise known as dimensionality reduction -- is not a trivial problem; it has become a staple of modern data analytics and a motivation for many modern techniques. One such technique is Principal Component Analysis (PCA).
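As a concrete illustration of the idea, here is a minimal PCA sketch in Python/NumPy (illustrative only, not EViews code, and the simulated data are hypothetical): center the variables, eigendecompose the sample covariance matrix, and project onto the top-$k$ components.

```python
import numpy as np

def pca(X, k):
    """Reduce X (n_samples x n_features) to its first k principal components."""
    Xc = X - X.mean(axis=0)                 # center each variable
    cov = np.cov(Xc, rowvar=False)          # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # symmetric eigendecomposition
    order = np.argsort(eigvals)[::-1]       # sort by variance, descending
    components = eigvecs[:, order[:k]]      # loadings of the top-k components
    scores = Xc @ components                # projected data (n_samples x k)
    explained = eigvals[order] / eigvals.sum()  # variance share per component
    return scores, components, explained

# Hypothetical data: five variables, two of which are nearly collinear
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:, 1] = 0.9 * X[:, 0] + rng.normal(scale=0.1, size=200)
scores, comps, ratio = pca(X, 2)
```

With a nearly collinear pair present, the first component absorbs most of their shared variance, which is exactly the redundancy that dimensionality reduction is meant to exploit.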

Wednesday, September 19, 2018

This is a guest post authored by Davaajargal Luvsannyam and Khuslen Batmunkh.

Dating the business cycle is crucial for policy makers and businesses. The business cycle is the upward and downward movement of production or business activity. The macro business cycle in particular, which represents general economic prospects, plays an important role in policy and management decisions. For instance, when the economy is in a downturn, companies tend to act more conservatively. In contrast, when the economy is in an upturn, companies tend to act more aggressively in order to enhance their market share. Keynesian business cycle theory suggests that the business cycle is an important indicator for monetary policy, which is able to stabilize fluctuations in the economy. Therefore, accurate dating of the business cycle is fundamental to efficient and practical policy decisions.

Monday, August 20, 2018

This post is guest authored by Ulrich Gunter, Irem Önder, and Stefan Gindl, all from MODUL University Vienna, and edited by the EViews team. (Note: all images in this post are for illustrative purposes only; they are not taken from the published article and do not represent the exact analysis performed for the article.)

Their article in the scholarly journal Tourism Economics investigates the predictive ability of Facebook “likes” and Google Trends data on tourist arrivals in four major Austrian cities. The use of online “big data” to perform short-term forecasts or nowcasts is becoming increasingly important across all branches of economic study, but is particularly powerful in tourism economics.

Wednesday, May 30, 2018

State Space Models with Fat-Tailed Errors and the sspacetdist add-in

This is a guest post authored by Eren Ocakverdi.

Linear State Space Models (LSSMs) provide a very useful framework for the analysis of a wide range of time series problems. For instance, linear regression, trend-cycle decomposition, smoothing, and ARIMA models can all be handled practically and dynamically within this flexible system.
One of the assumptions behind the LSSM is that the errors of the measurement/signal equation are normally distributed. In practice, however, there are situations where this is not the case and the errors follow a fat-tailed distribution. Ignoring this fact may result in wider confidence intervals for the estimated parameters, or may cause outliers to bias the parameter estimates.
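To see why the normality assumption matters, the sketch below (Python/NumPy, purely illustrative; it does not reproduce the sspacetdist add-in) simulates a local level model whose measurement errors are Student-t with 3 degrees of freedom, then runs a standard Gaussian Kalman filter over the series. All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 300
# Local level model: y_t = mu_t + eps_t,  mu_t = mu_{t-1} + eta_t
sigma_eta, sigma_eps = 0.1, 1.0
mu = np.cumsum(rng.normal(scale=sigma_eta, size=T))   # unobserved level
eps = rng.standard_t(df=3, size=T) * sigma_eps        # fat-tailed noise
y = mu + eps

def kalman_local_level(y, q, h):
    """Gaussian Kalman filter for the local level model.
    q: state innovation variance, h: measurement error variance."""
    a, p = 0.0, 1e6                      # diffuse initialization
    filtered = np.empty(len(y))
    for t, yt in enumerate(y):
        p = p + q                        # prediction step
        k = p / (p + h)                  # Kalman gain
        a = a + k * (yt - a)             # update with observation yt
        p = (1 - k) * p
        filtered[t] = a
    return filtered

# Var of a t(3) draw is df/(df-2) = 3, so pass 3*sigma_eps^2 as h
filtered = kalman_local_level(y, sigma_eta**2, 3.0 * sigma_eps**2)
```

The Gaussian filter still tracks the level here, but every t-distributed outlier pulls the filtered estimate around more than a t-aware filter would allow; that extra sensitivity is what the fat-tailed specification addresses.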

Tuesday, October 17, 2017

10+ New Features Added to EViews 10

EViews 10+ is a free update to EViews 10, and introduces a number of new features, including:
• Chow-Lin, Denton and Litterman frequency conversion with multiple indicator series.
• Model dependency graphs.
• US Bureau of Labor Statistics (BLS) data connectivity.
• Introduction of the X-13 Force option for forcing annual totals.
• Expansion of the EViews 10 snapshot system to program files.
• A new help command.
All current EViews 10 users receive these new features for free. To update your copy of EViews 10, simply use the built-in update feature (Help->EViews Update), or manually download the latest EViews 10 patch.

Tuesday, August 8, 2017

Dumitrescu-Hurlin Panel Granger Causality Tests: A Monte Carlo Study

With data availability at its historical peak, time series panel econometrics is in the limelight. Unlike traditional panel data, in which each cross section $i = 1, \ldots, N$ is associated with $t = 1, \ldots, T < N$ observations, time series panel data is characterized by $N$ and $T$ both potentially being very large. Moreover, the time dimension gives rise to temporal dynamic information and, with it, the ability to test for serial correlation, unit roots, cointegration, and, in this regard, also Granger causality.
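For readers who want the mechanics, here is an illustrative Python/NumPy sketch of the asymptotic version of the Dumitrescu-Hurlin statistic: an individual Wald statistic per cross section, averaged into $\bar{W}$, then standardized as $\bar{Z} = \sqrt{N/(2K)}\,(\bar{W} - K)$. This omits the small-sample corrections used in practice, and the simulated panel is hypothetical.

```python
import numpy as np

def wald_granger(y, x, K):
    """Wald statistic for H0: x does not Granger-cause y, with K lags.
    A textbook-style sketch, not the exact Dumitrescu-Hurlin version."""
    T = len(y)
    rows = T - K
    # Regressors: constant, K lags of y, K lags of x
    Z = np.column_stack(
        [np.ones(rows)]
        + [y[K - l : T - l] for l in range(1, K + 1)]
        + [x[K - l : T - l] for l in range(1, K + 1)]
    )
    yy = y[K:]
    b, *_ = np.linalg.lstsq(Z, yy, rcond=None)
    resid = yy - Z @ b
    s2 = resid @ resid / (rows - Z.shape[1])
    XtX_inv = np.linalg.inv(Z.T @ Z)
    idx = np.arange(1 + K, 1 + 2 * K)          # positions of the x-lag coefficients
    bx = b[idx]
    V = s2 * XtX_inv[np.ix_(idx, idx)]
    return bx @ np.linalg.solve(V, bx)          # Wald stat for bx = 0

def dh_zbar(panel_y, panel_x, K):
    """Standardized average statistic: Z-bar = sqrt(N/(2K)) * (W-bar - K)."""
    W = np.array([wald_granger(yi, xi, K) for yi, xi in zip(panel_y, panel_x)])
    N = len(W)
    return np.sqrt(N / (2 * K)) * (W.mean() - K)

# Hypothetical panel of N=10 units where x genuinely causes y
rng = np.random.default_rng(1)
N, T, K = 10, 200, 1
ys, xs = [], []
for _ in range(N):
    x = rng.normal(size=T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()
    ys.append(y)
    xs.append(x)
zbar = dh_zbar(ys, xs, K)
```

Under the null, $\bar{Z}$ is asymptotically standard normal, so values far beyond 1.96 reject non-causality; the simulated panel above is built to do exactly that.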

Wednesday, July 26, 2017

Hamilton’s “Why you should never use the Hodrick-Prescott Filter”

Professor James D. Hamilton requires no introduction, having been one of the most important researchers in time series econometrics for decades.
Over the past few years, Hamilton has been working on a paper calling on applied economists to abandon the ubiquitous Hodrick-Prescott Filter and replace it with a much simpler method of extracting trend and cycle information from a time series.
This paper has become popular, and a number of our users have asked how to replicate it in EViews. One of our users, Greg Thornton, has written an EViews add-in (called Hamilton) that performs Hamilton’s method. However, given the method’s relative simplicity, we thought we’d use a blog post to demonstrate the manual calculation and replicate the results in Hamilton’s paper.
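The method itself is a single regression: for quarterly data, regress $y_{t}$ on a constant and the four values $y_{t-8}, \ldots, y_{t-11}$; the fitted values are the trend and the residuals are the cycle. A minimal Python/NumPy sketch (illustrative only, not the add-in or EViews code, with a hypothetical random-walk series standing in for log GDP):

```python
import numpy as np

def hamilton_filter(y, h=8, p=4):
    """Regress y_t on a constant and y_{t-h}, ..., y_{t-h-p+1};
    fitted values are the trend, residuals are the cycle."""
    T = len(y)
    X = np.column_stack(
        [np.ones(T - h - p + 1)]
        + [y[p - 1 - j : T - h - j] for j in range(p)]  # lags h, h+1, ..., h+p-1
    )
    yy = y[h + p - 1:]
    b, *_ = np.linalg.lstsq(X, yy, rcond=None)
    trend = X @ b
    return trend, yy - trend

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=120))   # random-walk stand-in for log GDP
trend, cycle = hamilton_filter(y)     # h=8, p=4: Hamilton's quarterly defaults
```

Because the regression includes a constant, the cycle averages to zero by construction, and unlike the Hodrick-Prescott filter the estimate at date $t$ uses no future observations.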