In this blog post we demonstrate one of the new methods for time series cross-validation. We compare the forecasting performance of rolling window cross-validation against models estimated by least squares and against a simple split of the dataset into training and test sets.
We evaluate the out-of-sample prediction ability of this new technique on a set of important macroeconomic variables. The analysis shows that, on this dataset, a time series specific cross-validation method delivers promising forecast performance compared with simpler methods.
Background
When performing model selection for a time series forecasting problem it is important to be aware of the temporal properties of the data. The time series may be generated by an underlying process that changes over time, resulting in data that are not independent and identically distributed (i.i.d.). For example, time series data are frequently serially correlated, and the ordering of the observations matters.

Traditional time series econometrics addresses this problem by splitting the data into training and test sets, with the test set coming from the end of the dataset. While this preserves the temporal structure of the data, not all of the information in the dataset is used, because the data in the test set are never used to train the model. Any characteristics unique to the training or test set may then degrade the forecast performance of the model on new data.
Meanwhile, other model selection procedures such as cross-validation typically assume the data to be i.i.d., yet they have often been applied to time series data without regard to temporal structure. For example, the very popular k-fold cross-validation splits the data into k sets, treats k-1 of them collectively as the training set, and uses the remaining set as the test set. While the data within each set retain their original ordering, the test set may occur before portions of the training data, as the sketch below illustrates. So while cross-validation makes full use of the data, it partly ignores the data's time ordering.
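To make the issue concrete, here is a minimal sketch (in Python with scikit-learn, purely for illustration) showing that unshuffled k-fold splits on ordered data place some test folds before the training data:

```python
import numpy as np
from sklearn.model_selection import KFold

# Twelve "months" of data, in time order
idx = np.arange(12)

# With ordinary (unshuffled) 4-fold CV, the first fold's test set is
# the earliest block, so the model trains on data from its own future.
for fold, (train, test) in enumerate(KFold(n_splits=4).split(idx)):
    print(f"fold {fold}: train={train}, test={test}")
```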
The two time series cross-validation methods introduced in EViews 12 combine the temporal awareness of traditional time series econometrics with cross-validation's use of the entire dataset. More details about these procedures can be found in the EViews documentation. We have chosen to demonstrate ENET with rolling time series cross-validation, which “rolls” a window of constant length forward through the dataset, always keeping the test set after the training set.
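EViews carries this procedure out internally; as an illustration of the idea only, the rolling splits can be sketched as follows (the window length and test horizon below are arbitrary choices of ours, not the EViews defaults):

```python
import numpy as np

def rolling_cv_splits(n_obs, window, horizon):
    """Yield (train, test) index pairs for a fixed-length window that
    rolls forward through the sample; the test set always follows the
    training set in time."""
    start = 0
    while start + window + horizon <= n_obs:
        train = np.arange(start, start + window)
        test = np.arange(start + window, start + window + horizon)
        yield train, test
        start += 1  # roll the window forward one observation

# Example: 10 observations, window of 6, 2-step-ahead test sets
for train, test in rolling_cv_splits(10, 6, 2):
    print(train, test)
```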
To illustrate another member of the elastic net family of shrinkage estimators, we use ridge regression for this analysis. Ridge regression is a penalized estimator closely related to Lasso (more details are in this blog post). Instead of adding an L1 penalty term to the linear regression cost function as in Lasso, we add an L2 penalty term: \begin{align*} J = \frac{1}{2m}\sum_{i=1}^{m}\left(y_i - \beta_0 - \sum_{j=1}^{p}x_{ij}\beta_j\right)^2 {\color{red}{\,+\,\lambda\sum_{j=1}^{p}\beta_j^2}} \end{align*} where the regularization parameter $\lambda$ is chosen by cross-validation.
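For intuition, the minimizer of this cost function has a closed form. Below is a minimal helper of our own (not EViews code); note that software packages scale the penalty differently, so the λ here is not numerically comparable to the EViews regularization parameter:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge estimate. The intercept is left unpenalized,
    consistent with the cost function above, by centering X and y
    before solving the penalized normal equations."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    p = Xc.shape[1]
    beta = np.linalg.solve(Xc.T @ Xc + lam * np.eye(p), Xc.T @ yc)
    beta0 = y_mean - x_mean @ beta
    return beta0, beta
```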
Dataset
The data for this demonstration consist of 108 monthly US macroeconomic series from January 1959 to December 2007. This is part of the dataset used in Stock and Watson (2012) (we only use the data on “Sheet1”). Each time series is transformed to stationarity according to the specification in its data heading. The stationarity transformation is important to ensure that the series are identically distributed, so that the simple split into training and test data in the first part of our analysis does not produce a test set that differs significantly from the training set. The table below shows part of the data used for this example.
[Table: excerpt of the 108 stationarity-transformed monthly series from Stock and Watson (2012)]
Analysis
We take each series in turn as the dependent variable and treat the other 107 series as independent variables for estimation and forecasting. Each regression therefore has 107 regressors plus an intercept. The independent variables are lagged by one observation, i.e. one month. The first 80% of the dataset is used to estimate the model (the "estimation sample") and the last 20% is reserved for forecasting (the "forecasting sample").

Because we want to compare each model type (least squares, simple split, and rolling) on an equal basis, we take the coefficients estimated from each model and hold them fixed over the forecast period. While it might be more interesting to use pseudo out-of-sample forecasting over the forecast period rather than fixed coefficients, rolling cross-validation is time intensive and we preferred to keep the analysis tractable.
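As a sketch of this setup (Python, with random placeholder data standing in for the real series; all names here are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((588, 108))  # 588 months x 108 series (placeholder)

dep_col = 0                                   # series currently used as dependent
y = data[1:, dep_col]                         # dependent variable
X = np.delete(data, dep_col, axis=1)[:-1, :]  # other 107 series, lagged one month

split = int(0.8 * len(y))
X_est, y_est = X[:split], y[:split]        # estimation sample (80%)
X_fore, y_fore = X[split:], y[split:]      # forecasting sample (20%)
```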
The first model, serving as a baseline, is a least squares regression for each series over the estimation sample. With the coefficients estimated by OLS we forecast over the forecasting sample.
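Continuing the sketch, the OLS baseline with coefficients held fixed over the forecasting sample might look like this:

```python
import numpy as np

# OLS with an intercept column, fit on the estimation sample only
Z_est = np.column_stack([np.ones(len(y_est)), X_est])
beta_ols, *_ = np.linalg.lstsq(Z_est, y_est, rcond=None)

# The estimated coefficients are held fixed over the forecasting sample
Z_fore = np.column_stack([np.ones(len(y_fore)), X_fore])
rmse_ols = np.sqrt(np.mean((y_fore - Z_fore @ beta_ols) ** 2))
```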
Next, as a comparison, we use ridge regression with a simple split on the estimation sample. (Simple split is a new addition to ENET cross-validation in EViews 12 that divides the data into an initial training set and a subsequent test set.) We split this first 80% of the dataset further into training and test sets using the default parameters. Cross-validation chooses the set of coefficients that minimizes the mean squared error (MSE) on the test set. Using these coefficients we again forecast over the forecasting sample.
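A simple-split selection of the ridge penalty could be sketched as follows, reusing the ridge_fit helper from above (the 70/30 split and the λ grid are illustrative choices of ours, not the EViews defaults):

```python
import numpy as np

# Split the estimation sample once into a training set and a
# subsequent test set, preserving time order
cut = int(0.7 * len(y_est))
X_tr, y_tr = X_est[:cut], y_est[:cut]
X_te, y_te = X_est[cut:], y_est[cut:]

lambdas = np.logspace(-3, 3, 25)

def split_mse(lam):
    b0, b = ridge_fit(X_tr, y_tr, lam)
    return np.mean((y_te - (b0 + X_te @ b)) ** 2)

best_lam = min(lambdas, key=split_mse)
b0_ss, b_ss = ridge_fit(X_est, y_est, best_lam)  # refit on the estimation sample
rmse_ss = np.sqrt(np.mean((y_fore - (b0_ss + X_fore @ b_ss)) ** 2))
```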
Finally, we apply rolling time series cross-validation to the same split of the data for each series: the estimation sample serves as the training and test data for rolling cross-validation, and the forecasting sample is used for forecasting with the coefficients chosen for each series. We use the default parameters for rolling cross-validation and again minimize the MSE.
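The rolling variant reuses rolling_cv_splits and ridge_fit from the sketches above, averaging the test MSE over all windows for each candidate λ (the window length and horizon are again our illustrative choices):

```python
import numpy as np

def rolling_mse(lam, window=120, horizon=1):
    """Average test-set MSE over all rolling windows for a given lambda."""
    errs = []
    for train, test in rolling_cv_splits(len(y_est), window, horizon):
        b0, b = ridge_fit(X_est[train], y_est[train], lam)
        errs.append(np.mean((y_est[test] - (b0 + X_est[test] @ b)) ** 2))
    return np.mean(errs)

best_lam_roll = min(lambdas, key=rolling_mse)
b0_r, b_r = ridge_fit(X_est, y_est, best_lam_roll)
rmse_roll = np.sqrt(np.mean((y_fore - (b0_r + X_fore @ b_r)) ** 2))
```

The window-by-window loop also makes plain why this method is the most computationally expensive of the three: every candidate λ is refit on every window.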
After generating 324 forecasts (108 variables, each under three different models), we collected the root mean squared error (RMSE) of each forecast into the table below.
[Table: RMSE of each forecast, by series and estimation method]
Below we include the equivalent table for mean absolute error (MAE). The percentage of series for which each method achieves the lowest MAE is 20% for OLS, 31% for simple split cross-validation, and 50% for rolling cross-validation.
[Table: MAE of each forecast, by series and estimation method]
We leave further investigation of these time series, and their estimation and forecasting properties with methods that are temporally aware, to the reader.