Tuesday, February 16, 2021

Lasso Variable Selection

In this blog post we will show how Lasso variable selection works in EViews by comparing it with a baseline least squares regression. We will be evaluating the prediction and variable selection properties of this technique on the same dataset used in the well-known paper “Least Angle Regression” by Efron, Hastie, Johnstone, and Tibshirani. The analysis will show the generally superior in-sample fit and out-of-sample forecast performance of Lasso variable selection compared with a baseline least squares model.

Lasso variable selection, new to EViews 12 and also known as the Lasso-OLS hybrid, post-Lasso OLS, the relaxed Lasso (under certain conditions), or post-estimation OLS, uses Lasso as a variable selection technique followed by ordinary least squares estimation on the selected variables.

Table of Contents

  1. Background
  2. Dataset
  3. Analysis



Background

In today’s data-rich environment it is useful to have methods of extracting information from complex datasets with large numbers of variables. A popular way of doing this is with dimension reduction techniques such as principal components analysis or dynamic factor models. By reducing the number of variables in a model, we can reduce overfitting, reduce the complexity of the model and make it easier to interpret, and decrease computation time. However, dimension reduction methods risk losing useful information contained in variables left out of the reduced set, and may deliver poorer predictive power.

Lasso is useful because it is a shrinkage estimator: it shrinks the size of the coefficients of the independent variables depending on their predictive power. Some coefficients may shrink down to zero, allowing us to restrict the model to variables with nonzero coefficients.

Lasso is just one member of a family of penalized least squares estimators (others include ridge regression and elastic net). Starting with the linear regression cost function: \begin{align*} J = \frac{1}{2m}\sum_{i=1}^{m}\left(y_i - \beta_0 - \sum_{j=1}^{p}x_{ij}\beta_j\right)^2 \end{align*} where $y_i$ is the dependent variable, $x_{ij}$ are the independent variables, $\beta_j$ are the coefficients, $m$ is the number of data points, and $p$ the number of independent variables, we obtain the coefficients $\beta_j$ by minimizing $J$. If the linear regression model is overfit and does not make good predictions on new data, one solution is to construct a Lasso model by adding a penalty term: \begin{align*} J = \frac{1}{2m}\sum_{i=1}^{m}\left(y_i - \beta_0 - \sum_{j=1}^{p}x_{ij}\beta_j\right)^2 {\color{red}{+\lambda\sum_{j=1}^{p}|\beta_j|}} \end{align*} where the parameters are the same as before with the addition of the regularization parameter $\lambda$. The penalty increases the cost of large $\beta_j$, so minimizing the cost function pushes the coefficients toward smaller magnitudes. Smaller values of $\beta_j$ "smooth out" the fitted function so it tracks the data less tightly, leaving it more likely to generalize well to new data. The regularization parameter $\lambda$ determines how strongly the coefficients are penalized. Lasso estimation in EViews can automatically select an appropriate value with cross-validation, a data-driven method of choosing $\lambda$ based on its predictive ability.
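For readers who want to experiment outside EViews, here is a minimal sketch of penalty selection by cross-validation using scikit-learn, whose alpha parameter plays the role of $\lambda$ above. The simulated data and fold count are arbitrary choices, not anything taken from the EViews example.

```python
# A minimal sketch (not EViews) of choosing the Lasso penalty by cross-validation,
# using scikit-learn on simulated data; the data-generating settings are arbitrary.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV

X, y = make_regression(n_samples=200, n_features=30, n_informative=5,
                       noise=10.0, random_state=0)

# LassoCV searches a grid of penalty values and keeps the one with the
# best cross-validated prediction error.
model = LassoCV(cv=5, random_state=0).fit(X, y)

print("selected penalty:", model.alpha_)
print("number of nonzero coefficients:", np.sum(model.coef_ != 0))
```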

If we have a dataset with many independent variables, ordinary least squares models may produce estimates with large variances and therefore unstable forecasts. By applying Lasso regression to the data and removing variables that have been shrunk to zero, then applying OLS to the reduced number of variables, we may be able to improve forecasting performance. In this way we can perform dimension reduction on our data based on the predictive accuracy of our model.
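Continuing that sketch, the Lasso-OLS hybrid described here reduces to two steps: keep the regressors with nonzero Lasso coefficients, then re-estimate them by ordinary least squares. A minimal illustration, reusing np, X, y, and model from the previous snippet:

```python
from sklearn.linear_model import LinearRegression

# Step 1: variables whose Lasso coefficients were not shrunk to zero.
selected = np.flatnonzero(model.coef_)

# Step 2: ordinary least squares on the selected variables only.
ols = LinearRegression().fit(X[:, selected], y)

print("selected columns:", selected)
print("post-Lasso OLS coefficients:", ols.coef_)
```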



Dataset

In the table below we show part of the data used for this example.


Figure 1: Data Preview

The ten original variables are age, sex, body mass index (bmi), average blood pressure (bp), and six blood serum measurements for 442 patients. They have all been standardized as described in the paper. The dependent variable is a measure of disease progression one year after the other measurements were taken and has been scaled to have mean zero. We are interested in the accuracy of the fit and predictions from any model we develop of this data and in the relative importance of each regressor.
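As an aside for readers following along outside EViews, the same 442-observation dataset from the Efron et al. paper ships with scikit-learn. The sketch below loads it; scikit-learn applies its own mean-centering and scaling to the ten features, which may differ in detail from the file used in this post.

```python
# A sketch of loading the same diabetes data outside EViews, via scikit-learn.
from sklearn.datasets import load_diabetes

data = load_diabetes(as_frame=True)
X = data.data          # 442 x 10: age, sex, bmi, bp, and six serum measures,
                       # already mean-centered and scaled by scikit-learn
y = data.target        # disease progression measure one year later

print(X.shape)         # (442, 10)
print(X.columns.tolist())
```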



Analysis

We first perform an OLS regression on the dataset to give us a baseline for comparison.


Figure 2: OLS Regression

One thing to note in this estimation result is that the adjusted R-squared for this model is .5066, indicating that the model explains approximately 51% of the variation in the dependent variable. We also see that certain variables (BMI, BP, LTG, and SEX) both have the greatest impact on the progression of diabetes after one year and are the most statistically significant.
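A rough analogue of this baseline regression can be run with statsmodels. This is a sketch using the X and y loaded above, not a reproduction of the EViews output in Figure 2:

```python
import statsmodels.api as sm

# Baseline least squares with an intercept; the summary reports
# coefficient t-statistics and adjusted R-squared.
ols_fit = sm.OLS(y, sm.add_constant(X)).fit()
print(ols_fit.summary())
```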

Next, we run a Lasso regression over the same dataset and look at the plot of the coefficients against the L1 norm of the coefficient vector. This gives us a sense of how each variable contributes to the fit. We can see that as the degree of regularization decreases (the L1 norm increases), more coefficients enter the model.


Figure 3: Coefficient Evolution
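A coefficient-evolution plot in the spirit of Figure 3 can be sketched with scikit-learn's lasso_path, plotting each coefficient against the L1 norm of the coefficient vector. Matplotlib and the X, y loaded earlier are assumed.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import lasso_path

# Trace the Lasso solution over a decreasing grid of penalties.
alphas, coefs, _ = lasso_path(X.values, y.values)

# L1 norm of the coefficient vector at each penalty value.
l1_norm = np.sum(np.abs(coefs), axis=0)

for name, path in zip(X.columns, coefs):
    plt.plot(l1_norm, path, label=name)
plt.xlabel("L1 norm of coefficients")
plt.ylabel("coefficient value")
plt.legend(fontsize="small")
plt.show()
```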

Let’s take a closer look at the coefficients.


Figure 4: Lasso Regression

The coefficients at the minimum value of lambda (.004516) are all nonzero. However, when we move to the lambda value in the next column (6.401), which is the largest value of lambda whose cross-validation error is within one standard error of the minimum, we see that only four of the original ten regressors are nonzero. Compared with least squares, most of the coefficients in the first column have shrunk slightly toward zero, and more so in the next column with its larger regularization penalty (with the exception of an interesting sign change for HDL). Three of the retained variables (BMI, BP, and LTG) are the same ones identified by least squares as being both more influential on the outcome and statistically significant. But compared to least squares, this is a less complex model. Does reducing the number of variables in this way lead to a better-fitting model? Let's estimate a Lasso variable selection model with the same options and see.
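The "largest lambda within one standard error of the minimum" choice can be reproduced by hand from cross-validation output. The sketch below uses LassoCV; the fold count is arbitrary, and this approximates the idea rather than EViews' exact procedure.

```python
import numpy as np
from sklearn.linear_model import LassoCV

cv_fit = LassoCV(cv=5, random_state=0).fit(X.values, y.values)

# Mean and standard error of the cross-validation error at each candidate penalty.
mean_mse = cv_fit.mse_path_.mean(axis=1)
se_mse = cv_fit.mse_path_.std(axis=1) / np.sqrt(cv_fit.mse_path_.shape[1])

# Largest penalty whose CV error is within one standard error of the minimum.
best = np.argmin(mean_mse)
threshold = mean_mse[best] + se_mse[best]
lambda_1se = cv_fit.alphas_[mean_mse <= threshold].max()

print("penalty at minimum CV error:", cv_fit.alpha_)
print("one-standard-error penalty:", lambda_1se)
```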


Figure 5: Lasso Variable Selection

The unimpressive result of OLS applied to the variables selected from the Lasso fit is that adjusted R-squared has increased ever-so-slightly to .5068. Another thing to note is that while Lasso generally shrinks, or biases, the coefficients toward zero, OLS applied to Lasso expands, or debiases, them away from zero. This results in a decrease in the variance of the final model, as you can see by comparing the errors for the Lasso variable selection model with the first OLS model.

You may have noticed that the set of nonzero coefficients here is different from the one in the Lasso example above. That’s because Lasso variable selection uses a different measure (AIC) than plain Lasso’s cross-validation to select the preferred model. This is the same measure used for the other variable selection methods in EViews.
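As a rough illustration of criterion-based selection (not the EViews algorithm itself), one can walk along the Lasso path, refit OLS on each distinct set of nonzero coefficients, and keep the set with the lowest AIC:

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import lasso_path

alphas, coefs, _ = lasso_path(X.values, y.values)

best_aic, best_set, seen = np.inf, None, set()
for k in range(coefs.shape[1]):
    subset = tuple(np.flatnonzero(coefs[:, k]))
    if not subset or subset in seen:
        continue
    seen.add(subset)
    # Refit OLS on this candidate set and score it by AIC.
    fit = sm.OLS(y, sm.add_constant(X.iloc[:, list(subset)])).fit()
    if fit.aic < best_aic:
        best_aic, best_set = fit.aic, subset

print("variables chosen by AIC:", [X.columns[i] for i in best_set])
```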

What about out-of-sample predictive power? We have randomly labeled each of the 442 observations as either training or test datapoints (the split is 70% training, 30% test). After doing least squares and Lasso variable selection on the training data, we use Series->View->Forecast Evaluation to compare the forecasts for least squares and Lasso variable selection over the test set:
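Outside EViews, the same kind of comparison can be sketched with a random 70/30 split; the snippet below fits both models on the training portion and reports RMSE and MAE on the test portion. The random seed and fold count are arbitrary, so the numbers will not match Figure 6.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, LassoCV
from sklearn.metrics import mean_absolute_error, mean_squared_error

X_train, X_test, y_train, y_test = train_test_split(
    X.values, y.values, test_size=0.3, random_state=0)

# Baseline OLS forecasts over the test set.
ols_pred = LinearRegression().fit(X_train, y_train).predict(X_test)

# Lasso variable selection: keep nonzero Lasso coefficients, refit OLS.
# Which variables survive depends on the penalty chosen by cross-validation.
lasso = LassoCV(cv=5, random_state=0).fit(X_train, y_train)
keep = np.flatnonzero(lasso.coef_)
sel_pred = LinearRegression().fit(X_train[:, keep],
                                  y_train).predict(X_test[:, keep])

for name, pred in [("OLS", ols_pred), ("Lasso selection", sel_pred)]:
    print(name,
          "RMSE", np.sqrt(mean_squared_error(y_test, pred)),
          "MAE", mean_absolute_error(y_test, pred))
```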


Figure 6: Lasso Predictive Evaluation

We have achieved very slightly better predictive performance for some measures (MAE, MAPE) and very slightly worse for others (RMSE, SMAPE).

This is all mildly interesting. But the real power of variable selection techniques comes when you have a larger dataset and want to reduce the set of variables under consideration to a more manageable set. To this end, we use the “extended” dataset provided by the authors that includes the ten original variables plus squares of nine variables and forty-five interaction terms, for a total of sixty-four variables.
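If the authors' extended file is not at hand, a similar design can be generated with scikit-learn's PolynomialFeatures. Note that this keeps the square of the binary sex variable, giving 65 columns rather than the paper's 64, so it approximates rather than reproduces that dataset.

```python
from sklearn.preprocessing import PolynomialFeatures

# Linear terms, squares, and pairwise interactions of the ten original variables.
poly = PolynomialFeatures(degree=2, include_bias=False)
X_ext = poly.fit_transform(X)

print(X_ext.shape)   # (442, 65): one more column than the paper's 64,
                     # because the square of the binary sex variable is kept here
print(poly.get_feature_names_out(X.columns)[:5])
```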

First, we repeat the OLS regression from earlier with the new extended dataset:


Figure 7: Extended OLS

Adjusted R-squared is actually higher than it was for the original ten variables, at .5233, so the additional variables have added some explanatory power to the model.

Next, let’s go straight to Lasso variable selection on the extended dataset.


Figure 8: Extended Lasso Variable Selection

Out of sixty-four original search variables, the selection procedure has kept fourteen. This is a significant reduction in complexity. The adjusted R-squared has increased from .5233 to .5308, and the standard error of the regression has decreased.

The in-sample R-squared and errors have moved in a modest but promising direction. What about out-of-sample prediction? We again compare the forecasts for least squares and Lasso variable selection over the test set:


Figure 9: Extended Lasso Predictive Evaluation

Now we can see a meaningful improvement in forecasting performance. All of the error measures have improved, some significantly. Applying Lasso variable selection to this larger dataset has led to reduced model complexity, a slight improvement in the in-sample fit, and improved forecasting performance over least squares.



Request a Demonstration

If you would like to experience Lasso methods in EViews for yourself, you can request a demonstration copy here.
