Tuesday, May 4, 2021

SpecEval Add-In

Guest post by Kamil Kovar

This is the first in a series of blog posts presenting a new EViews add-in, SpecEval, aimed at facilitating time series model development. This post focuses on the motivation for the add-in and an overview of its functionality. The remaining posts in the series will illustrate its use.

Table of Contents

  1. Basic Principles
  2. Comprehensiveness: What Does SpecEval Do?
  3. Flexibility in Practice
  4. What’s Next?
  5. Footnotes

Basic Principles

The idea behind SpecEval is simple: to do model development effectively – especially in a time-constrained environment – one should have a tool that can quickly produce and summarize information about a particular model. Such a tool should satisfy three key requirements:

  1. It should be very easy to use, so that its use does not introduce additional costs into the model development process.
  2. It should be comprehensive, in the sense that it includes all relevant information one would like to have when evaluating a particular model.
  3. It should be flexible, so that the user can easily change what information is included in particular situations. Flexibility is a necessary counterpart of comprehensiveness, helping the user avoid information overload.
The first requirement is met by the EViews add-in functionality, which allows execution either through the GUI or through a command, so that model evaluation can be performed repeatedly with one quick action. Beyond this, the add-in's functionality and options are designed so that the user can easily adjust the execution settings. For example, the add-in can be executed either for one model at a time or for multiple models at once; including multiple models is as simple as listing them (wildcards are accepted). Likewise, each output type can be specified as part of the execution list, making it easy to include additional outputs.



Comprehensiveness: What Does SpecEval Do?

So what does the SpecEval add-in do? In broad terms, it produces tables and graphs that provide information about a model, and especially about its behavior. Discussing the full set of possible outputs (listed in the table below) is beyond the scope of this blog post, since most of the functionality will be illustrated in the posts to follow. Instead, the table should highlight that the add-in is indeed comprehensive from a model development perspective.1

  - Estimation output table: adjusted regression output table
  - Coefficient stability graph: graph with recursive equation coefficients
  - Model stability graph: graph with recursive lag orders
  - Performance metrics table: table with values of forecast performance metrics
  - Performance metrics table (multiple specifications): table with values of a given forecast performance metric for all specifications
  - Forecast summary graph: graph with all recursive forecasts for given horizons
  - Sub-sample forecast graph: graph with the forecast for a given sub-sample
  - Sub-sample forecast decomposition graph: graph with a decomposition of the sub-sample forecast
  - Forecast bias graph: scatter plot of forecast and actual values for a given forecast horizon (Mincer-Zarnowitz plot)
  - Individual conditional scenario forecast graph (level): graph with the forecast for a single scenario and specification
  - Individual conditional scenario forecast graph (transformation): graph with a transformation of the forecast for a single scenario and specification
  - All conditional scenario forecast graph: graph with forecasts for all scenarios for a single specification
  - Multiple specification conditional scenario forecast graph: graph with forecasts for a single scenario for multiple specifications
  - Shock response graphs: graphs with the response to a shock to an individual independent variable/regressor

The first category of outputs presents information about the model itself in the form of an estimation output table, with several enhancements, such as suitable color-coding, that facilitate quick evaluation. Moreover, this information is not limited to the final model estimates: it also covers recursive model estimates (e.g. recursive coefficients and/or lag orders). See the figures below for an illustration of both outputs.


Figure 1: Estimation Example


Figure 2: Coefficient Stability

Nevertheless, far more stress is put on information about forecasting performance, which is the key focus of the add-in. Correspondingly, the add-in contains several outputs that either visualize historical (backtest) forecasts2 or provide numerical information about the precision of these forecasts. The main graph – indeed, in some sense the workhorse graph of the add-in – displays all available historical forecasts together with the actuals; see the figure below. Apart from listing multiple horizons, the user can also include additional series in the graph or use one of four alternative transformations.


Figure 3: Conditional Forecasts
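To make the backtesting idea concrete, here is a minimal Python sketch of recursive historical forecasting: a simple model is re-estimated at each forecast origin and iterated forward dynamically to produce multi-step forecasts. The AR(1) equation and the function name are illustrative assumptions for this sketch, not SpecEval's implementation (the add-in works directly with EViews equation objects).

```python
import numpy as np

def recursive_backtest(y, first_origin, horizon):
    """Re-estimate an illustrative AR(1) at each forecast origin and
    iterate it forward to produce dynamic multi-step forecasts.
    Hypothetical sketch of the general backtesting idea only."""
    forecasts = {}  # maps forecast origin -> array of 1..h-step forecasts
    for origin in range(first_origin, len(y) - horizon):
        hist = y[:origin]
        # OLS fit of y_t on [1, y_{t-1}], using data up to the origin only
        X = np.column_stack([np.ones(len(hist) - 1), hist[:-1]])
        const, slope = np.linalg.lstsq(X, hist[1:], rcond=None)[0]
        path, last = [], hist[-1]
        for _ in range(horizon):
            last = const + slope * last  # dynamic: feed the forecast back in
            path.append(last)
        forecasts[origin] = np.array(path)
    return forecasts
```

Each dictionary entry then corresponds to one of the forecast paths plotted in the summary graph above, with the estimation sample growing as the origin moves forward.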

The next table summarizes the precision of the historical forecasts. It displays the values of particular precision metrics (MAE, RMSE, or bias) for alternative specifications and for multiple horizons. Crucially, the table is color-coded, facilitating quick comparison across specifications.


Figure 4: Forecast Precision
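The three metrics in the table have standard definitions, which can be sketched in a few lines of Python. This is only an illustration of the formulas, assuming forecast errors have already been collected per horizon; it is not the add-in's own code.

```python
import numpy as np

def precision_metrics(errors_by_horizon):
    """Compute MAE, RMSE and bias (mean error) per forecast horizon
    from forecast errors (forecast minus actual). Illustrative sketch
    of the standard formulas, not SpecEval's implementation."""
    table = {}
    for h, errors in errors_by_horizon.items():
        e = np.asarray(errors, dtype=float)
        table[h] = {
            "MAE": np.mean(np.abs(e)),       # mean absolute error
            "RMSE": np.sqrt(np.mean(e**2)),  # root mean squared error
            "bias": np.mean(e),              # average signed error
        }
    return table
```

In the add-in, the analogue of this table is computed from the recursive backtest forecasts and then color-coded across specifications.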

Lastly, the add-in also provides detailed information about the behavior of the model under different conditions. This includes two types of exercises. The first consists of creating and visualizing conditional scenario forecasts. This is useful as a goal in itself when scenario forecasting is an important use of the model, but more importantly also for instrumental reasons: thanks to their controlled-experiment nature, scenario forecasts can help identify problems with the model. The add-in produces several types of graphs visualizing scenario forecasts; see the figure below for an illustration.


Figure 5: Model Scenarios

The second exercise is creating and visualizing impulse shock responses, i.e. introducing shocks to a single independent variable or regressor and studying the response of the dependent variable. This allows the modeler to assess the influence a particular independent variable/regressor has on the dependent variable, as well as the dynamic profile of responses. See figure below for illustration.


Figure 6: Impulse Responses
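The logic of a shock response exercise can be sketched as follows: simulate the model twice, once on the baseline regressor path and once with a shock added, and take the difference. The dynamic equation used here (y depends on its own lag and one regressor) is a hypothetical example chosen for the sketch; SpecEval performs the equivalent exercise on the user's estimated EViews equations.

```python
import numpy as np

def shock_response(const, beta_x, rho, x_path, shock, periods):
    """Response of y to a one-off shock to regressor x in the
    illustrative dynamic equation y_t = const + rho*y_{t-1} + beta_x*x_t.
    Hypothetical model for demonstration, not SpecEval's internals."""
    def simulate(x):
        y, out = 0.0, []
        for t in range(periods):
            y = const + rho * y + beta_x * x[t]
            out.append(y)
        return np.array(out)

    base = np.asarray(x_path, dtype=float)
    shocked = base.copy()
    shocked[0] += shock                        # shock hits the first period only
    return simulate(shocked) - simulate(base)  # response = difference in paths
```

With a lagged dependent variable, the impact effect beta_x*shock decays geometrically at rate rho, which is exactly the dynamic profile the shock response graphs are designed to reveal.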

The above discussion makes it clear that the focus here is on graphical information, rather than on numerical information as is more customary in model development toolkits. This is motivated by two considerations. First, graphical information is significantly more suitable for the interactive model development process in which the modeler comes up with improvements to the current model based on information on its performance. Second, the human brain is able to process graphical information faster than numerical information; hence even when numerical information is presented, it is associated with graphical cues to increase the processing speed, such as color-coding of the estimation output.



Flexibility in Practice

The third basic principle – flexibility – is in practice embodied in the user's ability to adjust the processes and the outputs via add-in options. There are almost 40 user settings altogether, all listed and explained in the add-in documentation, which can be divided into several categories.

First, general options determine which of the built-in functionality is performed and on which objects/specifications. Next, a group of options allows customization of the outputs, such as the specification of horizons for tables and/or graphs, the transformations used in graphs, or additional series to be included in graphs. A third group of options allows some basic customization of the forecasting process. For example, one can choose between in-sample and out-of-sample forecasting, or specify additional equations/identities to be treated as part of the forecasting model.3 These are just two examples of how the forecasting process can be customized.

The final two groups control the samples used in the various procedures and customize storage settings. The former includes, for example, an option to manually specify sample boundaries for the backtesting procedures or for the conditional scenario forecasts. The latter lets the user determine which objects are kept in the workfile after execution, and under what names or aliases.



What's Next?

Future blog posts in this series will illustrate both the use of the add-in, highlighting its ease of use and flexibility, and its outputs. Each will follow a particular application, focusing on particular features of the add-in. The first will provide an overview of the basics of using the add-in, highlighting the key outputs and the customization of the process and the outputs. The second will stress the ability, and power, of using transformations in model development. The third will focus on creating unconditional forecasts, while the last will conclude with a brief look at recursive model structures.




Footnotes

1. Of course, comprehensiveness is more a goal than a state, in that there will always be additional functionality that could/should be included. See the model development list on the add-in's GitHub site for what is on the roadmap, and feel free to make suggestions there.
Also, the add-in is comprehensive in terms of its focus, which is the forecasting behavior of a given model, as opposed to its econometric characteristics. This means that the add-in currently does not include outputs of econometric tests.

2. By historical forecasts I mean conditional forecasts, which are potentially multistep and dynamic, and/or recursive.
3. Note that these two features – in-sample forecasting and the inclusion of multiple equations in the forecasting model – are possible thanks to built-in EViews functionality and are hard to replicate in other statistical programs. The former relies on the separation between estimation and forecast samples, the latter on flexible model objects.
