University of Michigan professor Scott Page reminds us that in the art/science of forecasting, more models are better. Writing in the Harvard Business Review, he advises that “with an ensemble of models, you can make up for the gaps in any one of the models.”
Researchers have been making this point for decades, starting with the seminal 1969 paper “The Combination of Forecasts.” In a new book published this week, Page refreshes the idea. “To rely on a single model is hubris. It invites disaster. To believe that a single equation can explain or predict complex real-world phenomenon is to fall prey to the charisma of clean, spare mathematical forms,” he explains in The Model Thinker: What You Need to Know to Make Data Work for You.
We need many models to make sense of complex systems. Complex systems like politics, the economy, international relations, or the brain exhibit ever-changing emergent structures and patterns that lie between ordered and random.
With that in mind, The Capital Spectator today previews a new research project using multiple models to forecast the near-term trend (based on one-year changes) for key economic indicators. The game plan is to launch a research service and routinely update the outlook. But first, let’s review the methodology and run the numbers for tomorrow’s October report on consumer spending as an example.
The focus is the average forecast from eight models generated with R code. Adhering to best practices for combining forecasts, each model employs a different methodology. Seven of the models use univariate methods – analyzing the indicator under scrutiny in isolation – while the eighth model draws on multiple indicators. Here’s a brief summary of each model:
Exponential smoothing state space model: the forecast is the average of 100 simulations based on bootstrap aggregating (bagging) via the forecast package. The data set is the historical record for the target indicator.
Autoregressive integrated moving average (ARIMA) model: the forecast is the average of 100 simulations based on bootstrap aggregating via the forecast package. The data set is the historical record for the target indicator.
Neural network model: the forecast is the average of 100 simulations via the forecast package. The data set is the historical record for the target indicator.
Naïve model: this forecast simply carries the last data point forward and assumes it will prevail for the next 12 months.
Cubic spline model: a local linear forecast using cubic smoothing splines via the forecast package. The data set is the historical record for the target indicator.
Facebook’s Prophet forecasting tool: the data set is the historical record for the target indicator.
Theta method forecast model: the methodology is equivalent to simple exponential smoothing with drift, via the forecast package.
Vector autoregression model: this multivariate methodology (via the vars package) uses the following data sets:
Personal consumption expenditures
10-year Treasury yield
Effective Federal funds rate
Consumer Price Index
University of Michigan Consumer Sentiment Index
Disposable personal income
Average hourly earnings of all private employees
Kansas City Fed Labor Market Conditions Index
University of Michigan consumer inflation expectations
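The bootstrap-aggregating step that drives the first three models can be sketched in a few lines. The actual project runs in R via the forecast package; the snippet below is a hypothetical Python analogue that bags a simple linear-trend forecaster by moving-block bootstrapping its residuals and averaging 100 simulated forecast paths. The series, block size, and base model are all invented stand-ins for illustration, not the project’s code.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for an indicator's historical record (120 monthly observations).
history = np.cumsum(rng.normal(0.2, 1.0, size=120)) + 100

def bagged_forecast(series, horizon=12, n_boot=100, block=12):
    """Average the forecasts from bootstrap replicates of the series.

    Each replicate keeps the fitted linear trend but reshuffles the
    residuals with a moving-block bootstrap, so the resampled histories
    preserve short-run autocorrelation.
    """
    n = len(series)
    t = np.arange(n)
    slope, intercept = np.polyfit(t, series, 1)
    fitted = intercept + slope * t
    resid = series - fitted
    forecasts = []
    for _ in range(n_boot):
        # Rebuild a pseudo-history: trend plus block-resampled residuals.
        starts = rng.integers(0, n - block + 1, size=n // block + 1)
        boot_resid = np.concatenate([resid[s:s + block] for s in starts])[:n]
        replicate = fitted + boot_resid
        # Refit the base model on the replicate and project it forward.
        b, a = np.polyfit(t, replicate, 1)
        forecasts.append(a + b * (n + np.arange(horizon)))
    return np.mean(forecasts, axis=0)

point_path = bagged_forecast(history)
print(point_path[:3])
```

Averaging across replicates smooths out the sensitivity of any single fit to the particular sample drawn, which is the same intuition behind bagging the ETS, ARIMA, and neural-net models.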
Taking the mean of the eight forecasts provides the following point forecasts (with prediction intervals at the 95% confidence level) for the one-year percentage change in personal consumption expenditures.
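The combination step itself is deliberately simple. As a toy illustration, here is a hypothetical Python sketch; the forecast values and interval half-widths below are invented for the example, not the actual model output, and averaging interval widths is just one crude way to summarize the models’ uncertainty.

```python
import numpy as np

# Hypothetical 12-month-ahead point forecasts of the one-year % change in
# personal consumption expenditures from the eight models (values invented).
model_forecasts = {
    "ets": 4.6, "arima": 4.4, "neural net": 4.7, "naive": 4.9,
    "spline": 4.3, "prophet": 4.5, "theta": 4.4, "var": 4.2,
}

# The ensemble point forecast is the plain mean across models.
point = np.mean(list(model_forecasts.values()))

# Crude uncertainty summary: average the models' 95% interval half-widths
# (each model supplies its own prediction interval in practice).
half_widths = [1.1, 0.9, 1.3, 1.5, 1.0, 1.2, 0.9, 1.4]  # hypothetical
interval = (point - np.mean(half_widths), point + np.mean(half_widths))

print(round(point, 2), [round(x, 2) for x in interval])
```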
The projections tell us that the growth trend for consumer spending will edge lower in the months ahead, based on the point forecasts. The outlook for a mildly softer trend aligns with yesterday’s profile of GDP nowcasts for the fourth quarter.
Readers might wonder why the focus is on the one-year change. The short answer: there’s considerably less noise (and more signal) in one-year data vs. the monthly changes that grab most of the attention in media updates of economic news.
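A quick numerical sketch of the signal-vs.-noise point, using an invented monthly index in Python: for a series grinding higher at a roughly steady pace, the month-over-month percent changes bounce around far more than the year-over-year changes computed from the same data.

```python
import numpy as np

# Hypothetical monthly index level for an indicator (e.g., real PCE).
index = np.array([100.0, 100.4, 100.9, 101.1, 101.6, 102.0,
                  102.3, 102.9, 103.2, 103.8, 104.1, 104.6,
                  105.0, 105.3, 105.9, 106.2, 106.8, 107.1])

# Month-over-month percent change vs. the one-year (12-month) percent change.
mom = 100 * (index[1:] / index[:-1] - 1)
yoy = 100 * (index[12:] / index[:-12] - 1)

# With this toy data, the annual comparison damps the monthly noise.
print(mom.std(), yoy.std())
```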
The real value of monitoring one-year estimates of key economic indicators arises when we review the projections across a wide range of data sets, which offers the potential for a robust reading of the big-picture economic trend.
Yes, all the usual caveats apply. But for the necessary task of developing intuition about where the economy is headed in the near term, combining forecasts of one-year changes across a broad set of indicators promises a substantial improvement over the usual routine.
The real test, of course, will be how the forecasts stack up against the reported data through time. If history is a guide, the outlook for this project is encouraging: numerous studies over the years document an improvement in accuracy with ensemble forecasting methods, so there’s a strong case for expecting no less here. Stay tuned for updates to learn how the actual results compare.