Two analysts recently warned that so-called soft economic data has become less reliable for profiling the macro trend. “We believe the soft data, and subsequent forecasts, are suffering from a circular reference due to groupthink, herd mentality and political preferences,” Jim Bianco (Bianco Research) and Ben Breitholtz (Arbor Research & Trading) opined last month. If they’re right, indicators such as the ISM Manufacturing Index and the University of Michigan Consumer Confidence Survey offer less insight for monitoring the business cycle.
“Starting in the mid-1990s, and especially following the financial crisis, these surveys have become hostage to ‘bandwagon biases’ due to the growing influence of internet and such things as politics,” they explained.
Case in point: the University of Michigan’s confidence survey released Friday revealed a record 35 percent of respondents heard favorable news about the economy. Is there really five times more positive economic news now than was present at the peak of the tech and housing bubbles?
Or is it that Twitter (started in 2006) and Facebook news feeds (which were in their infancy in 2006) plus other social media are contributing to an internet-driven bandwagon effect? If so, soft data and forecasts may reflect interpretations of the herd mentality and not the economy.
If survey indicators are less reliable, estimating recession risk in real time with these numbers becomes that much harder. Soft data is widely used in connection with hard data (employment reports, consumer spending numbers, etc.) to develop a profile of economic activity. Accepting Bianco and Breitholtz’s analysis at face value suggests that the dependability of business-cycle analytics has been degraded.
Maybe, but the counterpoint is that every indicator in isolation has been of questionable value. That’s old news. Recall, for example, that there’s been an increase lately in criticism of the Treasury yield curve as a tool for estimating recession risk. Federal Reserve Chairman Jerome Powell, for example, questioned the value of using the difference between short and long rates for estimating the probability that an economic downturn is approaching.
In fact, it’s reasonable – essential, really – to assume that every indicator’s worth waxes and wanes through time as a real-time metric for business-cycle analysis. Not surprisingly, you can always find someone casting aspersions on one indicator or another.
The opposite is true as well. Some analysts, for instance, have recommended the ISM Manufacturing Index as a preferred measure of manufacturing sentiment for modeling the business cycle. As valuable as this metric is, it still falls short of perfection for estimating the timing of US recessions. In early 2016, for example, the indicator slipped into contractionary terrain (readings below 50), which some saw as a sign that a new downturn for the economy overall was in progress. But that turned out to be another false signal from the perspective of NBER-defined recessions.
Par for the course when trying to elevate any one indicator (or two or three) to god-like status for divining the ebb and flow of the US economy. The same can be said for arguing that a given dataset has fallen from grace.
In fact, it’s always short-sighted to promote or denigrate one or even a handful of indicators. Just as you can never rely on one or two securities to build a robust investment portfolio, the same principle applies to business-cycle analytics. Every indicator is in a constant state of flux in terms of reliability. Unfortunately, it’s unclear which ones are truly useful and which ones aren’t at any one point in time. What’s more, the list of useful and less-than-useful indicators keeps changing.
The solution is to routinely monitor a diversified set of business-cycle metrics (soft and hard data, economic and financial indicators) to minimize the risk that any one dataset is misleading us. That’s why I track a diversified mix of indicators to estimate recession risk. But even that’s not good enough if the goal is developing a timely and relatively reliable measure of the macro trend. For that standard, it’s necessary to aggregate several business-cycle benchmarks, each favoring a different methodology and a varying set of economic and financial numbers.
That’s a high bar, but a necessary one to meet if you’re looking for comparatively dependable real-time estimates of recession risk. It’s a standard that is routinely presented in the updates of the Composite Recession Probability Index (CRPI), which is included in every update of The US Business Cycle Risk Report. CRPI is an index of indexes, comprising the following:
- Economic Trend Index
- Economic Momentum Index
- Macro-Markets Risk Index
- Chicago Fed National Activity Index
- ADS Index
Each of these business-cycle benchmarks has its own particular set of strengths and weaknesses. Using them as inputs to a probit model, CRPI estimates real-time recession risk on a daily basis, an approach that will likely provide a high degree of reliability and timeliness for deciding whether a downturn has started. It’s not about forecasting, which is far less reliable. Rather, CRPI’s goal is to offer the highest quality of real-time insight about what’s just happened in the economy for making informed decisions about the macro trend.
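To make the mechanics concrete, here is a minimal sketch of how a probit link maps several indicator readings to a single recession probability. This is not the actual CRPI methodology: the coefficients, intercept, and indicator readings below are purely hypothetical, and a real probit model would estimate those coefficients from historical data rather than assume them.

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal cumulative distribution function, via erf."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def recession_probability(readings, coefs, intercept):
    """Probit link: P(recession) = Phi(intercept + sum(coef * reading)).

    Positive standardized readings are taken here to signal expansion,
    so the (hypothetical) coefficients are negative: stronger readings
    push the recession probability toward zero.
    """
    z = intercept + sum(c * x for c, x in zip(coefs, readings))
    return norm_cdf(z)

# Hypothetical standardized readings for five benchmarks, in the order
# listed above (Economic Trend Index, Economic Momentum Index,
# Macro-Markets Risk Index, Chicago Fed National Activity Index, ADS).
readings = [1.2, 0.8, 0.5, 0.3, 0.4]     # all positive: expansion
coefs = [-0.9, -0.7, -0.5, -0.6, -0.4]   # placeholder weights
intercept = -1.5                          # placeholder baseline

p = recession_probability(readings, coefs, intercept)
print(f"Estimated recession probability: {p:.4f}")
```

With uniformly expansionary readings, the linear index is deeply negative and the probability comes out near zero, which is the qualitative behavior a diversified probit aggregate should show when all inputs point to growth.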
On that note, recession risk was virtually nil as of last week’s close, based on CRPI’s update in the June 4, 2018 issue of The US Business Cycle Risk Report. No doubt that some of CRPI’s inputs are of questionable value. But that’s not a problem since this benchmark is broadly diversified and so any issues for a specific dataset — or three — are a minor challenge at worst.
The caveat is that conditions change, and so today’s blue skies for the economy can turn stormy, perhaps in a flash. Constant vigilance, as a result, is required. The catch is that your monitoring efforts may be in vain if your economic perspective is too narrowly focused.