How Is This Research Useful to Practitioners?
The authors begin by examining the relationship between analyst forecast horizons (the number of days between the forecast and the actual EPS announcement) and forecast error. Their analysis confirms prior findings that analyst forecasts are inaccurate but provides only weak evidence that forecasts with short (long) horizons are more (less) accurate.
Analyst overconfidence is commonly invoked as an explanation for forecast inaccuracy. Citing the psychological literature, the authors introduce the notion of assessing overconfidence based on the degree of calibration—or rather, miscalibration—as measured by the percentage of true values falling outside of individually determined intervals (e.g., an analyst’s annual EPS forecast ranges).
To measure the degree of analyst calibration, the authors examine “hit rates” over different periods. Hit rates reflect how often actual EPS falls between the minimum and maximum forecasts for a given company within a given period. When forecast horizons are broken up into 100-day periods, the overall hit rate is around 45%; when they are broken up into 50-day periods, it is around 35%. According to the authors, these results “establish an empirical baseline for analyst confidence, and can be used as an effective way to evaluate or select some financial analysts.”
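The hit-rate calculation lends itself to a short illustration. The Python sketch below is not the authors' code; it assumes a hypothetical pandas DataFrame with columns ticker, horizon_days, forecast_eps, and actual_eps, and an illustrative bucketing rule.

```python
import pandas as pd

def overall_hit_rate(df: pd.DataFrame, bucket_days: int = 100) -> float:
    """Share of (company, horizon-bucket) groups whose min-max forecast
    range contains the actual EPS. Column names are assumptions:
    ticker, horizon_days, forecast_eps, actual_eps (constant per ticker).
    """
    df = df.copy()
    # Bucket 0 covers horizons of <100 days, bucket 1 covers 100-199 days, etc.
    df["bucket"] = df["horizon_days"] // bucket_days

    grouped = df.groupby(["ticker", "bucket"]).agg(
        lo=("forecast_eps", "min"),
        hi=("forecast_eps", "max"),
        actual=("actual_eps", "first"),
    )
    hits = (grouped["actual"] >= grouped["lo"]) & (grouped["actual"] <= grouped["hi"])
    return float(hits.mean())
```

A calculation of this form with 100-day buckets corresponds to the roughly 45% overall hit rate reported above, and with 50-day buckets to the roughly 35% rate.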
At the company level (pooling all analysts' forecasts for a company), the overall hit rate is around 70%; at the analyst-company level (an individual analyst following an individual company), it is around 40%. These additional baselines can help investors learn to trust the forecasts for some companies more than for others. The authors note, for example, that analysts covering Starbucks have a hit rate of around 60%.
Finally, when analyst-company hit rates are segmented by industry, they can range from around 35% to around 50%, with the highest being in the retail industry and in the agriculture, forestry, and fishing industries.
How Did the Authors Conduct This Research?
The authors obtain US EPS forecasts for fiscal year 2014 from the I/B/E/S database maintained by Thomson Financial; the dataset contains 205,664 forecasts issued by 5,197 analysts for 6,010 companies.
The authors calculate four measures of accuracy: individual error, absolute error, relative individual error, and relative absolute error. By running a separate regression for each measure, they conduct an initial examination of how forecast accuracy changes as a function of the forecast horizon.
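The paper's exact error definitions are not reproduced in this summary. A common set of definitions, offered here only as an assumption, is signed and absolute forecast error, each raw and scaled by the magnitude of actual EPS; the sketch below uses those assumed definitions and an OLS of error on horizon via statsmodels, with hypothetical column names throughout.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def forecast_errors(df: pd.DataFrame) -> pd.DataFrame:
    """Add four plausible error measures (assumed definitions, not the
    authors' own): signed and absolute error, raw and scaled by |actual EPS|."""
    out = df.copy()
    out["individual_error"] = out["forecast_eps"] - out["actual_eps"]
    out["absolute_error"] = out["individual_error"].abs()
    scale = out["actual_eps"].abs().replace(0, np.nan)  # avoid dividing by zero
    out["relative_individual_error"] = out["individual_error"] / scale
    out["relative_absolute_error"] = out["absolute_error"] / scale
    return out

def error_on_horizon(df: pd.DataFrame, error_col: str):
    """OLS of one error measure on the forecast horizon in days."""
    X = sm.add_constant(df["horizon_days"])
    return sm.OLS(df[error_col], X, missing="drop").fit()
```

Running error_on_horizon once for each of the four error columns mirrors the structure of the authors' four regressions.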
They then examine both relative absolute error and hit rates in different periods, stratifying the forecast horizons across companies in two different ways (a bucketing sketch follows the list):
- Four 100-day periods (<100 days, 100–199 days, 200–299 days, >300 days)
- Ten 50-day periods (<50 days, …, >400 days)
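A minimal, hypothetical labeling helper for these horizon buckets is shown below; the top-bucket boundary is left as a parameter because the exact cutoff of the open-ended bucket in the 50-day scheme is not spelled out here.

```python
def horizon_bucket(horizon_days: int, width: int, cap: int) -> str:
    """Label a horizon with a fixed-width bucket and one open-ended
    bucket at or above `cap`. For example, horizon_bucket(d, 100, 300)
    yields '<100', '100-199', '200-299', or '>=300'."""
    if horizon_days >= cap:
        return f">={cap}"
    lo = (horizon_days // width) * width
    return f"<{width}" if lo == 0 else f"{lo}-{lo + width - 1}"
```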
Finally, they aggregate across periods and examine company hit rates and analyst-company hit rates. For company hit rates, hits occur when the actual EPS falls between the minimum and maximum forecasts for an individual company. For analyst-company hit rates, hits occur when the actual EPS falls between the minimum and maximum forecasts made by an individual analyst following an individual company. The authors also present analyst-company hit rates segmented by industry.
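The same min-max hit test applies at both aggregation levels. A hedged sketch, reusing the hypothetical columns from the earlier examples plus an assumed analyst_id column:

```python
import pandas as pd

def range_hit_rate(df: pd.DataFrame, keys: list[str]) -> float:
    """Share of groups (defined by `keys`) whose min-max forecast range
    contains the actual EPS. Column names remain assumptions."""
    grouped = df.groupby(keys).agg(
        lo=("forecast_eps", "min"),
        hi=("forecast_eps", "max"),
        actual=("actual_eps", "first"),
    )
    hits = (grouped["actual"] >= grouped["lo"]) & (grouped["actual"] <= grouped["hi"])
    return float(hits.mean())

# Company hit rate: all forecasts for a company, pooled across analysts.
# company_rate = range_hit_rate(forecasts, ["ticker"])
# Analyst-company hit rate: one analyst's forecasts for one company.
# analyst_company_rate = range_hit_rate(forecasts, ["analyst_id", "ticker"])
```

Segmenting by industry would then mean averaging analyst-company hits within each industry rather than over the full sample.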
Abstractor’s Viewpoint
The authors’ findings regarding overall forecast hit rates, company forecast hit rates, and analyst-company forecast hit rates add to a well-studied, complex branch of the behavioral finance literature and should be of interest to investment practitioners.
It is not the authors’ immediate goal to assess analyst overconfidence itself; rather, they aim to provide a tool, in the form of an empirical methodology for calculating baseline confidence measures, that can in turn be used to assess overconfidence. For example, if overall analyst EPS forecast hit rates are around 35%–45%, an individual analyst whose hit rate is 25% may be deemed overconfident, or at least more overconfident than their peers.
This study shines brightest when the authors discuss hit rates for specific companies (e.g., Starbucks) and industries (e.g., retail) relative to the baseline measures. One hopes the authors will demonstrate more extensive applications of their baseline confidence measure methodology in future studies.
Investment professionals looking for insights regarding analyst overconfidence are likely to be disappointed. Those working within academia and within large in-house investment firm research departments may benefit from this study as well as from the authors’ extensive review of calibration and overconfidence within the psychology and behavioral finance literature.