1 June 2016 CFA Institute Journal Review

Does Academic Research Destroy Stock Return Predictability? (Digest Summary)

Rich Wiggins, CFA

Financial research has uncovered many new factors (e.g., small cap, value, momentum, low beta) that explain stock returns; indeed, many of these factors have already been commercialized into financial products. The authors examine whether these historical insights and return patterns persist after the academic research discovering them is published.

What’s Inside?

Market commentators have pointed out that since the publication of research on the value and size effects, index funds based on these variables have failed to generate the level of excess returns that the original research predicted. To address this issue, the authors study the return predictability of 97 variables shown to predict cross-sectional stock returns beyond each study’s original sample. They find that portfolio returns are 26% lower in the period after the original sample ends but before publication (referred to as “out of sample”) and 58% lower after publication. The authors attribute the additional 32% (58% − 26%) decline after publication to other investors taking advantage of what they have learned (i.e., “informed trading”). The findings therefore suggest that investors learn about mispricing from academic publications and help move prices toward fundamentals.
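To make the decomposition concrete, here is a minimal sketch of the arithmetic in Python; the 26% and 58% figures come from the digest above, and the variable names are purely illustrative.

# Declines in average portfolio returns relative to the in-sample period
# (figures as reported above; names are illustrative, not from the paper).
decline_out_of_sample = 0.26     # attributed to statistical bias (e.g., data mining)
decline_post_publication = 0.58  # total decline once the predictor is public

# The gap between the two is the authors' estimate of the publication effect,
# i.e., returns lost to informed trading after the research is published.
informed_trading_effect = decline_post_publication - decline_out_of_sample
print(f"Informed-trading component: {informed_trading_effect:.0%}")  # 32%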

How Is This Research Useful to Practitioners?

There has been an explosion of “academically verified” investment approaches in recent years, particularly among exchange-traded funds, so the durability of these strategies is the final piece of the puzzle. Beyond the studies’ historical insights, the million-dollar question is, “Do these academic studies affect prices immediately, or are the new factors understood only slowly after years of public scrutiny?” Fama (Journal of Finance 1991) famously conjectured that much of the return predictability in academic studies is the outcome of data mining. Thus, studies that explore whether these relationships continue out of sample not only address the “shelf life” of these risk premiums but also shed light on why cross-sectional return predictability is observed in the first place.

No previous study has compared in-sample returns, post-sample returns, and post-publication returns for such a large sample of predictors. Another difference is that previous studies assumed the informed trader already knew about the predictor both before and after the publication date; here, publication itself marks when investors learn. The authors find that an anomaly is more readily accepted as real, and its returns decline quickly, when the pattern of returns is not too noisy and the payoff horizon is short (e.g., the small-firm effect in January). A noisy anomaly, such as the value/growth distinction, can take decades to play out, so it is not quickly arbitraged away.

How Did the Authors Conduct This Research?

The crux of the paper is to understand what happens to return predictability outside a study’s sample period, so the authors compare each predictor’s returns over three distinct periods: (1) the original study’s sample period, (2) the period after the original sample but before publication, and (3) the post-publication period. Previous studies attributed cross-sectional return predictability to statistical biases, rational pricing, and mispricing. By examining return predictability over the three periods, the authors can differentiate among these explanations. If return predictability in published studies results solely from a statistical bias, such as data mining, the predictability should disappear out of sample. If return predictability reflects mispricing, one would expect the returns associated with a predictor to disappear or at least decay after the paper is published if investors learn to trade against the mispricing. If frictions prevent arbitrage from fully eliminating mispricing, return predictability will not disappear entirely. Predictability will also persist if it reflects risk.
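As a rough illustration of this three-way comparison (not the authors’ actual code), the sketch below averages one predictor’s monthly long-short returns over the three periods. It assumes a pandas Series of monthly returns indexed by date; the function name and the example dates are hypothetical.

import pandas as pd

def mean_return_by_period(returns: pd.Series,
                          sample_end: pd.Timestamp,
                          pub_date: pd.Timestamp) -> pd.Series:
    """Average monthly long-short returns over the three periods the
    authors compare: in sample, post-sample but pre-publication, and
    post-publication."""
    idx = returns.index
    labels = pd.Series("in-sample", index=idx)
    labels[(idx > sample_end) & (idx <= pub_date)] = "post-sample"
    labels[idx > pub_date] = "post-publication"
    return returns.groupby(labels).mean()

# Hypothetical usage: a study sampled through 1989 and published in March 1992.
# means = mean_return_by_period(monthly_returns,
#                               pd.Timestamp("1989-12-31"),
#                               pd.Timestamp("1992-03-31"))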

The authors replicate or approximate the methodology of the original research; in some cases, they are unable to reconstruct the study exactly. They examine increases in trading volume and short interest that are consistent with the notion that academic research draws attention to predictors. They determine the publication date by the year and month on the cover of the journal and examine a number of pre- and post-publication time partitions to control for time trends and persistence.

The authors’ results suggest that academic publications transmit information to sophisticated investors because returns drop considerably after publication. For the 97 portfolios, the average monthly in-sample return is 0.582%, the average out-of-sample pre-publication return is 0.402%, and the average post-publication return is 0.264%.
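As a quick sanity check, the relative declines implied by these raw averages can be computed directly; they land near, though not exactly on, the regression-based 26% and 58% figures cited earlier. A minimal sketch, with the digest’s numbers hard-coded:

# Average monthly returns reported above (percent per month).
in_sample, post_sample, post_pub = 0.582, 0.402, 0.264

print(f"Out-of-sample decline:    {1 - post_sample / in_sample:.0%}")  # ~31%
print(f"Post-publication decline: {1 - post_pub / in_sample:.0%}")     # ~55%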

Abstractor’s Viewpoint

This study is an investigation of the effects of publication, not to be confused with the impact of when a factor is “known” or “discovered.” The end of the original sample provides a clear demarcation for estimating statistical bias, but the publication date is only a proxy for when market participants “learn” about a predictor. Alfred Cowles and Herbert Jones wrote a paper decades ago (Econometrica 1937) that captured the elements of trend following and momentum, so was the concept “known” in 1937 or when “Time Series Momentum” was published in the Journal of Financial Economics in May 2012? Many of these factors are open secrets, if they are secret at all. It is also important to remember that there are structural reasons why premiums may persist even after they are known, as outlined in Shleifer and Vishny’s “The Limits of Arbitrage” (Journal of Finance 1997), which illustrates how the arbitrage process can be quite ineffective in bringing prices back to fundamental values, especially in extreme circumstances.
