A portfolio’s performance is typically measured in terms of its alpha—its return after adjustment for market risk. But unless the reported alpha is large compared to the errors incurred in measuring it, it may be statistically insignificant. Because the alphas for actual portfolios are rarely that large, some observers have concluded that investment managers are unable to outperform the market. The real explanation, however, is that current statistical techniques cannot detect good or bad performance at levels managers can realistically be expected to achieve.
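The measurement problem can be made concrete with a small sketch. Alpha is typically estimated as the intercept of a regression of portfolio excess returns on market excess returns, and its significance is judged by the intercept's t-statistic. The code below is illustrative only; the return and noise levels (2% mean quarterly market return, 8% market volatility, 0.5% quarterly alpha, 3% residual noise) are hypothetical choices, not figures from the article.

```python
import math
import random

def alpha_t_stat(portfolio, market):
    """OLS of portfolio returns on market returns.
    Returns (alpha, t-statistic of alpha)."""
    n = len(portfolio)
    mx = sum(market) / n
    my = sum(portfolio) / n
    sxx = sum((x - mx) ** 2 for x in market)
    sxy = sum((x - mx) * (y - my) for x, y in zip(market, portfolio))
    beta = sxy / sxx
    alpha = my - beta * mx
    residuals = [y - alpha - beta * x for x, y in zip(market, portfolio)]
    s2 = sum(r * r for r in residuals) / (n - 2)        # residual variance
    se_alpha = math.sqrt(s2 * (1 / n + mx ** 2 / sxx))  # std. error of alpha
    return alpha, alpha / se_alpha

random.seed(0)
# 40 quarters (10 years) of hypothetical excess returns:
# portfolio has a true alpha of 0.5% per quarter and a beta of 1.0
market = [random.gauss(0.02, 0.08) for _ in range(40)]
portfolio = [0.005 + 1.0 * m + random.gauss(0, 0.03) for m in market]
a, t = alpha_t_stat(portfolio, market)
```

Even with a genuinely positive alpha built into the data, the estimated t-statistic will often fall short of the conventional cutoff of about 2, which is exactly the insignificance problem the article describes.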
To demonstrate this point, the author simulated the performance of 100 portfolio managers over 10 years. Ten of these managers were programmed to be long-term outperformers and 10 were programmed to be underperformers; the remaining 80 were random performers. Although the outperformers' group return over the 10-year period was more than twice that of the underperformers, random effects meant that in any single quarter a true outperformer could underperform, or a true underperformer outperform.
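A simulation in the same spirit can be sketched as follows. The article does not give its parameters, so the alpha levels (±0.5% per quarter) and noise volatility (5% per quarter) here are assumptions chosen only so that noise swamps skill in any single quarter; the ranking step then shows how easily random performers crowd the top of a half-period leaderboard.

```python
import random

random.seed(1)
QUARTERS = 40  # 10 years of quarterly returns

def simulate(quarterly_alpha, n_managers):
    """Quarterly returns: a small true alpha buried in much larger noise.
    Alpha and noise levels are illustrative, not the article's."""
    return [[quarterly_alpha + random.gauss(0.02, 0.05) for _ in range(QUARTERS)]
            for _ in range(n_managers)]

managers = ([("out", r) for r in simulate(+0.005, 10)] +
            [("under", r) for r in simulate(-0.005, 10)] +
            [("random", r) for r in simulate(0.0, 80)])

def cumulative(returns):
    """Compound a sequence of simple returns into one total return."""
    total = 1.0
    for r in returns:
        total *= 1.0 + r
    return total - 1.0

# Rank all 100 managers by first-half (20-quarter) cumulative return and
# count how many of the top 10 are actually random performers.
by_first_half = sorted(managers, key=lambda m: cumulative(m[1][:20]), reverse=True)
random_in_top10 = sum(1 for label, _ in by_first_half[:10] if label == "random")
```

With 80 random performers competing against 10 true outperformers, several random performers typically land in the top 10 by luck alone, mirroring the article's finding for the first half of its sample period.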
The question is whether the true outperformers and underperformers could be distinguished individually from the 80 random performers and from each other. Six of the top 10 performers over the first half of the sample period were actually random performers. The only two portfolios that achieved significantly positive alphas over the second half of the period belonged to random performers. Over the full 10-year period, of the three managers with significantly positive alphas, two were random performers.
Given enough time, the outperformers should produce results significantly superior to those of the random performers. But the time required undoubtedly exceeds the lifetimes of the managers being measured.
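The time-scale claim follows from a standard back-of-envelope calculation (not spelled out in the article): the t-statistic of an annual alpha grows as alpha divided by tracking error times the square root of the number of years, so the years needed for significance scale as the square of the noise-to-skill ratio.

```python
def years_for_significance(annual_alpha, tracking_error, t_required=2.0):
    """Years of data T needed before alpha / (tracking_error / sqrt(T))
    reaches the required t-statistic: T = (t * sigma / alpha)**2."""
    return (t_required * tracking_error / annual_alpha) ** 2

# A genuinely skilled manager with a 1% annual alpha and 5% annual
# tracking error (illustrative figures) would need on the order of
# (2 * 0.05 / 0.01)**2 = 100 years of track record.
years_needed = years_for_significance(0.01, 0.05)
```

A century of data far exceeds any manager's career, which is the article's closing point.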