The fact that relatively few investment organizations have actually adopted systems for measuring analysts’ performance may reflect a certain healthy caution: The commitment to a measurement system is not one to make casually, since measurement of analysts can alter both the scope of their jobs and their motivation.
Unless an investment organization is adopting a measurement system with the deliberate intention of forcing change—probably a dangerous practice—the system should reinforce and sharpen existing philosophies and procedures. Hence design of the measurement system requires a thorough understanding of the analyst’s present role within the organization. If, for example, the organization does not make clear-cut use of quantified output from the analyst, a measurement system is probably irrelevant, or worse. If, on the other hand, the analyst does provide quantified output, is it in the form of recommendations or forecasts? If forecasts, are they forecasts of earnings movements or price movements? Absolute or relative? Does the organization expect the analyst to make selections within his industry specialty, judgments about his industry relative to the rest of the market, or both?
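As a purely illustrative aid (not part of the author's discussion), these design questions can be recorded as a small specification of the analyst's role; every name below is hypothetical, and Python is used only because some concrete notation is needed.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

# Hypothetical enumerations of the design dimensions discussed above.
class OutputForm(Enum):
    RECOMMENDATION = auto()      # buy/hold/sell calls
    FORECAST = auto()            # quantified forecasts

class ForecastTarget(Enum):
    EARNINGS = auto()            # forecasts of earnings movements
    PRICE = auto()               # forecasts of price movements

class Basis(Enum):
    ABSOLUTE = auto()
    RELATIVE = auto()            # relative to the market or the industry

class Scope(Enum):
    WITHIN_INDUSTRY = auto()     # selections within the industry specialty
    INDUSTRY_VS_MARKET = auto()  # judgments about the industry vs. the market
    BOTH = auto()

@dataclass
class MeasurementSpec:
    """Records how the organization actually uses the analyst's output, so
    that the measurement system mirrors the existing role rather than
    quietly redefining it."""
    output_form: OutputForm
    forecast_target: Optional[ForecastTarget]  # meaningful only for forecasts
    basis: Basis
    scope: Scope

# Example: an organization that asks for relative price forecasts within
# the analyst's industry specialty.
spec = MeasurementSpec(OutputForm.FORECAST, ForecastTarget.PRICE,
                       Basis.RELATIVE, Scope.WITHIN_INDUSTRY)
```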
If the analyst’s informal communications are deemed as important as his quantified forecasts or recommendations, the measurement system must take this into account. Otherwise it will encourage the analyst to emphasize his formal output and slight his informal communications.
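A minimal sketch of how that account might be taken, assuming the organization is willing to rate informal contributions on the same scale as measured formal accuracy; the function, its default weight, and the scale are all invented for illustration.

```python
def composite_score(formal_score: float,
                    informal_rating: float,
                    informal_weight: float = 0.3) -> float:
    """Blend a measured score on formal output (forecasts or
    recommendations) with a necessarily subjective rating of informal
    contributions, both assumed to lie on the same scale.

    An informal_weight of 0 reproduces the bias described above: only
    formal output counts, so informal communication is discouraged.
    The 0.3 default is an arbitrary placeholder, not a recommendation.
    """
    w = max(0.0, min(1.0, informal_weight))  # clamp the weight to [0, 1]
    return (1.0 - w) * formal_score + w * informal_rating
```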
One of the important design decisions for a measurement system is the time horizon over which an analyst’s advice is expected to work out. Too short a time horizon may fail to recognize valuable insights eventually reflected in the price action of the analyst’s stocks. Too long a time horizon may encourage the analyst to take a lax attitude toward short-run performance.
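To make the trade-off concrete, here is a rough sketch of what "working out over a horizon" might mean under a relative (benchmark-adjusted) standard; the price series, benchmark choice, and function name are assumptions, not the author's actual system.

```python
def horizon_excess_return(stock_prices, benchmark_prices, rec_idx, horizon):
    """Return of a recommended stock relative to its benchmark, measured
    from the recommendation date (rec_idx) over `horizon` periods.

    The horizon is the key design parameter: too short and a sound call
    may not yet be reflected in the price; too long and weak near-term
    calls are never penalized.
    """
    p0, p1 = stock_prices[rec_idx], stock_prices[rec_idx + horizon]
    b0, b1 = benchmark_prices[rec_idx], benchmark_prices[rec_idx + horizon]
    return (p1 / p0) - (b1 / b0)

# Entirely made-up period-by-period prices for one stock and its industry index.
prices         = [50, 51, 49, 53, 56, 58, 60, 59, 62, 65, 64, 66, 70]
industry_index = [100, 101, 100, 103, 104, 106, 107, 107, 109, 111, 110, 112, 113]

# The same recommendation, made at period 2, is scored differently under a
# 4-period horizon than under a 10-period horizon.
short_run = horizon_excess_return(prices, industry_index, rec_idx=2, horizon=4)
long_run  = horizon_excess_return(prices, industry_index, rec_idx=2, horizon=10)
```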
The author discusses this challenge in the context of his own organization’s experience with an actual measurement system.