Statistics is second only to accounting among the technical skills necessary for engaging in modern finance and is fundamental for anyone who considers quantitative work a core element of finance. Unfortunately, practitioners’ actual statistical and econometric knowledge and intuition may be more limited than their level of academic exposure would suggest. No one willingly reads a book on econometrics to close this knowledge gap, so technical skills atrophy rather than advance with money management experience.
Mastering ’Metrics: The Path from Cause to Effect is a short book that helps bridge the gap between classroom recipes and reality. A breezy presentation of very general topics, it is nevertheless packed with good, commonsense advice on how to perform econometric analysis. Readers who seek core insights into finance will be disappointed, but what the book lacks in financial specifics, it makes up for with clear thinking. It is a readable compilation of the accumulated fundamental wisdom of two econometric experts.
Written by leading labor econometricians Joshua D. Angrist and Jörn-Steffen Pischke, Mastering ’Metrics helps readers avoid the use of brute-force computing power to analyze data. Rather, it focuses on how to develop the intuition to finesse data and formulate the right test. It demonstrates how econometric techniques can be used to arrive at the critical point of “other things being equal” despite such key obstacles as selection and omitted-variables bias.
Angrist and Pischke use case examples to focus on five core econometric topics: random assignment, regression, instrumental variables, regression discontinuity, and differences in differences. These cases include factors in the success of charter schools, drivers of SAT scores, and bank failures during the Great Depression. Although generally unrelated to finance, these are all fascinating topics for anyone interested in critical policy issues. The cases show the power of using the right technique to find answers that are not always obvious.
The first section provides a foundation for thinking about any statistical test and getting the sample right. One of the limitations of finance is that randomized trials cannot be used to answer key questions; nevertheless, focusing on how to conduct randomized trials and formulate tests is critical to doing good econometric work. This section helps the reader better appreciate the critical process of properly formulating tests before commencing analysis.
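The logic of random assignment that the authors build on can be seen in a minimal simulation. This is an illustrative sketch, not from the book: the population, effect size, and assignment rule are all hypothetical, but it shows why a coin-flip assignment lets a simple difference in group means recover a treatment effect.

```python
import random

# Hypothetical illustration: random assignment balances unobserved traits
# across groups, so the difference in group means estimates the true effect.
random.seed(42)

TRUE_EFFECT = 2.0  # assumed effect size for this simulation
population = [random.gauss(10, 3) for _ in range(10_000)]  # baseline outcomes

treated, control = [], []
for baseline in population:
    if random.random() < 0.5:          # coin-flip assignment
        treated.append(baseline + TRUE_EFFECT)
    else:
        control.append(baseline)

estimate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"difference in means: {estimate:.2f}")  # close to the true effect
```

Because assignment is independent of the baseline outcome, no selection bias creeps into the comparison; this is the benchmark against which the book's other techniques are judged.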
The authors discuss regression unconventionally by showing how to get to the point where the key economic phrase ceteris paribus (other things being equal) can be applied. The purpose of regression is not merely to fit a line but, rather, to examine a number of independent variables that can explain or control the behavior of the dependent variable and arrive at an understanding of the core relationships. Regression modeling cannot actually make all things equal, but using the right conditional variables can clearly identify the key influences in a sample.
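The role of a control variable in approximating ceteris paribus can be sketched with simulated data. The data-generating process and coefficients below are hypothetical; the sketch uses Frisch-Waugh-Lovell partialling (residualize both variables on the confounder, then regress residuals on residuals) to show how omitted-variables bias distorts a naive slope.

```python
import random

# Hypothetical sketch: a confounder z drives both x and y, biasing the
# simple slope of y on x upward; partialling z out recovers the
# "other things equal" coefficient on x (true value 1.0 here).
random.seed(0)

def slope(xs, ys):
    """Bivariate OLS slope of ys on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return cov / sum((a - mx) ** 2 for a in xs)

n = 5_000
z = [random.gauss(0, 1) for _ in range(n)]           # confounder
x = [zi + random.gauss(0, 1) for zi in z]            # x depends on z
y = [1.0 * xi + 2.0 * zi + random.gauss(0, 1) for xi, zi in zip(x, z)]

naive = slope(x, y)                                  # biased (about 2.0)

# Residualize x and y on z, then regress residual on residual.
bxz, byz = slope(z, x), slope(z, y)
rx = [xi - bxz * zi for xi, zi in zip(x, z)]
ry = [yi - byz * zi for yi, zi in zip(y, z)]
controlled = slope(rx, ry)                           # close to 1.0

print(f"naive: {naive:.2f}, controlled: {controlled:.2f}")
```

The controlled slope is what a multiple regression of y on x and z would report for x, which is the sense in which the right conditioning variables move the analysis toward "other things being equal."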
Simple regression is a powerful tool, but researchers often find that independent variables are noisy, contain errors, and may have endogeneity problems. Although instrumental variables can be used as a technique to generate better estimates, there are subtleties and technical difficulties in the use of this technique. These issues are often overlooked by those who want quick answers. Here again, the authors focus on thinking through the formulation of tests and when to use these techniques. They stress the importance of carefully thinking about test structure and ferreting out relationships between variables that are not obvious. Understanding the dependencies within the data will strongly influence the choice of strategies for devising useful models.
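The instrumental-variables idea can also be illustrated with a small simulated sketch of two-stage least squares. The instrument, coefficients, and error structure are all hypothetical; the point is that when x shares an unobserved shock with y, replacing x with its first-stage fitted values removes the endogeneity bias.

```python
import random

# Hypothetical sketch of two-stage least squares: x is endogenous because
# it shares an unobserved shock u with y; the instrument w moves x but has
# no direct path to y, so fitted values of x purge the bias.
random.seed(1)

def slope(xs, ys):
    """Bivariate OLS slope of ys on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return cov / sum((a - mx) ** 2 for a in xs)

n = 5_000
w = [random.gauss(0, 1) for _ in range(n)]           # instrument
u = [random.gauss(0, 1) for _ in range(n)]           # unobserved shock
x = [wi + ui + random.gauss(0, 1) for wi, ui in zip(w, u)]
y = [1.0 * xi + ui + random.gauss(0, 1) for xi, ui in zip(x, u)]

ols = slope(x, y)                # biased by the shared shock (about 1.33)

# First stage: fit x on w; second stage: regress y on the fitted values.
b1 = slope(w, x)
x_hat = [b1 * wi for wi in w]
iv = slope(x_hat, y)            # close to the true effect of 1.0

print(f"OLS: {ols:.2f}, IV: {iv:.2f}")
```

The subtleties the authors stress show up even here: the estimate is only as good as the assumption that w affects y solely through x, and a weak first stage would make the IV estimate very noisy.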
In the fourth section, the authors cover how breaks and discontinuities offer opportunities for uncovering useful information. For example, changes in rules create ideal conditions for generating econometric tests between pre- and postperiods. Using dummy variables is an effective technique for identifying key breaks or changes within data. Measuring these before-and-after effects is the foundation of any event study but, again, requires careful structure and control for other factors.
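The pre/post-dummy idea can be sketched in a short simulation. The break date, jump size, and trend below are hypothetical; the sketch controls for the time trend by partialling it out of both the series and the post-period dummy before estimating the break.

```python
import random

# Hypothetical sketch: a rule change at period 100 shifts the level of a
# trending series; a post-period dummy, with the trend partialled out,
# measures the size of the break (true jump 3.0 here).
random.seed(2)

def slope(xs, ys):
    """Bivariate OLS slope of ys on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return cov / sum((a - mx) ** 2 for a in xs)

BREAK, JUMP, TREND = 100, 3.0, 0.05
periods = list(range(200))
post = [1.0 if t >= BREAK else 0.0 for t in periods]   # dummy variable
y = [TREND * t + JUMP * d + random.gauss(0, 0.5)
     for t, d in zip(periods, post)]

# Partial the trend out of both y and the dummy, then regress the residuals.
by, bd = slope(periods, y), slope(periods, post)
ry = [yi - by * t for yi, t in zip(y, periods)]
rd = [di - bd * t for di, t in zip(post, periods)]
break_effect = slope(rd, ry)                           # close to 3.0

print(f"estimated break: {break_effect:.2f}")
```

Without the trend control, the before-and-after comparison would attribute the series' drift to the rule change, which is exactly the kind of confounding an event study must design away.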
The final concept presented is the measurement of differences in differences. Measuring acceleration or movement away from core tendencies or trends is essential for the analysis of most data. Teasing out these divergences from trends is a critical skill that requires the econometrician to delineate changes in data over time. Unfortunately, the book does not offer a focused discussion of time series, the area in which causal reasoning is especially critical in finance. Nevertheless, analyzing differences in differences focuses on a perspective that is seldom directly described.
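The mechanics of differences in differences can be shown with a minimal simulation. The group levels, common trend, and treatment effect below are hypothetical; the sketch shows how subtracting the control group's before/after change from the treated group's removes a trend shared by both.

```python
import random

# Hypothetical sketch of differences in differences: both groups move with
# a common time trend, but only the treated group receives the policy
# effect after the change (true effect 1.5 here).
random.seed(3)

EFFECT = 1.5

def draw(level, treated, post, n=2_000):
    """Simulated outcomes for one group in one period."""
    shift = EFFECT if (treated and post) else 0.0
    trend = 2.0 if post else 0.0                 # trend common to both groups
    return [level + trend + shift + random.gauss(0, 1) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

t_pre  = mean(draw(5.0, treated=True,  post=False))
t_post = mean(draw(5.0, treated=True,  post=True))
c_pre  = mean(draw(3.0, treated=False, post=False))
c_post = mean(draw(3.0, treated=False, post=True))

did = (t_post - t_pre) - (c_post - c_pre)        # close to the true effect
print(f"diff-in-diff estimate: {did:.2f}")
```

The treated group's raw before/after change of about 3.5 mixes trend and effect; differencing away the control group's change of about 2.0 isolates the policy effect, which is the divergence-from-trend perspective the review describes.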
On the stylistic front, the authors’ attempts to lighten the discussion by interjecting kung fu quips are not necessary to make the book worth reading and prove somewhat distracting. The math presented is clear and approachable even for novice empiricists, but readers should be forewarned that these topics involve subtle issues requiring careful thought.
I would be hard pressed to name another econometrics book that can be read for enjoyment yet provides useful quantitative insights. Although removed from financial research, Mastering ’Metrics offers something in short supply: the intuition for understanding what econometrics has to offer for revealing causal relationships. Models are often misused or misspecified in order to advocate or “prove” conclusions. Through careful descriptions of underlying principles, Angrist and Pischke clearly take some of the con out of econometrics.