Value at risk (VaR) is an important risk management tool for regulators and for companies. Traditional VaR models have struggled during periods of market turmoil, so recent extreme value theory (EVT) models focus on the distribution of extreme returns. Although several of the models perform adequately in their conditional form, none of them passes an advanced test in its unconditional form.
How Is This Research Useful to Practitioners?
Value at risk (VaR) has increased in importance for financial firms (because the metric is used by the Bank for International Settlements for capital adequacy requirements) and for non-financial firms. Traditional VaR models require an assumption about the entire return distribution, whereas extreme value theory (EVT) models focus only on extreme returns, making the latter potentially better predictors in times of high market volatility.
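For reference, VaR at confidence level $1-\alpha$ is simply a (negated) quantile of the return distribution; the practical question the paper addresses is how that quantile is estimated:

$$\mathrm{VaR}_{\alpha} = -\inf\{\, x \in \mathbb{R} : P(R \le x) \ge \alpha \,\},$$

so the 99% VaR is the negative of the 1st percentile of returns. Traditional estimators obtain this quantile from a fitted full distribution; EVT estimators fit only the tail beyond some cutoff.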
Using out-of-sample forecasting, the authors test how the models behave under different circumstances, varying the model specification and sample size and running robustness checks. In their unconditional forms, none of the EVT models is accurate across all time series and confidence levels. In their conditional forms, some EVT models are acceptable, but the historical simulation approach sometimes outperforms them.
The authors also show that simple specifications and backtests can be quite powerful. Adding refinements, such as alternative sample sizes, has little effect on performance.
According to the authors, future research could include highly liquid individual stocks and a simulation study of VaR estimators in different settings. Research on expected shortfall (ES) could also be worthwhile because the Basel Accords now focus on ES as the market risk metric.
Because of the growing importance of risk management and VaR in the financial sector, the authors’ results will be interesting for many professionals involved in risk quantification and management as well as for those who wish to understand the underlying assumptions of risk-based regulations.
How Did the Authors Conduct This Research?
The authors perform their study not only on stocks but also on a multi-asset portfolio including commodities, bonds, and currencies. They use the CRSP value-weighted index and the S&P 500 Index for stocks; the Goldman Sachs Commodity Index and a gold market quotation for commodities; a US government bond index for bonds; and a USD-to-GBP exchange rate for currencies. Daily prices from 1 January 1996 to 29 February 2016 are used, providing 7,869 data points. The data are then converted to log returns.
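As a small illustration (not the authors' code), the log-return conversion in Python might look like this, with made-up prices standing in for the actual series:

```python
import numpy as np

def log_returns(prices):
    """Daily log returns: r_t = ln(P_t / P_{t-1})."""
    p = np.asarray(prices, dtype=float)
    return np.diff(np.log(p))

# Made-up prices stand in for the actual series (CRSP, S&P 500, GSCI,
# gold, a US government bond index, and USD/GBP).
print(log_returns([100.0, 101.5, 99.8, 102.3]))  # three returns from four prices
```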
Traditional VaR estimators are the normal distribution, the Student's t distribution, and the historical simulation method; the last does not assume any return distribution. The EVT estimators are the block maxima, peaks-over-threshold, Box–Cox, L-moment, and Johnson methods. Block maxima divides the observations into intervals (blocks) and fits a limiting distribution to the maxima of those blocks as the block size increases. Its drawbacks are that only block maxima are considered (i.e., other extremes within a block are discarded) and that the choice of block length is subjective. Peaks over threshold considers all observations above a chosen threshold and is therefore more efficient. Box–Cox takes into account both the frequency and the severity of extreme events. The L-moment approach estimates the tail parameters from linear combinations of order statistics (an alternative to maximum likelihood). Finally, the Johnson method uses information from both the tails and the center of the distribution.
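To make the contrast concrete, here is a minimal Python sketch of three of the estimators discussed: normal, historical simulation, and a peaks-over-threshold fit via the generalized Pareto distribution. It is an illustration under simplifying assumptions (a fixed 95% threshold, a nonzero fitted shape, and none of the Box–Cox, L-moment, or Johnson variants), not the authors' implementation:

```python
import numpy as np
from scipy import stats

def var_normal(r, level=0.99):
    """Parametric VaR under normality: -(mu + sigma * z_{1-level})."""
    return -(r.mean() + r.std(ddof=1) * stats.norm.ppf(1 - level))

def var_historical(r, level=0.99):
    """Historical simulation: the empirical (1 - level)-quantile, negated.
    No distributional assumption is made."""
    return -np.quantile(r, 1 - level)

def var_pot(r, level=0.99, threshold_q=0.95):
    """Peaks over threshold: fit a generalized Pareto distribution (GPD) to
    losses above a high threshold and extrapolate into the tail using the
    standard GPD tail formula (exponential-tail case xi = 0 omitted)."""
    losses = -r
    u = np.quantile(losses, threshold_q)
    excess = losses[losses > u] - u
    xi, _, beta = stats.genpareto.fit(excess, floc=0)   # shape and scale
    n, n_u = len(losses), len(excess)
    return u + (beta / xi) * ((n / n_u * (1 - level)) ** (-xi) - 1)

rng = np.random.default_rng(0)
r = rng.standard_t(df=4, size=5000) * 0.01              # fat-tailed toy returns
for f in (var_normal, var_historical, var_pot):
    print(f"{f.__name__}: {f(r):.4f}")
```

On fat-tailed data such as these toy Student's t returns, the normal estimator typically understates the 99% VaR relative to the tail-based estimates, which is the motivation for EVT methods.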
A drawback of VaR estimators is that they assume returns are independent and identically distributed (i.i.d.), which is not in line with actual data. Empirical studies therefore standardize returns with a conditional mean and volatility model to bring them closer to i.i.d. An alternative filtering approach uses a nonlinear model, such as a Markov chain. The authors show that forecasting improves when filtered returns are used.
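The paper's filter is a conditional mean/volatility model; as a simpler stand-in that illustrates the same standardization idea, a RiskMetrics-style EWMA volatility filter can be sketched as follows (a hypothetical substitute, not the authors' specification):

```python
import numpy as np

def ewma_standardize(returns, lam=0.94):
    """Standardize returns by an EWMA conditional volatility
    (RiskMetrics-style): sigma2_t = lam*sigma2_{t-1} + (1-lam)*r_{t-1}^2.
    Filtered returns r_t / sigma_t are closer to i.i.d. when volatility
    clusters, which is the point of the filtering step."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()                  # initialize at the sample variance
    for t in range(1, len(r)):
        sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * r[t - 1] ** 2
    return r / np.sqrt(sigma2)

# VaR would then be estimated on the filtered series and scaled back up
# by the next day's volatility forecast.
```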
To test their VaR estimators, the authors use a rolling-window analysis of 1,000 daily returns, which allows the parameters to change over time. They apply different confidence levels and then perform out-of-sample testing using unconditional coverage, independence, and quantile tests. The authors focus on the 1% error level because it is the focus of the Basel framework. They conclude that none of the unconditional models passes the tests. Lastly, they perform robustness checks, such as alternative filters, window sizes, and short positions, which confirm that Box–Cox is the best EVT method.
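Of the three backtests, unconditional coverage is the easiest to sketch. The following illustrative Python code rolls a 1,000-day historical-simulation VaR through a return series and applies Kupiec's proportion-of-failures test; the independence and quantile tests used in the paper are omitted:

```python
import numpy as np
from scipy import stats

def kupiec_pof(x, n, p=0.01):
    """Kupiec's unconditional coverage test: H0 says the violation rate is p.
    Returns the likelihood-ratio statistic and its chi-squared(1) p-value."""
    pi_hat = x / n
    def loglik(q):
        # Bernoulli log-likelihood of x violations in n days at rate q;
        # the q in {0, 1} branch covers the degenerate MLE cases x = 0 or x = n.
        if q in (0.0, 1.0):
            return 0.0
        return (n - x) * np.log(1 - q) + x * np.log(q)
    lr = -2.0 * (loglik(p) - loglik(pi_hat))
    return lr, stats.chi2.sf(lr, df=1)

def rolling_backtest(returns, window=1000, level=0.99):
    """Estimate historical-simulation VaR on each 1,000-day window and count
    next-day violations, mirroring the paper's rolling design."""
    r = np.asarray(returns, dtype=float)
    hits = 0
    for t in range(window, len(r)):
        var_t = -np.quantile(r[t - window:t], 1 - level)  # positive loss number
        hits += r[t] < -var_t                             # loss exceeded VaR
    return kupiec_pof(hits, len(r) - window, p=1 - level)

rng = np.random.default_rng(1)
lr, pval = rolling_backtest(rng.standard_t(df=4, size=3000) * 0.01)
print(f"LR = {lr:.2f}, p-value = {pval:.3f}")  # reject H0 at 1% if p < 0.01
```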
Abstractor’s Viewpoint
Risk management and measurement have become more important in the financial industry over the decade following the 2008 crisis. VaR has been one of the key metrics for translating statistical measures, such as volatility, into a concrete loss figure. One drawback of VaR is that it provides a single number without giving insight into the other possible loss outcomes.
This research provides a good introduction to VaR metrics through its comparison of traditional methods with EVT methods. Although the formulas might be overwhelming at first sight, the explanations in the text give a good overview of the various methods and their strengths and weaknesses. The discussion of filtering and standardization of data is also very interesting because it applies to other (non-financial) time-series analyses as well. Usefully, the authors have not limited their research to the stock market but have extended it to other markets. Their conclusion that the traditional VaR metrics sometimes outperform the EVT metrics suggests that advanced modeling does not always add insight, a pitfall to which finance practitioners are prone.
This research is very relevant to today's regulation of the finance sector because both the Basel banking regulations and the Solvency II insurance regulations are risk based and use VaR metrics to determine capital requirements and capital adequacy. Although the research is quite technical, it is also relevant for professionals not working directly in risk management, because these regulations will shape the financial industry in the near future.