We conduct an in-depth analysis of the state of the art, from standard approaches for measuring VaR to the most recently developed ones, highlighting their relative strengths and weaknesses. We also review the backtesting procedures used to evaluate the performance of VaR approaches.
From a practical perspective, the empirical literature shows that approaches based on Extreme Value Theory and Filtered Historical Simulation are the best methods for forecasting VaR. The Parametric method under skewed and fat-tailed distributions also provides promising results, especially when the assumption that standardised returns are independent and identically distributed is set aside and when time variation in the conditional higher-order moments is considered.
Lastly, some asymmetric extensions of the CaViaR method also provide promising results. The Basel Accord provides recommendations on banking regulation with regard to credit, market and operational risks. Its purpose is to ensure that financial institutions hold enough capital to meet their obligations and absorb unexpected losses. For a financial institution, measuring the risk it faces is an essential task. In the specific case of market risk, a possible method of measurement is the evaluation of the losses likely to be incurred when the price of the portfolio assets falls.
This is what Value at Risk (VaR) does. The portfolio VaR represents the maximum amount an investor may lose over a given time period with a given probability. Since the Basel Committee on Banking Supervision (BCBS) at the Bank for International Settlements requires financial institutions to meet capital requirements based on VaR estimates, while allowing them to use internal models for the VaR calculations, this measure has become a basic market risk management tool for financial institutions.
Although the VaR concept is very simple, its calculation is not easy. The methodologies initially developed to calculate a portfolio VaR are (i) the variance–covariance approach, also called the Parametric method; (ii) the Historical Simulation, a Non-parametric method; and (iii) the Monte Carlo simulation, which is a Semi-parametric method.
As is well known, all these methodologies, usually called standard models, have numerous shortcomings, which have led to the development of new proposals (see Jorion). The first major drawback of the standard Parametric model is the assumption that financial returns follow a normal distribution; empirical evidence shows that they do not. The second relates to the model used to estimate the conditional volatility of financial returns.
The third involves the assumption that returns are independent and identically distributed (iid). There is substantial empirical evidence that the distribution of standardised financial returns is not iid.
Given these drawbacks, research on the Parametric method has moved in several directions. The first involves finding a more sophisticated volatility model that captures the characteristics observed in financial return volatility. The second line of research involves searching for other density functions that capture the skewness and kurtosis of financial returns. Finally, the third line of research considers that the higher-order conditional moments are time-varying.
In the context of the Non-parametric method, several Non-parametric density estimation methods have been implemented, improving on the results obtained by Historical Simulation. In the framework of the Semi-parametric method, new approaches have also been proposed. In this article, we review the full range of methodologies developed to estimate VaR, from the standard models to those proposed most recently.
We set out the relative strengths and weaknesses of these methodologies from both theoretical and practical perspectives. The article's objective is to provide the financial risk researcher with all the models and developments proposed for VaR estimation, bringing them to the frontier of knowledge in this field. The paper is structured as follows.
In the next section, we review the full range of methodologies developed to estimate VaR; Parametric approaches are presented in Section 2. In Section 3, the procedures for measuring VaR adequacy are described, and in Section 4, the empirical results obtained by papers comparing VaR methodologies are shown. In Section 5, some important topics on VaR are discussed.
The last section presents the main conclusions. The VaR is thus a conditional quantile of the asset return loss distribution.
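Formally, using standard notation (r_t is the portfolio return, Ω_{t−1} the information available at time t−1, and F the conditional distribution function of returns, as in the F(r) notation used below), the VaR at probability level α satisfies

P( r_t ≤ −VaR_t(α) | Ω_{t−1} ) = α,  that is,  VaR_t(α) = −F^{−1}(α),

so that with probability 1 − α the loss over the period will not exceed VaR_t(α).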
Among the main advantages of VaR are its simplicity, wide applicability and universality (see Jorion). This quantile can be estimated in two different ways: from the distribution of returns, F(r), or from the distribution of the standardised returns, G(z). Hence, a VaR model involves the specification of F(r) or G(z).
The estimation of these functions can be carried out using Non-parametric, Parametric or Semi-parametric methods. Below, we describe the methodologies that have been developed in each of these three cases to estimate VaR. The essence of the Non-parametric approaches is to let the data speak for themselves as much as possible and to use the empirical distribution of recent returns (not some assumed theoretical distribution) to estimate VaR. All Non-parametric approaches are based on the underlying assumption that the near future will be sufficiently similar to the recent past for us to be able to use data from the recent past to forecast risk in the near future.
The Non-parametric approaches include (a) Historical Simulation and (b) Non-parametric density estimation methods. To calculate the empirical distribution of financial returns, different sample sizes can be considered.
The advantages and disadvantages of Historical Simulation have been well documented by Dowd. The biggest potential weakness of this approach is that its results are completely dependent on the data set.
If our data period is unusually quiet, Historical Simulation will often underestimate risk; if our data period is unusually volatile, it will often overestimate it. In addition, Historical Simulation approaches are sometimes slow to reflect major events, such as the increases in risk associated with sudden market turbulence.
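As a minimal sketch (assuming a list of daily returns and a simple order-statistic convention for the quantile; details such as interpolation vary across implementations), the basic Historical Simulation estimate can be written as:

```python
import math

def historical_var(returns, alpha=0.01):
    """Historical Simulation VaR: minus the alpha-quantile of the
    empirical return distribution, reported as a positive loss."""
    ordered = sorted(returns)                  # worst returns first
    k = max(int(alpha * len(ordered)) - 1, 0)  # order statistic for the alpha-quantile
    return -ordered[k]

# Example: with 100 past returns, the 5% VaR is the 5th-worst return, sign flipped.
past = [i / 100 for i in range(-10, 90)]       # 100 returns from -0.10 to 0.89
var_5pct = historical_var(past, alpha=0.05)    # -> 0.06
```

Note how the estimate is pinned to observed order statistics: it can only take values that actually occurred in the sample, which is the discreteness drawback discussed below.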
The first papers comparing VaR methodologies, such as those by Beder, Hendricks and Pritsker, reported that Historical Simulation performed at least as well as the methodologies developed in the early years: the Parametric approach and the Monte Carlo simulation.
The main conclusion of these papers is that, among the methodologies developed initially, no approach appeared to perform better than the others. However, more recent papers, such as those by Abad and Benito, Ashley and Randal, Trenca and Angelidis et al., show that in comparison with other recently developed methodologies, such as Filtered Historical Simulation, Conditional Extreme Value Theory and Parametric approaches (as we move further from normality and consider volatility models more sophisticated than Riskmetrics), Historical Simulation provides very poor VaR estimates.
It also has the practical drawback that it only gives VaR estimates at discrete confidence levels determined by the size of our data set. The idea behind Non-parametric density estimation is to treat our data set as if it were drawn from some unspecified or unknown empirical distribution function.
One simple way to approach this problem is to draw straight lines connecting the mid-points at the top of each histogram bar. With these lines drawn, the histogram bars can be ignored and the area under the lines treated as though it were a probability density function (pdf) for VaR estimation at any confidence level. Alternatively, we could fit overlapping smooth curves, and so on. This approach conforms exactly to the theory of Non-parametric density estimation, which involves important decisions about the width of the bins and where the bins should be centred.
These decisions can therefore make a difference to our results (for a discussion, see Butler and Schachter or Rudemo). A kernel density estimator (Silverman; Sheather and Marron) is a method for generalising a histogram constructed from the sample data.
A histogram results in a density that is piecewise constant, whereas a kernel estimator results in a smooth density. Smoothing the data can be performed with any continuous shape spread around each data point. As the sample size grows, the net sum of all the smoothed points approaches the true pdf, whatever it may be, irrespective of the method used to smooth the data. The smoothing is accomplished by spreading each data point with a kernel, usually a pdf centred on the data point, and a parameter called the bandwidth.
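To make this concrete, here is a minimal sketch of a kernel-based VaR estimate. It assumes a Gaussian kernel and Silverman's rule-of-thumb bandwidth (both introduced just below), and finds the α-quantile by bisecting the smoothed cumulative distribution function; this is one of many possible implementation choices, not a canonical one.

```python
import math

def silverman_bandwidth(data):
    # Silverman's rule of thumb: h = 1.06 * s * n^(-1/5)
    n = len(data)
    mean = sum(data) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    return 1.06 * s * n ** (-0.2)

def kernel_cdf(x, data, h):
    # Smoothed CDF: average of Gaussian CDFs centred on each observation
    return sum(0.5 * (1.0 + math.erf((x - xi) / (h * math.sqrt(2.0))))
               for xi in data) / len(data)

def kernel_var(returns, alpha=0.01, tol=1e-8):
    # Invert the smoothed CDF by bisection to obtain the alpha-quantile,
    # then flip the sign so the VaR is reported as a positive loss.
    h = silverman_bandwidth(returns)
    lo, hi = min(returns) - 5 * h, max(returns) + 5 * h
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if kernel_cdf(mid, returns, h) < alpha:
            lo = mid
        else:
            hi = mid
    return -0.5 * (lo + hi)
```

Unlike plain Historical Simulation, the smoothed distribution yields a quantile at any confidence level, not only at the discrete levels determined by the sample size.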
A common choice of bandwidth is that proposed by Silverman. There are many kernels, or curves, for spreading the influence of each point, such as the Gaussian kernel, the Epanechnikov kernel, the biweight kernel, an isosceles triangular kernel and an asymmetric triangular kernel. From the kernel estimate, we can calculate the percentile and hence the estimate of the VaR. Turning to the Parametric approach, the first drawback of the Riskmetrics model is its distributional assumption: empirical evidence shows that financial returns do not follow a normal distribution.
The skewness coefficient is in most cases negative and statistically significant, implying that the financial return distribution is skewed to the left.
This result is not in accord with the properties of a normal distribution, which is symmetric. In addition, the empirical distribution of financial returns has been documented to exhibit significant excess kurtosis (fat tails and peakedness) (see Bollerslev). Consequently, the size of actual losses is much higher than that predicted by a normal distribution.
The second drawback of Riskmetrics involves the model used to estimate the conditional volatility of financial returns. The EWMA model captures some non-linear characteristics of volatility, such as time-varying volatility and volatility clustering, but does not take into account asymmetry and the leverage effect (see Black; Pagan and Schwert). In addition, this model is technically inferior to the GARCH family of models in modelling the persistence of volatility.
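For reference, the EWMA recursion just discussed can be sketched as follows, using the standard Riskmetrics decay factor λ = 0.94 and a conditional-normality quantile (a minimal zero-mean illustration, not the full Riskmetrics methodology):

```python
import math
from statistics import NormalDist

def ewma_volatility(returns, lam=0.94):
    # Riskmetrics EWMA: sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2
    sigma2 = returns[0] ** 2          # seed the recursion with the first squared return
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1.0 - lam) * r ** 2
    return math.sqrt(sigma2)

def parametric_var(returns, alpha=0.01, lam=0.94):
    # One-day VaR under conditional normality: -z_alpha * sigma_t (zero mean assumed)
    z = NormalDist().inv_cdf(alpha)   # about -2.326 for alpha = 0.01
    return -z * ewma_volatility(returns, lam)
```

The recursion gives geometrically decaying weight to past squared returns, which is why shocks fade at a fixed rate regardless of the data, one symptom of the inflexibility criticised above.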
The third drawback of the traditional Parametric approach involves the iid return assumption. There is substantial empirical evidence that the standardised distribution of financial returns is not iid (see Hansen; Harvey and Siddique; Jondeau and Rockinger; Bali and Weinbaum; Brooks et al.).
Given these drawbacks, research on the Parametric method has moved in several directions. The first searched for a more sophisticated volatility model capturing the characteristics observed in financial return volatility; here, three families of volatility models have been considered. The second line of research investigated other density functions that capture the skewness and kurtosis of financial returns. Finally, the third line of research considered that the higher-order conditional moments are time-varying.
Using the Parametric method but with a different approach, McAleer et al. propose combining the VaR forecasts obtained from different models. As the authors remark, given that a combination of forecast models is also a forecast model, this combination is a novel method for estimating VaR. The model specifies and estimates two equations. The conditional variance properties of the IGARCH model are not very attractive from the empirical point of view, owing to the very slow phasing out of the shock impact on the conditional variance (volatility persistence).
The models previously mentioned do not completely reflect the nature of the volatility of financial time series because, although they accurately characterise the volatility clustering property, they do not take into account the asymmetric response of returns to positive and negative shocks (the leverage effect).
Because the previous models depend on squared errors, the effect caused by positive innovations is the same as the effect produced by negative innovations of equal absolute value. In reality, however, financial time series exhibit the leverage effect: volatility increases at a higher rate following negative returns than following positive ones.
In Table 1, we present some of the most popular of these models.
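By way of illustration (the GJR-GARCH(1,1) model is our own choice here as one widely used asymmetric specification; the parameter values are hypothetical), the leverage effect can be built into the variance recursion by giving negative shocks an extra weight:

```python
def gjr_garch_variances(shocks, omega=1e-6, alpha=0.05, gamma=0.10, beta=0.85):
    # GJR-GARCH(1,1):
    #   sigma2_t = omega + (alpha + gamma * 1{eps_{t-1} < 0}) * eps_{t-1}^2 + beta * sigma2_{t-1}
    # The indicator term makes volatility rise more after negative shocks (leverage effect).
    sigma2 = omega / (1.0 - alpha - 0.5 * gamma - beta)  # unconditional variance as start value
    path = []
    for eps in shocks:
        path.append(sigma2)
        leverage = gamma if eps < 0 else 0.0
        sigma2 = omega + (alpha + leverage) * eps ** 2 + beta * sigma2
    return path
```

A quick check of the asymmetry: feeding the recursion a negative shock produces a higher next-period variance than an equal positive shock, exactly the behaviour that symmetric, squared-error models cannot reproduce.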