Value-at-risk systems and their application in integrated risk management
Hari P. Sharma, Dinesh K. Sharma, Julius A. Alade

ABSTRACT
Value-at-Risk (VAR) has become a standard benchmark for measuring financial risks. VAR systems and models identify a first-order magnitude of financial risks and provide a forward-looking measure of a portfolio's overall downside risk potential. The recent trend and motivation for using VAR systems is institutions' need to integrate their financial risks, such as market, credit, and operational risk. VAR methodologies are evolving in finding ways to integrate diverse financial risks and will continue to advance worldwide standards. This study presents an overview of the concept and quantitative techniques of VAR and of how VAR systems are evolving for managing and integrating financial risks.
1. INTRODUCTION
Financial and non-financial institutions are required to report value-at-risk (VAR), a risk measure of potential losses (financial risks), on a regular basis under the SEC rule on Disclosures about Derivatives and Other Financial Instruments, barring Congressional amendment. Financial risks are those that relate to possible losses in financial markets, including losses from interest rate movements, defaults on financial obligations, or operational inefficiencies. Risk managers must consciously plan for the consequences of adverse outcomes and, by so doing, are better prepared for inevitable uncertainties. Internal uses of VAR and other sophisticated risk measures are on the rise in many institutions, and risk managers are expected to set VAR limits, on amounts and probabilities, for trading operations and fund management (Berkelaar et al., 2002).
Early VAR estimates were linear multipliers of variance-covariance estimates of the risk factors. These market risk techniques soon became popular, mainly because of their link to modern portfolio theory. However, during worldwide market crises, users noticed that the early models failed to provide good VAR estimates. The early VAR models were also referred to as parametric because of the strong theoretical assumptions they impose on the underlying properties of the data (Barone-Adesi and Giannopoulos, 2001). VAR estimates are currently based on two main techniques: (1) the variance-covariance approach, and (2) simulation. VAR systems differ in computational methodology, computational time, and accuracy in approximating nonlinear positions. The Group of Thirty (1993) report on derivatives stated that "market risk is best measured as value-at-risk," because it provides a summary statistic of the order of magnitude of potential losses due to market risk (Jorion, 2002). The VAR revolution is the result of several factors, such as: (1) regulatory pressures for better control of financial risks, (2) globalization of financial markets, and (3) technological innovations in computational techniques. These factors have made it possible to integrate and manage enterprise-wide risk. VAR methodologies are expanding and finding ways of integrating diverse financial risks and will continue to evolve as worldwide standards for managing numerous types of financial risks.
The rest of this paper is organized as follows. Section 2 presents a brief review of the literature. Section 3 describes the mathematical foundation of VAR systems. Section 4 presents the main VAR computation methods, Section 5 discusses the application of VAR to the various categories of financial risk, and Section 6 addresses the integration of risks. The last section presents the conclusion.
2. LITERATURE REVIEW
Researchers have developed several quantitative techniques for managing financial risks. Markowitz (1952) used the standard deviation as an intuitive measure of dispersion, with a major part of his study explaining the tradeoff between expected return and risk in the mean-variance framework for normally distributed returns. Roy (1952) proposed confidence-based risk measures and a "safety first" criterion for portfolio selection; he advocated choosing portfolios that minimize the probability of a loss greater than a disaster level. Sharpe (1964) developed the Capital Asset Pricing Model (CAPM). Baumol (1963) also proposed risk measurement criteria based on a lower confidence limit at a given probability level. Other developments include the Multiple Factor Model (1966), the Black-Scholes Option Pricing Model (1973), the Binomial Option Pricing Model (1979), Risk-Adjusted Return on Capital (1983), Limits on Exposure by Duration Bucket (1986), Risk-Weighted Assets for Banks and Limits on "Greeks" (1988), and Stress Testing (1992).
Risk management has truly experienced a revolution since the early 1990s. In November 1994, Orange County's investment pool lost $1.7 billion from structured notes and leveraged repurchase agreements, or "repos". Repos are contracts in which the seller of securities, such as Treasury bills, agrees to buy them back at a specified time and price. In February 1995, Barings Plc lost $1.5 billion because a Singapore-based trader, Nick Leeson, took unauthorized futures and options positions linked to the Nikkei 225 and Japanese government bonds. The common lesson of these disasters is that billions of dollars can be lost through poor supervision and management of financial risks. These disasters forced financial institutions and regulators to focus on VAR, a then-new measure of financial market risk developed in response to them. The VAR methodology was easy to understand and easy to apply in quantifying market risk. For instance, a bank might say that the daily VAR of its trading portfolio is $25 million at the 99% confidence level. In other words, there is only one chance in a hundred, under normal market conditions, for a loss greater than $25 million to occur. More recent developments include RiskMetrics (1994), CreditMetrics and CreditRisk+ (1997), the integration of credit and market risk (1998), and enterprise-wide risk management (2000). In January 1997, the Securities and Exchange Commission (SEC) established rules for the quantitative and qualitative reporting of risks associated with highly market-sensitive assets (i.e., derivatives positions) of reporting firms (Jorion, 2000).
Recently, Vlaar (2000) and Brooks and Persand (2003) suggested that VAR analysis can produce very inaccurate results when the "right" historical sample length is not selected. Hendricks' (1996) VAR model comparison revealed that risk measures from the various VAR approaches for the same portfolio on the same data can differ substantially. Differences in accuracy across models were also sensitive to the probability level chosen for the VAR calculation. Pritsker (2000) reviews the assumptions and limitations of historical simulation methods and weighted historical simulation (Boudoukh et al., 1998). He points out that both methods associate risk with only the lower tail of the distribution (Barone-Adesi and Giannopoulos, 2001).
3. MATHEMATICAL FOUNDATION OF VAR SYSTEMS
As already discussed, the methodology behind VAR is not new. Markowitz developed the basic mean-variance framework in 1952. Recent financial models also include a number of ways of expressing risk, including the standard deviation of yields, the variance of past yields, the maximum loss in a set of given market scenarios over a horizon, Monte Carlo simulation, and VAR systems. Major financial firms first used VAR in the late 1980s to measure the risks of their trading portfolios. Since then, VAR use has exploded. J.P. Morgan was one of the first users of VAR, in 1994, through its RiskMetrics[TM] system (Linsmeier and Pearson, 2000). VAR summarizes the expected maximum loss over a target horizon within a given confidence interval. Jorion (2000) lays out the mathematical foundation of VAR as follows:
3.1 VAR for General Distribution
VAR for a general distribution defines the portfolio value at the end of the time horizon as:
(3.1.1)  W = W_0 (1 + R)
where W_0 is the initial investment and R is the rate of return on the portfolio. The lowest end-of-period portfolio value is:
(3.1.2)  W^* = W_0 (1 + R^*)
where W_0 is the initial investment at the beginning of the period and R* is the critical portfolio return associated with a predetermined confidence level c. Therefore, W* can be thought of as the end-of-horizon portfolio value when the portfolio earns its lowest return R*. The relative VAR, the dollar loss relative to the mean, is:
(3.1.3)  VAR(mean) = E(W) - W^* = -W_0 (R^* - \mu)
where μ is the expected return. The absolute VAR, the dollar loss relative to zero, is defined as:
(3.1.4)  VAR(zero) = W_0 - W^* = -W_0 R^*
For a specified confidence level c and a general distribution of the future portfolio value f(W), VAR is defined as (Jorion, 1996 and 2000):
(3.1.5)  c = \int_{W^*}^{\infty} f(w) dw
or, equivalently, such that the probability of a value lower than W*, p = P(W ≤ W*), is 1 - c:
(3.1.6)  1 - c = \int_{-\infty}^{W^*} f(w) dw = P(W \le W^*) = p
Thus, the area isolated in the left tail of the distribution is associated with losses greater than or equal to the loss implied by the confidence level c; it represents the downside risk, or value-at-risk, of the portfolio.
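To make the general-distribution definition concrete, the following minimal sketch (a hypothetical Python illustration, not part of the original exposition) computes W* as the (1 - c) quantile of a sample of end-of-horizon portfolio values and derives relative and absolute VAR from equations (3.1.3) and (3.1.4).

import numpy as np

def var_from_values(portfolio_values, w0, confidence=0.99):
    """Relative and absolute VAR from a sample of end-of-horizon portfolio values.

    portfolio_values : simulated or historical end-of-horizon values W
    w0               : initial portfolio value W_0
    confidence       : confidence level c
    """
    w_star = np.quantile(portfolio_values, 1.0 - confidence)   # cutoff value W*
    var_mean = np.mean(portfolio_values) - w_star               # VAR relative to the mean, eq. (3.1.3)
    var_zero = w0 - w_star                                      # VAR relative to zero, eq. (3.1.4)
    return var_mean, var_zero

# Illustrative use with hypothetical, normally distributed returns
rng = np.random.default_rng(0)
w0 = 100.0
values = w0 * (1.0 + rng.normal(0.005, 0.02, size=50_000))
print(var_from_values(values, w0, confidence=0.99))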
3.2 VAR for Parametric Distribution
The VAR methods in this category are straightforward under the assumption of a parametric distribution such as the normal distribution. This parametric approach has often been referred to as the delta-normal method, since normality is assumed. Credit for popularizing parametric VAR models goes to J.P. Morgan's RiskMetrics methodology, which develops estimates of the standard deviations of, and correlations among, portfolio assets using an exponentially weighted average approach.
In order to design a VAR system for a parametric distribution, a standard normal deviate α is calculated as:
(3.2.1)  -\alpha = (-|R^*| - \mu) / \sigma
where μ and σ are the mean and standard deviation used to translate the general distribution into the standard normal distribution, and R* is the cutoff return, which is generally negative and is therefore written as -|R*|.
Associating the normal deviate α with R*, Jorion (1996, 2000) shows that
(3.2.2)  1 - c = \int_{-\infty}^{W^*} f(w) dw = \int_{-\infty}^{-|R^*|} f(r) dr = \int_{-\infty}^{-\alpha} \phi(\epsilon) d\epsilon
From the equality in equation (3.2.2), the problem of finding VAR is equivalent to finding the deviate α such that the area to its left is equal to 1 - c. This is done by turning to tables of the cumulative standard normal distribution function, that is, the area to the left of a standard normal variable with value equal to d:
(3.2.3)  N(d) = \int_{-\infty}^{d} \phi(\epsilon) d\epsilon
For instance, at the 95% confidence level, 1 - c = 5%. The associated α corresponding to the lower 5% tail of the normal distribution is 1.65. Jorion (1996, 2000) notes that equation (3.2.3) provides an illustrative linkage showing that "VAR may be found in terms of portfolio value (W*), cutoff return (R*), or normal deviate (α)." Therefore, VAR under the assumption of normality is:
(3.2.4)  VAR(mean) = -W_0 (R^* - \mu) = W_0 \alpha \sigma \sqrt{\Delta t}
where W_0 is, as before, the initial portfolio value, α is the normal deviate associated with (1 - c), and σ is the standard deviation of portfolio returns. To find the VAR of a portfolio, one multiplies the estimated σ by the relevant percentile and the initial investment. Under the assumption of normality, the only true unknown is the estimate of σ. The problem therefore becomes one of forecasting the volatilities of, and correlations between, individual assets and, from these, the portfolio volatility. When VAR is defined as an absolute dollar loss, the formula is:
(3.2.5)  VAR(zero) = -W_0 R^* = W_0 (\alpha \sigma \sqrt{\Delta t} - \mu \Delta t)
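As a concrete illustration of equations (3.2.4) and (3.2.5), the short sketch below (hypothetical portfolio figures and function names assumed for illustration) obtains α from the inverse cumulative normal distribution and scales the annual volatility by the square root of the horizon.

from scipy.stats import norm
import math

def parametric_var(w0, mu, sigma, confidence=0.99, horizon_days=1, trading_days=252):
    """Delta-normal VAR under the normality assumption.

    w0    : initial portfolio value W_0
    mu    : expected annual return
    sigma : annual volatility of portfolio returns
    """
    alpha = norm.ppf(confidence)          # normal deviate leaving 1 - c in the left tail
    dt = horizon_days / trading_days      # horizon as a fraction of a year
    var_mean = w0 * alpha * sigma * math.sqrt(dt)              # eq. (3.2.4), relative to the mean
    var_zero = w0 * (alpha * sigma * math.sqrt(dt) - mu * dt)  # eq. (3.2.5), relative to zero
    return var_mean, var_zero

# Hypothetical example: $100 million portfolio, 8% expected return, 15% volatility, 1-day horizon
print(parametric_var(100e6, 0.08, 0.15, confidence=0.99, horizon_days=1))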
4. VALUE AT RISK SYSTEMS
The calculation of value-at-risk (VAR) for large portfolios of complex derivative securities presents a tradeoff between speed and accuracy. The fastest methods rely on simplifying assumptions about changes in the underlying risk factors and about how the portfolio value responds to those changes. Various methods can be used to compute VAR. They differ mainly in the distributional assumptions for the risk factors and in linear versus full valuation, where linear valuation approximates the exposure to risk factors by a linear model. The simplest methods are the variance-covariance solution popularized by RiskMetrics[TM] and the delta-gamma approximations described by Britten-Jones and Schaefer (1999), Rouvinez (1997), and Wilson (1999). These rely on the assumption that the portfolio value changes linearly or quadratically with changes in market risk factors. One of the most difficult aspects of calculating VAR is selecting among the many types of VAR methodologies and their associated assumptions (Minnich, 1998). In this section, the focus is on the three classic methods: (1) the variance-covariance matrix (delta and delta-gamma approaches), (2) historical simulation, and (3) Monte Carlo simulation.
4.1 Delta-Normal Method (Variance-Covariance Matrix)
The delta-normal methodology is the simplest method. It estimates a portfolio's potential future losses using statistics on the past volatility of asset values and the correlations between changes in those values. The underlying assumptions, namely that all securities are linear in the risk factors and that the risk factors are normally distributed, make VAR calculations easy to understand for all the risk management people involved. Rapid calculation, an important feature in real-time environments, also makes this method easy to implement. However, given the growing use of non-linear derivatives (especially options) within portfolios, the linear approach becomes less useful, as the linearity assumption makes the method theoretically applicable only to linear portfolios. When using this method, it is particularly necessary to consider that: (1) market price movements exhibit so-called heavy tails, a tendency toward relatively more frequent occurrences of extreme values than a normal distribution would imply, (2) the models may not be sophisticated enough to depict market risk arising from extraordinary events, and (3) the past is not an answer in itself but only a guide to the future. VAR estimation through this approach overestimates VAR at low confidence levels and underestimates it at high confidence levels. This drawback comes from assuming normality in portfolio returns and leads to over- or underestimation of the capital that supervisory authorities require against market risk, with corresponding repercussions for the financial institution's solvency. The steps involved in the approach are presented in Figure 1 (Jorion, 2000); an illustrative sketch of the calculation follows the figure.
[FIGURE 1 OMITTED]
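The following minimal sketch (hypothetical positions and covariances, assuming the linear-normal setting described above) illustrates the delta-normal calculation: portfolio volatility is obtained from the exposure vector and the covariance matrix of risk-factor returns, and VAR is that volatility scaled by the normal deviate.

import numpy as np
from scipy.stats import norm

def delta_normal_var(positions, cov, confidence=0.99):
    """Delta-normal (variance-covariance) VAR for a linear portfolio.

    positions : dollar exposures to each risk factor
    cov       : covariance matrix of risk-factor returns over the horizon
    """
    positions = np.asarray(positions, dtype=float)
    port_variance = positions @ cov @ positions   # w' Sigma w, dollar variance of the portfolio
    port_sigma = np.sqrt(port_variance)           # dollar volatility of the portfolio
    return norm.ppf(confidence) * port_sigma      # VAR = alpha * sigma

# Hypothetical two-factor example with daily covariances
positions = [60e6, 40e6]
cov = np.array([[0.0001, 0.00003],
                [0.00003, 0.0004]])
print(delta_normal_var(positions, cov, confidence=0.99))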
4.2 Historical-Simulation Method
This method consists of going back in time, for example over the last 250 days, and applying current weights to a time series of historical asset returns. The resulting return series does not represent an actual portfolio but rather reconstructs the history of a hypothetical portfolio using the current positions. Of course, if asset returns are all normally distributed, the VAR obtained under the historical-simulation method should be the same as that under the delta-normal method. The method requires a time series of actual movements in the risk factors, together with the positions in them. Jorion (2000) presents in Figure 2 the steps involved in the historical-simulation method, a straightforward implementation of full valuation.
[FIGURE 2 OMITTED]
This method is simpler than the variance-covariance approach, since it does not require the demanding work of specifying the probability distributions of the risk factors and determining the correlations between them. Distributions can be non-normal, and securities can be non-linear. The drawback is the need for a sufficiently long history of market observations. A minimal sketch of the calculation follows.
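The sketch below (hypothetical data shapes, and with the simplifying assumption that exposures are linear in the risk factors rather than fully revalued) applies current positions to each day of historical factor returns and reads VAR off the lower tail of the resulting profit-and-loss distribution.

import numpy as np

def historical_var(positions, historical_returns, confidence=0.99):
    """Historical-simulation VAR.

    positions          : current dollar exposures to each risk factor
    historical_returns : array of shape (days, factors) of past factor returns
    """
    positions = np.asarray(positions, dtype=float)
    pnl = historical_returns @ positions           # hypothetical daily P&L with today's positions
    return -np.quantile(pnl, 1.0 - confidence)     # loss at the (1 - c) quantile

# Hypothetical example: 250 days of returns for two risk factors
rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.01, size=(250, 2))
print(historical_var([60e6, 40e6], returns, confidence=0.99))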
4.3 Monte Carlo Method
Monte Carlo simulations are widely used in the pricing and risk management of complex financial instruments. Quasi-Monte Carlo methods, which are deterministic because they are based on low-discrepancy sequences, have been found far superior to Monte Carlo for the pricing of financial derivatives in terms of both speed and accuracy (Papageorgiou, 1999). In brief, the basic concept behind the Monte Carlo approach is to repeatedly simulate a random process for the financial variables of interest, covering a wide range of possible situations. The simulation requires two steps. First, a stochastic process for the financial variables and its parameters must be specified, with the choice of distributions and parameters such as risk and correlations derived from historical data. Second, fictitious price paths are simulated for all variables of interest. At each horizon considered, the portfolio is marked to market using full valuation. Each of these "pseudo" realizations is then used to compile a distribution of returns, from which a VAR figure can be measured. Jorion (2000) presents the steps in this process as shown in Figure 3 below.
The calculation of VAR for large portfolios presents a tradeoff between speed and accuracy, with the fastest methods relying on rough approximations and the most realistic approach, Monte Carlo simulation, often being too slow to be practical. Information technology now provides almost unlimited resources for programming and for building more accurate models. Since many financial institutions have only recently moved toward using dedicated programming systems for managing risk, the results of these new information technology applications have not yet been assessed. A simplified sketch of the simulation appears below.
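The sketch below illustrates the two Monte Carlo steps with a deliberately simple model: a single asset following geometric Brownian motion with assumed parameters. A production system would simulate correlated paths for all risk factors and revalue the full portfolio at each horizon.

import numpy as np

def monte_carlo_var(w0, mu, sigma, horizon_days=10, confidence=0.99,
                    n_paths=100_000, trading_days=252, seed=0):
    """Monte Carlo VAR for a single position following geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    dt = horizon_days / trading_days
    # Step 1: simulate end-of-horizon values under the assumed stochastic process
    z = rng.standard_normal(n_paths)
    w_end = w0 * np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    # Step 2: build the distribution of profits and losses and read off the tail
    pnl = w_end - w0
    return -np.quantile(pnl, 1.0 - confidence)

# Hypothetical example: $100 million position, 8% drift, 20% volatility, 10-day horizon
print(monte_carlo_var(100e6, 0.08, 0.20))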
5. APPLICATIONS OF VAR IN MANAGING FINANCIAL RISKS
Originally, VAR was developed to deal with one aspect of financial risk, namely derivative market risk. It is now widely applied in financial institutions to measure all kinds of financial risks, including market, credit, liquidity, operational, and legal risks. VAR has also moved well beyond exclusive use in financial institutions and is embraced by an ever-increasing number of individual companies as their chosen approach to enterprise-wide risk management.
5.1 Market Risk
Market risk arises from changes in the price or value of an asset. It is the uncertainty of future returns due to fluctuations in financial quantities such as stock prices, interest rates, exchange rates, and commodity prices (Papageorgiou, 1999). Market risk can be defined in two forms: absolute risk, measured in terms of the relevant currency and describing the volatility of total returns, and relative risk, measured against a benchmark in terms of tracking error, or deviation from the index. Volatility is measured by the standard deviation of unexpected outcomes, or sigma (σ). Other dimensions of market risk are directional and non-directional risks. Directional risks are measured by linear approximations such as beta (β) for exposure to stock market movements, duration for exposure to interest rates, and delta (Δ) for the exposure of options to the underlying asset price. Non-directional risks consist of non-linear exposures and exposures to hedged positions or to volatilities. Second-order (quadratic) exposures are measured by convexity when dealing with interest rates and gamma (γ) when dealing with options. Table 1 describes the probability of a loss over a given measurement period, assuming a normal distribution.
TABLE 1: RISK AND RETURN FOR U.S. STOCKS FOR THE PERIOD 1973-1988

Horizon      Years (T)   Mean (μ) %   Risk (σ) %   Ratio (μ/σ)   Probability of Loss (ε) %
Annual       1.00000     14.05        15.55        0.9035        18.3
Quarterly    0.25000      3.513        7.78        0.4518        32.6
Monthly      0.08333      1.171        4.49        0.2608        39.7
Weekly       0.01918      0.270        2.15        0.1251        45.0
Daily        0.00050      0.007        0.35        0.0201        49.2
The probability of a loss falls as the horizon lengthens, and this observation is sometimes taken as support for the conventional wisdom that stocks are less risky in the long run than over a short horizon. Unfortunately, this is not necessarily correct, since the dollar amount of a potential loss also increases with time (Merton and Samuelson, 1974).
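The probabilities in Table 1 follow directly from the normality assumption: for a horizon with mean return μ and volatility σ, the probability of a loss is N(-μ/σ). The short sketch below (using the annual figures from Table 1) reproduces the calculation.

from scipy.stats import norm

def probability_of_loss(mu, sigma):
    """Probability that the return over the horizon is negative, assuming normality."""
    return norm.cdf(-mu / sigma)

# Annual horizon from Table 1: mean 14.05%, risk 15.55%
print(probability_of_loss(0.1405, 0.1555))   # roughly 0.183, i.e. 18.3%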
5.2 Credit Risk
Credit risk is broadly defined as the risk of financial loss due to a counterparty's failure to perform its obligations. It is measured by the cost of replacing cash flows if the other party defaults. Bonds, loans, and derivatives are all exposed to credit risk. Credit risk can be attributed to two factors: (1) default risk, assessed by the probability of default combined with the loss given default; and (2) market risk, which drives the market value of the obligation. Credit risk is much harder to measure precisely than market risk, because default probabilities and their correlations are much more difficult to estimate than market movements. Credit risk is controlled through credit limits on notional, current, and potential exposures and, increasingly, through credit-enhancement features such as requiring collateral or marking to market. Methods for quantifying market risks are now being extended to credit risks. If default occurs before the settlement date of the underlying transaction, there may be a "replacement risk" of having to bear the costs of replacing or canceling the transaction. Settlement risk arises when an institution pays cash or delivers assets before its counterparty is known to have performed its part of the deal. This exposure is normally for the total amount of the transaction and may exist during the course of a trading day, last overnight, or longer. Table 2 compares the leading credit risk models.
TABLE 2: COMPARISON OF CREDIT RISK MODELS

Dimension         CreditMetrics                        CreditRisk+                         Credit Portfolio View
Originator        J.P. Morgan Chase                    Credit Suisse                       McKinsey
Philosophy        Merton model, microeconomic causal   Actuarial, top-down, no causality   Econometric, macroeconomic causal
Risk definition   Market value                         Default losses                      Market value
Risk drivers      Asset values                         Default rates                       Macro factors
Correlation       From equities                        Default process                     Factor model
Recovery rates    Random                               Constant                            Random
Solution          Simulation/Analytical                Analytical                          Simulation
The models differ in a number of key dimensions, including risk definition, risk drivers, correlations, recovery rates, and solution technique. Although these models take different approaches to credit risk, they have a very similar underlying mathematical structure (Koyluoglu and Hickman, 1999; Gordy, 2000).
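As a minimal numerical illustration of the default-risk component described above (hypothetical figures, not tied to any of the models in Table 2), the expected credit loss on a single exposure can be approximated as the product of the exposure, the probability of default, and the loss given default.

def expected_credit_loss(exposure, probability_of_default, loss_given_default):
    """Expected loss from default: exposure x PD x LGD."""
    return exposure * probability_of_default * loss_given_default

# Hypothetical loan: $10 million exposure, 2% one-year default probability, 45% loss given default
print(expected_credit_loss(10e6, 0.02, 0.45))   # $90,000 expected loss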
5.3 Liquidity Risk
Liquidity risks can be classified into two categories: asset liquidity risk and funding liquidity risk. Asset liquidity can be measured by the price-quantity function, also known as the market impact effect. For highly liquid assets, such as Treasury bonds, the price impact of the quantity traded is very small. The price-quantity function is illustrated in Figure 4.
If the quantity to be sold is below 10,000 shares, liquidity is not a problem. In contrast, if the institution holds a large number of shares, liquidity should be a primary concern. Besides varying across assets, liquidity is also a function of prevailing market conditions. Traditionally, asset liquidity risk has been controlled through position limits, whose goal is to limit the exposure to a single instrument. Funding liquidity risk (cash-flow risk) refers to the risk that an institution will run out of cash and be unable to meet its payment obligations. Funding risk arises from the use of high leverage, whereby institutions borrow to expand their assets. On the asset side, potential demands on cash resources depend on (1) variation margin requirements due to marking to market, (2) mismatches in the timing of collateral payments, and (3) changes in collateral requirements. Examining the liability side is also important. An institution may be able to meet margin calls by raising funds from other sources, such as a line of credit or new equity issues. The problem is that it may be difficult to raise new funds precisely when the institution is not doing well.
5.4 Operational Risk
Operational risk was the primary cause of some financial disasters (Basel Committee, 1999). Four approaches have been used to define operational risk. The first defines operational risk as any financial risk other than market and credit risk, which also sweeps in business risk. A second, much narrower approach defines operational risk as arising from operations, that is, from transactions processing and systems failures. A third, slightly broader approach describes operational risk as any risk over which the institution has control. A fourth approach, the one most widely accepted by industry, views operational risk as the risk of direct and indirect loss resulting from failed or inadequate processes, systems, or people, or from external events. This definition excludes business risk but includes external events such as political or regulatory risk, disaster risk, counterparty risk, security breaches, and so on. Risk managers need to estimate loss frequency and severity from historical data, as described in Table 3 (Jorion, 2000). Actuarial losses due to operational risk can usually be attributed to the combination of two separate random variables: the loss frequency and the loss severity given that a loss occurs.
TABLE 3: SAMPLE LOSS FREQUENCY AND SEVERITY DISTRIBUTIONS

Frequency Distribution              Severity Distribution
Probability P(N)   Frequency N      Probability P(X)   Severity X
0.4                0                0.5                $1,000
0.3                1                0.3                10,000
0.3                2                0.2                13,600
E(N) = 0.9                          E(X) = $6,220

E(S) = E(N) × E(X) = 0.9 × $6,220 = $5,598
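The expected operational loss in Table 3 is simply the product of the expected frequency and the expected severity. The sketch below (hypothetical, using the table's figures) also simulates the compound loss distribution, from which an operational VAR quantile could be read.

import numpy as np

rng = np.random.default_rng(0)

# Frequency and severity distributions from Table 3
freq_values, freq_probs = [0, 1, 2], [0.4, 0.3, 0.3]
sev_values, sev_probs = [1_000, 10_000, 13_600], [0.5, 0.3, 0.2]

expected_loss = np.dot(freq_values, freq_probs) * np.dot(sev_values, sev_probs)
print(expected_loss)   # 0.9 * 6,220 = 5,598

# Simulate the compound loss S = X_1 + ... + X_N over many periods
n_events = rng.choice(freq_values, p=freq_probs, size=10_000)
losses = np.array([rng.choice(sev_values, p=sev_probs, size=n).sum() for n in n_events])
print(np.quantile(losses, 0.99))   # 99th percentile of simulated operational losses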
6. INTEGRATION OF RISKS
Integrated risk management focuses on measuring, controlling, and managing the institution's overall risk across all risk categories and business lines. The first benefit of a firm-wide risk management program is better control over global risks. Even if some risks (operational risks in particular) are difficult to quantify, the integration process itself creates natural hedging, leading to a better allocation of capital.

Operational risk measurement is still in a developing stage. Some approaches are top-down, estimating risk from firm-wide data. Top-down approaches view operational risk as any risk that is not captured by market or credit risk, and calculate it by subtracting the market and credit risk components from overall earnings volatility. This approach, however, does not provide a current measure of risk, nor does it shed much light on the sources of operational risk or on ways to control it better. Much of the variability in earnings may be ascribed to business risk involving macroeconomic fluctuations rather than to operational risk. Top-down explanations are appealing because they go directly to results, but they then lead nowhere; bottom-up explanations build a foundation for deeper understanding and further research (Holton, 2003). Bottom-up approaches provide a structure that is much more useful in understanding the causes of operational risk. Bottom-up models involve mapping workflows at the business-unit level, which are then used to identify potential failures and the associated losses. The contrast between these two approaches is similar to that between historical and VAR measures of market risk: historical measures are backward-looking and provide little information on current portfolio risks, whereas VAR measures are forward-looking and provide a process by which risk exposures can be controlled. In both cases, the distribution of losses can be measured using actuarial models.

It is well established that some risks offset each other, with the most tangible benefit being a reduction in the cost of hedging or insuring against firm-wide risks. By treating their risks as part of a single portfolio, institutions need not buy separate hedging instruments against each type of risk and so benefit from diversification. Financial researchers have identified conditions under which hedging activities that lower the volatility of cash flows or firm value should add value (Stulz, 2000). The advantages of integrated risk management thus include (a) stabilizing earnings by carefully understanding and neutralizing risks, and (b) reducing the cost of hedging by hedging only net risks instead of individual risks. The VAR methodology, the essence of which is centralizing firm-wide risks, enables stronger enterprise-level risk management. Within the best institutions, risk managers now have increasing responsibilities and are viewed as crucial to the survival of their institutions. A simple illustration of the diversification benefit follows.
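One quantitative payoff of integration is the diversification benefit: when risks are imperfectly correlated, the VAR of the combined position is generally less than the sum of the stand-alone VARs. The sketch below (hypothetical figures, assuming jointly normal risks) illustrates the effect for market, credit, and operational components.

import numpy as np

def aggregate_var(standalone_vars, correlation):
    """Firm-wide VAR from stand-alone VARs, assuming jointly normal risks."""
    v = np.asarray(standalone_vars, dtype=float)
    return np.sqrt(v @ correlation @ v)

# Hypothetical stand-alone VARs (market, credit, operational) in $ millions
vars_standalone = [50.0, 30.0, 20.0]
corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])

print(sum(vars_standalone))                   # undiversified sum: 100.0
print(aggregate_var(vars_standalone, corr))   # diversified firm-wide VAR: roughly 71.7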
7. CONCLUSION
In this study, we present an overview of how VAR systems are evolving for managing and integrating financial risks. Ideally, market, credit, operational, and other risks should be measured in a comprehensive fashion. The advantages and limitations of VAR systems, with lessons for risk management, are presented in this study. Through comparisons of the various techniques of measuring financial and non-financial VAR, the study demonstrates that risk management remains an imperfect art in a world where the past provides lessons but few reliable precedents. Piecemeal approaches may miss significant risks or push risks into less visible places, creating a misleading sense of safety. One path to success for a financial institution is risk management founded upon evaluating sets of measurable and non-measurable risks using modern information technology. The study points out that only those financial institutions that are best able to classify possible risks, apply modern measurement models, and effectively use modern information technology will gain a competitive edge and thereby profit most. As shown in the study, risk management is acquiring new dimensions. The trend is toward combining capital charges for market, credit, and operational risks and toward lowering hedging costs by pruning unnecessary transactions. The analysis of VAR systems and the methodological approaches discussed show that active firm-wide risk management can increase shareholder value substantially and that the developing approaches to risk management cross the boundaries of traditional risk measurement, requiring new and more effective instruments that will be ever more difficult to handle.
REFERENCES
Alexander, Carol, "Volatility and Correlation: Measurement, Models and Applications," In Risk Management and Analysis, C. Alexander, (ed), Wiley, Chickester, England, 1998.
Artzner, Philippe, Freddy Delbaen, Jean-Marc Eber, and David Heath, "Coherent Measures of Risk," Mathematical Finance, 9, 1999, 203-228.
Barone-Adesi, G. and Giannopoulos, K., "Non-parametric VaR Techniques, Myths and Realities," Economics Notes by Banca Monte dei Paschi di Siena SpA, 30(2), 2001, 167-181.
Basel Committee on Banking Supervision, International Convergence of Capital Measurement and Capital Standards, BIS, Basel, Switzerland, 1998.
Basel Committee on Banking Supervision, "Capital Requirements and Bank Behavior: The impact of the Basel Accord," Working Paper, BIS, Basel, Switzerland, 1999.
Basel Committee on Banking Supervision, Credit Risk Modeling: Current Practices and Applications, BIS, Basel, Switzerland, 1999.
Baumol, William, "An Expected Gain-Confidence Limit Criterion for Portfolio Selection," Management Science, 11, 1963, 174-182.
Berkelaar, A., Cumperayot, P. and Kouwenberg, R., "The Effect of VaR Based Risk Management on Asset Prices and the Volatility Smile," European Financial Management, 8(2), 2002, 139-164.
Boudoukh, Jacob, Mathew Richardson, and Robert Whitelaw, "A New Strategy for Dynamically Hedging Mortgage-Backed Securities," Journal of Derivatives, 2, 1995, 60-77.
Britten-Jones, M. and Schaefer, S.M., "Non-linear value-at-risk," European Finance Review, 2, 1999, 161-187.
Brooks, C. and Persand, G., "Volatility Forecasting for Risk Management," Journal of Forecasting, 22, 2003, 1-22.
Coopers & Lybrand, Generally Accepted Risk Principles, Coopers & Lybrand, London, 1996.
De Grauwe, Paul, The Economics of Monetary Integration, Oxford University Press, New York, 1997.
Glasserman, P., Heidelberger, P. and Shahabudin, P., "Portfolio Value-At-Risk with Heavy-Tailed Risk Factors," Mathematical Finance, 12(3), 2002, 239-269.
Gordy, Michael, "A Comparative Anatomy of Credit Risk Models," Journal of Banking and Finance, 24, 2000, 119-149.
Greenspan, Alan, Remarks at the Financial Markets Conference of the Federal Reserve Bank of Atlanta, Board of Governors of the Federal Reserve System, Washington, D.C., 1996.
Hull, John, Options, Futures, and other Derivatives, Prentice-Hall, Upper Saddle River, N.J., 2000.
International Monetary Fund, International Capital Markets, 1999; IMF, Washington, D.C., www.imf.org.
Hendricks, D., "Evaluation of Value-at-Risk Models Using Historical Data," Federal Reserve Bank of New York Economic Policy Review, April 1996, 39-69.
Holton, Glyn, Value-at-Risk: Theory and Practice, Academic Press, San Diego, CA, 2003.
J.P. Morgan, RiskMetrics Technical Manual, J.P. Morgan, New York, 1995.
Jorion, Philippe, "How Informative are Value at Risk Disclosures?," The Accounting Review, 77(4), 2002, 911-931.
Jorion, Philippe, Value at Risk: The New Benchmark for Managing Financial Risk, McGraw Hill, New York, 2000.
Jorion, Philippe, "Risk²: Measuring the Risk in Value-at-Risk," Financial Analysts Journal, 52, 1996, 47-56.
Koyluoglu, Ugur, and Andrew Hickman, Reconcilable Differences, in Credit Risk: Models and Management, Risk Publications, London, 1999.
Krause, A., "Exploring the Limitations of Value at Risk: How Good Is It in Practice?," The Journal of Risk Finance, Winter, 2003, 19-28.
Linsmeier, T.J. and Pearson, N.D., "Value at Risk," Association for Investment Management and Research, March/April 2000, 47-67.
Markowitz, H., "Portfolio Selection," Journal of Finance, 7, 1952, 77-91.
Markowitz, H., Portfolio Selection: Efficient Diversification of Investments, Wiley, New York, 1959.
Merton, R., "On the Pricing of Corporate Debt: The Risk Structure of Interest Rates," Journal of Finance, 29, 1974, 449-470.
Merton, R. and Samuelson, P., "Fallacy of Log-Normal Approximation to Portfolio Decision-Making Over Many Periods," Journal of Financial Economics 1, 1974, 67-94.
Minnich, Mike, A Primer on Value at Risk, Capital Market Risk Advisors, 1998.
Modigliani, Franco, and Merton Miller, "The Cost of Capital, Corporation Finance and the Theory of Investment," American Economic Review, 48, 1958, 261-297.
Papageorgiou, Anargyros, "Deterministic Simulation for Risk Management," Journal of Portfolio Management, 3, 1999, 122-128.
Rouvinez, C., "Going Greek with VAR," Risk, 10(2), 1997, 57-65.
Roy, Andrew, D., "Safety First and the Holding of Assets," Econometrica, 20, 1952, 431-449.
Sharpe, W., "Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk," Journal of Finance, 19, 1964, 424-442.
Seigl, Thomas and West, Ansgar, "Statistical bootstrapping methods in VAR Calculation," Applied Mathematical Finance, 8, 2001, 167-181.
Stulz, Rene, Financial Engineering and Risk Management, Southwestern Publishing, New York, 2000.
Wilson, T., "Value at risk," In Risk Management and Analysis, C. Alexander, (ed.), Wiley, Chichester, England, 1, 1999, 61-124.
Pritsker, M., The Hidden Risks of Historical Simulation, Federal Reserve Board, Washington, 2000.
Vlaar, P., "Value at Risk Models for Dutch Bond Portfolios," Journal of Banking and Finance, 24, 2000, 1131-1154.
Author Profiles:
Dr. Hari P. Sharma received his Ph.D. in Applied Business Economics (Corporate Investments) from the Agra University, Agra (India). He has served as Vice President (Market Risk Manager) at Bank of America. His research interests are in the mathematical modeling, portfolio management and risk analysis of mortgage servicing, and designing data analysis tools and models in Finance. Dr. Sharma has published several journal articles in National and International Journals. He is currently an Associate Professor of Finance in the Department of Accounting and Finance at the Virginia State University, Petersburg, Virginia.
Dr. Dinesh K. Sharma earned his Ph.D. in Operations Research from the Chaudhary Charan Singh University at Meerut, India. He is currently an Associate Professor of Quantitative methods & Computer Applications in the Department of Business, Management and Accounting at the University of Maryland Eastern Shore. His research interests include multi-objective programming, nonlinear programming, and application of operations research to business & industry.
Dr. Julius A. Alade received his Ph.D. in Industrial Economics from the University of Utah. He has authored and co-authored several journal articles/abstracts in National and International Journals. In his research, he has combined theoretic economics with financial and operations management, using linear and goal programming models. Dr. Alade is a Professor of production management/quantitative methods and the Acting Chair in the Department of Business, Management and Accounting, University of Maryland Eastern Shore, Princess Anne, Maryland.
Hari P. Sharma ([email protected]), Virginia State University, Petersburg, Virginia
Dinesh K. Sharma ([email protected]), University of Maryland Eastern Shore, Princess Anne
Julius A. Alade ([email protected]), University of Maryland Eastern Shore, Princess Anne
COPYRIGHT 2004 International Academy of Business and Economics