
The Uncertainty of Pricing Models

I managed to catch some of the first day, on Thursday, of the "Pricing Model Validation: Mitigating Model Risk" conference. I thought it would be worthwhile going along since, firstly, the past 12-18 months have made model risk very topical, both from the perspective of the failings in the use of certain complex derivative pricing models and from that of the risk management models used to assess risk at a portfolio and bank-wide level. If you are interested, some earlier posts give a bit more context to this - take a look at Riskminds, the Modeller's Manifesto and Wilmott/Rowe.

Secondly, more of our clients are looking at managing and centralising pricing models/curve calculators in addition to managing and centralising the underlying data. It seems that the "Golden Copy" concept from data management is extending to/being applied to pricing/analytics/derived data, given the drive for increased transparency - and maybe, with the crisis as background, there is more determination to break down some of the internal political fiefdoms/silos around instrument pricing.

A few of the talks are summarised below and the event was a good one (well, let's put it this way, my consumption of coffee was a lot less than I thought it would be!...). I would particularly recommend the talk by Yuval Millo on how the Black-Scholes-Merton option pricing model became a standard for options markets even when the model was not giving "accurate" prices, and some of the comments from Tanguy Dehapiot on data were particularly interesting.

Model Risk 2009: defining and forecasting. First speaker was Professor Philipp Sibbertsen of the University of Hannover on defining and measuring model risk. Philipp started by saying that "Model Risk" was a new category of risk within the confines of "Operational Risk", and that operational risk as defined by the regulators does not currently include the "model risk" of market risk and credit risk models, nor the "model risk" of the operational risk model itself. (I am sure I could write that up better!...). Philipp put forward that model risk is not formally a "risk", since it has no probability distribution, and suggested it should instead be thought of as "model uncertainty". He also clarified that model risk applies both at the large, portfolio scale (e.g. choice of VAR model etc) and at the smaller, instrument-level scale (i.e. pricing of derivatives).

Additionally, in terms of measuring model risk, he excluded human failure from model risk measurement since in his view this was difficult to quantify - an approach that did not meet with the approval of some of the audience, who questioned how this could be excluded from a practical point of view. Philipp's colleague, Corinna Luedtke, then presented some work they had done on calibrating different GARCH models to observed data, showing how even a poor model could produce reasonable forecasts of risk if the time period was short. The work was interesting, but again the audience highlighted that the human choice (failure?) in choosing the set of models to try was part of "model risk" and should not be excluded from its definition.
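
To make the GARCH example a little more concrete, here is a minimal sketch (not the speakers' actual code) of fitting a GARCH(1,1) model to a return series by maximum likelihood and turning the one-step-ahead variance forecast into a simple VAR number; the simulated return series and all parameter values are illustrative assumptions.

```python
# Illustrative sketch only: fit a GARCH(1,1) by maximum likelihood and forecast
# next-day volatility. The return series and starting values are placeholders.
import numpy as np
from scipy.optimize import minimize

def garch_variance_path(params, r):
    """Conditional variance path: sigma2_t = w + a*r_{t-1}^2 + b*sigma2_{t-1}."""
    w, a, b = params
    sigma2 = np.empty(len(r))
    sigma2[0] = np.var(r)                                   # initialise with sample variance
    for t in range(1, len(r)):
        sigma2[t] = w + a * r[t - 1] ** 2 + b * sigma2[t - 1]
    return sigma2

def neg_loglik(params, r):
    sigma2 = garch_variance_path(params, r)
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r ** 2 / sigma2)

rng = np.random.default_rng(0)
r = 0.01 * rng.standard_normal(1000)                        # placeholder daily return series

fit = minimize(neg_loglik, x0=[1e-6, 0.05, 0.90], args=(r,),
               bounds=[(1e-12, None), (0.0, 1.0), (0.0, 1.0)])
w, a, b = fit.x
sigma2 = garch_variance_path(fit.x, r)
sigma2_next = w + a * r[-1] ** 2 + b * sigma2[-1]           # one-step-ahead variance forecast
var_99 = 2.33 * np.sqrt(sigma2_next)                        # approximate 99% one-day VaR (normal quantile)
print(f"Fitted (w, a, b) = {fit.x.round(6)}, 99% 1-day VaR = {var_99:.4%}")
```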

Is a model accurate? Testing the implementation of a model. Second speaker was David Chevance, Head of Equity & FX Model Validation at Dresdner Kleinwort. David outlined the different sorts of model risk: mathematical errors, missing risk factors, divergence from industry practice, model inconsistencies and implementation risk. He then outlined the sources of these risks: bugs, approximations, numerical precision, numerical boundaries and the limitations of numerical methods (e.g. Sobol numbers in high-dimension Monte Carlo simulations).

David said a key area to start with in validating a model implementation was the front-office documentation of the product, its inputs and payoffs, its pricing model but also details of calibration methods used/needed etc. He made the point here that the documentation can sometimes specify just the deal, but sometimes can express the pricing methodology and pricing parameters. The emphasis was on completeness, accuracy and making use of all of the information available in the documentation. Obviously the ability to review the code used to implement the model was also necessary.

He discussed the trade-offs between a simple validation approach, in terms of speed and efficiency of resources, and the more time-consuming, resource-hungry but more accurate approach of full replication of the model. He also suggested that in choosing a method of validation it was important to balance resource demands against what is actually being validated: the payoffs from a single trade, a type of pricing model or a whole family of financial products. The desired accuracy of the validation was also important, given the trade-off between accuracy and effort and the fact that small bugs are much more common than large ones. He finally discussed model version control, the necessary discipline of documenting changes and regression tests for new models, and the regular cycle of model review. Overall it was an interesting talk with a good practical focus.
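
As an illustration of the regression-test discipline David described, the sketch below re-prices a set of stored benchmark trades with the current model build and flags anything that drifts beyond a tolerance; the toy zero-coupon pricer and the benchmark records are my own assumptions, not his framework.

```python
# Illustrative sketch only: re-price stored benchmark trades with the current
# model build and flag deviations beyond a tolerance.
import math

def price_trade(trade: dict) -> float:
    """Toy stand-in for the pricing library under validation: a zero-coupon bond."""
    return trade["notional"] * math.exp(-trade["rate"] * trade["maturity"])

BENCHMARKS = [  # would normally be loaded from a versioned benchmark store
    {"trade_id": "ZCB-001", "trade": {"notional": 100.0, "rate": 0.03, "maturity": 5.0}, "price": 86.0708},
    {"trade_id": "ZCB-002", "trade": {"notional": 250.0, "rate": 0.05, "maturity": 2.0}, "price": 226.2093},
]

def run_regression(benchmarks, rel_tol=1e-4):
    """Return the trades whose freshly computed price drifts from the stored benchmark."""
    failures = []
    for bench in benchmarks:
        new_price = price_trade(bench["trade"])
        if not math.isclose(new_price, bench["price"], rel_tol=rel_tol):
            failures.append((bench["trade_id"], bench["price"], new_price))
    return failures

if __name__ == "__main__":
    print(run_regression(BENCHMARKS) or "all benchmarks within tolerance")
```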

Practical aspects of valuation model control process. One of the most entertaining and interesting speakers of the day was Tanguy Dehapiot, Head of Validation and Valuation, Group Risk Management at BNP Paribas. He started by referring to a few documents: "Supervisory guidance for assessing banks' financial instrument fair value practices", April 2009 (BCBS 153), which was then implemented within "Enhancements to the Basel II framework" (BCBS 157). The first part of his presentation was around these documents and what the regulators expect to be in place, so I guess the best approach is to read them (the BCBS 153 document content is only 12 pages long, quite short for a regulator!).

Tanguy pointed out that in his view "Mark to Market" and "Mark to Model" are often misleading terms, as both are often required; he prefers the term "Valuation Methodology". He proposed four valuation modes: Direct Price Quotation, Use of Similar Instruments, Risk Replication and Expected Uncertain Cashflows (NPV), and categorised a useful hierarchy/matrix of which financial products fit into which valuation mode and for what purposes. Within model risk, he split off judgemental errors (choice of model etc) as part of market risk and credit risk, and operational errors (model implementation and coding) as more definable and avoidable parts of operational risk.

He had some interesting slants on data, saying that he had been surprised that even getting all of the static data necessary to price simpler instruments like bonds had proven difficult. He outlined how model parameters are often stored across a variety of systems (curve definitions in one place, pricing methodology somewhere else) implying to me that this is sometimes difficult to pull together and needs some centralisation to improve transparency around this.

He said that market parameters (both observed prices and derived data such as implied volatility surfaces) were often stored in a larger central database, but warned that this market parameter database needs to be reviewed as part of the model validation process, since some of its data is derived (i.e. calculated, maybe using a model!) and as such should not be taken as perfect for all time and for all purposes. He said that it was important to categorise the origin of data and suggested the following types (a small illustrative sketch of tagging data this way follows the list):

  • Quoted on an active exchange
  • Actual private transaction in an active market
  • Tradable broker quotes
  • Consensus prices from market makers
  • Non-binding indicative prices from market makers
  • Counterparty valuation, collateral valuation
  • Actual transactions in inactive market
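
As a rough illustration of categorising data origin in this way, here is a small sketch of tagging market data points so that derived or indicative numbers can be flagged during a model review; the class names, fields and review rule are illustrative assumptions rather than anything Tanguy presented.

```python
# Sketch (illustrative only) of tagging market data with its origin, roughly
# following the hierarchy above, so derived or indicative data can be flagged
# when it feeds a model review.
from dataclasses import dataclass
from enum import IntEnum

class DataOrigin(IntEnum):
    EXCHANGE_QUOTE = 1            # quoted on an active exchange
    PRIVATE_TRANSACTION = 2       # actual private transaction in an active market
    TRADABLE_BROKER_QUOTE = 3
    CONSENSUS_PRICE = 4           # consensus prices from market makers
    INDICATIVE_QUOTE = 5          # non-binding indicative prices from market makers
    COUNTERPARTY_VALUATION = 6    # counterparty or collateral valuation
    INACTIVE_MARKET_TRADE = 7     # actual transaction in an inactive market

@dataclass
class MarketDataPoint:
    instrument_id: str
    value: float
    origin: DataOrigin
    derived: bool = False         # True if the number was itself produced by a model

def needs_extra_review(point: MarketDataPoint) -> bool:
    """Flag data that should not be treated as 'observed' in a model validation."""
    return point.derived or point.origin >= DataOrigin.CONSENSUS_PRICE
```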

Tanguy proposed that there should be a valuation matrix for each instrument, where a different valuation methodology might be used for end-of-day valuation versus intraday, for risk or for trading, for pricing individually or within a portfolio revaluation. I guess the rationale here is appropriateness, efficiency and transparency about what needs to be used when. He also added that he disliked the term "Model Validation", since it seemed to imply that a model was "valid", and preferred "Model Approval" to cover the decision to use a model and "Model Review" to cover model analysis. He said he found managing the "stock" of existing models (and keeping up with when to review them) more difficult than managing the "flow" of new models and products.
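
One simple way to picture the valuation matrix idea is as a lookup from instrument type and valuation purpose to an approved methodology, as in the hedged sketch below; the enumerations and example mappings are my own assumptions, not his taxonomy.

```python
# Sketch only: a "valuation matrix" as a lookup from (instrument type, purpose)
# to an approved valuation methodology. The entries are illustrative assumptions.
from enum import Enum, auto

class Methodology(Enum):
    DIRECT_QUOTE = auto()
    SIMILAR_INSTRUMENTS = auto()
    RISK_REPLICATION = auto()
    NPV = auto()

class Purpose(Enum):
    END_OF_DAY = auto()
    INTRADAY = auto()
    RISK = auto()
    PORTFOLIO_REVAL = auto()

VALUATION_MATRIX = {
    ("listed_equity_option", Purpose.END_OF_DAY): Methodology.DIRECT_QUOTE,
    ("listed_equity_option", Purpose.INTRADAY): Methodology.RISK_REPLICATION,
    ("illiquid_corporate_bond", Purpose.END_OF_DAY): Methodology.SIMILAR_INSTRUMENTS,
    ("bespoke_cdo_tranche", Purpose.RISK): Methodology.NPV,
}

def methodology_for(instrument_type: str, purpose: Purpose) -> Methodology:
    """Look up the approved methodology; fail loudly if no approval exists."""
    try:
        return VALUATION_MATRIX[(instrument_type, purpose)]
    except KeyError:
        raise ValueError(f"No approved valuation methodology for {instrument_type} / {purpose.name}")
```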

Overall Tanguy was a very interesting and funny speaker with lots of practical insights and a fair amount of opinion thrown in, which is always good in my view.

The usefulness of inaccurate models: Financial risk management "in the wild". This talk was given by Dr Yuval Millo of the London School of Economics, and he focussed on the evolution of the use of the Black-Scholes-Merton (B-S-M) model at the CBOE and how the model came to be the means by which the whole options market "communicated". Yuval is a social scientist and prefaced his talk by stating that "Social Sciences are good at predicting the past".

First thing I didn't know (amongst the many things I do not know...) is that the B-S-M model was not published until a couple of weeks after the CBOE started trading stock options in April 1973. Yuval said that initially the B-S-M derived prices were not accurate at all (around 25% off the market prices on the CBOE) and that the model was based on assumptions that plainly did not hold on the exchange (only calls were available, short selling was restricted, trading was not continuous). The model was used by local Chicago trading firms, and the story goes that Fischer Black sold large paper "sheets" of option pricing matrices to these traders (there being no calculators/PCs/mobiles around at the time).
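
For context, here is a minimal sketch of the Black-Scholes-Merton call formula and a small "sheet" of model prices across strikes and maturities, in the spirit of the paper sheets described above; the spot, rate and volatility inputs are purely illustrative.

```python
# Minimal sketch: Black-Scholes-Merton call prices printed as a small "sheet"
# across strikes and maturities. All inputs are illustrative assumptions.
import math

def bsm_call(spot, strike, t, rate, vol):
    """B-S-M price of a European call on a non-dividend-paying stock."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    n = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))   # standard normal CDF
    return spot * n(d1) - strike * math.exp(-rate * t) * n(d2)

spot, rate, vol = 100.0, 0.05, 0.30                            # illustrative parameters
maturities = (0.25, 0.5, 1.0)
print("strike " + " ".join(f"{t:>7.2f}y" for t in maturities))
for strike in (80, 90, 100, 110, 120):
    prices = (bsm_call(spot, strike, t, rate, vol) for t in maturities)
    print(f"{strike:>6} " + " ".join(f"{p:>8.2f}" for p in prices))
```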

As the markets developed, larger East Coast banks entered the market, with stocks being held and traded in New York and options being traded in Chicago, so trading became geographically dispersed. This created the need for "early morning meetings" to discuss the market, and the B-S-M model and its parameters became the "lingua franca", or means of communication, of options market participants.

He described the first years of the Options Clearing Corporation (OCC), which was set up to ensure that the financial obligations of option sellers and buyers were met. Around 1979-80 the OCC worked overnight to calculate margin requirements, based on the (now?) arcane idea that different margin amounts should be associated with different option strategies (straddles, butterflies etc), and the job of the OCC was to take a portfolio of options and optimise which combination of strategies would minimise the margin required for the whole portfolio. He said that there were disputes between traders and the OCC around margin levels, and difficulties for the SEC with updating their Net Capital Rules as each new option strategy was created. Eventually, the OCC adopted the B-S-M model and implied volatility as the means of calculating margin against market value, which enabled them to move away from the operational difficulty of strategy optimisation.
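
Implied volatility is simply the volatility that makes the B-S-M price match the observed market price, so a margin calculation built on it needs a root-finder along these lines; this sketch reuses the bsm_call function from the earlier block, and the market price and bracket are illustrative assumptions.

```python
# Sketch only: back an implied volatility out of a market option price with a
# root-finder. Assumes the bsm_call function from the previous sketch is defined.
from scipy.optimize import brentq

def implied_vol(market_price, spot, strike, t, rate):
    """Solve bsm_call(..., vol) = market_price for vol on a wide bracket."""
    return brentq(lambda v: bsm_call(spot, strike, t, rate, v) - market_price, 1e-4, 5.0)

# Illustrative usage with a made-up market price
print(implied_vol(market_price=6.50, spot=100.0, strike=105.0, t=0.5, rate=0.05))
```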

So the B-S-M model became the way in which traders communicated about the market, but it also became vital operationally within clearing for the market. By 1987 B-S-M had become the de-facto standard for the market, with the model driving the market and the market in turn driving use of the model. During the Oct '87 crash the model proved to be very inaccurate, but the use of the model did not diminish - maybe psychologically the market participants needed a model (even a wrong model) to make communication easier.

I found this talk very interesting, and members of the audience asked whether any similar analysis was going to be done on the Gaussian Copula model used to price CDOs. Yuval said that one of his colleagues was currently undertaking this research. Given that he seemed to be very positive about the use of the B-S-M model within options markets, I asked whether he had any opinions on Taleb's criticism of financial engineers and modelling. Yuval said that he and Nassim were friends and agreed to disagree on certain topics...

Stress testing modelling parameters. Next up was Pierpaolo Montana, Head of Model Validation at WestLB. Having joined the finance industry after a career in mathematics and then at a regulator, Pierpaolo began by saying that back in the heady days of 2004 the banks thought that their own risk management systems and practices were well ahead of the regulators. He said that in light of the crisis this proved not to be the case, but he now feels that the position is more evenly balanced (not sure I would agree; there is still a lot of catching up to do at some institutions, I would suggest).

He said that whilst regulators require the validation of risk models and pricing models, and stress testing of a portfolio is required, the stress testing of a pricing model is not a requirement, has received much less attention and in his view was not done to any great degree before 2007. His point here was that pricing models should work under stress too, otherwise they are a weak foundation for building other risk measures such as stressed VAR.

Whilst focussing on pricing models, he mentioned that risk models also need to be carefully chosen and appropriate to the institution and the types of trading activities it undertakes. As an example he put forward that a simple VAR calculator might be appropriate for a long-only equity fund but completely inappropriate for a relative value portfolio.

He said that stress testing had recently received much more attention as a risk management tool and cited the BIS document "Revisions to the Basel II market risk framework", where stressed VAR is introduced as part of the regulatory capital charge calculation. He also mentioned that in order to avoid "standard model" treatment of complex securitised products, an institution must be able to demonstrate that its VAR model can cope with these products in times of market stress.
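
As a rough illustration of the stressed VAR idea, the sketch below computes a historical-simulation VAR once on a "recent" window and once on a fixed window of stressed returns; both return series are simulated placeholders, not real market data or the regulatory calculation.

```python
# Illustrative sketch of the stressed-VaR idea: historical-simulation VaR on a
# recent window versus a stressed window. Return series are simulated placeholders.
import numpy as np

def hist_var(returns, confidence=0.99):
    """Historical-simulation VaR: the loss not exceeded with the given confidence."""
    return -np.quantile(returns, 1.0 - confidence)

rng = np.random.default_rng(1)
recent_returns = 0.01 * rng.standard_normal(250)               # placeholder "current" year
stressed_returns = 0.03 * rng.standard_t(df=4, size=250)       # placeholder fat-tailed, high-volatility year

print(f"VaR 99%: {hist_var(recent_returns):.2%}, stressed VaR 99%: {hist_var(stressed_returns):.2%}")
```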

Pierpaolo then described the stress testing of base correlation in CDO pricing, and how even moving the base correlation from its usual level of 70% to 99% would not have predicted the valuations observed in the recent crisis. In this way, he said, stress testing of models can detect implementation problems and some model weaknesses, but it cannot help in coping with structural breaks in the market. He also discussed how the B-S-M model is used everywhere (even in places where it should not really be valid), since it is a robust model based on the no-arbitrage hypothesis - in contrast, the CDO base correlation and other such models are not so robust since they are not arbitrage-free.
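
To give a feel for this kind of stress test, here is a sketch (not the speaker's model) of the expected loss on a tranche under a one-factor Gaussian copula, re-priced as the flat correlation is pushed from 70% towards 99%; the portfolio size, default probability, recovery and attachment points are illustrative assumptions.

```python
# Sketch only: expected tranche loss under a one-factor Gaussian copula, re-priced
# as the flat correlation is stressed. All parameters are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def tranche_expected_loss(rho, p_default=0.02, recovery=0.4,
                          attach=0.15, detach=0.30, n_names=125, n_sims=50_000, seed=0):
    """Monte Carlo expected tranche loss (as a fraction of tranche notional)."""
    rng = np.random.default_rng(seed)
    threshold = norm.ppf(p_default)
    m = rng.standard_normal((n_sims, 1))                        # common factor
    eps = rng.standard_normal((n_sims, n_names))                # idiosyncratic factors
    assets = np.sqrt(rho) * m + np.sqrt(1.0 - rho) * eps
    portfolio_loss = (assets < threshold).mean(axis=1) * (1.0 - recovery)
    tranche_loss = np.clip(portfolio_loss - attach, 0.0, detach - attach) / (detach - attach)
    return tranche_loss.mean()

for rho in (0.70, 0.90, 0.99):
    print(f"rho = {rho:.2f}: expected tranche loss = {tranche_expected_loss(rho):.4%}")
```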

Summary. The event was a good one, with about 40 practitioners there on the first day and some very good speakers. Sure, some of it was common sense, some was about process and some went into the ongoing risk/corporate governance debate, but I think the audience was most interested in the practical viewpoints and advice from the speakers. It is obviously easier for mathematicians to take in maths than corporate governance issues, but pricing model uncertainty is not without its place in the recent crisis and as such deserves attention.

I thought the talk on the creation of the options markets and the use of the Black-Scholes pricing model was particularly relevant in painting an historical backdrop to the recent pricing innovations in the credit markets. Both the new options markets of the 70s and the credit markets of the 00s were using inaccurate pricing models; the main difference seems to be that the no-arbitrage basis for pricing options produced a robust foundation for the market, whereas the models in the credit markets were not hedgeable/replicable and hence not robust (as we have all seen).

Overall, I believe that the area of pricing model innovation, management and control is still a neglected one, and one where many institutions (even those doing good data management) could do a lot better.
