
23 October 2013

Model Risk Management from PRMIA

Guest blog post by Qi Fu of PRMIA and Credit Suisse NYC with some notes on a model risk management event held earlier in September of this year. A big thank you to Qi for his notes and to all involved in organising the event:

The PRMIA event on Model Risk Management (MRM) was held on the evening of September 16th at Credit Suisse.  The discussion was sponsored by Ernst & Young and was organized by Cynthia Williams, Regulatory Coordinator for the Americas at Credit Suisse.

As financial institutions have shifted considerable focus to model governance and independent model validation, MRM is as timely a topic as any in risk management, particularly since the Fed and OCC issued the Supervisory Guidance on Model Risk Management, also known as SR 11-7.

The event brought together a diverse range of views: the investment banks Morgan Stanley, Bank of America Merrill Lynch, and Credit Suisse were each represented, and the panel also included a consultant from E&Y and a regulator from the Federal Reserve Bank of New York.  The event was well attended, with over 100 attendees.

Colin Love-Mason, Head of Market Risk Analytics at CS, moderated the panel and led off by discussing his two functions at Credit Suisse: one being traditional model validation (MV), the other being VaR development and completing gap assessments, as well as compiling the model inventory.  Colin drew an analogy between model risk management and real estate: as in real estate, there are three golden rules in MRM, which are emphasized in SR 11-7: documentation, documentation, and documentation.  Looking to the future, the continuing goals in MRM are quantification and aggregation.

Gagan Agarwala of E&Y’s Risk Advisory Practice noted that there is nothing new about many of the ideas in MRM.  Most large institutions already have guidance in place on model validation and model risk management.  In the past, validation consisted mostly of quantitative analysis, but the trend has shifted towards establishing more mature, holistic, and sustainable risk management practices.

Karen Schneck of FRBNY’s Models and Methodology Department spoke about her role at the FRB, where she is in the model validation unit for stress testing under the Comprehensive Capital Analysis and Review (CCAR); thus part of her work concerned MRM before SR 11-7 was written.  SR 11-7 is definitely a “game changer”: since its release, there is now more formalization and organization around the oversight of MRM; rather than a rigid organization chart, the reporting structure at the FRB is much more open-minded.  In addition, there is an increased appreciation of the infrastructure around the models themselves and the challenges faced by practitioners, in particular the model implementation component, which is not always immediately recognized.

Craig Wotherspoon of BAML Model Risk Management remarked on his experience in risk management and commented that a new feature of the structure of risk governance is that model validation is turning into a component of risk management.  In addition, the people involved are changing: risk professionals who combine a scientific mind, business sense, and writing skills will be in as high demand as ever.

Jon Hill, Head of Morgan Stanley’s Quantitative Analytics Group, discussed his experience in MRM since the 1990s, when the primary tools applied were “sniff tests”.  Since then, the landscape has changed completely.  In the past the focus was on production, with documentation of models an afterthought; now documentation must be detailed enough for a highly qualified individual to review.  Where the focus was once only on validating methodology, nowadays it is just as important to validate the implementation.  There is an emphasis on stress testing, especially for complex models, in addition to internal threshold models and independent benchmarking.  The definition of what a model is has also expanded to anything that takes numbers as input and produces numbers as output.  However, these increased demands require a substantial increase in resources; the difficulty of recruiting talent in these areas will remain a major challenge.

Colin noted a contrast in the initial comments of the panelists: on one hand, some indicated that MRM is mostly common sense; on the other, Karen in particular emphasized the “game-changing” implications of SR 11-7, with MRM becoming more process oriented, where in the past it had been more of an intellectual exercise.  With regards to recruitment, it is difficult to find candidates with all the prerequisite skill sets; one option is to split up the workload to make it easier to hire.

Craig noted the shift in the risk governance structure: model risk control committees are now defining what models are, more formally and rigorously.  Gagan added that models have lifecycles, and there are inherent risks associated with that lifecycle.  It is important to connect the dots to make sure everything is conceptually sound, and to ascertain that other control functions understand the lifecycles.

Karen admitted that additional process requirements carry the risk of process trumping value.  MRM should aim to maintain high standards without getting overwhelmed by the process itself, lest some ideas become too expensive to implement.  There is also the challenge of maintaining the independence of the MV team.

Jon concurred with Karen on the importance of maintaining independence.  A common experience is that when validators find mistakes in models, they become drawn into the development process with the modelers.  He also noted differences among the US, UK, and European MV processes, asserting his view that the US is ahead of the curve and setting standards.

Colin noted the lack of an analogous PRA document to SR 11-7, one that drills down into the nuts and bolts of the challenges in MRM.  He also concurred on the difficulty of maintaining independence, particularly in areas with no established governance.  It is important to get model developers to talk to other developers about the definition and scope of models, as well as possible expansions of scope.  There is a wide gamut of models: core, pricing, risk, vendor, sensitivity, scenario, etc.  Who is responsible for validating which?  Who checks on the calibration, tolerances, and weights of the models?  These are important questions to address.

Craig commented further on the complexity and uncertainty of defining what a model is, and on whose job it is to determine that amongst the different stakeholders.  It also needs to be taken into consideration that model developers may be biased towards limiting the number of models.

Gagan followed up by noting that while the generic definition of models is broad and will need to be redefined, analytics do not all need to be held to the same standards; the definition should leave some flexibility for context.  The highest standard should be assigned to risk models.

Karen added that defining and validating models used to have a narrow focus and be done in a tightly controlled environment.  It would be better to broaden the scope and to reexamine the question on an ongoing basis (though it is important to point out that annual review does not equal annual re-validation).  In addition to the primary models, some challenger models also need to be supported; developers should discuss why they are happy with the primary model, how it differs from the challenger model, and how that difference impacts output.

Colin brought up the point of stress testing.  Jon asserted that stress testing is more important for stochastic models, which are more likely to break under nonsensical inputs.  Any model that plugs into the risk system should also require judicious decision-making, as well as annual reviews to look at changes since the previous review.
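To make the stress-testing idea concrete, here is a minimal sketch (not from the event, and with made-up scenario values) that pushes a textbook Black-Scholes call pricer through extreme inputs and checks the output against its no-arbitrage bounds. It also shows the kind of outright breakage the panel alluded to: with zero volatility the naive formula divides by zero.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(spot, strike, rate, vol, tau):
    # Textbook Black-Scholes European call price
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol * vol) * tau) / (vol * math.sqrt(tau))
    d2 = d1 - vol * math.sqrt(tau)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * tau) * norm_cdf(d2)

def stress_check(spot, strike, rate, vol, tau):
    # A call price must satisfy max(S - K*exp(-r*T), 0) <= C <= S
    try:
        price = bs_call(spot, strike, rate, vol, tau)
    except (ValueError, ZeroDivisionError, OverflowError):
        return False  # the model broke outright under these inputs
    lower = max(spot - strike * math.exp(-rate * tau), 0.0)
    return (lower - 1e-9 <= price <= spot + 1e-9) and not math.isnan(price)

# Extreme but finite inputs: the price should stay inside its bounds
scenarios = [
    (100, 100, 0.02, 5.0, 1.0),     # 500% volatility
    (100, 100, 0.02, 1e-8, 1.0),    # near-zero volatility
    (100, 100, 0.02, 0.2, 1e-12),   # near-zero time to expiry
    (100, 100, -0.05, 0.2, 1.0),    # negative rates
]
results = [stress_check(*s) for s in scenarios]

# A truly nonsensical input breaks the pricer: zero vol divides by zero
broken = stress_check(100, 100, 0.02, 0.0, 1.0)
```

The real validation task is of course far richer than bound checks, but even this trivial harness illustrates why implementation testing matters as much as methodology review.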

Colin also brought up the topic of change management: what are the system challenges when model developers release code, which may include experimental releases?  Concepts of annual certification and checkpoints are often discussed.  Jon commented that the focus should be on changes of 5% or more, with pricing models being less of a priority, and that firms should move towards centralized source code repositories.
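A 5%-style materiality screen like the one Jon describes can be sketched as a simple regression check comparing model outputs before and after a code change. This is a hypothetical illustration, not any firm's actual process; the instrument names and prices are made up.

```python
def flag_material_changes(baseline, candidate, threshold=0.05):
    """Flag instruments whose output moved by at least `threshold`
    (5% here) relative to the baseline model build."""
    flagged = {}
    for key, old in baseline.items():
        new = candidate.get(key)
        if new is None or old == 0.0:
            flagged[key] = None  # missing output or undefined relative change
            continue
        rel = abs(new - old) / abs(old)
        if rel >= threshold:
            flagged[key] = rel
    return flagged

# Hypothetical per-instrument prices from the old and new model builds
baseline  = {"swap_usd_5y": 100.0, "swaption_1y5y": 50.0}
candidate = {"swap_usd_5y": 104.0, "swaption_1y5y": 58.0}
material = flag_material_changes(baseline, candidate)
# The swap moved 4% (below threshold); the swaption moved 16% and is flagged
```

Run from a centralized repository's CI, a check like this turns "material change" from a judgment made after the fact into a gate applied at every release.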

Karen also raised the question of what ought to be considered a material change: the more conservative answer is that any variation, even a pure code change that does not alter model usage or business application, may need to be communicated to upper management.

Colin noted that developers often have a tendency to encapsulate their intentions, and have difficulty with, or reluctance about, documenting changes, resulting in many grey areas.  Gagan added that infrastructure is crucial: especially when market conditions are rapidly changing, MRM needs to have controls in place.  Also, models built in Excel make the change management process more difficult.

The panel discussion was followed by a lively Q&A session with an engaged audience; below are some highlights.

Q:  How do you distinguish between a trader whose model actually needs change, versus a trader who is only saying so because he/she has lost money?

Colin:  Maintain independent price verification and control functions.

Craig:  Good process for model change, and identify all stakeholders.

Karen:  Focus on what model outputs are being changed, what the trader’s assumptions are, and what is driving results.

Q:  How do you make sure models are used in business in a way that makes sense?

Colin:  This can be difficult; the front office builds the models and states what they are good for, so there is no simple answer from the MV perspective.  Managing usage means getting as many people into the governance process as possible, including internal audit, and setting up controls.

Gagan:  Coordinate with other functions; take a holistic approach to MRM.

Karen:  Need structure; a model inventory is a useful tool for the governance function.

Q:  Comments on models used in the insurance industry?

Colin:  Very qualitative; possible to give indications, but difficult to do exact quantitative analysis, so estimates are based on a range of values.  Need to be careful with inputs for very complex models, which can be based on only a few trades.

Q:  What to do about big shocks in CCAR?

Jon:  MV should validate for severe shocks; if a model fails, it may need only a simple solution.

Karen:  Validation tools and some backtesting data are needed, along with benchmarking; the quantitative element of stress testing needs to be substantiated and supported by qualitative assessment.

Q:  How to deal with vendor models?

Karen:  It is not acceptable just to say it’s okay as long as the vendor is reputable; we want to see testing done, and usage considered and compared to the original intent.

Craig:  New guidance makes it difficult to buy vendor models, but vendors who recognize this will gain a competitive advantage.

Q:  How to define independence for medium and small firms?

Colin:  Be flexible with resources, bring in different people, get feedback from senior management, and look for consistency.

Jon:  Hire E&Y?  There is never complete independence even in a big bank.

Gagan:  Key is the review process.

Karen:  Consultants could be cost effective; vendor validation may not be enough.

Q:  At firm level, do you see practice of assessing risk models?

Jon:  Large bank should appoint Model Risk Officer.

Karen:  Just slapping on additional capital is not enough.

Q:  Who actually does MV?

Colin:  First should be the user, then the developer: the four-eyes principle.

Q:  Additional comments on change management?

Colin:  Ban Excel for anything official; need controlled environment.



