
For more information, contact:
Yiota Andreou

yiotaa@marcusevanscy.com

Can you please explain the effect of regulations on the global development of model risk management?

The Model Validation process has evolved significantly over the last 20+ years, starting from ad-hoc tests and expanding over time into a more rigorous approach that includes full implementation testing, model risk analysis, model performance under stressed market conditions, and so on. At the same time, Model Validation approaches have traditionally been very house-specific.

The transformation of Model Validation into Model Risk Management was driven by the Federal Reserve / OCC SR 11-7, “Supervisory Guidance on Model Risk Management”, which covers the full lifecycle of a firm’s models: Model Development, Implementation and Use; Model Validation; and Model Governance. Simultaneously, the scope of model management processes expanded significantly with the introduction of the Fed’s rather general definition of a model.

SR 11-7 ushered in a fundamental change in the industry’s approach to model management, leading to the recognition of Model Risk as an independent Risk Class to be managed alongside other Risk Classes (e.g. Market Risk, Credit Risk, Operational Risk). Initially this change mainly affected large firms regulated by the Federal Reserve and the OCC, but gradually many other regulators, often informally, aligned their model requirements with the major elements of SR 11-7. In some cases, regulators have adopted even stricter requirements. More and more mid-sized and smaller financial firms have been creating Model Risk Management functions, sometimes without a clear understanding of how complex and expensive the process can be.

The ECB’s Guide for the Targeted Review of Internal Models (TRIM) is currently under consultation, and it is expected to provide a further step in the development of Model Risk Management. The guide focuses on internal models, but institutions are expected to implement a comprehensive and effective Model Risk Management framework. In comparison with the Federal Reserve / OCC guidance, TRIM is much more detailed and specific, and it provides clear regulatory expectations for the lifecycle of internal models. I anticipate a number of challenges to accompany the implementation of TRIM requirements.

What are the big questions to consider when it comes to validating pricing and risk models?

It is important to ensure that a model is in line with its objectives. For example, a pricing model should capture important features of the product dynamics, have robust calibration and implementation routines and provide stable, accurate and meaningful outputs.

It should be clear that there is no universal modelling approach, and often it is even impossible to determine what the “right” model is. Thus, validation of a general modelling framework, e.g. a local volatility model or BGM, can only check the quality of the implementation; the appropriateness of the model for a specific product or task requires separate consideration.

A good fit to market (calibration) instruments is a necessary condition for the acceptance of a model, but it is not the only consideration. It is also necessary to understand exactly how the model is going to be used (for example, in hedging strategies), its behaviour under different market conditions, and its suitability in relation to the firm’s portfolio.
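To make the “necessary condition” concrete, a calibration-fit check can be sketched as a comparison of model prices against market quotes. This is an illustrative sketch only: the instrument names, quotes and tolerance below are invented, not taken from any firm’s process.

```python
# Illustrative calibration-fit check: flag instruments whose model/market
# price gap exceeds a tolerance expressed in basis points.
def calibration_fit_report(market_quotes, model_prices, tol_bp=5.0):
    report = {}
    for name, market in market_quotes.items():
        gap_bp = abs(model_prices[name] - market) * 1e4  # gap in basis points
        report[name] = {"gap_bp": gap_bp, "within_tol": gap_bp <= tol_bp}
    return report

# Hypothetical swaption quotes (prices as fractions of notional).
quotes = {"swaption_1y5y": 0.0132, "swaption_5y5y": 0.0251}
prices = {"swaption_1y5y": 0.0134, "swaption_5y5y": 0.0259}
report = calibration_fit_report(quotes, prices)
```

A failing instrument would not automatically reject the model; as the interview notes, fit quality is only one input into the acceptance decision.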

Finally, the identification of model limitations, together with the quantification and mitigation of model risk, remain the most significant challenges of model risk management.

Can you elaborate on the ways to quantify and aggregate model risk across different business lines?

There is a long history of material model-related losses, including those due to CDOs on subprime mortgage-backed securities, CMS spread options during the EUR curve inversion in June 2008, and JP Morgan’s “London Whale” losses in 2012, to mention just a few. As a result, many firms have been trying to quantify and mitigate model risk for individual trading positions and portfolios.

In practice, as I already mentioned, this is a difficult task, and it is required for all types of models. Among other things, it should analyse the dependence of model outputs on modelling assumptions, which is usually based on benchmarking the production model against more advanced or alternative modelling approaches. It should also cover model sensitivity to non-observable market parameters, model behaviour under extreme market conditions, etc.

Thus it is quite a challenge to quantify model risk for an individual model, not to mention aggregating it across different model types. A single metric for aggregated model risk would perhaps be too simplistic. An approach using several measures could, for example, cover losses at different confidence levels, arising from incorrect or inappropriate model applications, in excess of the current level of Model Reserves or Capital Adjustments, or model losses under different stressed market conditions.
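As a toy illustration of the multi-measure idea, one could take simulated model-risk losses per scenario, aggregate them across models, and report the excess over current reserves at several confidence levels. Everything below — the scenarios, the reserve level and the nearest-rank quantile — is an assumption for the sketch, not a description of any firm’s methodology.

```python
def quantile(losses, q):
    """Approximate nearest-rank quantile of a list of loss figures."""
    s = sorted(losses)
    idx = min(len(s) - 1, max(0, round(q * len(s)) - 1))
    return s[idx]

def aggregate_model_risk(loss_scenarios, reserves, levels=(0.95, 0.99)):
    """Per confidence level: aggregated loss in excess of current reserves."""
    totals = [sum(per_model) for per_model in loss_scenarios]  # sum across models
    return {q: max(quantile(totals, q) - reserves, 0.0) for q in levels}

# Toy data: 100 scenarios, each with losses from two hypothetical models.
scenarios = [[i * 0.1, i * 0.05] for i in range(1, 101)]
measures = aggregate_model_risk(scenarios, reserves=10.0)
```

The same skeleton could be run per stress scenario set, giving one column of the multi-measure report per market condition.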

These measures could be supplemented by various “softer” metrics, such as current model performance failures, distribution of model risk ratings across all models in the firm, number of breaches raised due to model misuse, model validation progress, etc. Due to the complexity of model risk quantification, these “softer” metrics are often the dominant, if not the only, model risk reporting components in many institutions.
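Such “softer” metrics are straightforward to compute once a model inventory exists; the distribution of model risk ratings, for instance, is just a tally over the inventory. The entries and the rating scale below are made up for illustration.

```python
from collections import Counter

# Hypothetical model inventory entries with assigned risk ratings.
inventory = [
    {"model": "equity_local_vol", "rating": "high"},
    {"model": "rates_bgm", "rating": "medium"},
    {"model": "fx_proxy_curve", "rating": "low"},
    {"model": "credit_cds_curve", "rating": "medium"},
]
rating_distribution = Counter(entry["rating"] for entry in inventory)
```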

Could you elaborate on the latest trends within model risk management?

Since the introduction of SR 11-7 in 2011, significant progress has been made in the development of model risk management processes. This includes an extension of the scope of covered models, strong requirements for the quality of model development and validation documentation, the introduction of firm-wide model governance frameworks (committees, policies, etc.), and the establishment of model inventories and supporting model management workflows.

Recent trends include development of Model Risk Appetite, performance against which is monitored via quantitative measures and qualitative statements, Model Risk reporting, and Model Performance Monitoring. The latter is designed to identify potential model limitations outside of ongoing model validation, allowing the firm to proactively pursue mitigating actions, as appropriate.

One of the most popular current topics is model interconnectedness. There are at least two aspects to this. 

The first is how the uncertainty or limitations of one model could impact the performance of other models. Clearly the most “influential” of these models include different types of feeder models, such as scenario generation models in Stress Testing or proxy treatments of input data. Another “influential” but less common case is where many of a firm’s models are derived from a few general modelling approaches. For example, pricing models for structured products could be based on the application of a generic multi-asset framework, with some specifications (calibrations, choice of risk factors, etc.) for individual products.

The second aspect is the different types of model inconsistencies. For example, two different products may be similar when considered in a boundary case, but their models may imply quite different dynamics. Alternatively, large firms typically have dedicated trading systems for different asset classes, including all relevant hedging (usually vanilla) instruments. For example, Equity trading would need to hedge its quanto exposure with FX options, or a hybrid desk would use a variety of vanilla instruments from different asset classes. Sometimes those hedging models are not fully consistent, e.g. hybrid desks may apply a more simplistic dividend treatment than Equity trading. Such situations need to be properly controlled and, where required, quantified.

A necessary condition for a proper analysis of model interconnectedness is a robust model inventory, including a detailed record of where all model outputs are used. This is a huge task in itself, especially for firms with thousands of models. Once all (or at least the major) model connections are identified, model risk analysis should be performed to determine how model limitations could propagate through interconnected models. Due to the large volume of data and processing involved, there have been some proposals to apply “machine learning” to this analysis but, as far as I know, all of those attempts are still at an early stage.
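Once such an inventory records which models feed which, the first-order propagation question reduces to a graph traversal. The sketch below assumes a made-up inventory — the model names and the `feeds` mapping are invented — and simply walks the dependency graph downstream from a flagged model.

```python
from collections import deque

# Hypothetical inventory as a dependency graph: feeds[m] lists the models
# that consume m's output.
feeds = {
    "scenario_generator": ["stress_pnl", "credit_pd"],
    "fx_proxy_curve": ["stress_pnl"],
    "stress_pnl": ["capital_aggregator"],
    "credit_pd": ["capital_aggregator"],
    "capital_aggregator": [],
}

def downstream_impact(feeds, flagged_model):
    """Breadth-first traversal: every model whose outputs could be affected
    by a limitation identified in `flagged_model`."""
    affected, queue = set(), deque([flagged_model])
    while queue:
        for child in feeds.get(queue.popleft(), []):
            if child not in affected:
                affected.add(child)
                queue.append(child)
    return affected
```

For example, flagging the scenario generator would surface every stress and capital model sitting downstream of it; real inventories would of course attach materiality and usage metadata to each edge rather than treat all links equally.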

What would you like to achieve by attending the Validating Market Models: TRIM Conference?

The conference provides an excellent selection of presentations and panel discussions on the impact of TRIM and other regulations on model risk management, the quantification of model risk and machine learning applications. I look forward to learning about the latest approaches to dealing with those challenges.

Ahead of the Validating Market Models: TRIM Conference, we spoke with Slava Obraztsov, Global Head of Model Validation at Nomura, about ways to quantify and aggregate model risk across different business lines and the latest trends within model risk management.

Practical Insights From:
BNP Paribas
Credit Suisse
Deutsche Postbank
DZ BANK
Imperial College London
JP Morgan
LBBW
Morgan Stanley
National Australia Bank
National Bank of Belgium
Nomura
Nordea
Rabobank
Société Générale


About the Conference:

This marcus evans event will address how market model validators can work towards mitigating model risk by developing a holistic framework. It will also provide an opportunity to review how TRIM is impacting the banking industry so far and what can be expected for model risk management in the future.
The Validating Market Models: TRIM Conference will take place from the 30th of November until the 1st of December 2017 in London, UK.

Copyright © 2017 Marcus Evans. All rights reserved.

Previous Attendees Include: 

Alpha Bank
Bank of Ireland
Barclays
BBVA
Belfius
BNP Paribas
Credit Suisse
Danske Bank
Deloitte
HSBC
ING
Landsbankinn
Metro Bank
Morgan Stanley
Nedbank
Sydbank
Nordea
Rabobank
RBS
Santander
UBS
Unicredit

About the speaker:

Slava Obraztsov has been Global Head of the Model Validation Group at Nomura since 2007. His previous roles include Global Head of Model Validation at Bear Stearns, Senior Quantitative Model Risk Analyst at Commerzbank and Head of Risk Analytics at ANZ. He was awarded a PhD in Mathematics from Moscow State University and has held a number of academic positions at Russian and Australian universities.

The latest trends within model risk management 

An interview with Slava Obraztsov, Global Head of Model Validation at Nomura

