© Copyright 2020 marcus evans. All rights reserved.


2nd Annual

Model Risk Management in Banking 

8 - 10 February 2021 / Live Streaming and On-Demand

As the financial markets experience unprecedented times, banks have spoken of the need to monitor models more closely than ever. How could a model inventory help with this, if it is set up in the most effective way?

An important and obvious first step in monitoring models more closely is, of course, to ensure that a firm’s model inventory is complete and accurate. The unfortunate reality is that at many firms anywhere from 30-50% of the models being used for firm business do not appear as formal entries in model inventories. These are mostly what are called EUC, or End User Computing, models. I sometimes refer to these as models that are ‘hiding in the shadows’.

In a 2019 paper entitled “The Top Fourteen Challenges for Today’s Model Risk Managers”, I included an appendix listing ten classic evasions that model owners have used to avoid the validation process, including this winner: “It’s not a model because it’s just a spreadsheet and spreadsheets are, by definition, not models!” That effectively encapsulates a major reason why so many EUC models remain ‘hidden in the shadows’ at many firms today. Owners of quantitative spreadsheets very often do not consider them to be model candidates and therefore do not submit them to MRM for validation.

To ensure its inventory is complete and accurate, a firm should undertake a comprehensive model discovery process. There are automated IT tools for performing file searches for undeclared models, but few firms actually employ them, relying instead on the more traditional practice of voluntary declaration by model owners. Because of this, and for the other reasons cited above, many models will continue to remain hidden in the shadows and beyond MRM’s purview.
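The kind of automated discovery pass described above can be sketched very simply. This is a minimal illustration, not any specific vendor tool: it walks a shared drive and flags spreadsheet-type files as EUC model candidates for follow-up triage against the formal inventory. All names here are hypothetical.

```python
import os

# File types commonly associated with EUC (spreadsheet-based) models --
# an illustrative list, not an exhaustive one.
CANDIDATE_EXTENSIONS = {".xls", ".xlsx", ".xlsm", ".xlsb"}

def find_model_candidates(root):
    """Walk `root` recursively and return paths of files whose extension
    suggests a spreadsheet-based model candidate."""
    candidates = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            _, ext = os.path.splitext(name)
            if ext.lower() in CANDIDATE_EXTENSIONS:
                candidates.append(os.path.join(dirpath, name))
    return candidates

# Each hit would then be compared against the formal model inventory;
# anything not already registered is a candidate 'hiding in the shadows'.
```

In practice such a scan would be only the first filter; the flagged files still need human triage to separate genuine models from ordinary workbooks.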

At the model risk event you will be speaking on creating a smarter MRM by building smarter models. What precisely do you mean by ‘a smarter model’ and what are the challenges of achieving this?

First, you must realize that I do not use the phrase ‘smart model’ to refer to the current trend of building models incorporating artificial intelligence or machine learning (AI/ML), although that is also an apropos description of such models. I apply the phrase in a more abstract sense to describe models, with or without ML, that have a rudimentary form of self-awareness. This is an attribute possessed by virtually all of the ‘smart’ devices that we use every day.

No matter what their purpose, all smart devices have some form of embedded identity token that uniquely identifies the device, coupled with a means of communicating the device’s existence, location and activity to the outside world. Think of Tesla and Uber vehicles, smartphones, any device connected to the internet, OnStar, etc.

A good example of this is the transponder device found in all commercial aircraft, which continuously broadcasts the aircraft’s identity and indicative flight data to air traffic controllers. Another example would be Tesla vehicles, which are equipped with two-way transponder devices that communicate via satellite, supporting wireless data uploading and software update downloading.

In the sense that I use the phrase, ‘smarter’ financial models can be created similarly by combining embedded identity tokens with active intelligent agents that can broadcast model usage data via a firm’s intranet or deposit it into a local database. Implementation does require embedding a few simple lines into the source code of each model, namely, (1) a declaration of a unique identity token for the model and (2) a call to a transponder type of function that can broadcast indicative model usage data (the how, when and where) each time the model is executed. These two enhancements would have no effect on model outcomes or performance but would open the door to new ways of automating much of the manual effort performed by today’s model risk managers.
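The two enhancements can be illustrated in a few lines. This is a hedged sketch of the idea, not the implementation from the paper: the model ID, the transponder function and the toy calculation are all hypothetical, and a real deployment would send the usage record to an intranet endpoint or database rather than just return it.

```python
import json
import uuid
import getpass
import socket
from datetime import datetime, timezone

# (1) A unique identity token for this model -- in practice assigned once,
# at registration time, and hard-coded into the model source.
MODEL_ID = "VAR-EQ-001"

def transponder(model_id, run_parameters=None):
    """(2) Broadcast indicative usage data (the how, when and where) for one
    execution event. Here we simply build and return the record; a real
    deployment might POST it to an intranet service or a local database."""
    return {
        "model_id": model_id,
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": getpass.getuser(),
        "host": socket.gethostname(),
        "parameters": run_parameters or {},
    }

def run_model(inputs):
    # The transponder call precedes the calculation and has no effect on
    # the model's outcome or performance.
    usage_record = transponder(MODEL_ID, {"n_inputs": len(inputs)})
    result = sum(inputs) / len(inputs)  # stand-in for the real calculation
    return result, usage_record

result, record = run_model([0.02, 0.03, 0.01])
print(json.dumps(record, indent=2))
```

The point of the sketch is how little retrofitting each model requires: one constant and one function call per execution path.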

I have described this proposal for creating what I like to call “the next generation of smart models” in much greater detail in a recent 2020 paper: “A Smarter MRM Will Follow from Building Smarter Models” (available upon request to the author).

How far are banks from achieving a smarter MRM and what steps should they be considering in their journey to reach this goal?

Very few banks have taken even tentative steps in the direction described in the 2020 paper I wrote arguing for the creation of a new generation of ‘smart models’. This is partly because endowing models with embedded active intelligent agents, giving them a rudimentary level of self-awareness comparable to that of the myriad smart devices we use every day, requires a few modifications to model source code. That would mean retrofitting a firm’s entire model inventory, which is a resistance barrier at every firm, even though it could be accomplished incrementally, one model group at a time, during annual reviews.

On the other hand, vendor firms that operate in the MRM space often seek innovations that will distinguish them from their competition, so they tend to be more aggressive in embracing novel methods. The SAS Institute is one example. A prototype version of the ‘smart model’ concept has been implemented in their MRM platform and is undergoing proof-of-concept trials with several of their clients.

Although the payoff to firms making this investment would be a substantial reduction in the manual overhead required for many MRM functions, bureaucratic and managerial inertia at most firms tends to oppose changing the “way things have always been done”. It seems that most decision-makers who hold budgetary purse strings belong to the “I have to see it before I’ll pay for it” school of decision-making.

To make this point, consider the proposal from my 2020 paper for the creation of a dynamic network map of model and data inter-dependencies within a firm’s model ecosystem, built by passing identity tokens from upstream to downstream entities at each sequential execution event. Currently these relationships are identified and mapped by model owner attestation, a process of manually tracing each of the inputs into a model back to its upstream origins. This is not only laborious but also error-prone, as model owners often stop when they identify the first upstream model, rarely pursuing the possibility that there may be secondary or tertiary model dependencies further upstream. (This is partly because financial models and their input data streams are often developed in separate business silos, so there is no single line of ownership from upstream to downstream entities.)

If the manual attestation process is performed diligently, it can produce an accurate network map of model and data interdependencies, albeit a static one: a snapshot of interdependence taken at a point in time. Token passing is a technique commonly used in computer networks, email, the internet, etc., but one that does not seem to have gained traction in financial model ecosystems. Yet if implemented as described in my 2020 paper, it would eliminate the need for manual attestation by model owners by producing a dynamic map of data and model inter-dependencies, one that is updated each time a model is executed. If a new model is introduced into the ecosystem, it would appear in the network map after its first execution event. Likewise, if a model is retired, it would disappear from the network when the dynamic map is refreshed.
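The token-passing idea can be sketched as a small data structure. This is an illustrative toy, not the design from the paper: each model reports, at every execution event, the identity tokens it collected from its upstream inputs, and the map accumulates the edges. The class and token names are hypothetical.

```python
from collections import defaultdict

class DependencyMap:
    """Dynamic network map of model/data inter-dependencies, built from
    execution events rather than manual owner attestation."""

    def __init__(self):
        # upstream token -> set of downstream tokens observed so far
        self.edges = defaultdict(set)

    def record_execution(self, model_token, upstream_tokens):
        """Called once per execution event: the executing model reports the
        identity tokens it collected from its upstream inputs."""
        for up in upstream_tokens:
            self.edges[up].add(model_token)

    def downstream_of(self, token):
        """Transitively find every entity that depends on `token`, exposing
        the secondary and tertiary dependencies that manual tracing misses."""
        seen, stack = set(), [token]
        while stack:
            for nxt in self.edges[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

dmap = DependencyMap()
# Data feed "MKT-DATA" feeds model "CURVE-01", which feeds model "VAR-01":
# two execution events are enough to recover the full chain.
dmap.record_execution("CURVE-01", ["MKT-DATA"])
dmap.record_execution("VAR-01", ["CURVE-01"])
print(dmap.downstream_of("MKT-DATA"))  # direct and indirect dependents
```

Note how retiring a model follows for free: if the map is periodically rebuilt from recent execution events only, a model that no longer runs simply stops appearing.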

However, I think it is safe to say that today no financial firm is mapping data and model inter-dependencies using any form of token passing. One reason for resistance to this type of innovation is that implementation requires retooling models so they can collect identity tokens from upstream entities and pass them on to any downstream entities. The additional effort required adds to the natural resistance to change inherent in any bureaucracy that is influenced by the “I have to see it before I’ll pay for it” school of decision-making.

How can risk professionals drive the adoption of these smart MRM concepts across the financial industry?

Because this approach requires model developers to embed identity tokens and transponder (or tracking) functions in their model source code for the benefit of MRM, it would typically require buy-in from risk managers senior enough to have authority over both the first and second lines of defense (model owners and MRM, respectively) and with enough imagination to look far enough down the road to recognize the long-term value of making MRM smarter by making models smarter. Such farsighted managers are in short supply in my experience. But there are some here and there, and they could show the way by acting as thought leaders for improved model discipline and convincing their C-suite managers of the long-term value, in terms of reduced model risk and reduced cost, of a smarter MRM practice.

Another group of risk professionals who have a lengthy track record of driving innovation in model risk discipline at US banks are the federal regulators at the FRB, OCC, FDIC and SEC. The same accolade would apply to some of the European regulators, such as the PRA at the Bank of England. In the wake of the 2008 financial meltdown, which exposed weaknesses in the ways that banks estimate and manage risk, bank regulators have played a leading role in raising the bar for model risk management practices year after year and are primarily responsible for the substantial progress that has been made since 2008.

Regulators can continue to stimulate innovation by continuing to ask questions at MRM bank exams that managers find difficult to answer with confidence and accuracy - the types of questions that will motivate model risk managers to search for new and more efficient ways to answer them.

What, for you, are the benefits of attending a conference like this Model Risk Management meeting and what can attendees expect to learn from your session?

One of the most compelling reasons to attend Marcus Evans MRM conferences is to gain the opportunity to hear from and interact with thought leaders in the field from other firms and regulatory agencies that one wouldn’t normally interact with.

For example, I first became aware of the potential for applying AI and ML techniques to model validation around five or six years ago at these conferences when it was barely on anyone’s radar. Today the use of ML is a very hot topic and tends to dominate presentations and panel sessions at MRM conferences.

Just within the last year or so I have begun to hear from a few singular thought leaders about the potential use of blockchain techniques to manage and commoditize models in a very secure environment, another on-the-horizon leading edge idea whose time may be coming soon.

Giving presentations at these conferences is also one of the best ways to gain broader visibility in the MRM profession and to become known to the most advanced thought leaders from other firms and regulatory agencies.

For me, then, the two outstanding professional reasons for attending and participating in MRM conferences are to remain current in the techniques that leading practitioners are moving forward and to develop one’s own brand in the field.

Soft copies in PDF format of the two papers cited in this interview are available upon request to the author at jh7050@nyu.edu or jonhill@optonline.net.

An interview with:

Jon Hill, Professor of Financial Risk Engineering at New York University



This marcus evans conference will look to develop an industry standard for the assessment and management of model risk for traditional and next generation models in banks.

The last decade has been filled with regulation that has tightened and changed the underlying methodology for models, in turn indirectly impacting model risk. Outside the US there has been no guideline for model risk comparable to the US’s SR 11-7; however, there is now a stir in the market and an expectation that the ECB will release a guideline devoted exclusively to model risk, following on from TRIM 2.0. In addition, the scope of what the model risk function is expected to manage is growing to include the next generation of models.

Despite the current circumstances, we know your appetite for key business insights remains, so our Live+ digital platform enables you to fully participate in the event remotely. Of course it provides access to live online streams of all sessions, but much more than that, it ensures you are able to engage with speakers directly, allowing you to participate in Q&A, relevant breakout groups as well as event polling and other insights and resources delivered during the event. We realise interacting with other delegates is key to your event experience, so our innovative online solution allows you to set up online meetings with other virtual and physical attendees throughout the event, ensuring you still walk away with those key contacts that can make a tangible difference to you and your business. The platform will continue to host all event content on-demand for you to re-visit and continue to access for up to 6 months post event.

Your content. Your way.

About the conference

We would be delighted to provide you with more information on the conference agenda.  Please fill in your details below and we will be in touch.

Jon is an adjunct professor of Financial Risk Engineering at NYU and also serves as head of the New York Chapter of the Model Risk Managers International Association. He is the former Global Head of Model Risk Governance at Credit Suisse in New York City. In this role he had responsibility for the ongoing identification, measurement, risk rating, inventory and monitoring of corporate model risk across all business units, regions and legal entities, and for the validation of high- and medium-risk market and operational risk models. Jon was the founder and global head of Morgan Stanley’s global market and operational risk validation team for six and a half years. His team of seven Ph.D.- and Masters-level quants in New York and Budapest had responsibility for the second-line-of-defence validation of Morgan Stanley’s global market risk models, including Value at Risk (VaR), Stressed VaR, Incremental Risk Charge, Comprehensive Risk Measure and all firm-wide Operational Risk models. Prior to his tenure at Morgan Stanley, Jon was a member of the model validation group at Citigroup for six years, concentrating on equity, fixed income, foreign exchange, credit and market risk models. Before joining the Citigroup model validation team he worked for eight years on model development and general quantitative risk analytic methodologies as a member of the Quantitative Analysis Group at Salomon Smith Barney, which later merged with Citibank to form Citigroup.

To view the Conference Agenda, click HERE! 

For all enquiries regarding speaking, sponsoring and attending this conference contact:

Yiota Andreou
Email: Yiotaa@marcusevanscy.com
Telephone: +357 22849 404
Fax: +357 22 849 394