PD and LGD Masterscales: MRM Classification
-
We are developing unified PD and LGD masterscales for an entity that is in the process of merging multiple legacy banks. Historically, one of the legacy banks considered its masterscales models for MRM purposes while the other two did not. Our initial hypothesis is that masterscales should not be considered models, as we see them as translation tools for the actual models rather than models themselves.
Does anyone have recent guidance on whether banks should view PD and LGD masterscales as models or not? Any supporting evidence/materials as well as any pros/cons from any perspective would be greatly appreciated.
-
I have to admit, I don’t particularly remember seeing masterscales in EU MRM systems, but that might be more a reflection of the fact that European banks (other than those with large US presences/subject to US MRM rules) have to date generally not included as many things in MRM scope.
At risk of stating the obvious, for me it slightly depends on the masterscale:
- Historically, banks have often defined masterscales somewhat “top-down” / expert-driven, based on:
  - Grades reported for large corporates by the big 3 rating agencies, or
  - The min/max PD and number of grades they want (whilst maintaining some sort of exponential relationship as you move between grades)

With these approaches, individual model outputs would be mapped to the rating scale based on the grade boundaries.
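For illustration, the “min/max PD plus exponential spacing” construction and the boundary-based mapping described above can be sketched as follows — a minimal sketch, with made-up parameter values and hypothetical function names, not anyone’s production masterscale:

```python
import bisect
import math

def build_masterscale(pd_min, pd_max, n_grades):
    """Geometrically spaced grade midpoint PDs between pd_min and pd_max,
    i.e. a constant ratio r between adjacent grades (exponential spacing)."""
    r = (pd_max / pd_min) ** (1.0 / (n_grades - 1))
    return [pd_min * r ** i for i in range(n_grades)]

def grade_boundaries(midpoints):
    """Boundary between adjacent grades: geometric mean of their midpoints."""
    return [math.sqrt(a * b) for a, b in zip(midpoints, midpoints[1:])]

def map_to_grade(pd, boundaries):
    """Map an individual model's PD output to a grade index (0 = best)."""
    return bisect.bisect_right(boundaries, pd)

# Hypothetical scale: 10 grades spanning 3bp to 20%.
midpoints = build_masterscale(0.0003, 0.20, 10)
bounds = grade_boundaries(midpoints)
grade = map_to_grade(0.015, bounds)  # a 1.5% PD lands in grade 5 on this scale
```

The mapping step is pure lookup logic, which is why it feels like “calculation logic” rather than a model: all the estimation lives in the underlying PD model that produced the 1.5%.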
For me, this is therefore not really a “model” (more of a calculation logic, which in fairness might need to be reviewed at some stage; I seem to recollect some US banks at one stage were looking at lighter-touch capture of “calculator”-type things that weren’t models but were still analytical and subject to choices)
- However, due to EU regulatory expectations that one should be able to demonstrate the homogeneity and heterogeneity of grades, this approach seems to be used less and less for model outputs
- Instead, lots of analyses are being done to try to define grade boundaries in a statistically significant manner, e.g. using a decision tree or some other form of optimisation algorithm
- This would feel more like a “model” to me, but if defined as part of the individual PD/LGD model it would actually form part of that PD/LGD model rather than being a stand-alone model
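As a sketch of that statistical-boundary approach: a one-dimensional decision tree over (PD score, default flag) pairs picks cut points that maximise the separation in default rates between adjacent grades. Below is a minimal greedy version using binomial log-likelihood gain as the split criterion — the function names, criterion choice, and synthetic data are all my own for illustration, not from any specific bank’s method:

```python
import math

def _ll(d, n):
    # Binomial log-likelihood at the MLE default rate p = d / n.
    if n == 0 or d == 0 or d == n:
        return 0.0
    p = d / n
    return d * math.log(p) + (n - d) * math.log(1 - p)

def _best_split(pairs):
    # pairs: (pd_score, default_flag) tuples sorted by score.
    # Returns the cut index with the largest log-likelihood gain.
    n = len(pairs)
    d_total = sum(flag for _, flag in pairs)
    base = _ll(d_total, n)
    best_gain, best_idx = 0.0, None
    d_left = 0
    for i in range(1, n):
        d_left += pairs[i - 1][1]
        if pairs[i][0] == pairs[i - 1][0]:
            continue  # cannot cut between equal scores
        gain = _ll(d_left, i) + _ll(d_total - d_left, n - i) - base
        if gain > best_gain:
            best_gain, best_idx = gain, i
    return best_idx, best_gain

def tree_boundaries(pd_scores, default_flags, n_grades):
    """Greedily split the portfolio into up to n_grades buckets,
    stopping early if no split still improves heterogeneity."""
    segments = [sorted(zip(pd_scores, default_flags))]
    cuts = []
    while len(segments) < n_grades:
        (idx, gain), seg = max(
            ((_best_split(s), s) for s in segments), key=lambda t: t[0][1]
        )
        if idx is None:
            break  # remaining segments are already homogeneous
        cuts.append((seg[idx - 1][0] + seg[idx][0]) / 2)
        segments.remove(seg)
        segments.extend([seg[:idx], seg[idx:]])
    return sorted(cuts)

# Synthetic portfolio: defaults only occur above a 5% score.
scores = [i / 1000 for i in range(1, 101)]
flags = [1 if s > 0.05 else 0 for s in scores]
cuts = tree_boundaries(scores, flags, 4)  # finds one clean cut near 0.05
```

The early stop is the point of the exercise: the algorithm refuses to create more grades than the data can statistically support, which is exactly the homogeneity/heterogeneity evidence supervisors ask for.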
Of course this is then not strictly speaking a masterscale, which I would normally think of as spanning the whole bank, not just an individual model
The other place where you may need to be careful is the mapping of internal “masterscales” to rating agency ratings and the assumptions made about the associated default rate for each grade. This can be complex (I’m thinking of all the smoothing/extrapolation work we used to do to get monotonic default rates for AAA-A entities). Furthermore, if this mapping covers outputs from/inputs to multiple underlying models, it might be best thought of as a separate model
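On the smoothing point: one standard way to force per-grade observed default rates to be monotonic is isotonic regression via the pool-adjacent-violators algorithm (PAVA), which pools any grade pair that violates the ordering. A minimal sketch — the example rates are invented, and real implementations would weight by obligor counts:

```python
def pava(rates, weights=None):
    """Pool Adjacent Violators: the closest non-decreasing sequence to the
    observed per-grade default rates (weighted least squares)."""
    if weights is None:
        weights = [1.0] * len(rates)
    blocks = []  # each block: [pooled_rate, total_weight, grades_pooled]
    for r, w in zip(rates, weights):
        blocks.append([r, w, 1])
        # pool backwards while the monotonicity constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            r2, w2, c2 = blocks.pop()
            r1, w1, c1 = blocks.pop()
            blocks.append([(r1 * w1 + r2 * w2) / (w1 + w2), w1 + w2, c1 + c2])
    out = []
    for r, _, c in blocks:
        out.extend([r] * c)
    return out

# Invented high-grade default rates with a non-monotonic kink at grade 2:
observed = [0.0002, 0.0001, 0.0003, 0.0008]
smoothed = pava(observed)  # first two grades get pooled to a common rate
```

That the two best grades end up with an identical pooled default rate is itself a choice with consequences (e.g. for pricing and capital), which is part of why this mapping layer can deserve model treatment in its own right.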
-
I should also have said that I rarely see LGD masterscales (I cannot think of any immediately, tbh) – for individual portfolios, LGD outputs tend to be continuous (for secured exposures where LTV is a big driver) or some sort of bucketed/segmented average, but are rarely mapped to a single scale. The ECB in its annual validation templates allows for this by suggesting people report results by 5% LGD bands
-
We just completed a survey of 15 GSIBs, and one of the questions was whether LGD masterscales are considered a model:
- 6 consider it a model and validate it
- 2 consider it a non-model but still validate it
- 4 do not validate it
- And then there are 3 Asian GSIBs, where 2 do not validate and 1 validates them as a non-model, but I would not consider the Asian GSIBs a good benchmark for you in the Americas

So in summary, 8 out of the 12 non-Asian GSIBs validate it
-
Having seen banks that treat masterscales as models, I find it usually devolves into a big waste of time. But there are two distinct components: 1) how granular the scale is (how many ratings and how they are defined) -> NOT a model; 2) how the PDs are estimated per rating -> could be considered a model, and whether it is one depends on the rating system design.
If you’re just defining a scale and all the PD estimation and calibration resides in the underlying rating models, then the masterscale is just a ruler and is not a model. We all know there is no right or wrong when it comes to defining a ruler, although those of us using the Imperial system must admit that the metric system is clearly better.
But there may be complexity when using external ratings as part of an underlying rating model’s calibration, e.g. for corporate and institutional portfolios: when you map to those ratings and then calibrate a long-run PD for each rating, you are in fact modeling at that point. So is that part of the rating model, or is that PD calibration part of a masterscale document?
-
I think most of it has been said already, but just for the record, I have also seen a PD masterscale being validated by MRM (US).
I’d say, as mentioned, the guiding principle should be how you derive the scale. If it involves mathematical and statistical methods, assumptions, etc., then it most likely falls under the “model” definition and should be validated. I would tie it to how you define a model internally and what the MRM policy says about it