Quantifying MoC A for a change in the definition of default which has no impact
-
We have made a change to our definition of default (required to close regulatory findings). The change is purely process-related and concerns a trigger for a default test, i.e. a trigger specification for a soft UTP criterion. Given data constraints, we could only argue qualitatively, plus show on a data sample, that the change has no effect on default setting. Since we have no full historic analysis, the EGIM (ECB Guide to Internal Models) seems to require that we quantify a MoC A for the remaining uncertainty that we might have missed cases in which the change could have an effect.
Our questions:
- Would you quantify a MoC A for the affected PD models in such a case (only sample data, but qualitative argumentation for "no effect")?
- What would be a suitable approach to quantify the MoC A in this case? We are debating a bootstrapping approach, but lack "trigger with default" cases to draw from.
-
Thanks for the question! Here's an initial view from one of our credit risk experts:
Given that the change originated in a process change for the soft UTP trigger, it could be considered a MoC B instead, although it sounds like a MoC A is what is expected here. They state, based on their analysis, that there is no change in default levels, but a change in the timing of the default event could still occur.
A point that has been raised in other banks' audits is that even if the number of defaults is the same, there can be a timing impact, e.g. earlier or later recognition of default under the new process.
Such a shift could have an impact on the LGD model (the default trigger can affect the in-default model, the discounting period, or the maximum workout period).
For the PD model, a shift of defaults from one year to another can introduce volatility into PD model performance (both ranking and calibration). Approaches I have seen hold a MoC for model performance uncertainty, given the dependency on the chosen cohort moment.
Besides the analysis of default levels, you should therefore review whether timing shifts occur and reflect that model uncertainty in the MoC quantification.
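To make the timing point tangible, here is a minimal sketch in Python (all dates, counts, and the 90-day shift are hypothetical) of how earlier recognition under a new trigger moves the same defaults across annual cohorts, which is exactly what would feed volatility into one-year PD calibration:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Hypothetical default events under the old process; the new soft UTP
# trigger recognises the same defaults, only up to 90 days earlier.
n_defaults = 200
old_dates = pd.to_datetime("2022-01-01") + pd.to_timedelta(
    rng.integers(0, 730, n_defaults), unit="D")
new_dates = old_dates - pd.to_timedelta(rng.integers(0, 91, n_defaults), unit="D")

# Total defaults are identical, but annual cohort counts shift, which
# moves the observed one-year default rates used for PD calibration.
cohorts = pd.DataFrame({
    "old process": pd.Series(old_dates.year).value_counts().sort_index(),
    "new process": pd.Series(new_dates.year).value_counts().sort_index(),
}).fillna(0).astype(int)
print(cohorts)
```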
-
Couple of additional thoughts:
-
(1) If the implementation of the trigger in the application perimeter results in zero new defaults, they should be well prepared to defend that. If I were the supervisor, I would be sceptical of such an outcome and would view the remainder of the MoC construct negatively.
-
(2) Provided they have addressed (1), using a bootstrap makes sense. We have used it with several clients and portfolios and it works fine, although it needs to be properly framed and justified.
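For illustration, a minimal sketch of such a bootstrap on a purely hypothetical obligor-year sample: resample and measure how often the new soft UTP trigger fires without any pre-existing trigger. Since the bank observes zero such cases, every resample returns zero, so the sketch falls back on the "rule of three" upper bound for a zero event count as one conservative convention (our assumption, not a prescribed method):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sample: for each obligor-year, flags for the new soft UTP
# trigger and for any pre-existing default trigger (90DPD, bankruptcy, ...).
# Here the new trigger never fires alone, mirroring the "no effect" claim.
n = 5_000
new_trigger = rng.random(n) < 0.02                        # new soft UTP fires
existing_trigger = new_trigger | (rng.random(n) < 0.03)   # fully overlapping

incremental = new_trigger & ~existing_trigger  # cases the change would add

B = 10_000
boot_rates = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)  # resample obligor-years with replacement
    boot_rates[b] = incremental[idx].mean()

# With zero observed incremental cases every bootstrap draw is zero, so the
# percentile collapses; a conservative fallback is the "rule of three"
# upper bound for a zero count: p_upper ~= 3 / n at ~95% confidence.
p95 = np.quantile(boot_rates, 0.95)
p_upper = max(p95, 3.0 / n)
print(f"Bootstrap 95th pct: {p95:.5f}, floored upper bound: {p_upper:.5f}")
```

The fallback floor is precisely the framing and justification step mentioned above: without it, a zero-event bootstrap would mechanically produce a zero MoC.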
-
(3) A MoC with zero net impact is still possible, but:
- The Appropriate Adjustment can be negative (i.e. it can reduce the impact)
- The MoC in strict terms must always be positive, because uncertainty is never zero
- The aggregation AA + MoC can be below zero, and in net terms it can then be floored at zero (see the sketch after this list)
- They'd be better off assessing this for both PD and LGD, as the impacts typically have different directions (except in the case of the anticipation of defaults that Alex flags, which is often a big negative for both PD and LGD)
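As a trivial illustration of that aggregation logic (the function name and the figures are ours, not regulatory terminology):

```python
def net_adjustment(aa: float, moc: float) -> float:
    """Aggregate the Appropriate Adjustment (may be negative) and the
    Margin of Conservatism (strictly positive), flooring the net at zero."""
    assert moc > 0, "MoC must be strictly positive: uncertainty is never zero"
    return max(aa + moc, 0.0)

# Example: the AA lowers the estimate by 30 bps while the MoC adds 10 bps;
# the aggregate of -20 bps is then floored at zero in net terms.
print(net_adjustment(aa=-0.0030, moc=0.0010))  # -> 0.0
```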
-
@OP
It’s a little hard to say without knowing more about the specific issue, but I would have thought that, if you haven’t already, you may need to put together a qualitative argument around why you would not have seen a different outcome in different economic circumstances (and/or why historically it would not have thrown anything up; here I am a little more persuadable on the relevance of history). It sounds from the question like the challenge is more that you only have data for a recent period, at which point, sadly, bootstrapping would miss the possible cycle / different-circumstances effects.
-
@OP
In which case:
-
1. No change in default level – the new soft UTP trigger is not adding any defaults (it must then be fully correlated with one or more existing triggers, e.g. 90DPD, bankruptcy, etc.)
-
2. No timing difference in the default event – therefore no PD or LGD impact, even via discounting
-
3. The client has only a limited historic time series as proof
Points 1 and 2 indicate full correlation with the already existing triggers over the observed period (although the level of correlation at time (t) between each trigger and the new soft UTP trigger can differ).
Based on the other responses to this question, bootstrapping would be the correct approach, but the bank states that over recent periods the outcome should be nil (or close to it).
Therefore, an idea could be to simulate the uncertainty of the correlation in different macroeconomic environments:
- Correlation analysis of macroeconomic factors vs the existing triggers over time (t)
- Given that the new soft UTP trigger is correlated with multiple existing triggers at time (t) (over the period for which the bank has data), the variation in correlation found for each underlying existing trigger can be used to proxy the new soft UTP trigger back in time (see the sketch below)
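A rough sketch of that simulation idea, with every parameter hypothetical (the proxy ratios, coverage levels, and simulated trigger-rate history would all need to come from the bank's own data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quarterly history of two existing default-trigger rates
# (e.g. 90DPD and bankruptcy) spanning a full macro cycle.
T = 40
r_90dpd = np.clip(0.030 + 0.010 * rng.standard_normal(T), 0, None)
r_bankr = np.clip(0.010 + 0.004 * rng.standard_normal(T), 0, None)
hist = np.column_stack([r_90dpd, r_bankr])

# Assumed figures from the short recent window: soft UTP hits per hit of
# each existing trigger (proxy ratios) and the share of soft UTP hits that
# are already in default via an existing trigger (coverage), both uncertain.
ratio_mean, ratio_sd = np.array([0.50, 1.20]), np.array([0.10, 0.30])
cover_mean, cover_sd = 0.995, 0.004

B = 10_000
incremental = np.empty(B)
for b in range(B):
    ratio = np.clip(rng.normal(ratio_mean, ratio_sd), 0, None)
    cover = np.clip(rng.normal(cover_mean, cover_sd), 0, 1)
    # Back-cast the soft UTP rate over the cycle from each existing trigger,
    # average the proxies, and keep only the share not already covered.
    soft_utp_rate = (hist * ratio).mean(axis=1)
    incremental[b] = (soft_utp_rate * (1 - cover)).mean()

# A high percentile of the simulated incremental default rate could then
# inform the MoC A for the part of the cycle the sample does not cover.
print(f"95th percentile: {np.quantile(incremental, 0.95):.4%}")
```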
Plus, the above should still be supplemented with a qualitative statement.
-