
Credit Risk

The dedicated space to converse with peers and our experts on all aspects of credit risk, from the technicalities of modelling using internal approaches, through credit decisioning and underwriting, credit risk appetite, governance and monitoring, and provisioning, to regulatory requirements.

26 Topics 73 Posts
Contact Oliver Wyman

Reach out to our team if you want to collaborate on recent developments in Risk.

Subcategories


  • Our dedicated space to discuss practicalities and technicalities of credit risk modelling using internal modelling approaches

    4 Topics
    13 Posts

    While I recognize a lot of these points, I do think that we should not let the tail wag the dog.

    If the non-financial factors add predictive power, I don't think there is any reason, on a first-principles basis, to categorically exclude them. But of course, I do appreciate that these kinds of factors can be subjective and therefore of lower quality, so we should keep an eye on that and encourage clients to improve data quality.

    Also, many banks lump the treatment of these kinds of factors in with overrides, which is almost always where the supervisory feedback comes from. They are commonly used as a fudge factor, and that is poor practice. One can develop a disciplined, high-quality, data-based use of this type of information to avoid that pitfall.

  • Seasoning effects in IRB model development

    0 Votes
    2 Posts
    16 Views

    Hi there,

    Based on previous experience, for PD this is often not relevant: PDs are 12-month and seasoning tends to be captured by the scoring model itself. A qualitative explanation of each scoring model and of the characteristics it considers that relate to seasoning may be enough, especially if complemented with quantitative analyses of the seasoning effect.

    For a more quantitative approach, I suggest testing time since origination and time until maturity as potential risk drivers, using the general risk driver assessment framework during PD calibration (a minimal sketch follows just after this reply). In the past I have observed these not to be significant, but again, this is anecdotal evidence.

    On LGD it may be relevant. However, it should be understood that seasoning correlates with other significant risk drivers, particularly LTV and the outstanding exposure amount. Here, a deeper analysis of these parameters' significance should help "paint the broader picture".

    Regards
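
As a rough illustration of the quantitative check suggested above, here is a minimal sketch (one possible reading of a risk driver assessment, not a prescribed framework) of testing seasoning variables alongside an existing score. It assumes a hypothetical loan-level sample with columns default_12m, score, months_on_book and months_to_maturity, and the file name is a placeholder.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical loan-level calibration sample (column names assumed):
#   default_12m        - 12-month default flag (0/1)
#   score              - existing rating-model score (e.g. log-odds)
#   months_on_book     - time since origination
#   months_to_maturity - time until maturity
df = pd.read_csv("calibration_sample.csv")  # placeholder data source

candidates = ["months_on_book", "months_to_maturity"]

# Baseline: default flag explained by the existing score alone
base_fit = sm.Logit(df["default_12m"], sm.add_constant(df[["score"]])).fit(disp=0)

# Add each seasoning variable on top of the score and inspect its marginal effect
for var in candidates:
    fit = sm.Logit(df["default_12m"], sm.add_constant(df[["score", var]])).fit(disp=0)
    print(f"{var}: coef={fit.params[var]:+.4f}, "
          f"p-value={fit.pvalues[var]:.3f}, "
          f"AIC vs. baseline={fit.aic - base_fit.aic:+.1f}")
```

Insignificant coefficients or a negligible AIC improvement over the score-only baseline would support the anecdotal observation above that seasoning adds little once the scoring model is in place.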

  • PD Calibration - Applying Bayes theorem

    0 Votes
    2 Posts
    16 Views

    A couple of thoughts on this subject, from one of our experts:

    The discrepancy is caused by the adjustment implicitly assuming that a Bank would have had more defaults and lower scores (and so a worse average score), while applying the theorem to a population which still has the same set of defaulted cases. This means the average scores are not worse, and hence your predicted PD will be lower.

    There are at least two approaches to deal with this effect:

    1. Adjust the constant term in the logistic model until the portfolio-average PD hits the 2% target (see the sketch at the end of this thread).
    2. Run a "goal seek" analysis so that the average PD, after mapping scores to the Bank's grades and applying the appropriate post-rating adjustments, reaches 2%.

    Especially for European banks, IRB models are actually required to be quite conservative unless banks have "perfect" data, so the long-run average can become a moot point to a certain extent.

    On the topic of perfect data: if the Bank has enough data and the PD model is really powerful, it should find that there is no straight-line relationship between the PD from the logistic model and the observed default rate. This is caused by the fact that, whilst the errors are broadly normally distributed in log-odds space, when the distribution is converted to PD/default-rate space the expectation will be closer to the mean than the original prediction (a quick numerical illustration is included below).
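
As a rough sketch of the first approach above, the snippet below solves for a constant shift to the model's log-odds so that the portfolio-average PD lands on the 2% target used in the example. The log_odds array is a synthetic stand-in; in practice it would be the fitted logistic model's output on the calibration sample.

```python
import numpy as np
from scipy.optimize import brentq

target_pd = 0.02

# Illustrative stand-in for per-account model log-odds; in practice these
# come from the fitted logistic model on the calibration sample.
rng = np.random.default_rng(0)
log_odds = rng.normal(loc=-4.5, scale=1.2, size=10_000)

def gap(shift):
    """Portfolio-average PD after shifting the intercept, minus the target."""
    pd_shifted = 1.0 / (1.0 + np.exp(-(log_odds + shift)))
    return pd_shifted.mean() - target_pd

# "Goal seek": find the constant shift that closes the gap to the 2% target
shift = brentq(gap, -10.0, 10.0)
calibrated_pd = 1.0 / (1.0 + np.exp(-(log_odds + shift)))
print(f"intercept shift = {shift:+.3f}, average PD = {calibrated_pd.mean():.4%}")
```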
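And a quick numerical illustration of the last point: if errors are roughly normal in log-odds space, converting to PD space and averaging pulls the expected default rate away from the point prediction and towards the centre of the distribution. The error spread (sigma = 1.0) and the example predicted PDs are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0  # assumed spread of the errors in log-odds space

for predicted_pd in [0.005, 0.02, 0.10]:
    mu = np.log(predicted_pd / (1 - predicted_pd))        # point prediction in log-odds
    noisy = mu + rng.normal(0.0, sigma, size=1_000_000)   # normal errors in log-odds space
    realised = 1.0 / (1.0 + np.exp(-noisy))               # convert back to PD space
    print(f"predicted PD {predicted_pd:.1%} -> expected observed rate {realised.mean():.2%}")
```

For low-PD grades the expected observed rate comes out above the point prediction, which is why a perfectly calibrated logistic model need not line up one-to-one with observed default rates.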
