Obligations of board toward AI risk

Risk Data and Analytics
ai risk, board accountability
3 Posts · 1 Poster · 116 Views
  • User 63 (#1)

    Hi RiskBowl

    A question I’m getting a lot at the moment is about obligations and accountabilities towards AI at the Executive and Board level. I’ve given a lot of generic advice about augmenting boards with skills, standardising KPIs to risk levels, having a clear sense of direction for strategy, and avoiding conflicts of interest, but I’m keen to hear thoughts or experiences, please.

    e.g.,

    • Given that the C-suite are not expected to be specialists in AI, how do they remain accountable for its oversight?
    • How could a regulator test this?
    • What is the best approach to monitoring AI risks, given there’s no real progress towards straightforward KPIs in most cases?
    • How much knowledge should executives have, both for managing AI risk and overcoming resistance to innovation?
    • User 63 (#2)

      I’d have thought Model Risk, IRB and IFRS 9 might be good templates in terms of expectations for senior management understanding of models.

      Likewise, I wonder whether we feel things like BCBS 239 and GDPR are also good bases for expectations around understanding of underlying data sources and uses? Of course, execs and boards will need more specific training on the more complex AI models.

      • User 63 (#3)

        Very good questions. I’ve come across this as well on operational resilience and cyber, where the challenges are similar.

        Some thoughts on this (also with the ex-regulator hat on):

        • Management bodies should acknowledge the challenge and be thoughtful
          about how to address it, e.g. through training, reporting, succession
          planning, etc.
        • We recently heard from a regulator that they were worried these topics
          are sometimes ‘outsourced’ to one person on the exec/Board who
          understands them, whereas they are looking for broader skills and
          knowledge across the group. Again, I think it is important to
          acknowledge this, including the fact that building those muscles
          takes time.
        • In terms of ‘evidencing’ appropriate oversight and challenge by the
          Board, when supervisors look at meeting minutes they would expect to
          see critical questions being asked and a level of discussion (rather
          than the Board just ‘noting’ things).
        • The quality of the materials and reports presented to the Board is
          very important: not just the data, but also someone bringing out the
          ‘so what’, in particular where there are areas of judgement and
          uncertainty, and where there are trade-offs.
