<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Obligations of board toward AI risk]]></title><description><![CDATA[<p dir="auto">Hi RiskBowl</p>
<p dir="auto">A question I’m getting a lot at the moment is about obligations and accountabilities towards AI at Executive and Board level. I’ve given a lot of generic advice about augmenting boards with skills, standardising KPIs to risk levels, having a clear sense of direction for strategy, and avoiding conflicts of interest, but I’m keen to hear thoughts or experiences please</p>
<p dir="auto">e.g.,</p>
<ul>
<li>Given that the C-Suite are not expected to be specialists in AI, how do they remain accountable for its oversight?</li>
<li>How could a regulator test this?</li>
<li>What is the best approach to monitoring AI risks, given there’s no real progress towards straightforward KPIs in most cases?</li>
<li>How much knowledge should executives have, both for managing AI risk and overcoming resistance to innovation?</li>
</ul>
]]></description><link>https://riskbowl.owex.oliverwyman.com/topic/58/obligations-of-board-toward-ai-risk</link><generator>RSS for Node</generator><lastBuildDate>Fri, 06 Mar 2026 00:03:33 GMT</lastBuildDate><atom:link href="https://riskbowl.owex.oliverwyman.com/topic/58.rss" rel="self" type="application/rss+xml"/><pubDate>Wed, 11 Dec 2024 10:14:32 GMT</pubDate><ttl>60</ttl><item><title><![CDATA[Reply to Obligations of board toward AI risk on Wed, 11 Dec 2024 10:16:44 GMT]]></title><description><![CDATA[<p dir="auto">Very good questions. I’ve come across this as well on operational resilience and<br />
cyber, where the challenges are similar</p>
<p dir="auto">Some thoughts on this (also with the ex-regulator hat on):</p>
<ul>
<li>Management bodies should acknowledge the challenge and be thoughtful about<br />
how to address it, e.g. through training, reporting, succession planning,<br />
etc.</li>
<li>We recently heard from a regulator that they were worried that sometimes<br />
these topics are ‘outsourced’ to one person on the exec/Board who<br />
understands them, whereas they are looking for broader skills and knowledge in<br />
the group. Again I think this is important to acknowledge, including the fact<br />
that building those muscles takes time</li>
<li>In terms of ‘evidencing’ appropriate oversight and challenge by the Board,<br />
when supervisors look at meeting minutes they would expect to see critical<br />
questions being asked and a level of discussion (rather than the Board just<br />
‘noting’ things)</li>
<li>The quality of the materials and reports presented to the Board is very<br />
important: both the data itself, and someone bringing out the ‘so what’, in<br />
particular where there are areas of judgement and uncertainty, and where<br />
there are trade-offs</li>
</ul>
]]></description><link>https://riskbowl.owex.oliverwyman.com/post/115</link><guid isPermaLink="true">https://riskbowl.owex.oliverwyman.com/post/115</guid><dc:creator><![CDATA[User 63]]></dc:creator><pubDate>Wed, 11 Dec 2024 10:16:44 GMT</pubDate></item><item><title><![CDATA[Reply to Obligations of board toward AI risk on Wed, 11 Dec 2024 10:16:11 GMT]]></title><description><![CDATA[<p dir="auto">I’d have thought Model Risk, IRB and IFRS 9 may be good templates in terms of expectations for senior management understanding of models</p>
<p dir="auto">Likewise, I wonder whether we feel things like BCBS 239 and GDPR are also good bases for expectations around understanding of underlying data sources and uses? Of course, execs and boards will need more specific training around the more complex AI models</p>
]]></description><link>https://riskbowl.owex.oliverwyman.com/post/114</link><guid isPermaLink="true">https://riskbowl.owex.oliverwyman.com/post/114</guid><dc:creator><![CDATA[User 63]]></dc:creator><pubDate>Wed, 11 Dec 2024 10:16:11 GMT</pubDate></item></channel></rss>