The Bermuda Monetary Authority released its discussion paper, The Responsible Use of Artificial Intelligence in Bermuda’s Financial Services Sector, last July (File photograph)
Artificial intelligence-related risks are largely addressed through existing regulatory frameworks, industry stakeholders told the Bermuda Monetary Authority in response to its discussion paper, The Responsible Use of Artificial Intelligence in Bermuda’s Financial Services Sector.
These frameworks included corporate governance, conduct, risk management, cyber-risk, operational resilience and third-party oversight requirements, the BMA said in a stakeholder letter this week.
“Stakeholders encouraged the BMA to avoid duplicative AI-specific structures and instead integrate AI considerations within established enterprise risk management frameworks,” the BMA said.
Respondents expressed broad support for the authority’s proposed outcomes-based and principles-led approach to AI governance.
The BMA said respondents generally agreed that AI could deliver meaningful benefits when deployed responsibly, including improvements in efficiency, risk management, compliance and market integrity.
“Many stakeholders also cautioned against overly prescriptive or technology-specific requirements, noting the pace of technological change and the risk of rigid rules becoming outdated quickly,” the organisation said.
The BMA confirmed its intention to maintain a fit-for-purpose, technology-neutral supervisory approach.
Some respondents suggested that proportionality should more explicitly recognise the expected benefits or competitive advantages of AI adoption.
Others emphasised that AI governance should extend well beyond traditional operational risk considerations, given the potentially transformative impact of AI-enabled business change.
The BMA acknowledged that boards and senior management consider strategic benefits when determining whether to pursue AI-enabled solutions and that AI may amplify or accelerate existing risk drivers across operational, conduct, data, cyber and strategic risk categories.
“However, from a supervisory perspective, responsible AI governance is anchored in risk and impact rather than commercial upside and does not necessarily require creating governance frameworks that sit outside of or above established enterprise risk management arrangements,” the BMA said.
Some stakeholders noted the growing use of AI within investment, trading, market surveillance and research functions and the potential implications for market integrity and financial stability.
The feedback highlighted risks associated with correlated or herding behaviours, accelerated market dynamics under stress, reliance on alternative or unstructured data sources and the potential for AI-enabled market manipulation or misleading disclosures regarding AI capabilities.
The BMA noted recent international supervisory commentary, issued after the publication of the discussion paper, that highlights the potential for unintended or emergent behaviours arising from interactions between independently deployed AI systems, particularly in capital markets contexts.
“These considerations will inform the BMA’s ongoing assessment of whether sector-specific guidance or supervisory clarification is warranted for higher-risk use cases, which may build on existing market integrity, conduct and any other relevant regulatory frameworks,” the BMA said.
Stakeholders strongly emphasised the importance of international regulatory alignment, particularly for firms operating under group-wide AI governance frameworks that must comply with multiple regulatory regimes.
Respondents also identified practical challenges related to skills, resourcing, independent validation and integration with existing systems, particularly for smaller or less complex institutions.
The BMA recognised the importance of phased implementation, supervisory engagement and continued dialogue to support the proportionate and effective adoption of AI governance practices.
The authority said it planned to continue engaging with stakeholders and monitoring international developments to inform its analysis of existing regulatory requirements, current market practices and the principles outlined in the discussion paper, to ensure the frameworks remain fit for purpose.
“The authority’s objective is to support responsible innovation while ensuring that any enhancements to supervisory expectations are practical, risk-based and avoid unnecessary duplication or unintended regulatory burden,” the BMA said.