Singapore’s AI Model Risk Management Paper: Key Insights for Financial Institutions

  • Market Insight 19 December 2024
  • Asia Pacific

  • Technology risk

On 5 December 2024, the Monetary Authority of Singapore (“MAS”) released the Artificial Intelligence Model Risk Management Paper (“AI MRM Paper”). The paper sets out best practices observed among banks and offers guidance to financial institutions (“FIs”) on managing AI-related risks, particularly in light of the growing adoption of Generative AI (“Gen AI”).

Background: Risks of AI  

As FIs continue to leverage AI, certain risks may arise: 

  1. Regulatory Risks: Non-compliance with regulations could lead to fines and enforcement actions (for example, where an AI model underperforms in supporting anti-money laundering efforts).
  2. Operational Risks: Unexpected behaviour of AI could lead to errors in critical processes (for example, in the automation of financial operations). 
  3. Financial Risks: Inaccurate predictions may result in financial losses (for example, in fraud detection).
  4. Reputational Risks: Negative customer experiences or public backlash can harm an FI's reputation (for example, through biased or unethical AI decisions).  

MAS’ paper provides a framework for addressing these risks, setting out key takeaways that FIs can adopt to continue leveraging AI responsibly. 

Key Takeaways from the AI MRM Paper

1. AI Governance and Oversight

Governance remains the cornerstone of effective AI risk management. Key recommendations include: 

  • Updating existing policies to ensure that AI usage is fair, ethical, accountable and transparent (“FEAT”) (aligning with MAS’ FEAT Principles set out in 2018). 
  • Establishing cross-functional oversight forums to coordinate AI governance and oversee AI use across institutions.
  • Building internal expertise through training programs and establishment of AI Centres of Excellence to drive innovation, promote best practices and build AI capabilities. 
2. AI Risk Identification and Assessment 

Robust systems are needed to identify, inventory and assess AI risks. The paper emphasises: 

  • Maintaining comprehensive AI inventories to provide a clear view of usage across the organisation.
  • Conducting risk materiality assessments to evaluate AI models based on their impact, complexity and reliance on automation.
3. AI Development, Validation and Monitoring 

Effective AI lifecycle management includes: 

  • Development: focus on data quality, explainability, fairness and bias mitigation during AI model creation. 
  • Validation: conduct independent or peer validation of AI systems, particularly for high-risk applications. 
  • Monitoring: establish continuous monitoring mechanisms to detect data drift, biases or performance issues. 
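The paper does not prescribe a particular drift metric. As an illustrative sketch only, one widely used statistic for detecting data drift in model inputs or scores is the Population Stability Index (PSI); the function name and thresholds below are conventional examples, not part of the AI MRM Paper:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare the live score distribution ("actual") against the
    distribution seen at model development time ("expected")."""
    lo = min(expected.min(), actual.min())
    hi = max(expected.max(), actual.max())
    # Bin both samples with identical edges so proportions are comparable.
    e_counts, edges = np.histogram(expected, bins=bins, range=(lo, hi))
    a_counts, _ = np.histogram(actual, bins=edges)
    # Floor proportions to avoid log(0) for empty bins.
    e_pct = np.clip(e_counts / len(expected), 1e-6, None)
    a_pct = np.clip(a_counts / len(actual), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Common rule of thumb: PSI < 0.1 stable; 0.1-0.25 moderate drift; > 0.25 significant.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # scores at development time
drifted = rng.normal(0.5, 1.0, 10_000)    # live scores with a shifted mean
print(f"PSI (no drift):   {population_stability_index(baseline, baseline):.3f}")
print(f"PSI (mean shift): {population_stability_index(baseline, drifted):.3f}")
```

A statistic like this can be computed on a schedule, with breaches of an agreed threshold escalated through the governance forums described above.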
4. Addressing Gen AI Risks

Gen AI presents unique risks including unpredictability and data security concerns. To mitigate these risks, FIs should: 

  • Secure sensitive information through private cloud or on-premise deployments. 
  • Implement technical safeguards, such as input/output filters and secure data environments. 
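The paper describes input/output filters at a policy level only. As one hedged illustration of an input filter, sensitive identifiers can be redacted before a prompt leaves a secure environment; the patterns below are simplified examples and not a substitute for an FI's own data-classification rules:

```python
import re

# Illustrative patterns only; real deployments need more robust detection.
PATTERNS = {
    "NRIC": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),         # Singapore NRIC/FIN format
    "CARD": re.compile(r"\b(?:\d[ -]?){15}\d\b"),        # 16-digit payment card numbers
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), # email addresses
}

def redact(text: str) -> str:
    """Mask sensitive identifiers before the text is sent to a Gen AI model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Customer S1234567A (jane@example.com) disputes card 4111 1111 1111 1111."
print(redact(prompt))
```

A corresponding output filter would apply similar checks to model responses before they reach users or downstream systems.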
5. Addressing Third-Party AI Risks

Additional risks may arise from the use of third-party AI. To mitigate these risks, FIs should undertake: 

  • Updating of legal agreements: revise contracts with third-party AI providers (e.g. clauses on performance guarantees, data protection, audit rights and notification when AI is introduced into existing third-party solutions). 
  • Compensatory testing: conduct rigorous testing of third-party AI models to detect potential biases.
  • Contingency planning: maintain robust contingency plans (e.g. backup systems or manual processes) to address potential failures and unexpected behaviour of third-party AI. 
  • Awareness efforts: train staff on AI literacy and risk awareness. 

Implications 

The AI MRM Paper highlights the need for proactive measures to align AI usage with ethical and regulatory expectations. While the paper’s recommendations are guidance rather than binding rules, FIs adopting these practices will be better placed to manage operational, regulatory and reputational risks. 

How we can help

Clyde & Co is able to assist in updating or preparing legal agreements, fine-tuning clauses in documentation, and preparing or reviewing policies and SOPs relating to AI and Gen AI. Our team has extensive experience advising FIs (including banks, FinTech companies and e-payment providers) on business establishment, governance and policy development, contractual safeguards, risk mitigation and regulatory compliance in Asia-Pacific. 

If you’d like to know more about how AI, Gen AI and the AI MRM Paper might affect you or your customers and partners, please get in touch with any of the authors below.

