
SAMA AI Governance Requirements for Saudi Financial Services: What Banks and Fintechs Need to Know in 2026

Nora Al-Rashidi | March 4, 2026 | 8 min read

According to KPMG's 2025 Saudi Banking Sector Report, 73% of Saudi banks have deployed AI in at least one core function — credit scoring, AML monitoring, or fraud detection. Fewer than 30% have documented governance policies covering those same functions. That gap is no longer a planning problem. SAMA's 2024 Risk Management Framework update has made AI governance a compliance obligation for Saudi financial institutions, and the window for proactive implementation is closing.

This article covers what SAMA actually requires, where the highest-risk exposure sits, and what a realistic 90-day starting point looks like for banks and fintechs that haven't yet built their governance layer.

What SAMA Actually Requires

SAMA's regulatory posture on AI has shifted from guidance to obligation. The 2024 update to the Risk Management Framework — which applies to all licensed banks, insurance companies, and financial market infrastructure providers — introduced specific expectations for what SAMA calls "model risk management," which covers AI and algorithmic systems used in financial decision-making.

Three obligations carry the most audit exposure:

Model inventory and documentation: SAMA expects institutions to maintain a complete inventory of AI and algorithmic models in production, with documentation covering each model's purpose, data inputs, output type, and business-criticality classification. For most banks that have deployed AI incrementally across departments, building this inventory is itself a significant undertaking — and is typically the first thing examiners ask for.

Validation and independent review: High-criticality models — those driving credit decisions, AML scoring, or fraud classification — must be validated by a function independent of the development team before deployment and on a regular cadence thereafter. SAMA's framework does not specify the exact validation methodology, but expects documentation of accuracy testing, bias assessment, and sensitivity analysis at minimum.

Ongoing monitoring and drift detection: Deployed AI models must be monitored continuously for performance degradation, distributional shift in input data, and emerging bias patterns. SAMA expects institutions to have documented thresholds for what constitutes a material change, and clear escalation paths when those thresholds are crossed. A sketch of what the inventory and drift-threshold pieces can look like in code follows below.
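To make the first and third obligations concrete, here is a minimal sketch in Python. It assumes a simple in-house register: the ModelRecord fields mirror SAMA's documentation expectations, while the population stability index (PSI) function and the 0.25 alert threshold are illustrative industry conventions, not SAMA-prescribed values.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ModelRecord:
    """One entry in the model inventory examiners typically ask for first."""
    model_id: str
    purpose: str              # e.g. "SME credit scoring"
    data_inputs: list[str]    # documented data sources / features
    output_type: str          # e.g. "probability of default"
    criticality: str          # "Low" / "Medium" / "High" / "Critical"
    psi_alert_threshold: float = 0.25  # documented drift threshold (illustrative)

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between the validation-time score distribution and live scores."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf          # catch out-of-range live scores
    e = np.histogram(expected, cuts)[0] / len(expected)
    a = np.histogram(actual, cuts)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

def check_drift(record: ModelRecord, expected, actual) -> None:
    """Compare live scores against the documented threshold and escalate."""
    psi = population_stability_index(np.asarray(expected), np.asarray(actual))
    if psi >= record.psi_alert_threshold:
        # The escalation path is institution-specific: notify the model owner
        # and risk function, and log the breach for the examination file.
        print(f"[ESCALATE] {record.model_id}: PSI {psi:.3f} breaches threshold")
```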

The ISO/IEC 42001:2023 AI management system standard is not yet formally required by SAMA, but senior examiners and external auditors operating in the Saudi market are treating it as the baseline framework. Institutions that align their governance documentation to ISO 42001's structure are consistently better positioned in examinations. The patterns that work across Saudi sectors apply here too: visibility before control, incident response before principles.

The Three Highest-Risk AI Functions in Saudi Banking

Not all AI deployments carry equal regulatory exposure. These three functions are where SAMA examination scrutiny is currently concentrated:

Credit scoring models are the most widely deployed and most scrutinized. Saudi banks using AI for credit decisions — particularly for consumer and SME lending — are expected to demonstrate that their models do not produce discriminatory outcomes across protected categories, that human underwriters can understand and override model recommendations, and that the models are retrained or recalibrated when economic conditions shift materially. The sharp interest-rate shifts of 2023–2024 exposed significant model drift at several regional institutions; SAMA is sensitized to this risk.

AML and transaction monitoring AI is a regulatory priority because the consequences of failure — missed suspicious activity reports, regulatory sanctions, correspondent banking relationships at risk — are acute and visible. SAMA expects AML AI to be validated against known typologies, with human review at specific risk thresholds and documented escalation for model alerts. Many institutions have deployed AML AI to reduce false-positive rates, but have not documented the governance layer that SAMA now expects to see around those systems.

Fraud detection operates in real time, which creates a distinctive governance challenge: the model makes high-stakes decisions (block a transaction, flag an account) faster than any human can review. SAMA's expectation is not that every fraud decision receives human review, but that the threshold logic is documented, the model's performance is monitored, and customer dispute processes can reconstruct and explain individual decisions on request.
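The reconstruct-on-request expectation is easiest to meet if every automated decision is logged with enough context to replay it. Below is a minimal sketch; the field names and the append-only storage note are assumptions for illustration, not a SAMA-mandated schema.

```python
import json
import hashlib
from datetime import datetime, timezone

def log_fraud_decision(txn_id: str, features: dict, score: float,
                       threshold: float, model_version: str, action: str) -> dict:
    """Persist everything needed to reconstruct one automated decision later.

    A dispute handler should be able to answer: which model version ran,
    what inputs it saw, what score it produced, and why the threshold
    logic chose this action.
    """
    record = {
        "txn_id": txn_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # pin the exact deployed model
        "features": features,             # inputs as the model saw them
        "score": score,
        "threshold": threshold,
        "action": action,                 # e.g. "block", "flag", "allow"
    }
    # Tamper-evidence: a hash of the canonical record, useful in disputes.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record  # in production, written to append-only storage
```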

Islamic Finance AI Risk — The Overlooked Dimension

Conventional AI governance frameworks — whether ISO 42001, EU AI Act-adjacent approaches, or generic model risk management — consistently miss a dimension that is material for Saudi financial institutions: the Shariah compliance dimension of algorithmic decision-making.

Islamic finance products — murabaha credit facilities, sukuk pricing, Islamic insurance products — are governed not only by civil regulatory requirements but by Shariah principles enforced through each institution's Shariah Supervisory Board. When AI systems influence the pricing, structuring, or approval of these products, the Shariah Board has a legitimate governance interest that is entirely separate from SAMA's model risk framework.

The practical implication: explainability requirements for AI systems in Islamic finance contexts are driven by two governance authorities — SAMA (which cares about model accuracy, fairness, and risk) and the Shariah Board (which cares about whether the model's logic is consistent with Islamic finance principles). An AI credit scoring model that produces an output no one can explain is a problem for a SAMA examiner; it is also potentially a problem for a Shariah scholar trying to confirm that the institution's credit decisions are Shariah-compliant.

This creates an explainability requirement more demanding than most international frameworks anticipate. Governance documentation for AI systems in Islamic finance contexts should explicitly address Shariah Board oversight and include explainability mechanisms that can be reviewed by non-technical Shariah scholars.
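What "reviewable by a non-technical scholar" can mean in practice: plain-language reason codes traceable to documented inputs. The sketch below assumes a scorecard-style (linear) model, where each feature's contribution is simple arithmetic; for gradient-boosted or neural models, per-prediction attributions (for example, SHAP values) would replace the coefficient arithmetic. All names here are illustrative.

```python
def reason_codes(coefficients: dict, applicant: dict, baseline: dict,
                 descriptions: dict, top_n: int = 3) -> list[str]:
    """Turn a scorecard model's arithmetic into plain-language reasons.

    contribution = coefficient * (applicant value - portfolio baseline),
    so every reason is traceable back to a documented input.
    """
    contributions = {
        name: coefficients[name] * (applicant[name] - baseline[name])
        for name in coefficients
    }
    ranked = sorted(contributions, key=lambda n: abs(contributions[n]),
                    reverse=True)
    return [
        f"{descriptions[name]} "
        f"({'raised' if contributions[name] > 0 else 'lowered'} "
        f"the score by {abs(contributions[name]):.1f} points)"
        for name in ranked[:top_n]
    ]
```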

What Good Governance Looks Like in Practice

The SDAIA sprint we ran in late 2025 demonstrated that credible AI governance infrastructure — policy documentation, technical monitoring, incident response — can be built faster than most institutions believe. The timeline was extreme (72 hours), but the structural elements are the same at any pace.

For financial services, good governance consists of three interconnected layers:

Model inventory + risk classification: A documented register of every AI and algorithmic system in production, with a risk tier (Low / Medium / High / Critical) based on business impact, regulatory exposure, and data sensitivity. This is the foundation everything else rests on. Without it, you don't know where to focus monitoring, validation, or human oversight resources.

Validation log + monitoring infrastructure: For each High and Critical model, documented validation results (at deployment and on a regular cadence), plus real-time monitoring covering accuracy metrics, input distribution drift, and output anomaly rates. The operational patterns from the NCA logistics governance build — start with incident detection, add transparency, then add approval gates — translate directly to banking AI contexts.

Incident response and escalation documentation: Defined procedures for what happens when a model threshold is breached, who is notified, what the response timeline is, and how customer-facing impacts are managed. SAMA examiners expect this to exist; most institutions do not have it documented at the level of specificity required. A sketch of threshold-driven escalation rules follows below.
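As one illustration of the level of specificity involved, escalation logic can be captured as data rather than prose, so the same thresholds drive both the monitoring system and the examination file. The rule set below is a sketch; the metrics, thresholds, recipients, and response times are placeholders each institution would define for itself.

```python
from dataclasses import dataclass

@dataclass
class EscalationRule:
    metric: str          # e.g. "psi", "approval_rate_delta"
    threshold: float     # documented materiality threshold
    notify: list[str]    # who is notified, in order
    response_hours: int  # committed response timeline

# Illustrative values only; each institution documents its own.
CREDIT_SCORING_ESCALATIONS = [
    EscalationRule("psi", 0.25, ["model_owner", "model_risk", "cro"], 24),
    EscalationRule("approval_rate_delta", 0.10, ["model_owner", "model_risk"], 48),
]

def breached_rules(metrics: dict[str, float]) -> list[EscalationRule]:
    """Return every documented rule breached by today's monitoring metrics."""
    return [r for r in CREDIT_SCORING_ESCALATIONS
            if metrics.get(r.metric, 0.0) >= r.threshold]
```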

Your 90-Day Starting Point

For a bank or fintech starting from a limited governance baseline, a realistic 90-day sequencing looks like this:

Month 1 — Inventory and risk classification: Build the model inventory. Survey business lines to identify every AI and algorithmic system in production. Classify each by risk tier. This alone surfaces the governance gaps and prioritizes where to focus next.

Month 2 — Monitoring and human-in-the-loop for high-risk models: For your top-tier models (credit scoring, AML, fraud), implement monitoring dashboards and establish human-in-the-loop checkpoints at the decision thresholds that carry the most regulatory and business risk. Don't try to govern everything at once — focus on the 20% of systems carrying 80% of the exposure (a sketch of a simple routing rule follows this list).

Month 3 — Documentation and validation process: Formalize the validation process for high-risk models, produce the documentation that supports SAMA examination, and establish the governance review cadence that will keep the framework operational after the initial build.
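The human-in-the-loop checkpoints from Month 2 often reduce to a documented routing rule: automate the clear cases, queue the ambiguous band for human review. A minimal sketch; the band values are illustrative placeholders, not recommendations.

```python
def route_decision(score: float, auto_approve_below: float = 0.05,
                   auto_decline_above: float = 0.60) -> str:
    """Route a model decision based on documented risk bands.

    Clear cases are automated; the ambiguous middle band, where regulatory
    and business risk concentrates, goes to a human underwriter or analyst.
    """
    if score < auto_approve_below:
        return "auto_approve"
    if score > auto_decline_above:
        return "auto_decline"
    return "human_review"  # queue for the human-in-the-loop checkpoint
```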

Key Takeaways

  • SAMA's 2024 Risk Management Framework update creates real, examinable AI governance obligations — model inventories, independent validation, and ongoing monitoring are no longer optional
  • Credit scoring, AML, and fraud detection carry the highest SAMA examination exposure for most Saudi banks
  • Islamic finance institutions face a dual governance requirement: SAMA compliance and Shariah Board explainability — most international frameworks miss this entirely
  • Good governance follows a consistent sequence: inventory first, monitoring second, documentation third — not the other way around
  • 90 days is sufficient to build a credible baseline from scratch if scope is disciplined

If your institution is approaching a SAMA examination, responding to an audit finding on model risk, or building AI governance from scratch, book a 30-minute assessment with our team. We work with Saudi banks and fintechs on governance builds that are examination-ready within weeks, not months. You can also review our AI Safety Pack for a structured starting framework.


Nora Al-Rashidi

Expert in AI Safety and Governance at PeopleSafetyLab. Dedicated to building practical frameworks that protect organizations and families, ensuring ethical AI deployment aligned with KSA and international standards.
