April 22, 2026 · 7 min read

CBUAE AI Guidance 2026: Model Governance Checklist for UAE Banks & SVFs

CBUAE February 2026 Guidance Note on AI and ML in financial services - model governance checklist, inventory requirements, human-in-the-loop design, third-party vendor due diligence, and board-level accountability for UAE banks and SVFs.

The Central Bank of the UAE's Guidance Note on Consumer Protection and Responsible Adoption and Use of Artificial Intelligence landed in February 2026, and it changes the conversation for every bank, stored-value facility (SVF), and finance company operating under a CBUAE licence. The Guidance sets clear expectations for model governance, human oversight, third-party vendor due diligence, and board-level accountability - and it reframes AI not as a technology choice but as a regulated activity subject to the same supervisory rigor CBUAE applies to other risk domains.

This guide translates the Guidance into a practical model governance checklist UAE banks can execute against before their next CBUAE review.

The Regulatory Context

CBUAE’s Guidance sits in a broader UAE AI regulatory landscape that now includes:

  • UAE National AI Strategy 2031 - the federal ambition that motivates much of the regulatory activity.
  • UAE PDPL (Federal Decree-Law 45 of 2021) - governs personal data processing in training and inference.
  • CBUAE Article 13 (Technology Risk and Information Security) and Annex II (best practices) - the established baseline for technology risk.
  • CBUAE Consumer Protection Regulation - invoked directly by the AI Guidance.
  • CBUAE Retail Payment Services and Open Finance Regulations - relevant for AI use cases in payments.
  • DFSA AI principles - apply in DIFC.
  • ADGM AI regulatory framework and RegLab - apply in Abu Dhabi Global Market.
  • VARA - applies to virtual asset service providers using AI in risk or transaction monitoring.

The CBUAE Guidance is consistent with this landscape: complementary, not duplicative. Banks already executing a PDPL-compliant data governance programme will find the Guidance adds AI-specific overlays without redefining core data governance.

The 5 CBUAE AI Principles

Every production AI or ML model in a CBUAE-licensed entity must demonstrably satisfy five principles:

Fairness: the model must not introduce or amplify unfair bias against protected attributes (age, gender, nationality, disability, and other PDPL-protected categories). Evidence: disparate-impact analysis across protected subgroups, documented thresholds for acceptable variance, and remediation plans where variance exceeds thresholds.
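The disparate-impact analysis this evidence calls for can be sketched in a few lines. The 0.8 threshold below is the widely used "four-fifths rule" convention, an illustrative assumption rather than a CBUAE-mandated value, and the function name and approval counts are hypothetical:

```python
# Sketch: disparate-impact ratio across protected subgroups.
# The 0.8 threshold (the "four-fifths rule") is an illustrative
# convention, not a value the CBUAE Guidance prescribes.

def disparate_impact(outcomes: dict[str, tuple[int, int]],
                     threshold: float = 0.8):
    """outcomes maps subgroup -> (favourable_decisions, total_decisions).

    Returns (ratio, breaches): ratio is the minimum selection rate
    divided by the maximum, and breaches lists subgroups whose rate
    falls below threshold * max rate.
    """
    rates = {g: fav / total for g, (fav, total) in outcomes.items()}
    max_rate = max(rates.values())
    ratio = min(rates.values()) / max_rate
    breaches = [g for g, r in rates.items() if r < threshold * max_rate]
    return ratio, breaches

# Hypothetical credit-approval counts by subgroup:
ratio, breaches = disparate_impact({
    "group_a": (80, 100),   # 80% approval rate
    "group_b": (50, 100),   # 50% approval rate
})
# ratio = 0.625, breaches = ["group_b"] -> documented remediation required
```

In a real programme the subgroup keys would be the PDPL-protected attributes the model is assessed against, and the threshold and remediation path would be documented per model.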

Transparency: customers must understand when AI is involved in decisions affecting them and the basis for material decisions. Evidence: customer-facing disclosure language, explanation templates for adverse decisions (credit declines, transaction blocks), and a documented model-level explainability approach (SHAP, LIME, counterfactuals, or equivalent).

Accountability: every model has a named owner, and the institution retains accountability for outcomes regardless of vendor arrangements. Evidence: model cards with named business owner, escalation path for model issues, documented board or senior management committee with AI oversight authority.

Data governance: training and inference data must comply with UAE PDPL, with documented lineage and consent. Evidence: data lineage artifacts from source to model, PDPL lawful basis per data class, consent records for personal data, data minimization analysis, and data retention policy enforcement.

Human oversight: high-impact decisions require human review before execution, with documented intervention authority. Evidence: decision-type classification (HITL / HOTL / advisory), reviewer authority matrix, audit trail of human decisions, and periodic review-effectiveness metrics.

The Model Inventory Requirement

CBUAE’s Guidance establishes that banks must maintain a current AI/ML model inventory - board-visible, updated quarterly, and available on request during inspections. The inventory spans:

  • Every internal model in production or development
  • Every third-party model or API in use (OpenAI, Anthropic, AWS Bedrock, Azure OpenAI, Google Vertex AI, specialist fintech ML vendors)
  • Every embedded ML capability in procured software (credit-decisioning platforms, AML systems, fraud platforms)

Per-model fields:

  • Use case and business function
  • Risk tier (high / medium / low) based on customer impact
  • Business owner (named individual)
  • Training data sources and lineage
  • Performance metrics baseline and latest measurement
  • Last validation date and next scheduled validation
  • Human oversight model (HITL / HOTL / advisory / fully automated)
  • Vendor details (internal / specific third party)
  • CBUAE principle mapping evidence per the five principles

Banks that do not yet maintain an AI/ML model inventory should expect CBUAE inspectors to open with that question.
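A per-model inventory record covering the fields above could look like the sketch below. CBUAE prescribes the content, not a schema, so every field and enum name here is an illustrative assumption:

```python
# Sketch of one inventory record with the per-model fields listed above.
# Field names and the example values are illustrative, not prescribed.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Oversight(Enum):
    HITL = "human-in-the-loop"
    HOTL = "human-on-the-loop"
    ADVISORY = "advisory"
    FULLY_AUTOMATED = "fully-automated"

@dataclass
class ModelInventoryRecord:
    model_id: str
    use_case: str
    business_function: str
    risk_tier: str                      # "high" | "medium" | "low"
    business_owner: str                 # named individual
    training_data_sources: list[str]
    baseline_metrics: dict[str, float]
    latest_metrics: dict[str, float]
    last_validation: date
    next_validation: date
    oversight: Oversight
    vendor: str                         # "internal" or a named third party
    principle_evidence: dict[str, str] = field(default_factory=dict)

# Hypothetical record for an internal credit model:
record = ModelInventoryRecord(
    model_id="credit-score-v3",
    use_case="Retail credit decisioning",
    business_function="Consumer lending",
    risk_tier="high",
    business_owner="Head of Retail Credit Risk",
    training_data_sources=["core-banking-ledger", "bureau-feed"],
    baseline_metrics={"auc": 0.81},
    latest_metrics={"auc": 0.79},
    last_validation=date(2026, 1, 15),
    next_validation=date(2026, 7, 15),
    oversight=Oversight.HITL,
    vendor="internal",
)
```

The `principle_evidence` field is where the per-principle mapping (fairness analysis, disclosure language, DPIA reference, and so on) would be linked per model.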

Third-Party Vendor Due Diligence

The most frequently misunderstood element of the Guidance is that third-party LLMs and ML APIs are fully in scope. A bank using OpenAI for customer service, Anthropic for internal document review, or Azure OpenAI for fraud-alert triage is accountable for the governance of those models as if they were internal - CBUAE does not accept “the vendor handles governance” as an answer.

Expected vendor due diligence evidence:

  • Model cards documenting training data, intended use, limitations, and evaluation benchmarks
  • Evaluation evidence on tasks relevant to the bank’s use case (not just vendor-published benchmarks)
  • Data Processing Agreements with AI-specific clauses covering training data use, output retention, and model improvement
  • AI-focused Data Protection Impact Assessment under UAE PDPL Article 20
  • Data residency attestation - where customer data is processed, where outputs are stored
  • Red-team testing evidence for safety-critical use cases (prompt injection, hallucination rate, adversarial robustness)
  • Breach notification SLAs aligned with CBUAE incident reporting timelines
  • Sub-processor disclosure including cloud providers and downstream model providers

For banks running OpenAI via Microsoft Azure OpenAI Service in UAE North, much of this evidence is easier to assemble because Microsoft publishes most of it. For banks calling the OpenAI API directly, the evidence burden is heavier and data residency becomes harder to satisfy.
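Tracking which of these evidence items have been collected per vendor reduces to a simple gap analysis. The item labels below are illustrative shorthand for the list above, not regulatory terminology:

```python
# Sketch: gap analysis of a vendor due-diligence pack against the
# evidence items listed above. Labels are illustrative shorthand.
REQUIRED_EVIDENCE = {
    "model_card", "use_case_evaluation", "dpa_ai_clauses",
    "pdpl_dpia", "data_residency_attestation", "red_team_report",
    "breach_notification_sla", "sub_processor_disclosure",
}

def dd_gaps(collected: set[str]) -> set[str]:
    """Return the evidence items still missing for one vendor."""
    return REQUIRED_EVIDENCE - collected

# Hypothetical pack that has three items so far:
missing = dd_gaps({"model_card", "dpa_ai_clauses", "pdpl_dpia"})
# missing holds the five items not yet collected
```

Refreshing this check annually, or on any material vendor change, keeps the pack aligned with the cadence the Guidance expects.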

Human-in-the-Loop Design Patterns

The Guidance expects high-impact customer-facing decisions to operate under human-in-the-loop (HITL) rather than fully automated AI. A practical classification framework:

  • Fully automated - low-stakes, reversible, high-volume: product recommendations, marketing personalization, user interface adaptations.
  • Human-on-the-loop (HOTL) - automation with sampled human review and intervention authority: transaction monitoring alerts triaged by AI with analyst sampling, marketing segmentation, fraud-alert prioritization.
  • Human-in-the-loop (HITL) - mandatory human decision after AI recommendation: credit approvals above threshold, loan pricing for high-value products, AML case closures, KYC/CDD decisions with elevated risk indicators.
  • AI advisory only - human decides, AI suggests: wealth advisory, compliance advisory, first-line research support.

CBUAE will push back on banks that apply fully automated processing to HITL-appropriate decisions. The risk tier of the decision, not the technical feasibility of automation, drives the classification.
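The classification above can be encoded as a small decision rule, with the decision's risk tier driving the outcome. The function signature and impact labels are illustrative assumptions, not Guidance terminology:

```python
# Sketch: mapping decision characteristics to an oversight model,
# following the classification above. Labels and thresholds are
# illustrative, not taken from the Guidance text.
def classify_oversight(customer_impact: str, reversible: bool,
                       ai_decides: bool) -> str:
    """customer_impact: 'high' | 'medium' | 'low'."""
    if not ai_decides:
        return "advisory"            # human decides, AI suggests
    if customer_impact == "high":
        return "HITL"                # mandatory human decision
    if customer_impact == "medium" or not reversible:
        return "HOTL"                # sampled human review
    return "fully-automated"         # low-stakes, reversible, high-volume

# Credit approval above threshold -> mandatory human review
assert classify_oversight("high", reversible=False, ai_decides=True) == "HITL"
# Product recommendation -> fully automated is defensible
assert classify_oversight("low", reversible=True, ai_decides=True) == "fully-automated"
```

Note that risk tier dominates: a high-impact decision maps to HITL even when automation is technically easy, which is exactly the supervisory point.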

Board-Level Accountability

The Guidance establishes that AI governance is a board-level topic - not purely a technology or model-risk-management concern. Banks should expect to demonstrate:

  • A designated board committee or senior management committee with explicit AI oversight responsibility (typically the Risk Committee or a dedicated AI / Model Risk committee).
  • Quarterly AI governance agenda items with documented minutes.
  • Model risk reporting cadence that reaches the board at least annually, with exception reporting more frequently.
  • Documented escalation thresholds for model performance degradation, bias metric excursions, or incident-response triggers.
  • Annual AI strategy review aligning AI investments with risk appetite.

CBUAE inspections will increasingly ask for board minutes evidencing AI discussions. Banks without documented board-level AI attention will struggle to demonstrate the accountability principle.

Continuous Monitoring and Drift Detection

Static governance is not sufficient. Every production model must be under continuous monitoring for:

  • Prediction distribution tracking to detect concept drift
  • Input feature distribution tracking to detect data drift
  • Performance metric monitoring against the validation baseline
  • Bias metric monitoring across protected attributes
  • Latency and availability SLO monitoring for inference infrastructure

Recommended cadence: real-time alerting on SLO breaches, daily automated checks on prediction and input distributions, monthly performance review with drift analysis, and quarterly board-visible AI governance report.
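One common choice for the daily distribution checks is the Population Stability Index (PSI). The sketch below uses pure Python; the 0.2 alert threshold is a widely used industry convention, not a CBUAE value:

```python
# Sketch: Population Stability Index (PSI) for input-feature drift.
# The 0.2 alert threshold is a common industry convention, not a
# CBUAE-prescribed value.
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """PSI between a baseline sample and a recent sample."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def bucket_fracs(values):
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1
        n = len(values)
        # Small floor avoids log(0) on empty buckets.
        return [max(c / n, 1e-4) for c in counts]

    e, a = bucket_fracs(expected), bucket_fracs(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]
assert psi(baseline, baseline) < 1e-6    # identical samples: no drift
shifted = [v + 0.5 for v in baseline]
assert psi(baseline, shifted) > 0.2      # shifted sample: alert-level drift
```

The same construction applies to prediction distributions for concept-drift tracking; bias metrics across protected attributes need the subgroup-level analysis described under the fairness principle instead.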

For banks running third-party LLMs, monitoring extends to output quality measurements (hallucination rate on a banked set of benchmark prompts, refusal rate on prohibited queries, response latency), since the vendor may silently update the upstream model.
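A minimal version of the refusal-rate check is sketched below. `run_model` is a hypothetical callable wrapping the vendor API, and the keyword heuristic is a deliberately naive stand-in for whatever refusal classifier the bank actually uses:

```python
# Sketch: refusal rate over a banked prompt set, to catch silent
# upstream model changes. `run_model` is a hypothetical wrapper around
# the vendor API; the keyword heuristic is a naive illustration only.
REFUSAL_MARKERS = ("i can't", "i cannot", "unable to assist")

def refusal_rate(run_model, prompts: list[str]) -> float:
    refusals = sum(
        any(m in run_model(p).lower() for m in REFUSAL_MARKERS)
        for p in prompts
    )
    return refusals / len(prompts)

# Fake model for illustration: answers one prompt, refuses the other.
fake = {"a": "Sure, here you go.", "b": "I cannot help with that."}
rate = refusal_rate(lambda p: fake[p], ["a", "b"])
# rate == 0.5; alert when it moves materially from the recorded baseline
```

Running the same banked prompts on a fixed schedule and comparing against the recorded baseline turns an invisible vendor-side model update into an observable metric excursion.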

What a CBUAE-Ready AI Governance Programme Looks Like

A mature programme that satisfies the Guidance typically includes:

  • AI/ML model inventory with 100% coverage of production and in-development models
  • Written AI Governance Policy board-approved and reviewed annually
  • Per-model documentation package satisfying the 5 principles
  • Third-party vendor due diligence pack per vendor, refreshed annually or on material vendor changes
  • HITL / HOTL / advisory classification for every customer-facing decision
  • Designated board or senior management committee with AI oversight
  • Quarterly AI governance report reaching the designated committee
  • Continuous monitoring infrastructure for every production model
  • Incident response runbook for AI-specific incidents (hallucination-induced customer harm, bias drift, adversarial exploitation)
  • Annual AI strategy and risk appetite review

Most UAE banks in 2026 are partway through building this programme. Banks that wait until their first inspection to start building it will find the timeline tight.

How mlai.ae Supports CBUAE-Regulated Institutions

mlai.ae delivers vertical AI consulting for UAE banks and financial institutions with explicit CBUAE Guidance alignment. Engagements include:

  • CBUAE AI Readiness Assessment - 5-day evaluation of current AI inventory, governance gaps, and priority roadmap mapped to the 5 principles.
  • Model Governance Framework Design - policy, inventory template, per-model documentation pack, and vendor due diligence pack.
  • Continuous Monitoring Implementation - drift detection, bias monitoring, and board-visible reporting.
  • Third-Party Vendor DD Support - evidence gathering and gap analysis for OpenAI, Anthropic, AWS Bedrock, Azure OpenAI, specialist ML vendors.

Engagements produce CBUAE-inspection-ready documentation and working monitoring infrastructure - not slide decks.

Book a free 30-minute discovery call to scope your CBUAE AI governance engagement with an mlai.ae consultant.

Frequently Asked Questions

What is the CBUAE AI Guidance Note?

The CBUAE Guidance Note on Consumer Protection and Responsible Adoption and Use of Artificial Intelligence, issued in February 2026, establishes the Central Bank of the UAE's expectations for how licensed financial institutions deploy AI and ML. It sets 5 principles (fairness, transparency, accountability, data governance, human oversight), requires a model inventory and governance framework, mandates third-party vendor due diligence, and expects board-level accountability for AI risk. It applies to all CBUAE-licensed entities: banks, finance companies, payment service providers, stored-value facilities, and exchange houses.

Who does CBUAE AI guidance apply to?

Every CBUAE-licensed entity: licensed banks (conventional and Islamic), finance companies, payment service providers, stored-value facility (SVF) licensees, retail payment service and card scheme licensees, exchange houses, and insurance companies under CBUAE oversight. Banks and SVFs with customer-facing AI use cases (credit decisioning, fraud detection, anti-money-laundering, customer onboarding) face the highest scrutiny.

What are the 5 CBUAE AI principles?

(1) Fairness - AI must not introduce or amplify unfair bias against protected attributes. (2) Transparency - customers must understand when AI is making decisions that affect them, and the basis for material decisions. (3) Accountability - every model has a named owner, and the institution retains accountability for outcomes regardless of vendor arrangements. (4) Data governance - training and inference data must comply with UAE PDPL, with documented lineage and consent. (5) Human oversight - high-impact decisions require human review before execution, with documented intervention authority.

Does CBUAE require a model inventory?

Yes. The Guidance Note requires licensed entities to maintain a current inventory of all AI and ML models used in their operations, including models deployed through third-party vendors and APIs. The inventory must cover use case, risk tier, business owner, training data, performance metrics, validation date, human oversight model, and vendor details. CBUAE expects the inventory to be board-visible, updated quarterly, and provided on request during inspections.

How does CBUAE expect banks to handle third-party LLMs like OpenAI or Claude?

Third-party LLMs and ML APIs are in scope. CBUAE expects banks to treat them with the same governance rigor as internal models: document the use case, collect vendor model cards and evaluation evidence, execute AI-specific Data Processing Agreements, perform AI-focused Data Protection Impact Assessments under UAE PDPL, verify data residency for customer data processed through the vendor, obtain red-team testing evidence, and ensure breach notification SLAs align with CBUAE incident reporting timelines.

What is the difference between human-in-the-loop and human-on-the-loop?

Human-in-the-loop (HITL) means a human must review and approve the AI's recommendation before it is executed - used for high-impact decisions like credit approvals above threshold, loan pricing, or AML case closures. Human-on-the-loop (HOTL) means the AI executes automatically but a human monitors a sample of decisions and can intervene - used for medium-impact cases like marketing segmentation or fraud alert prioritization. CBUAE expects HITL for material customer impact decisions.

When will CBUAE start inspecting AI governance?

CBUAE has signalled that AI governance will be a standard component of Supervisory Reviews from 2026 onward. Licensed entities should expect questions about model inventory, governance framework, board oversight, and third-party vendor evidence during annual supervisory engagements. Banks with material customer-facing AI use cases (credit, AML, fraud) are highest priority for inspection.

How does the CBUAE AI Guidance interact with UAE PDPL?

The CBUAE AI Guidance explicitly invokes UAE PDPL (Federal Decree-Law 45 of 2021) for data governance. Banks using personal data to train or infer with AI models must comply with PDPL lawful bases, data minimization, and data subject rights - plus CBUAE-specific sectoral requirements. For AI use cases involving cross-border data transfer (e.g., OpenAI API calls), banks must have documented legal basis and data transfer safeguards under both PDPL and CBUAE requirements.

Build It. Run It. Own It.

Book a free 30-minute AI discovery call with our Vertical AI experts in Dubai, UAE. We scope your first model, estimate data requirements, and show you the fastest path to production.

Talk to an Expert