
How Saudi Enterprises Are Adopting AI Governance Frameworks

PeopleSafetyLab | March 9, 2026 | 15 min read

In the spring of 2024, a mid-sized Saudi bank deployed a machine learning model to automate credit scoring for retail customers. The model performed well in testing—accurate, fast, seemingly unbiased. Six months later, internal auditors discovered the system was systematically downgrading applications from certain geographic regions. Not because the model was explicitly trained to discriminate, but because the training data reflected historical lending patterns that had disadvantaged those areas for decades. The bank had an AI policy. It had a governance committee. What it lacked was a mechanism to detect bias before it became a regulatory problem.
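
A check like the one the bank lacked does not require exotic tooling. The sketch below computes approval rates by group and flags any group falling well below the best-performing one. It is a minimal illustration, not the bank's actual method: the four-fifths (0.8) threshold is a common heuristic, and the region labels and helper names are assumptions.

    # A minimal disparate-impact check: compare approval rates across groups.
    # The 0.8 threshold follows the common "four-fifths" heuristic; a real
    # deployment would set thresholds with legal and compliance input.
    from collections import defaultdict

    def approval_rates(decisions):
        # decisions: iterable of (group, approved) pairs
        totals, approvals = defaultdict(int), defaultdict(int)
        for group, approved in decisions:
            totals[group] += 1
            approvals[group] += int(approved)
        return {g: approvals[g] / totals[g] for g in totals}

    def disparate_impact_flags(decisions, threshold=0.8):
        # Flag any group whose approval rate falls below
        # threshold * the best-performing group's rate.
        rates = approval_rates(decisions)
        best = max(rates.values())
        return {g: r for g, r in rates.items() if r < threshold * best}

    # Hypothetical regional samples; region "C" is flagged for review.
    sample = ([("A", True)] * 80 + [("A", False)] * 20
              + [("B", True)] * 75 + [("B", False)] * 25
              + [("C", True)] * 50 + [("C", False)] * 50)
    print(disparate_impact_flags(sample))  # {'C': 0.5}

Run on a schedule against live decisions rather than once at deployment, even a check this simple would have surfaced the geographic skew months before the auditors did.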

This is the central tension of AI governance in the Kingdom today. Saudi enterprises are deploying AI systems faster than they are building the frameworks to govern them. The regulatory architecture exists—SDAIA, the Saudi Central Bank (SAMA), and the National Cybersecurity Authority (NCA) have each issued detailed requirements. Vision 2030's digital transformation mandate creates both the permission and the pressure to move quickly. What most organizations have not yet built is the connective tissue between ambition and accountability.

The enterprises getting this right are not necessarily the largest or the most resourced. They are the ones that have internalized a simple truth: in a multi-regulator environment where AI systems touch customer data, financial decisions, and critical infrastructure, governance is not overhead. It is the condition for sustainable deployment.


The Regulatory Landscape: Three Frameworks, One Reality

To understand how Saudi enterprises are approaching AI governance, it helps to understand what they are working toward. Three regulatory frameworks now govern AI deployment in the Kingdom, each with distinct scope, enforcement mechanisms, and expectations.

The Saudi Data and AI Authority (SDAIA), established by Royal Decree in 2019, provides the overarching framework through its National AI Ethics Principles and its enforcement mandate under the Personal Data Protection Law (PDPL). SDAIA's framework is organized around three pillars—Human-Centric and Ethical AI, Secure and Reliable AI, and Data Governance—that establish principles for transparency, fairness, accountability, and human oversight. The PDPL gives these principles teeth: violations involving sensitive personal data can carry administrative penalties up to SAR 5 million, and the law explicitly covers automated decision-making systems.

The Saudi Central Bank (SAMA) governs AI in financial services through its model risk management framework, which treats AI models as financial infrastructure subject to independent validation, documented governance throughout the lifecycle, and ongoing performance monitoring. For banks, insurance companies, and the rapidly expanding fintech sector, SAMA's requirements are not guidance; they are binding obligations that examiners increasingly probe during supervisory reviews.

The National Cybersecurity Authority (NCA) overlays cybersecurity requirements on AI systems through its Essential Cybersecurity Controls (ECC), updated to address AI-specific threats like data poisoning, model inversion, and adversarial attacks. For critical infrastructure sectors—energy, healthcare, telecommunications—the NCA requires independent security assessments before AI deployment and mandatory incident reporting within 24 hours of discovery.

The overlap between these frameworks is not a bug. It reflects a regulatory philosophy that treats AI governance as multi-dimensional: ethical behavior, model reliability, and cybersecurity are distinct concerns that cannot be collapsed into a single compliance checklist. The enterprises that have built effective governance are the ones that have created structures to manage all three simultaneously, rather than optimizing for one regulator at a time.


Saudi Aramco: Building Governance at Scale

Saudi Aramco, the world's largest oil company and one of the Kingdom's most sophisticated technology operators, faces AI governance challenges that few other enterprises encounter in scope or consequence. The company has deployed AI across predictive maintenance, supply chain optimization, environmental monitoring, and reservoir management—systems where failures carry both economic costs and safety implications.

What distinguishes Aramco's approach is the integration of AI governance into operational technology management. AI systems that control or influence physical infrastructure—the predictive maintenance models that flag equipment at risk of failure, the optimization algorithms that manage production flows—are subject to safety-critical system governance that predates modern AI frameworks. The company has extended these disciplines to AI, creating validation and monitoring protocols that treat model reliability as an operational safety concern rather than a purely compliance exercise.

Aramco's governance structure also addresses the supply chain dimension that most enterprises underestimate. The company uses third-party AI models and cloud-based inference services, but has established vetting requirements that parallel its traditional vendor management processes. Models deployed in operational contexts must pass independent validation; data processed by third-party systems must meet the company's data classification and localization requirements. This is not universal among Saudi enterprises—many organizations are deploying third-party AI with minimal supply chain scrutiny—but it represents the maturity level that regulators are increasingly expecting.

For organizations seeking a reference point, Aramco demonstrates that AI governance at scale requires more than policy documents. It requires governance structures that cross traditional organizational boundaries: technical teams that understand regulatory requirements, compliance functions that understand machine learning, and operational leadership that treats AI reliability as a business continuity concern.


STC Group: Telecommunications and Multi-Regulator Complexity

STC Group, Saudi Arabia's largest telecommunications provider, operates in perhaps the most complex regulatory environment for AI governance outside of financial services. Telecommunications companies deploy AI across customer service, network optimization, fraud detection, and content personalization—each touching different regulatory requirements and carrying different risk profiles.

STC's approach illustrates how multi-regulator governance works in practice. The company's AI deployments in customer-facing applications—chatbots, recommendation systems, personalization algorithms—must satisfy SDAIA's transparency and fairness requirements. AI for network optimization and infrastructure falls under the NCA's critical infrastructure cybersecurity controls. Fraud detection systems that influence financial transactions or service termination decisions must align with principles that overlap with SAMA's model risk framework. A single AI system might be governed by multiple frameworks simultaneously.

What STC has built is a governance taxonomy that maps each AI deployment to its applicable regulatory requirements before the system goes live. This is not a post-hoc compliance exercise but an upfront classification step: when a business unit proposes an AI application, the governance function identifies which regulators have jurisdiction, what documentation is required, and what validation thresholds must be met. The result is that deployment timelines incorporate governance as a planned phase rather than an unexpected delay.
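
STC's internal taxonomy is not public, but the classification step it performs can be pictured as a simple rule-driven mapping. The sketch below is a hypothetical illustration: the AIDeployment attributes and the jurisdiction rules are assumptions chosen to mirror the framework overlaps described above, not STC's actual schema.

    # A sketch of an upfront governance classification step.
    from dataclasses import dataclass

    @dataclass
    class AIDeployment:
        name: str
        processes_personal_data: bool          # SDAIA / PDPL exposure
        touches_critical_infrastructure: bool  # NCA ECC exposure
        influences_financial_decisions: bool   # SAMA model risk exposure

    def applicable_regulators(d: AIDeployment) -> set:
        regs = set()
        if d.processes_personal_data:
            regs.add("SDAIA")
        if d.touches_critical_infrastructure:
            regs.add("NCA")
        if d.influences_financial_decisions:
            regs.add("SAMA")
        return regs

    # A fraud model that uses customer data and influences service
    # decisions is reviewed against both sets of expectations before go-live.
    fraud_model = AIDeployment("fraud-detection", True, False, True)
    print(sorted(applicable_regulators(fraud_model)))  # ['SAMA', 'SDAIA']

The value is not in the code but in the discipline: the mapping runs before deployment, so documentation and validation requirements are known while there is still time to meet them.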

The telecommunications sector also demonstrates the importance of AI-specific incident response. When an AI system influences network operations or customer service at STC's scale, failures are visible and consequential. The company has integrated AI incident detection into its operational monitoring, creating escalation paths that parallel traditional network operations response. This reflects NCA requirements for critical infrastructure, but it also reflects operational maturity: organizations that detect AI failures quickly are positioned to respond before failures compound.


The Banking Sector: Al Rajhi Bank and Islamic Finance Considerations

Saudi Arabia's banking sector has been among the earliest adopters of AI governance frameworks, driven by SAMA's explicit requirements and the sector's sensitivity to regulatory scrutiny. Al Rajhi Bank, the world's largest Islamic bank, illustrates both the general challenges of AI governance in financial services and the specific considerations that Islamic finance introduces.

Like other Saudi banks, Al Rajhi has deployed AI in credit scoring, fraud detection, and customer service automation. These systems fall squarely within SAMA's model risk framework, requiring documented model inventories, independent validation, and ongoing performance monitoring. What distinguishes Islamic finance AI governance is the additional layer of Shariah oversight.

When AI systems influence the pricing, structuring, or approval of Islamic finance products—murabaha credit facilities, sukuk, Islamic insurance (takaful)—they are subject to review by the institution's Shariah Supervisory Board. This creates an explainability requirement that exceeds most conventional AI governance frameworks: the system must be explainable not only to regulators and internal risk functions but also to Shariah scholars who may lack technical backgrounds but must confirm that algorithmic decisions comply with Islamic principles.
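
One common way to meet that kind of requirement is to translate a model's strongest negative drivers into plain-language reason codes. The sketch below assumes a scorecard-style model whose per-feature contributions are available; the feature names, weights, and wording are illustrative, not Al Rajhi's actual approach.

    # Plain-language wording for each scorecard feature; illustrative only.
    REASONS = {
        "debt_burden": "Monthly obligations are high relative to income",
        "tenure": "Short employment or account history",
        "utilization": "High utilization of existing credit facilities",
    }

    def reason_codes(contributions, top_n=2):
        # contributions: feature -> signed contribution to the score.
        # Return readable wording for the strongest negative drivers.
        negatives = sorted(
            (f for f in contributions if contributions[f] < 0),
            key=lambda f: contributions[f],
        )
        return [REASONS.get(f, f) for f in negatives[:top_n]]

    print(reason_codes({"debt_burden": -0.31, "tenure": -0.05,
                        "utilization": 0.12}))
    # ['Monthly obligations are high relative to income',
    #  'Short employment or account history']

A decision rendered this way can be reviewed by a scholar, a regulator, or a customer without any knowledge of the underlying model.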

Al Rajhi and other Islamic finance institutions have responded by developing governance structures that integrate Shariah review into the AI lifecycle. Model documentation includes explicit sections addressing Shariah compliance. Validation processes incorporate Shariah Board input on high-stakes decisions. The result is governance that satisfies SAMA's requirements while addressing the distinct obligations of Islamic finance—a model that other institutions operating under dual governance authorities would do well to study.

The banking sector also illustrates the gap between policy and practice that characterizes much of Saudi AI governance. According to sector analysis, a majority of Saudi banks have deployed AI in core functions, but fewer than a third have documented governance policies covering those functions. The gap is closing—SAMA examination activity has accelerated governance investment across the sector—but it reflects a structural challenge: organizations that deployed AI before governance frameworks were mature are now retrofitting compliance onto systems that were not designed with audit trails, explainability, or bias monitoring in mind.


NEOM: Governance by Design for AI-Enabled Cities

If Saudi Aramco represents established enterprises adapting to AI governance requirements, NEOM represents the alternative: building governance infrastructure before AI systems are deployed. The $500 billion giga-project, spanning 26,500 square kilometers of Saudi Arabia's northwest, plans to integrate AI across transportation, energy, utilities, and urban services at a scale that tests the limits of existing governance frameworks.

NEOM's approach is distinctive because it has the opportunity to build governance into the architecture phase rather than retrofitting it later. The project has established AI governance structures that cross functional silos—transportation, energy, urban services—creating oversight mechanisms that can address system interdependencies. A failure in an autonomous transportation AI might cascade into energy grid management; governance structures that treat these systems in isolation will miss critical risk pathways.

The project also illustrates the importance of data governance as a foundation for AI governance. NEOM's smart city infrastructure will generate vast amounts of resident data—movement patterns, consumption data, behavioral signals. Under the PDPL, this data collection must satisfy consent requirements, data minimization principles, and localization rules. AI systems trained on this data must respect purpose limitations: data collected for one application cannot automatically be used to train models for another. NEOM's data governance framework establishes the constraints within which AI development must operate.
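
Purpose limitation of this kind can be enforced mechanically rather than by policy alone. The sketch below tags each dataset with the purposes consented to at collection and refuses access to training jobs that declare any other purpose; the dataset names, purpose tags, and authorize_training helper are hypothetical, not NEOM's implementation.

    # Each dataset carries the purposes consented to at collection;
    # a training job must declare its purpose before access is granted.
    DATASET_PURPOSES = {
        "transit_movement_logs": {"transport_optimization"},
        "utility_consumption": {"billing", "demand_forecasting"},
    }

    class PurposeLimitationError(Exception):
        pass

    def authorize_training(dataset, declared_purpose):
        allowed = DATASET_PURPOSES.get(dataset, set())
        if declared_purpose not in allowed:
            raise PurposeLimitationError(
                f"{dataset!r} was not collected for {declared_purpose!r}")

    authorize_training("transit_movement_logs", "transport_optimization")  # passes
    # authorize_training("transit_movement_logs", "ad_targeting")  # would raise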

The NEOM case is instructive for enterprises not because it is replicable—few organizations have the resources or greenfield opportunity to build governance infrastructure from scratch—but because it demonstrates what governance by design looks like. Organizations deploying AI in established environments cannot recreate NEOM's approach entirely, but they can apply the principle: governance structures created before deployment are more effective and less costly than governance retrofitted after problems emerge.


The Healthcare Sector: Ministry of Health and NHIC Requirements

Saudi Arabia's healthcare sector presents AI governance challenges that combine clinical safety, data privacy, and regulatory compliance in ways that most enterprises do not face. The Ministry of Health's announcement that AI-assisted diagnostics would be deployed across 150 government hospitals by 2025 reflects the sector's ambition; the governance requirements established by the National Health Information Center (NHIC) and MOH reflect the corresponding oversight expectations.

Healthcare AI governance in the Kingdom operates under multiple constraints. NHIC's data governance policies require that AI systems processing health records comply with data classification, access controls, and audit requirements. Data localization rules prohibit processing Saudi patient data on infrastructure outside the Kingdom—a significant constraint for organizations using cloud-based AI services. MOH's clinical AI framework requires validation studies using Saudi patient populations, not Western datasets that may not reflect local demographics and disease patterns.

The healthcare sector also demonstrates the importance of human oversight requirements. Under both SDAIA's AI Ethics Principles and the PDPL's provisions on automated decision-making, AI systems that significantly affect individuals—in healthcare, this includes diagnostic recommendations and treatment pathways—must preserve meaningful human review. The emphasis is on meaningful: a clinician who routinely accepts AI recommendations without independent evaluation does not satisfy the requirement, regardless of whether a human was technically involved.

Healthcare organizations that have implemented governance effectively have typically created dedicated AI governance functions that bridge clinical, technical, and compliance expertise. These functions maintain inventories of AI systems categorized by clinical impact and data sensitivity; they establish validation protocols that satisfy MOH requirements; they create monitoring processes that detect performance drift before it affects patient care. The investment is substantial, but the alternative—deploying AI in clinical contexts without governance infrastructure—creates exposure to regulatory action and, more importantly, to patient harm.


Common Patterns: What Works Across Sectors

Despite the differences in regulatory scope and operational context, certain patterns emerge from the enterprises that have built effective AI governance.

Governance begins with visibility. Organizations cannot govern what they have not inventoried. The first step in every effective governance program is a comprehensive mapping of AI systems in production or development—their purpose, data inputs, decision-making roles, and regulatory exposure. This inventory becomes the foundation for risk classification, validation prioritization, and monitoring investment.
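
What an inventory record contains matters more than the tooling that holds it. The sketch below shows one plausible shape for such a record; the fields are assumptions distilled from the requirements discussed above, not a mandated schema.

    from dataclasses import dataclass

    @dataclass
    class ModelRecord:
        model_id: str
        purpose: str
        owner: str
        data_inputs: list    # e.g. ["application_form", "bureau_data"]
        decision_role: str   # "advisory" or "automated"
        regulators: list     # e.g. ["SDAIA", "SAMA"]
        last_validated: str  # ISO date of the most recent validation

    inventory = [
        ModelRecord(
            model_id="credit-scoring-v3",
            purpose="retail credit approval",
            owner="Retail Risk",
            data_inputs=["application_form", "bureau_data"],
            decision_role="automated",
            regulators=["SDAIA", "SAMA"],
            last_validated="2025-11-02",
        ),
    ]

    # The inventory then drives prioritization: automated, multi-regulator
    # systems are validated and monitored first.
    high_priority = [m for m in inventory
                     if m.decision_role == "automated"
                     and len(m.regulators) > 1]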

Multi-regulator coordination requires dedicated structure. In a regulatory environment where SDAIA, SAMA, and NCA may each have jurisdiction over a single AI system, governance cannot be siloed by regulator. Effective programs establish coordination mechanisms—a governance committee, a cross-functional review process, a designated AI governance officer—that can address requirements holistically rather than sequentially.

Validation is ongoing, not one-time. The organizations getting governance right treat validation as a continuous discipline rather than a pre-deployment checkbox. Models are monitored for performance drift, input distribution shift, and emerging bias. Thresholds for material change are documented in advance; escalation paths are clear. This requires investment in monitoring infrastructure and analytical capability, but it addresses the reality that AI systems degrade over time.
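
One widely used way to operationalize drift monitoring with pre-agreed thresholds is the population stability index (PSI). The sketch below compares a live feature sample against its training baseline; the bucketing scheme and the 0.2 escalation threshold are conventional rules of thumb, not regulatory requirements.

    import math

    def psi(expected, actual, bins=10):
        # Population stability index between a baseline sample ("expected",
        # typically the training data) and a live sample ("actual").
        lo, hi = min(expected), max(expected)
        def fractions(sample):
            counts = [0] * bins
            for x in sample:
                i = int((x - lo) / (hi - lo) * bins)
                counts[min(max(i, 0), bins - 1)] += 1
            # Floor empty buckets to avoid log(0) / division by zero.
            return [max(c / len(sample), 1e-6) for c in counts]
        e, a = fractions(expected), fractions(actual)
        return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

    # Escalate when PSI crosses the threshold documented in advance;
    # 0.2 is a common rule of thumb for "significant shift".
    baseline = [i / 100 for i in range(100)]
    live = [0.6 + i / 250 for i in range(100)]
    if psi(baseline, live) > 0.2:
        print("input drift exceeds threshold: escalate for revalidation")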

Incident response is built into governance from the start. AI systems fail. The question is whether the organization detects failures quickly, responds effectively, and learns systematically. Effective governance programs establish incident response procedures specific to AI—detection mechanisms, escalation paths, root cause analysis protocols—that parallel traditional operational incident response.

Documentation serves governance, not just compliance. The organizations that navigate regulatory examinations smoothly are the ones for whom documentation is a byproduct of good governance rather than a separate compliance exercise. Model documentation, validation records, and monitoring logs exist because they support operational decision-making; regulatory requests are satisfied by producing what already exists rather than creating it on demand.


The Gap Between Policy and Practice

Despite the progress that leading enterprises have made, the gap between regulatory expectation and organizational reality remains substantial. Across sectors, a consistent pattern emerges: organizations have AI policies, often drafted by legal or compliance functions; they have AI systems, developed and deployed by technical teams; and they have very little connecting the two.

This structural gap is the central challenge of Saudi AI governance in 2026. The regulatory frameworks exist. The technical capability exists. What is missing in many organizations is the organizational infrastructure that translates policy into practice: governance committees with genuine technical authority, engineering workflows that capture governance requirements, risk and compliance teams that understand enough about machine learning to evaluate what they are being told.

Closing this gap requires investment in organizational capability, not just documentation. It requires cross-functional governance structures where technical and compliance expertise genuinely engage rather than sequentially sign off. It requires monitoring and incident response infrastructure that treats AI reliability as an operational concern, not a compliance checkbox. It requires, in short, building the connective tissue between regulatory ambition and technical reality.

The enterprises that are succeeding are the ones that have recognized this and invested accordingly. The ones that are struggling are often the ones that have treated AI governance as a documentation exercise—a set of policies to be written and filed rather than an operational discipline to be built.


The Direction of Travel

SDAIA has signaled that its governance expectations will increase over time, not decrease. The National Strategy for Data and AI establishes ambitious targets for AI's contribution to the Saudi economy—targets that create both permission to innovate and pressure to govern. Vision 2030's digital transformation agenda depends on AI systems that are trustworthy, not just performant.

The regulatory direction is clear. Enforcement activity, while still developing, is accelerating. Organizations that have treated governance as a future problem are discovering that the future has arrived. The question is no longer whether to build AI governance but how quickly it can be done without disrupting the AI deployments that business operations now depend on.

For Saudi enterprises, the examples exist. The frameworks are published. The regulatory expectations are documented. What remains is the organizational commitment to build governance that is real rather than theatrical—structures that connect policy to practice, monitoring that detects problems before they become regulatory issues, and incident response that learns from failures rather than concealing them.

The organizations that navigate this transition successfully will be positioned not only for regulatory compliance but for sustainable AI deployment. The ones that do not will find themselves managing a gap that widens with each new AI system deployed and each new regulatory expectation issued. In a Kingdom committed to AI-driven transformation, that is not a sustainable position.


Published by PeopleSafetyLab — AI safety and governance research for KSA organizations.

PeopleSafetyLab

Independent AI safety research for organisations and families in Saudi Arabia and the GCC. All research is editorially independent. PeopleSafetyLab has no consulting clients and does not conduct paid audits.
