
SDAIA's Three-Pillar Framework: What It Actually Demands of Saudi Organizations

Nora Al-Rashidi | March 7, 2026 | 11 min read

In 2019, Royal Decree No. M/6 established the Saudi Data and AI Authority. Most governments create AI regulators after something goes wrong — a biased hiring tool makes headlines, an automated system denies benefits to thousands of people, a facial recognition deployment triggers a scandal. SDAIA was built before any of that happened in Saudi Arabia. The entire framework that followed — the National Strategy for Data and AI, the AI Ethics Principles, the three-pillar governance structure — was designed as preemptive architecture.

That is a regulatory bet most countries have been unwilling to make. The United States spent years in a patchwork of agency-level guidance. The EU's AI Act took roughly four years from proposal to enforcement. Saudi Arabia decided to write the rules first and build the AI economy around them.

Whether that bet pays off depends almost entirely on whether organizations inside the Kingdom actually implement what SDAIA has published — not the surface-level version, but the operational reality of what each pillar demands. Most organizations in KSA have read the headlines. Fewer have read the framework. And fewer still have translated it into day-to-day technical and governance practice.

Why the Three-Pillar Structure Matters

SDAIA did not organize its AI governance framework around industries, use cases, or risk levels — at least not primarily. It organized around three foundational pillars: Human-Centric and Ethical AI, Secure and Reliable AI, and Data Governance. Each pillar maps to a different dimension of the problem.

The Human-Centric and Ethical pillar addresses what AI systems should and should not do to people. The Secure and Reliable pillar addresses whether AI systems do what they claim to do, consistently and safely. The Data pillar addresses the raw material everything else depends on.

This structure is deliberate. A system can be technically secure and still discriminatory. A system can be ethically designed and still catastrophically unreliable if the data underlying it is bad. The pillars are interdependent, and SDAIA's framework reflects that. Organizations that implement one pillar in isolation — the compliance team handles ethics, the security team handles reliability, data governance lives somewhere in IT — will have a framework that looks complete on paper and fails in practice.

The Ethics Pillar: Operationalizing Principles

SDAIA published its AI Ethics Principles in 2020. The document articulates values that appear in AI ethics frameworks globally: transparency, fairness, accountability, reliability, human oversight. What makes SDAIA's version significant is that it was not written as aspiration. It was written as the normative foundation for enforceable requirements.

Transparency, in SDAIA's framing, has two distinct obligations. The first is explainability — AI systems that make or significantly influence decisions affecting individuals must be able to produce explanations those individuals can understand. Not interpretable to a data scientist; understandable to the person whose loan application, job application, or insurance claim was affected. The second obligation is disclosure: when a person is interacting with an automated system, they must know it.

Both of these are harder to implement than they appear. Explainability for deep learning systems is an active research area, not a solved engineering problem. Many organizations have deployed large language models or neural networks in customer-facing applications without adequate explainability infrastructure, because the models performed well and the compliance requirement felt abstract. SDAIA's framework makes it concrete.
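One way to bridge the gap between model internals and the explanation an affected person can actually read is to map per-feature score contributions to plain-language reason codes. The sketch below assumes a model that already exposes signed contributions (as linear models do natively, and as attribution tools approximate for deep models); the feature names, messages, and thresholds are illustrative, not anything SDAIA prescribes.

```python
# Sketch: turning per-feature score contributions into plain-language
# reason codes for the person a decision affected. Feature names and
# message text are illustrative assumptions.

def reason_codes(contributions: dict[str, float], top_n: int = 2) -> list[str]:
    """Return human-readable reasons for the features that most
    reduced the applicant's score (most negative contributions first)."""
    messages = {
        "debt_to_income": "Your existing debt is high relative to your income.",
        "credit_history_months": "Your credit history is relatively short.",
        "missed_payments": "Recent missed payments lowered your score.",
    }
    negative = sorted(
        (item for item in contributions.items() if item[1] < 0),
        key=lambda kv: kv[1],  # most negative contribution first
    )
    return [messages.get(name, f"Factor '{name}' reduced your score.")
            for name, _ in negative[:top_n]]

codes = reason_codes({
    "debt_to_income": -0.42,
    "credit_history_months": -0.10,
    "missed_payments": 0.05,
})
```

The design choice worth noting: the mapping from feature to message is maintained by humans, which is exactly where the "understandable to the affected person, not the data scientist" requirement gets enforced.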

Fairness demands that organizations test for bias before deployment and continue monitoring after it. The before-deployment requirement is increasingly standard in mature AI governance programs. The ongoing monitoring requirement is where most organizations fall short. AI systems encounter data in production that differs from training data. Populations shift. Seasonal patterns emerge. A system that was unbiased at launch can develop systematic bias six months later, and no one notices because no one is looking.
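Ongoing monitoring of the kind described above can start very simply: compute outcome rates per group from production decisions and alert when the gap widens. The sketch below uses the "four-fifths" ratio as the alert threshold — a common heuristic, assumed here for illustration rather than taken from SDAIA's guidance.

```python
# Sketch: ongoing fairness monitoring over production decisions.
# Flags when the lowest group approval rate falls below 80% of the
# highest (the "four-fifths" heuristic — an assumed threshold).

from collections import defaultdict

def disparate_impact(decisions: list[tuple[str, bool]], threshold: float = 0.8):
    """decisions: (group_label, approved) pairs from production logs.
    Returns per-group rates, the min/max ratio, and an alert flag."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved  # bool counts as 0/1
    rates = {g: approvals[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio, ratio < threshold  # True means alert

rates, ratio, alert = disparate_impact([
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
])
```

Run on a rolling window of recent decisions, this is the minimal version of "someone is looking": the six-months-later bias the paragraph describes only goes unnoticed when no such job exists.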

Accountability under the Ethics Pillar means that every AI system has an identified owner — not a team, not a department, a named individual — who is responsible for the system's performance and compliance. This is an organizational design requirement as much as a governance one. Many organizations have AI systems deployed without anyone clearly accountable for them. SDAIA's framework makes diffuse accountability unacceptable.

The Reliability Pillar: When AI Systems Fail

The Secure and Reliable pillar addresses a question that is easy to underestimate: what happens when the AI system does not work?

AI systems fail in ways that traditional software does not. A conventional application either processes a transaction or throws an error. An AI system can process a transaction, produce a result, and be confidently wrong in ways that are difficult to detect. It can degrade gradually as the world it was trained on diverges from the world it now operates in. It can behave correctly on average while systematically failing for specific subpopulations in ways that aggregate metrics do not reveal.

SDAIA's reliability requirements address this directly. Organizations must validate AI system performance against documented specifications before deployment — not just overall accuracy, but performance across the subgroups and conditions the system will actually encounter. They must analyze failure modes: how does the system fail, how often, and with what consequences? And for high-stakes applications, they must implement meaningful human oversight, which means human decision-makers who can actually evaluate and override AI outputs, not rubber-stamp workflows that technically involve a human but provide no real check.
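The "performance across subgroups, not just overall accuracy" requirement translates naturally into a deployment gate. A minimal sketch, assuming the organization has written its documented specification as a per-subgroup minimum-accuracy table (the subgroup labels and thresholds here are invented):

```python
# Sketch: pre-deployment validation gate that checks performance per
# subgroup against a documented specification. Subgroups and minimum
# accuracies are illustrative assumptions.

def validate(results: list[tuple[str, bool]], spec: dict[str, float]):
    """results: (subgroup, prediction_correct) pairs from the test set.
    spec: {subgroup: minimum acceptable accuracy}.
    Returns (passes, per-subgroup accuracy); any subgroup below its
    documented minimum blocks deployment."""
    by_group: dict[str, tuple[int, int]] = {}
    for group, correct in results:
        n, hits = by_group.get(group, (0, 0))
        by_group[group] = (n + 1, hits + correct)
    accuracy = {g: hits / n for g, (n, hits) in by_group.items()}
    passes = all(accuracy.get(g, 0.0) >= min_acc for g, min_acc in spec.items())
    return passes, accuracy

passes, accuracy = validate(
    [("urban", True), ("urban", True), ("rural", True), ("rural", False)],
    spec={"urban": 0.90, "rural": 0.90},
)
```

Note that a subgroup missing from the results entirely also fails the gate, which is the correct default: a condition the system will encounter but was never tested on is exactly the failure mode the requirement targets.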

The security dimension of this pillar is often underappreciated by AI teams because it does not map cleanly to traditional cybersecurity. AI systems face threats that conventional applications do not: model extraction attacks, where adversaries reverse-engineer proprietary models through repeated queries; data poisoning, where adversaries corrupt training data to manipulate model behavior; adversarial attacks, where carefully crafted inputs cause models to produce incorrect outputs with high confidence. These are not theoretical risks. They are documented attack vectors with real-world examples.

Organizations whose security programs treat AI systems as just another application — covered by existing firewall and access-control policies — have a gap. SDAIA's framework expects that gap to be closed.

The Data Pillar: The Foundation Everything Else Rests On

AI systems are only as good as the data they are built on. SDAIA's Data Pillar operationalizes this in conjunction with the Personal Data Protection Law, issued in 2021 and in force since 2023, which applies directly to personal data used in AI training and inference.

Data lineage — the documented record of where data came from, how it was collected, what transformations it has undergone, and who has accessed it — is a baseline requirement, not an advanced practice. Organizations that cannot answer basic questions about their training data (What population does this represent? When was it collected? What biases might it contain?) are not in a position to make defensible claims about the AI systems trained on it.
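The lineage questions above map directly onto a record that travels with each training dataset. SDAIA does not prescribe a schema, so the field names below are an assumption — but each field answers one of the questions an organization must be able to answer.

```python
# Sketch: a minimal lineage record attached to every training dataset.
# Field names and example values are illustrative assumptions, not an
# SDAIA-mandated schema.

from dataclasses import dataclass, field

@dataclass
class DatasetLineage:
    dataset_id: str
    source: str                                   # where the data came from
    collected: str                                # when (ISO date range)
    population: str                               # what population it represents
    transformations: list[str] = field(default_factory=list)
    known_biases: list[str] = field(default_factory=list)
    access_log: list[str] = field(default_factory=list)

lineage = DatasetLineage(
    dataset_id="loans-2025-q3",
    source="core banking system, retail loan applications",
    collected="2025-01-01/2025-09-30",
    population="retail applicants, KSA branches only",
    transformations=["dropped rows with missing income",
                     "normalized amounts to SAR"],
    known_biases=["under-represents applicants without prior banking history"],
)
```

The record is deliberately boring. What makes it governance rather than paperwork is that it is created when the dataset is, and updated when the dataset changes, so the answer to "what population does this represent?" never has to be reconstructed after the fact.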

The PDPL imposes purpose limitation on AI data use. Data collected for one purpose cannot be used for AI training without an independent legal basis. This has practical consequences for organizations that have accumulated large proprietary datasets for operational purposes and are now considering using them for AI development. The assumption that internal data is freely available for whatever internal use the organization chooses is legally incorrect under PDPL.

Data minimization — collecting only what is necessary for stated AI objectives — runs against a common instinct in AI development, which is to collect as much as possible and figure out later what is useful. SDAIA's framework pushes organizations toward intentional data architecture rather than opportunistic data accumulation.

For organizations working with government data, SDAIA has established the National Data Bank as an approved mechanism for data access and sharing. Organizations involved in government AI projects should understand how the National Data Bank works and what compliance it requires, because it represents SDAIA's institutional vision for managed, governed data sharing at scale.

What Integration Actually Requires

The three pillars are not independent compliance workstreams. They are designed to operate as an integrated system, and the integration is where implementation typically breaks down.

Ethical AI requires quality representative data — that is a dependency running from the Ethics Pillar into the Data Pillar. The operational systems that monitor deployed AI for performance degradation must also be monitoring for emerging bias — that is a dependency running from the Reliability Pillar back into the Ethics Pillar. Data governance controls that restrict access to training data affect model development timelines — that is a tension that must be managed at the organizational level, not resolved by one team overriding another.

This integration requires a governance structure that spans technical, legal, data, and business functions. Organizations that have siloed their AI governance — ethics over here, security over there, data governance in a third place — will find that the pillars as actually implemented have gaps between them. Those gaps are where incidents happen.

SDAIA has not published a prescriptive maturity model with numbered levels and specific benchmarks as of March 2026. What it has published is principles-based guidance that requires organizations to assess their own situation and demonstrate reasoned implementation. This is more demanding than a checklist, because a checklist can be satisfied on paper. Principles-based compliance requires actual governance.

Where Organizations Are Falling Short

The most common implementation failures are predictable.

The first is deploying AI systems without documented risk classification. SDAIA's framework requires organizations to assess the risk level of each AI system — the potential consequences if it fails, the populations it affects, the degree of human oversight in place. Many organizations have AI in production with no formal risk assessment. When SDAIA asks how the organization determined what level of governance a particular system requires, there is no answer.
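A documented risk classification does not need to be elaborate; it needs to exist and be answerable. The sketch below turns the three factors named above — consequence severity, affected population, degree of human oversight — into a recorded tier. The scoring rubric and tier boundaries are invented for illustration; SDAIA's guidance is principles-based and leaves the rubric to the organization.

```python
# Sketch: a recorded risk-classification rule for AI systems, keyed to
# consequence severity, affected population, and human oversight.
# The rubric (1-3 scores, tier cutoffs) is an illustrative assumption.

from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

def classify(consequence: int, population: int, human_oversight: bool) -> RiskTier:
    """consequence and population are scored 1-3 by the review board;
    meaningful human oversight reduces the effective score by one."""
    score = consequence + population - (1 if human_oversight else 0)
    if score >= 5:
        return RiskTier.HIGH
    if score >= 3:
        return RiskTier.MEDIUM
    return RiskTier.LOW
```

When SDAIA asks how the organization determined a system's governance level, the answer is the function plus the board's recorded scores — a reasoned, reproducible basis rather than silence.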

The second is monitoring that tracks technical performance but not ethical performance. Uptime and response time are not sufficient. Organizations need to be tracking output distributions, detecting drift, testing for bias on an ongoing basis. This is more expensive than conventional application monitoring and requires AI-specific tooling. Many organizations have skipped it.
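Tracking output distributions for drift, as described above, is commonly done with the Population Stability Index: compare the model's score distribution in production against the distribution at validation time. A self-contained sketch follows; the usual 0.2 alert threshold is an industry rule of thumb, not a regulatory value.

```python
# Sketch: Population Stability Index (PSI) between a reference score
# distribution and the current production distribution. PSI near 0
# means stable; values above ~0.2 conventionally trigger review.

import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    lo = min(expected + actual)
    hi = max(expected + actual)
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def histogram(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            counts[min(int((v - lo) / width), bins - 1)] += 1
        n = len(values)
        return [max(c / n, 1e-6) for c in counts]  # avoid log(0)

    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

The same computation applied to input features, not just scores, catches the population shifts and seasonal patterns the paragraph describes before they surface as bias.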

The third is inadequate documentation of the AI lifecycle. SDAIA expects organizations to be able to demonstrate the history of an AI system: how it was developed, what data it was trained on, what ethical review it underwent, what testing it passed, who approved its deployment, what changes have been made since. Many organizations cannot produce this documentation because they never created it.
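The practical fix is an append-only lifecycle log per system, written as events happen rather than assembled for a review. A minimal sketch, assuming nothing beyond the standard library; the event names and actors are illustrative.

```python
# Sketch: an append-only lifecycle log per AI system, so the history
# a regulator asks for exists as a byproduct of normal work.
# Event names and actor identifiers are illustrative assumptions.

import json
from datetime import datetime, timezone

class LifecycleLog:
    def __init__(self, system_id: str):
        self.system_id = system_id
        self.events: list[dict] = []

    def record(self, event: str, actor: str, detail: str) -> None:
        """Append one immutable lifecycle event with a UTC timestamp."""
        self.events.append({
            "system": self.system_id,
            "event": event,
            "actor": actor,
            "detail": detail,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def export(self) -> str:
        """Serialized history, ready for an auditor or regulator."""
        return json.dumps(self.events, indent=2)

log = LifecycleLog("credit-scoring-v2")
log.record("ethics_review", "review-board", "bias testing passed on holdout set")
log.record("deployment_approved", "cro-office", "approved for retail channel only")
```

In production this would append to durable, access-controlled storage rather than a list in memory, but the shape is the point: every question in the paragraph above — what data, what review, what testing, who approved, what changed — becomes a query over events that already exist.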

The Regulatory Direction of Travel

SDAIA has signaled that its governance expectations will increase over time, not decrease. The National Strategy for Data and AI established targets for AI's contribution to the Saudi economy — 12,000 AI graduates per year, significant GDP contribution from the AI sector — that create pressure to develop and deploy AI at scale. But scale without governance is exactly what SDAIA was created to prevent.

Organizations that treat SDAIA compliance as a one-time certification project are misjudging the situation. The framework is living guidance, and the authority has the mandate and the infrastructure to enforce it. The question is not whether SDAIA will scrutinize AI deployments in the Kingdom more closely over time. It is whether organizations will have built the governance infrastructure to respond when they do.

What Good Implementation Looks Like

For technical leaders, good implementation starts in the architecture phase, not the compliance review phase. Systems designed for explainability, built with data lineage tracking, and deployed with monitoring infrastructure are far less expensive to govern than systems retrofitted for compliance after the fact.

For compliance and legal teams, good implementation means building evidence systems that capture governance artifacts continuously, not assembling documentation packages when a regulatory review is imminent. The evidence that SDAIA cares about — ethical assessments, data provenance records, monitoring logs, incident documentation — should exist because the organization runs well, not because someone scrambled to create it.

For boards and executive leadership, good implementation means treating AI governance investment as a strategic requirement rather than an overhead cost. Organizations that participate in Vision 2030-era AI initiatives — government partnerships, national platform access, public-sector contracts — will face governance scrutiny as a condition of participation. The organizations positioned to win those opportunities are the ones that built real governance infrastructure, not compliance theater.

The organizations that will struggle are those still treating SDAIA's three pillars as a framework to acknowledge rather than implement. The regulatory infrastructure is built. The expectations are documented. What remains is whether the organizations operating AI inside Saudi Arabia will meet them.

Published by PeopleSafetyLab — AI safety and governance research for KSA organizations.


Nora Al-Rashidi

AI governance researcher specialising in regulatory compliance for organisations in Saudi Arabia and the GCC. Examines how SDAIA, SAMA, and the NCA's overlapping frameworks interact — what that means for risk, audit, and board-level accountability.
