
SDAIA's Three Pillars: The Architecture of AI Trust in Saudi Arabia

PeopleSafetyLab | March 10, 2026 | 10 min read

The conference room lights in Riyadh burned late into the night. In early 2023, a group of policymakers and AI experts gathered around a table, confronting a question most of the world was still dodging: how to build an actually workable framework for governing artificial intelligence—not another declaration of principles destined for a filing cabinet.

The answer was surprisingly concrete. Not ten principles, not voluntary guidelines, but three pillars: data governance, AI ethics, and technical capability building. These three pillars constitute the Saudi Data and AI Authority's (SDAIA) structural framework for AI governance, and they represent one of the most mature AI governance implementations in the Middle East.

For organizations deploying AI systems in Saudi Arabia, this isn't abstract theory—this is actual compliance reality.

The First Pillar: Data Governance as Foundation

Data governance is the most mature of the three pillars, partly because it has the Personal Data Protection Law (PDPL) as its legal backbone. Since the PDPL came into force in September 2023, organizations processing Saudi citizen data have faced a clear set of legal obligations: a lawful basis for processing, data minimization, storage limitation, and cross-border transfer rules.

But SDAIA's data governance pillar is not merely "comply with PDPL." It constructs a more complete vision: data quality as a prerequisite for AI reliability.

There's a paradox here: organizations often start thinking about data quality after they've collected massive datasets, and AI systems precisely amplify the consequences of that neglect. A model trained on biased data doesn't "self-correct"—it industrializes bias, replicating it faster and more efficiently into every decision.

SDAIA's data governance pillar requires organizations to answer three questions before deploying AI:

Data quality assurance: Is the data accurate, complete, and timely? Is there clear data lineage tracking?

Data access control: Who can access training data? Is access logged and audited?

Data lifecycle management: How long is data retained? How is it securely deleted?

For practical compliance, this means documentation, audit trails, and—most critically—Data Protection Impact Assessments (DPIAs) for specific high-risk scenarios. While PDPL doesn't mandate DPIAs for all AI projects, SDAIA's guidelines strongly recommend one: any AI system involving automated decision-making, large-scale data processing, or sensitive data categories should complete the assessment.

From an implementation perspective, this means:

  • Establishing data governance committees with clear data management responsibilities
  • Deploying data lineage tools to track data from collection through model training
  • Regularly auditing data quality metrics, incorporating data quality into AI model performance evaluation
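The data quality auditing step above can be sketched in code. This is a minimal illustration, not an SDAIA-specified tool: the field names (`collected_at`), the metrics chosen (completeness, timeliness), and the thresholds are all assumptions for the example.

```python
from datetime import datetime, timezone

def audit_data_quality(records, required_fields, max_age_days=365):
    """Compute simple completeness and timeliness metrics for a dataset.

    `records` is a list of dicts. The `collected_at` field name and the
    age threshold are illustrative, not mandated values.
    """
    now = datetime.now(timezone.utc)
    total = len(records)
    complete = 0
    timely = 0
    for rec in records:
        # Completeness: every required field is present and non-empty.
        if all(rec.get(f) not in (None, "") for f in required_fields):
            complete += 1
        # Timeliness: the record was collected recently enough.
        ts = rec.get("collected_at")
        if ts is not None and (now - ts).days <= max_age_days:
            timely += 1
    return {
        "completeness": complete / total if total else 0.0,
        "timeliness": timely / total if total else 0.0,
    }
```

Metrics like these become useful when they feed the model evaluation loop: a model retrained on a dataset whose completeness score dropped should trigger review, not silent redeployment.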

The cross-border dimension deserves special attention. PDPL restricts the transfer of personal data outside Saudi Arabia unless the recipient country provides adequate protection or appropriate safeguards are in place. For AI systems trained on Saudi citizen data—particularly those using cloud-based machine learning platforms—this creates a compliance puzzle. Organizations must either ensure data residency within the Kingdom, obtain explicit consent for transfer, or rely on approved contractual mechanisms. The National Data Management Office maintains a list of countries deemed to provide adequate protection, but as of early 2024, that list remains limited, making contractual safeguards the practical path for most international AI deployments.
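The transfer options just described can be expressed as a simple decision helper. This is a deliberately simplified sketch of the reasoning, not legal advice: real PDPL analysis involves more conditions, and the priority order shown is an assumption for illustration.

```python
def transfer_mechanism(destination_adequate: bool,
                       explicit_consent: bool,
                       contractual_safeguards: bool) -> str:
    """Pick a (simplified) basis for moving personal data abroad.

    Mirrors the options described in the text: adequacy decision,
    approved contractual mechanisms, explicit consent, or keeping
    the data in-Kingdom. Illustrative only.
    """
    if destination_adequate:
        return "adequacy"
    if contractual_safeguards:
        return "contractual safeguards"
    if explicit_consent:
        return "explicit consent"
    return "keep data in-Kingdom"
```

Even a toy helper like this makes one point concrete: the default outcome, when no mechanism applies, is data residency, which is why cloud architecture decisions should be made with this decision tree in hand.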

The Second Pillar: AI Ethics Without the Fluff

AI ethics is the pillar most prone to empty rhetoric, but SDAIA's framework makes a wise choice here: transforming ethical principles into auditable requirements.

Not "AI should be fair" but "model outputs must be tested for bias against protected groups"; not "AI should be transparent" but "high-risk AI systems must provide human-understandable explanations of decision logic."

The core ethical requirements can be distilled into four dimensions:

Fairness and non-discrimination: Model outputs must not unfairly differentiate based on protected characteristics like gender, race, religion, or disability. This requires bias detection and mitigation measures during model development.

Transparency and explainability: High-risk AI decisions—such as credit assessment, employment screening, medical diagnosis—must be explainable to affected individuals.

Accountability: There must be a clearly identified "human responsible" for AI systems; decision-making responsibility cannot be outsourced to algorithms.

Privacy protection: AI systems must comply with PDPL's privacy requirements, including data minimization and purpose limitation principles.
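The fairness dimension becomes auditable once it is reduced to a metric. One common choice is the demographic parity gap: the difference in positive-outcome rates between protected groups. This is an illustrative metric, not one SDAIA mandates, and in practice organizations would evaluate several such metrics.

```python
def demographic_parity_gap(outcomes, groups):
    """Largest difference in positive-outcome rates across groups.

    `outcomes` are 0/1 model decisions; `groups` gives each decision's
    protected-attribute value. A gap near 0 suggests parity on this
    metric; large gaps flag the model for bias review.
    """
    rates = {}
    for g in set(groups):
        selected = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(selected) / len(selected)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]
```

A check like this belongs in the model development pipeline as a gate, run on held-out data before any deployment sign-off.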

A concrete example: when a bank deploys an AI credit scoring system in Saudi Arabia, it must be able to explain to a rejected loan applicant exactly which factors led to the rejection. This isn't a "nice to have"—it's a compliance requirement. In practice, this means choosing interpretable models (decision trees, linear models) or equipping complex models (deep learning) with explanation layers (SHAP, LIME).
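The interpretable-model route above can be sketched with a linear scorer whose per-feature contributions are directly reportable to a rejected applicant. The feature names, weights, and threshold here are hypothetical, chosen only to show the shape of an explainable decision.

```python
def explain_decision(weights, bias, applicant, threshold=0.0):
    """Score an applicant with a linear model and rank each feature's
    contribution, so a rejection can be explained factor by factor.

    `weights` and `applicant` map hypothetical feature names to values.
    """
    # Each feature's contribution is its weight times its value.
    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = bias + sum(contributions.values())
    # Most negative contributions first: these drove the rejection.
    ranked = sorted(contributions.items(), key=lambda kv: kv[1])
    return {
        "approved": score >= threshold,
        "score": score,
        "top_negative_factors": [f for f, c in ranked if c < 0][:3],
    }
```

For genuinely complex models, post-hoc tools like SHAP or LIME approximate this same per-feature attribution, but the compliance question is identical: can the system name the factors behind a specific adverse decision?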

The implementation challenge for this pillar isn't understanding the principles—it's embedding them into organizational processes:

  • Introducing ethics review stages during model development
  • Establishing AI ethics committees with cross-functional participation (technical, legal, business)
  • Creating pre-deployment ethics impact assessment checklists for high-risk AI systems

One pattern we've observed in our work with Saudi organizations: the ethics requirement often conflicts with procurement reality. A business unit purchases an AI solution from an international vendor, only to discover that the "black box" model cannot meet explainability requirements. The vendor's intellectual property claims collide with SDAIA's transparency mandates. This isn't a theoretical concern—it's a procurement issue that should be addressed during vendor evaluation, not after deployment. Organizations procuring AI systems should include explainability and auditability as contractual requirements, not optional features.

The Third Pillar: Technical Capability Building

The third pillar is in some ways the most forward-looking: it acknowledges that governance capability must grow in parallel with AI capability.

SDAIA doesn't just set rules—it's building an ecosystem that supports compliance. This includes:

The National Data Management Office (NDMO): Operating under SDAIA, it sets national data management standards and provides compliance guidance and consultation.

AI Capability Centers: Providing organizations with AI assessment tools, best practice guidelines, and technical support.

Talent Development Programs: Training AI professionals in governance capabilities, not just technical skills.

For organizations, this pillar means two things:

First, don't go it alone. Use the resources SDAIA provides—guidelines, tools, consulting services—to build compliance capabilities. Many organizations reinvent the wheel on AI governance when there are existing frameworks and tools available.

Second, invest in internal capability building. Compliance isn't a one-time activity outsourced to legal counsel—it's an ongoing capability. This means training data scientists to understand ethical requirements, training business teams to identify AI risks, and training legal teams to understand technical realities.

A pragmatic implementation path:

  • Designating an AI governance lead with clear responsibilities and authority
  • Regularly participating in training and certification programs organized by SDAIA and the National Cybersecurity Authority (NCA)
  • Building internal knowledge bases to document AI governance practices and lessons learned

The talent gap is real, and it's structural. Saudi Arabia's National AI Strategy aims to train 20,000 data and AI specialists by 2030, but current supply falls short of demand. For organizations, this means competing for scarce talent while simultaneously building internal capabilities. The practical approach is layered: hire or designate a governance lead who understands both the regulatory landscape and technical realities, then systematically upskill existing teams. The alternative—relying entirely on external consultants—creates dependency and fails to build the institutional knowledge that makes compliance sustainable.

The Tension: Compliance in Practice

These three pillars are clear on paper but full of tension in practice.

Data governance requires complete data lineage, but many organizations' legacy systems simply cannot provide this visibility. AI ethics requires explainability, but the most advanced models (large language models) are precisely the hardest to explain. Capability building takes time, but market competition demands rapid AI deployment.

This tension isn't a bug—it's a feature. The difficulty of governance is itself a filter: it ensures that only organizations taking AI risks seriously will reap AI's benefits.

For organizations, the strategy for managing this tension is phased compliance:

Phase One (Foundation): Complete PDPL basic requirements, establish data governance infrastructure, inventory AI systems and risk levels.

Phase Two (Advanced): Implement ethics review processes for high-risk AI systems, deploy model monitoring and auditing tools, establish internal AI governance committees.

Phase Three (Continuous Improvement): Regularly audit and update governance frameworks, participate in industry collaboration and standard-setting, transform governance capability into competitive advantage.
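The Phase One inventory step can be sketched as a risk-tiering pass over a register of AI systems. The tiering rules below are an illustrative simplification, not SDAIA's official risk taxonomy; real classification would weigh more factors.

```python
def risk_tier(system):
    """Assign an illustrative risk tier to an AI system record.

    Automated decisions affecting individuals land in the high tier,
    echoing the high-risk examples in the text (credit, employment,
    medical). Rules are assumptions for this sketch.
    """
    if system.get("automated_decisions") and system.get("affects_individuals"):
        return "high"
    if system.get("processes_personal_data"):
        return "medium"
    return "low"

# A toy register with hypothetical system names.
systems = [
    {"name": "credit-scoring", "automated_decisions": True,
     "affects_individuals": True, "processes_personal_data": True},
    {"name": "warehouse-forecast", "processes_personal_data": False},
]
inventory = {s["name"]: risk_tier(s) for s in systems}
```

The output of this pass determines which systems enter the Phase Two ethics review and monitoring pipeline first, which is what makes the inventory the foundation of phased compliance rather than a paperwork exercise.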

The PSL Angle: From Framework to Implementation

SDAIA's three-pillar framework answers the "what to govern" question but doesn't answer the "how to govern" question. This is precisely where PeopleSafetyLab's services fill the gap.

Gap Assessment: We translate the three-pillar framework into concrete checklists, helping organizations identify gaps between current practices and compliance requirements. This isn't a one-time "compliance audit" but an ongoing "governance health check."

Implementation Roadmap: We help organizations develop transition plans from current state to target state, prioritizing high-risk areas and allocating resources rationally.

Capability Building: We provide training and workshops to help organizational teams understand governance requirements and develop autonomous compliance capabilities.

Ongoing Support: Governance isn't a project—it's a practice. We provide continuous consulting support to help organizations navigate emerging regulatory requirements and technical challenges.

Our value proposition is simple: don't let governance become a barrier to innovation—let it become innovation's guardrail. The three-pillar framework exists not to stop AI deployment but to ensure AI deployment generates real value—not just legal risk.

This isn't just messaging. We've seen organizations approach compliance as a checkbox exercise, completing minimum requirements and moving on. Six months later, they're scrambling when an audit reveals gaps, or when a model drift produces unexpected outputs, or when a regulatory interpretation shifts. The organizations that treat governance as a one-time project end up paying more—financially and reputationally—than those that invest in building genuine capability.

The Deeper Question

SDAIA's three-pillar framework is a technical governance tool, but it also reflects a deeper question: in a world where AI capabilities are rapidly diffusing, who ensures AI serves human welfare?

This isn't a Saudi-specific problem. The EU has the AI Act, the US has the AI Bill of Rights blueprint, China has generative AI regulations. Every jurisdiction is trying to answer the same question.

But Saudi Arabia's answer has something unique: it's neither as rule-centric as the EU nor as market-centric as the US. It attempts to build an ecosystem: legal framework (PDPL), regulatory body (SDAIA), technical capability (National AI Strategy), and—most critically—enforcement will.

This ecosystem approach is worth watching. The EU's AI Act, while comprehensive, risks creating a compliance industry without necessarily improving AI outcomes. The US approach, relying on voluntary commitments and sectoral regulation, leaves gaps that determined actors can exploit. Saudi Arabia is attempting something different: a coordinated system where law, regulator, and capability-builder work in concert.

Whether this works in practice remains to be seen. Regulatory capture, resource constraints, and the sheer pace of AI development all pose challenges. But the framework itself—the three pillars—is structurally sound. It addresses the key dimensions: data as input, ethics as constraint, capability as enabler.

The three-pillar framework is just this ecosystem's skeleton. The real test is whether it can shape the trajectory of AI development in practice—whether it can make "responsible AI" move from slogan to organizational culture, whether compliance can transform from cost center to competitive advantage.

For organizations deploying AI in Saudi Arabia, the question isn't whether to comply with this framework, but how to translate it into sustainable practice. The answer isn't more documents—it's capability building: technical, organizational, and cultural.

The three pillars are there. Climbing them is each organization's own choice.


PeopleSafetyLab

Expert in AI Safety and Governance at PeopleSafetyLab. Dedicated to building practical frameworks that protect organizations and families, ensuring ethical AI deployment aligned with KSA and international standards.
