Jurisdiction Notes (Appendix)

AI Safety Pack Component

PeopleSafetyLab | February 24, 2026 | 6 min read | intermediate


Version: v1.0

Guidance for adapting the AI Safety Pack to different regulatory environments.

Disclaimer: This pack provides operational guidance, not legal advice. Consult qualified legal counsel for jurisdiction‑specific compliance.

Universal core (applies everywhere)

The following principles are jurisdiction‑neutral and should be maintained:

  1. Human accountability for AI‑supported decisions
  2. Data classification and protection (especially PII/secrets)
  3. Human‑in‑the‑loop for external/high‑impact outputs
  4. Incident reporting and response
  5. Governance and exception handling

European Union (EU AI Act)

Key implications

  • Prohibited AI practices: Some uses banned outright (social scoring, manipulation, real‑time biometric ID in public)
  • High‑risk systems: Specific conformity requirements (risk management, data governance, transparency, human oversight, accuracy)
  • General‑purpose AI models: Obligations for systemic risk models

Pack adaptations

| Pack element | Adaptation |
|---|---|
| Use‑case matrix | Add "EU AI Act classification" column (Prohibited / High‑risk / Limited‑risk / Minimal‑risk) |
| Risk register | Include "Fundamental rights impact" assessment for high‑risk uses |
| Controls | C‑G1 (governance) must include CE marking / conformity documentation for high‑risk systems |
| Training | Add EU AI Act overview module; emphasize prohibited practices |
| Records | Maintain 10‑year technical documentation for high‑risk systems |
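The "EU AI Act classification" column can be carried directly in a machine-readable use-case register. A minimal sketch of what that might look like, assuming a Python-based register; all names here are hypothetical illustrations, not part of the pack:

```python
from dataclasses import dataclass
from enum import Enum


class EUAIActTier(Enum):
    """Risk tiers from the use-case matrix classification column."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"
    MINIMAL_RISK = "minimal-risk"


@dataclass
class UseCase:
    name: str
    tier: EUAIActTier

    def is_allowed(self) -> bool:
        # Prohibited practices (e.g. social scoring) are banned outright.
        return self.tier is not EUAIActTier.PROHIBITED

    def requires_conformity_docs(self) -> bool:
        # Per control C-G1, high-risk systems need CE marking /
        # conformity documentation.
        return self.tier is EUAIActTier.HIGH_RISK


chatbot = UseCase("internal support chatbot", EUAIActTier.MINIMAL_RISK)
scoring = UseCase("social scoring", EUAIActTier.PROHIBITED)
```

Keeping the tier as an enum rather than free text makes the 10-year record-keeping and conformity checks scriptable during audits.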

References

  • EU AI Act (Regulation 2024/1689)
  • European Artificial Intelligence Board guidance

United States (sector‑specific)

Key implications

No comprehensive federal AI law yet; sector‑specific rules apply:

  • Healthcare: HIPAA considerations for AI processing PHI
  • Financial services: Fair Credit Reporting Act, ECOA for credit/lending decisions
  • Employment: EEOC guidance on AI and discrimination
  • Consumer protection: FTC Act Section 5 (unfair/deceptive practices)

Pack adaptations

| Sector | Adaptation |
|---|---|
| Healthcare | Add HIPAA Business Associate Agreement requirements to vendor due diligence (C‑V1) |
| Financial | Add "Adverse action notice" requirements to HR/finance controls (C‑H3) |
| Employment | Strengthen bias testing (C‑Q2) and audit trails (C‑L2); document "disparate impact" assessments |
| All | Emphasize FTC transparency requirements; document AI use in consumer‑facing decisions |

References

  • NIST AI Risk Management Framework (voluntary)
  • EEOC AI guidance (2023)
  • FTC guidance on AI (2023, 2024)

United Kingdom

Key implications

  • UK GDPR and Data Protection Act 2018 apply to AI processing personal data
  • UK AI White Paper principles (safety, transparency, fairness, accountability, contestability)
  • Sector regulators (FCA, ICO) issuing AI guidance

Pack adaptations

| Pack element | Adaptation |
|---|---|
| Data controls | Reference UK GDPR lawful basis (consent, legitimate interests, etc.) |
| Risk register | Include "Data subject rights" impact (right to explanation, objection) |
| Controls | C‑G2 (privacy review) references ICO AI guidance and DPIA requirements |
| Training | Include UK‑specific examples; reference ICO AI and data protection risk toolkit |

References

  • UK GDPR (retained EU law)
  • ICO AI and data protection risk toolkit
  • UK AI White Paper (2023)

Saudi Arabia / GCC Region

Key implications

  • Personal Data Protection Law (PDPL) effective 2023
  • SDAIA AI Ethics Principles
  • Data localization considerations
  • Islamic ethics considerations (Maqasid al‑Shariah)

Pack adaptations

| Pack element | Adaptation |
|---|---|
| Data controls | Emphasize PDPL compliance; data localization requirements |
| Vendor due diligence | Add "Data residency / local hosting" requirement (C‑V1) |
| Risk register | Include "SDAIA AI Ethics Principles" alignment assessment |
| Training | Add module on SDAIA principles; use Arabic/English bilingual materials where appropriate |
| Governance | Consider Shariah compliance for finance/insurance use cases |

References

  • Saudi Personal Data Protection Law (PDPL)
  • SDAIA AI Ethics Principles
  • National Data Management Office (NDMO) standards

Singapore

Key implications

  • Personal Data Protection Act (PDPA) applies to AI processing personal data
  • IMDA AI governance framework (voluntary)
  • MAS FEAT principles for financial services

Pack adaptations

| Pack element | Adaptation |
|---|---|
| Data controls | Reference PDPA consent and purpose limitation requirements |
| Financial services | Add MAS FEAT principles (Fairness, Ethics, Accountability, Transparency) |
| Risk register | Include PDPA data breach notification requirements (72 hours to PDPC) |
| Training | Reference IMDA AI governance framework; use local case studies |

References

  • PDPA (Singapore)
  • IMDA AI Governance Framework
  • MAS FEAT Principles

Australia

Key implications

  • Privacy Act 1988 (Notifiable Data Breaches scheme)
  • OAIC AI guidance
  • Sector‑specific: APRA CPS 234 (financial), TGA (medical devices)

Pack adaptations

| Pack element | Adaptation |
|---|---|
| Data controls | Reference Australian Privacy Principles (APPs) |
| Risk register | Include NDB scheme trigger assessment |
| Financial | APRA CPS 234 information security requirements |
| Training | Reference OAIC AI guidance and case studies |

References

  • Privacy Act 1988 (Cth)
  • OAIC AI guidance
  • APRA CPS 234

Cross‑border data flows

Considerations

  • Data transfer mechanisms (SCCs, adequacy decisions, certification)
  • Local data residency requirements
  • Lawful access by foreign governments

Pack adaptations

| Pack element | Adaptation |
|---|---|
| Vendor due diligence (C‑V1) | Add "Data transfer mechanisms" checklist item |
| Data controls (C‑D1/C‑D2) | Specify allowed regions for data processing/storage |
| Risk register | Include "Cross‑border data access" risk scenario |
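The "allowed regions" adaptation for C‑D1/C‑D2 can be enforced as a simple allowlist keyed by data classification. A minimal sketch, assuming a Python enforcement hook; the data classes and region names are illustrative assumptions, not values from the pack:

```python
# Hypothetical allowlist for data controls C-D1/C-D2: which regions may
# process or store each data classification. Values are examples only.
ALLOWED_REGIONS = {
    "pii": {"eu-west-1", "eu-central-1"},      # e.g. personal data kept in-region
    "public": {"eu-west-1", "us-east-1"},      # public data may flow more widely
}


def transfer_permitted(data_class: str, region: str) -> bool:
    """Return True if data of this classification may be processed in `region`.

    Unknown classifications default to denied (fail closed), which matches
    the most-restrictive-by-default posture described below.
    """
    return region in ALLOWED_REGIONS.get(data_class, set())
```

A check like this could run in vendor onboarding (C‑V1) or as a pre-deployment gate; the fail-closed default means an unclassified dataset cannot cross a border silently.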

Multi‑jurisdictional organizations

Approach

  1. Establish a "baseline" policy using this pack
  2. Add jurisdiction‑specific addenda as needed
  3. Use the most restrictive applicable standard as default
  4. Document which jurisdictions apply to each use‑case
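The baseline-plus-addenda approach above, with the most restrictive standard winning by default, can be sketched as a policy merge. All keys and numbers below are hypothetical illustrations (the EU 10-year figure echoes the high-risk documentation requirement noted earlier):

```python
# Hypothetical baseline policy and per-jurisdiction addenda.
# Numbers are illustrative only, not legal requirements.
BASELINE = {"retention_years": 5, "human_review": True}

ADDENDA = {
    "EU": {"retention_years": 10},        # e.g. 10-year docs for high-risk systems
    "SG": {"breach_notice_hours": 72},    # e.g. PDPC notification window
}


def effective_policy(jurisdictions):
    """Merge baseline with addenda, keeping the most restrictive value."""
    policy = dict(BASELINE)
    for j in jurisdictions:
        for key, value in ADDENDA.get(j, {}).items():
            if key not in policy:
                policy[key] = value
            elif isinstance(value, (int, float)):
                # Most restrictive: longer retention wins; shorter
                # deadlines win for notice windows.
                if "retention" in key:
                    policy[key] = max(policy[key], value)
                else:
                    policy[key] = min(policy[key], value)
            else:
                # Booleans: a requirement imposed anywhere stays on.
                policy[key] = policy[key] or value
    return policy
```

Step 4 of the approach (documenting which jurisdictions apply per use case) then reduces to recording the `jurisdictions` list passed to this merge.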

Template addendum structure

```markdown
## [Jurisdiction] Addendum

Applies to: [locations / data subjects]
Additional requirements:
- [Specific requirement 1]
- [Specific requirement 2]

Local contact: [name / role]
```

Keeping current

| Jurisdiction | Review cadence | Source to monitor |
|---|---|---|
| EU | Quarterly | European Commission AI Office |
| US | Quarterly | FTC, EEOC, sector regulators |
| UK | Quarterly | ICO, DSIT |
| Saudi Arabia | Quarterly | SDAIA, NDMO |
| Singapore | Quarterly | PDPC, IMDA, MAS |
| Australia | Quarterly | OAIC, APRA |


PeopleSafetyLab

Expert in AI Safety and Governance at PeopleSafetyLab. Dedicated to building practical frameworks that protect organizations and families, ensuring ethical AI deployment aligned with KSA and international standards.
