
Approved & Prohibited AI Use‑Cases (Executive 1‑Pager)

AI Safety Pack Component

PeopleSafetyLab | February 24, 2026 | 2 min read | intermediate


Version: v1.0

What this is

A quick, executive‑readable view of which AI use‑cases are Approved, Conditional, and Prohibited.

  • For intake/implementation details, use: 02a-ai-use-case-matrix.md
  • For control definitions, use: 04-controls-map.md

Approved (low risk; examples)

Allowed when using approved tools and handling no Restricted data.

  • Drafting internal emails and summaries
  • Translating non‑sensitive content
  • Brainstorming/outlining documents
  • Creating internal training materials (manager review where needed)

Conditional (allowed only with controls)

Allowed only after classification via the matrix and implementation of required controls.

  • Customer support reply drafting (human review + QA sampling + logging)
  • Data analysis on internal data (approved tool; access control + logging)
  • Code assistance on internal repos (secrets protection + repo access controls)
  • Policy/process drafting (legal/risk review; version control)

Prohibited (default)

  • Uploading/pasting Restricted data (PII/secrets/regulated) into unapproved AI tools
  • Fully automated HR decisions (hire/fire/promote/comp) or candidate ranking using personal data
  • Medical/legal/financial advice delivered to customers without approved playbooks and required human review
  • Generating instructions for hazardous work without validated SOPs
  • Deceptive content, impersonation, misinformation, or fraud

Minimum required controls (for any Conditional use‑case)

(Control IDs are defined in 04-controls-map.md; an illustrative register entry follows the list below.)

  • Approved tools + access control (C‑D1, C‑A1)
  • Logging & monitoring baseline (C‑L1)
  • Human‑in‑the‑loop for external outputs (C‑H1)
  • Quality checks / QA sampling (C‑Q1 or C‑Q3)
  • Incident reporting channel + triage (C‑I1)
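
For illustration only, a Conditional use‑case entry in the Use‑Case Register could record these controls as structured data. The field names, the "D2" classification, the owner, and the review date below are assumptions for the sketch; the pack does not prescribe a register schema.

```python
# Hypothetical Use-Case Register entry for a Conditional use-case.
# Control IDs match 04-controls-map.md; all other fields are illustrative.
register_entry = {
    "use_case": "Customer support reply drafting",
    "status": "Conditional",
    "data_class": "D2",              # assumed (non-Restricted) classification from 02a-ai-use-case-matrix.md
    "required_controls": [
        "C-D1", "C-A1",              # approved tools + access control
        "C-L1",                      # logging & monitoring baseline
        "C-H1",                      # human-in-the-loop for external outputs
        "C-Q1",                      # quality checks / QA sampling
        "C-I1",                      # incident reporting channel + triage
    ],
    "owner": "Support Operations",   # illustrative
    "next_review": "2026-08-24",     # illustrative
}
```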

Escalation

If a use‑case touches D3 Restricted data, O2 automated external outputs, or C2 high‑impact decisions, route it through Risk/Compliance review using 02a-ai-use-case-matrix.md (and document the decision in the Use‑Case Register).
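
A minimal sketch of this escalation rule, assuming each use‑case carries its classification tags from 02a-ai-use-case-matrix.md; the function name and data shape are illustrative, not a prescribed API.

```python
# Illustrative sketch of the escalation rule above.
# Tag codes (D3, O2, C2) come from 02a-ai-use-case-matrix.md;
# the function and inputs are assumptions, not part of the pack.

ESCALATION_TAGS = {"D3", "O2", "C2"}  # Restricted data, automated external outputs, high-impact decisions

def requires_risk_review(use_case_tags: set[str]) -> bool:
    """Return True if the use-case must be routed to Risk/Compliance review."""
    return bool(ESCALATION_TAGS & use_case_tags)

# Example: a use-case classified with automated external outputs (O2)
print(requires_risk_review({"D2", "O2"}))  # True -> route to review, record decision in the Use-Case Register
```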

