
Decision Tree: Use‑Case Classification (Quick Reference)

AI Safety Pack Component


Version: v1.0

A visual decision tree for rapid use‑case classification.


ASCII Decision Tree

START: New AI Use‑Case Proposed
│
├─► Q1: What data will be used?
│   │
│   ├─► D3 Restricted (PII, secrets, regulated records)
│   │   │
│   │   └─► Is the tool EXPLICITLY approved for D3?
│   │       │
│   │       ├─► YES → Go to Q2 (Output Exposure)
│   │       │
│   │       └─► NO → ⚠️ PROHIBITED (default)
│   │               Exception possible via EDR + C‑G2 + C‑G3
│   │
│   ├─► D2 Confidential (contracts, pricing, strategy)
│   │   │
│   │   └─► Go to Q2 (Output Exposure)
│   │
│   ├─► D1 Internal (routine internal info)
│   │   │
│   │   └─► Go to Q2 (Output Exposure)
│   │
│   └─► D0 Public (marketing, public content)
│       │
│       └─► Go to Q2 (Output Exposure)
│
├─► Q2: Where do outputs go?
│   │
│   ├─► O2 External Automated (sent by system, no/minimal review)
│   │   │
│   │   ├─► D3 or C2? → ⚠️ PROHIBITED (default)
│   │   │           Exception possible via EDR + C‑G1 + C‑I3
│   │   │
│   │   └─► D0–D2 and C0–C1 → ⚠️ CONDITIONAL
│   │           Requires: C‑G1 + C‑G3 + C‑I3 + automation safeguards
│   │
│   ├─► O1 External Drafted (human reviews before sending)
│   │   │
│   │   └─► Go to Q3 (Decision Criticality)
│   │
│   └─► O0 Internal Only (stays inside org)
│       │
│       └─► Go to Q3 (Decision Criticality)
│
└─► Q3: How critical is the decision?
    │
    ├─► C2 High (safety/rights/eligibility/finance/legal impact)
    │   │
    │   ├─► O2 or D3? → Already handled above
    │   │
    │   └─► O0/O1 with D0–D2 → ⚠️ CONDITIONAL
    │           Requires: C‑G1 + C‑L2 + C‑H1 (if O1) + governance review
    │
    ├─► C1 Medium (influences outcomes; reversible with effort)
    │   │
    │   └─► Go to Q4 (Final Classification)
    │
    └─► C0 Low (convenience/formatting; reversible)
        │
        └─► Go to Q4 (Final Classification)

Q4: FINAL CLASSIFICATION
│
├─► D0–D1 + O0 + C0–C1 → ✅ APPROVED
│       Minimum controls: C‑D2
│
├─► D0–D1 + O1 + C0–C1 → ⚠️ CONDITIONAL
│       Minimum controls: C‑H1, C‑L1, C‑Q1/C‑Q3, C‑I1
│
├─► D2 + O0 + C0–C1 → ⚠️ CONDITIONAL
│       Minimum controls: C‑D1, C‑D2, C‑A1, C‑L1
│
├─► D2 + O1 + any → ⚠️ CONDITIONAL
│       Minimum controls: C‑H1, C‑Q1/C‑Q3, C‑L1, C‑I1
│
└─► D2 + O2 + any → ⚠️ PROHIBITED (default)
        Exception only: C‑G3 + documented rationale + C‑I3

LEGEND:
D = Data sensitivity (D0 Public, D1 Internal, D2 Confidential, D3 Restricted)
O = Output exposure (O0 Internal, O1 External drafted, O2 External automated)
C = Decision criticality (C0 Low, C1 Medium, C2 High)
C‑XX = Control ID (see 04-controls-map.md)
EDR = Exception Decision Record (see 08-exception-decision-record-template.md)
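
The tree is mechanical enough to express in code. Below is a minimal Python sketch, assuming integer codes for the legend's classes; the function name, return shape, and the D3 fallback are illustrative, not part of the pack. Because the questions are evaluated in order, the Q2 branch answers D2 + O2 cases before the stricter Q4 row is reached.

```python
# A minimal sketch, assuming integer codes for the legend's classes
# (d: 0-3 data sensitivity, o: 0-2 output exposure, c: 0-2 criticality).
# Control IDs are the strings defined in 04-controls-map.md.

def classify(d: int, o: int, c: int, tool_approved_for_d3: bool = False) -> dict:
    """Walk Q1-Q4 in order; earlier answers short-circuit later ones."""
    # Q1: D3 on a tool not explicitly approved for D3 is prohibited by default.
    if d == 3 and not tool_approved_for_d3:
        return {"status": "PROHIBITED (default)", "exception": "EDR + C-G2 + C-G3"}

    # Q2: automated external outputs.
    if o == 2:
        if d == 3 or c == 2:
            return {"status": "PROHIBITED (default)", "exception": "EDR + C-G1 + C-I3"}
        return {"status": "CONDITIONAL",
                "controls": ["C-G1", "C-G3", "C-I3", "automation safeguards"]}

    # Q3: high-criticality decisions with internal or drafted outputs.
    if c == 2 and d <= 2:
        controls = ["C-G1", "C-L2"] + (["C-H1"] if o == 1 else []) + ["governance review"]
        return {"status": "CONDITIONAL", "controls": controls}

    # D3 paths that survive Q1 have no row in the Q4 table; per
    # "When to Escalate", D3 always goes to Risk/Compliance. (Assumption.)
    if d == 3:
        return {"status": "ESCALATE", "reason": "D3 requires Risk/Compliance review"}

    # Q4: final classification for the remaining D0-D2 / O0-O1 / C0-C1 cases.
    if d <= 1 and o == 0:
        return {"status": "APPROVED", "controls": ["C-D2"]}
    if d <= 1:  # o == 1
        return {"status": "CONDITIONAL", "controls": ["C-H1", "C-L1", "C-Q1/C-Q3", "C-I1"]}
    if o == 0:  # d == 2
        return {"status": "CONDITIONAL", "controls": ["C-D1", "C-D2", "C-A1", "C-L1"]}
    return {"status": "CONDITIONAL",    # d == 2, o == 1
            "controls": ["C-H1", "C-Q1/C-Q3", "C-L1", "C-I1"]}
```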

Quick Questions (5‑Minute Classification)

Work through these five steps in order:

1. Data Check (30 seconds)

"Will this use-case involve any of the following?"

  • [ ] Individual names + contact info
  • [ ] Government IDs (passport, national ID, etc.)
  • [ ] Health/medical information
  • [ ] Financial account details
  • [ ] Passwords, API keys, credentials
  • [ ] Non‑public contracts or pricing

If any item is checked → D2 or D3 → proceed with caution.
If none are checked → D0 or D1 → lower risk.
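
Per the legend, the first five items fall under D3 Restricted and the last under D2 Confidential, so the check reduces to a small helper. A sketch, with illustrative flag names:

```python
# Mapping follows the legend: PII, IDs, health, financial details, and
# credentials -> D3 Restricted; non-public contracts/pricing -> D2.
RESTRICTED = {"names_contacts", "gov_ids", "health", "financial_accounts", "credentials"}

def data_check(checked: set[str]) -> str:
    if checked & RESTRICTED:
        return "D3"
    if "contracts_pricing" in checked:
        return "D2"
    return "D0/D1"  # split public vs internal by audience, per the legend
```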

2. Output Check (30 seconds)

"Where will the AI output go?"

  • [ ] Stays internal (draft doc, internal chat)
  • [ ] Goes to customer after human review
  • [ ] Goes to customer automatically (or near‑automatically)

O0 (internal) → lower exposure.
O1 (drafted external) → requires C‑H1 (human review).
O2 (automated external) → high scrutiny; default prohibited for D3/C2.
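
The same routing as a lookup (the destination labels are illustrative):

```python
# Notes restate the rules above; keys are hypothetical labels.
OUTPUT_CLASSES = {
    "stays_internal":     ("O0", "lower exposure"),
    "human_reviewed":     ("O1", "requires C-H1 human review"),
    "sent_automatically": ("O2", "high scrutiny; default prohibited for D3/C2"),
}
```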

3. Impact Check (1 minute)

"If the AI makes a mistake, what's the worst that could happen?"

  • [ ] Minor inconvenience; easily fixed → C0
  • [ ] Customer complaint; reversible with effort → C1
  • [ ] Safety incident, rights violation, financial loss, legal liability → C2

C2 → Governance review required; often prohibited by default
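
As a sketch, the worst-case answer maps one-to-one onto a criticality code (the keys are shorthand for the bullets above):

```python
CRITICALITY = {
    "minor_inconvenience":    "C0",
    "reversible_with_effort": "C1",
    "severe_harm":            "C2",  # governance review; often prohibited by default
}
```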

4. Tool Check (30 seconds)

"Is the AI tool on our approved list?"

  • [ ] Yes, and approved for this data class
  • [ ] No, or not approved for this data class

Not approved + D3 → STOP. Do not proceed.
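
The tool check is a hard gate rather than a score. A sketch, assuming the D class from step 1:

```python
def tool_gate(data_class: str, approved_for_class: bool) -> None:
    """Enforces 'Not approved + D3 -> STOP' from the check above."""
    if data_class == "D3" and not approved_for_class:
        raise RuntimeError("STOP: tool not approved for D3 data; do not proceed.")
```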

5. Document (2–3 minutes)

Record the classification:

  • Use‑case name:
  • Data class (D): ___
  • Output exposure (O): ___
  • Decision criticality (C): ___
  • Classification: Approved / Conditional / Prohibited
  • Required controls: [list]
  • Owner: [name]
  • Date: [today]

Store in Use‑Case Register (07-use-case-register-template.md).
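
The record maps naturally onto a small data structure. A sketch of one register row, reusing the classify() sketch from the tree section (field names mirror the form above; the example values are hypothetical):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class UseCaseRecord:
    """One Use-Case Register row; fields mirror step 5."""
    name: str
    data_class: str                 # D0-D3
    output_exposure: str            # O0-O2
    criticality: str                # C0-C2
    classification: str             # Approved / Conditional / Prohibited
    required_controls: list[str] = field(default_factory=list)
    owner: str = ""
    recorded_on: date = field(default_factory=date.today)

# Hypothetical example: drafting customer emails (D2/O1/C1).
result = classify(d=2, o=1, c=1)
record = UseCaseRecord(
    name="Customer email drafting",
    data_class="D2", output_exposure="O1", criticality="C1",
    classification=result["status"],
    required_controls=result.get("controls", []),
    owner="A. Analyst",
)
```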


Common Patterns (Copy/Paste)

| Pattern | Classification | Quick Rationale |
|---|---|---|
| Internal meeting notes | ✅ Approved | D1/O0/C0 — low risk |
| Drafting customer emails | ⚠️ Conditional | D1–D2/O1/C1 — needs review |
| Auto‑sending support replies | ⚠️ Conditional / Prohibited | O2 requires automation safeguards; D2/C2 default prohibited |
| HR resume ranking | ⚠️ Prohibited | D3/C2 — high harm potential |
| Code copilot (internal repos) | ⚠️ Conditional | D2/O0/C1 — secrets risk |
| Contract drafting (legal review) | ⚠️ Conditional | D2/O1/C2 — needs governance review |
| Marketing content (public) | ✅ Approved | D0/O1/C0 — public info, human review before publish |
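
As a sanity check, the classify() sketch above reproduces several of these rows (D/O/C codes taken from the rationale column; the HR row assumes the tool is not approved for D3):

```python
assert classify(d=1, o=0, c=0)["status"] == "APPROVED"             # internal meeting notes
assert classify(d=2, o=1, c=1)["status"] == "CONDITIONAL"          # drafting customer emails
assert classify(d=3, o=0, c=2)["status"].startswith("PROHIBITED")  # HR resume ranking
```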


When to Escalate

Always escalate to Risk/Compliance if:

  1. Data class is D3 (Restricted)
  2. Output is O2 (automated external)
  3. Decision criticality is C2 (high impact)
  4. Tool is not explicitly approved for the data class
  5. Team requests an exception to a Prohibited classification
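
These triggers are straightforward to encode. A sketch, using the same integer codes as earlier (tool_approved and exception_requested are illustrative inputs):

```python
def escalation_reasons(d: int, o: int, c: int,
                       tool_approved: bool, exception_requested: bool) -> list[str]:
    """Returns the triggered rules; an empty list means no escalation needed."""
    reasons = []
    if d == 3:
        reasons.append("D3 Restricted data")
    if o == 2:
        reasons.append("O2 automated external output")
    if c == 2:
        reasons.append("C2 high-impact decision")
    if not tool_approved:
        reasons.append("tool not approved for this data class")
    if exception_requested:
        reasons.append("exception to a Prohibited classification")
    return reasons
```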

Escalation path:

  1. Document the use‑case using this decision tree
  2. Complete preliminary Use‑Case Card (07-use-case-card-template.md)
  3. Submit to Risk Committee with EDR if requesting exception
  4. Do NOT proceed until written approval received

Reference

  • Full matrix: 02a-ai-use-case-matrix.md
  • Control definitions: 04-controls-map.md
  • Use‑Case Card template: 07-use-case-card-template.md
  • Exception process: 08-exception-decision-record-template.md
