
Training Facilitator Guide (AI Safety Pack)

AI Safety Pack Component

PeopleSafetyLab | February 24, 2026 | 5 min read | Intermediate

Version: v1.0. Practical tips for delivering the AI safety training session.

Before the session

Materials needed

  • [ ] Slide deck (based on 06-training-deck-outline.md)
  • [ ] Printed or digital: Approved/Prohibited 1‑pager (02-approved-prohibited-usecases.md)
  • [ ] Quick decision guide (matrix excerpt from 02a-ai-use-case-matrix.md)
  • [ ] Quiz (10 questions from slide 15)
  • [ ] Sign‑in sheet or LMS attendance tracking

Room setup

  • [ ] Projector/screen visible to all
  • [ ] Flip chart or whiteboard for exercise debrief
  • [ ] Optional: breakout space for role‑specific modules

Prep (30 mins)

  1. Read through your org's approved tools list and data classification scheme
  2. Prepare 2–3 org‑specific examples (see below)
  3. Confirm incident reporting channel name/location
  4. Test quiz delivery (digital or paper)

During the session

Opening (5 mins)

  • State the goal: "Use AI productively without causing people harm or data leaks"
  • Set expectation: "This is practical—what you can do tomorrow, not theory"

Making it real: org‑specific examples

Replace generic examples with your own:

| Section | Generic | Org‑specific replacement |
|---|---|---|
| What AI is good at | Drafting emails | "Drafting our weekly client status updates" |
| Hallucination risk | Wrong troubleshooting | [Actual support ticket example, anonymized] |
| Data rules | No PII | "Don't paste client phone numbers into ChatGPT" |
| HITL | Review before send | "All Zendesk drafts need supervisor approval" |

Slide‑by‑slide tips

| Slide | Key point | Facilitator tip |
|---|---|---|
| 3 (Failure modes) | Show real consequences | Mention actual fines or news stories from your industry |
| 6 (Non‑negotiables) | This is policy | Pause; ask: "Any questions about these?" |
| 7 (Approved/Conditional/Prohibited) | Use the 1‑pager | Pass out the 1‑pager; have them highlight their role's uses |
| 8 (Matrix) | Three levers: D, O, C | Use a real use‑case; walk through classification together |
| 9–10 (Data rules) | Be specific | Show examples of what IS allowed vs NOT allowed in your org |
| 11 (HITL) | What review means | Clarify: checking facts, tone, no PII—not just grammar |
| 13 (Reporting) | Channel + 24h | Show the actual channel; practice reporting a near‑miss |
| 14 (Mini exercise) | Make it interactive | Call for votes: Approved? Conditional? Prohibited? |
| 15 (Quiz) | Check understanding | Review answers immediately; clarify wrong ones |

Exercise (Slide 14) — facilitator script

Scenario A: "Draft a customer email using ticket text containing phone number"

  • Ask: "What data class?" (D3 Restricted)
  • Ask: "What exposure?" (O1 External drafted)
  • Ask: "Approved, Conditional, or Prohibited?" (Prohibited in unapproved tool)
  • Debrief: "If the tool is approved with DLP + human review → Conditional"

Scenario B: "Summarize internal meeting notes"

  • Ask: "What data class?" (D1 Internal)
  • Ask: "Approved, Conditional, or Prohibited?" (Approved with approved tool)
  • Debrief: "Verify key facts; avoid restricted data"
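The two scenarios above boil down to a small decision rule, which can help facilitators answer "what if" variants consistently. A sketch in Python, useful as a discussion aid only: the verdicts for the two scenarios match the debriefs above, and every other combination is an assumption, not policy.

```python
def classify(data_class: str, external_output: bool,
             tool_approved: bool, dlp_and_review: bool = False) -> str:
    """Toy decision rule mirroring the two exercise scenarios.

    Verdicts for combinations not covered in the exercise are
    assumptions, not policy.
    """
    if not tool_approved:
        return "Prohibited"              # Scenario A: unapproved tool
    if data_class == "D3":               # Restricted data
        # Scenario A debrief: approved tool + DLP + human review
        return "Conditional" if dlp_and_review else "Prohibited"
    if external_output:
        return "Conditional"             # assumption: external drafts need review
    return "Approved"                    # Scenario B: D1 Internal, approved tool

# Scenario A: ticket text with a phone number (D3), unapproved tool
print(classify("D3", external_output=True, tool_approved=False))      # Prohibited
# Scenario A debrief: approved tool with DLP + human review
print(classify("D3", True, tool_approved=True, dlp_and_review=True))  # Conditional
# Scenario B: internal meeting notes (D1), approved tool
print(classify("D1", external_output=False, tool_approved=True))      # Approved
```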

After the session

Immediate (same day)

  • [ ] Collect attendance/sign‑in
  • [ ] Note questions that were hard to answer (update FAQ)
  • [ ] Send follow‑up: policy link + 1‑pager + incident channel

Within 48 hours

  • [ ] Score quizzes; identify knowledge gaps
  • [ ] Update training materials if questions were consistently missed
  • [ ] Add completion records to training evidence folder
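Scoring quizzes and spotting consistently missed questions can be automated in a few lines. A minimal sketch, assuming a simple question-number-to-letter answer format; the three-question key below is hypothetical (the real quiz on slide 15 has 10 questions).

```python
# Hypothetical 3-question key; the real quiz (slide 15) has 10 questions.
ANSWER_KEY = {1: "B", 2: "A", 3: "C"}

def score(responses: dict[int, str]) -> float:
    """Fraction of questions one attendee answered correctly."""
    correct = sum(1 for q, key in ANSWER_KEY.items() if responses.get(q) == key)
    return correct / len(ANSWER_KEY)

def knowledge_gaps(all_responses: list[dict[int, str]],
                   threshold: float = 0.5) -> list[int]:
    """Questions missed by more than `threshold` of attendees.

    These are the candidates for a training-materials update.
    """
    gaps = []
    for q, key in ANSWER_KEY.items():
        wrong = sum(1 for r in all_responses if r.get(q) != key)
        if wrong / len(all_responses) > threshold:
            gaps.append(q)
    return gaps

attendees = [{1: "B", 2: "A", 3: "A"}, {1: "B", 2: "C", 3: "A"}]
print(knowledge_gaps(attendees))  # [3] -> question 3 needs a materials update
```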

Ongoing

  • [ ] Quarterly: review quiz results for patterns
  • [ ] Quarterly: refresh examples with recent incidents/near‑misses
  • [ ] Annually: full content review + refresh

Handling difficult questions

| Question | Response |
|---|---|
| "Can I use ChatGPT for work?" | "Only if it's on the approved tools list. Check with IT if unsure." |
| "What if I already pasted something I shouldn't?" | "Report it as a near‑miss within 24h. We need to assess, not punish." |
| "The AI is faster than human review." | "Speed doesn't override safety. Conditional uses require review by policy." |
| "Other companies automate this." | "Our risk tolerance is documented. Exceptions require EDR approval." |
| "How do I know what's Restricted?" | "Refer to the data classification guide. When in doubt, treat as Restricted." |

Role‑specific module tips

Support (15 mins)

  • Focus: KB grounding, escalation rules
  • Activity: Review 3 real AI drafts; identify hallucinations
  • Deliverable: Each agent writes one grounding rule for the team

HR (15 mins)

  • Focus: Default prohibitions, bias risks
  • Activity: Discuss what "audit trail" means for your hiring process
  • Deliverable: List 3 decisions that must remain human

Engineering (15 mins)

  • Focus: Secrets handling, code review
  • Activity: Find secrets in sample code snippets
  • Deliverable: Confirm pre‑commit hooks installed

Leadership (15 mins)

  • Focus: Governance cadence, exceptions
  • Activity: Review one EDR template together
  • Deliverable: Confirm Risk Committee schedule

Success indicators

  • Attendance: >90% of target audience within 30 days
  • Quiz pass rate: >85% on first attempt
  • Post‑training incidents: decrease in "user error" category
  • Near‑miss reporting: increase (indicates trust in reporting)
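The two numeric targets above are easy to check automatically once counts are collected. A minimal sketch; the function and parameter names are assumptions, not part of the pack.

```python
def meets_targets(attended: int, target_audience: int,
                  passed_first_attempt: int, took_quiz: int) -> dict[str, bool]:
    """Check the guide's numeric targets: >90% attendance, >85% first-attempt pass."""
    return {
        "attendance": attended / target_audience > 0.90,
        "quiz_pass_rate": passed_first_attempt / took_quiz > 0.85,
    }

print(meets_targets(46, 50, 44, 50))
# {'attendance': True, 'quiz_pass_rate': True}
```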

PeopleSafetyLab

Independent AI safety research for organisations and families in Saudi Arabia and the GCC. All research is editorially independent. PeopleSafetyLab has no consulting clients and does not conduct paid audits.
