People‑Harm Risk Register (AI)
Version: v1.0
How to use (10 minutes)
- Pick a use‑case (or create a Use‑Case Card from 07-use-case-card-template.md).
- Add/adjust risks below to match your workflow.
- Score Likelihood (L) and Impact (I) from 1–5.
- Define controls (reference control IDs from 04-controls-map.md).
- Track evidence and residual risk at a set cadence (e.g., quarterly).
Scoring guidance (simple)
Likelihood (1–5):
- 1 = rare
- 3 = possible
- 5 = likely (weekly/daily)
Impact (1–5):
- 1 = minor inconvenience
- 3 = material harm to individuals (financial loss, privacy breach) or significant customer impact
- 5 = severe harm (safety incident, rights violation, major regulatory/PR impact)
Risk rating (suggested): L×I
- 1–5 Low
- 6–12 Medium
- 13–25 High
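The scoring above can be sketched in a few lines. This is a minimal illustration of the suggested L×I bands (the function name is ours, not part of the template):

```python
def rate(likelihood: int, impact: int) -> tuple[int, str]:
    """Return the L×I score and its suggested band (Low/Medium/High)."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("L and I must each be 1-5")
    score = likelihood * impact
    if score <= 5:
        band = "Low"
    elif score <= 12:
        band = "Medium"
    else:
        band = "High"
    return score, band

print(rate(3, 4))  # R1's inherent score: (12, 'Medium')
```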
Register template
| ID | Use‑case | Risk scenario (what can go wrong) | People impacted | Trigger / failure mode | L | I | Rating | Detection (how you notice) | Controls (IDs) | Owner | Evidence / artifacts | Residual rating |
|---|---|---|---|---|---:|---:|---:|---|---|---|---|---:|
| R1 | Support reply drafting | Hallucinated guidance causes customer harm | customers | model answers beyond KB; reviewer misses it | 3 | 4 | 12 | QA sampling; complaint review | C‑Q1, C‑H1, C‑L1, C‑I1, C‑I3 | Support/Risk | weekly QA report; incident log; kill‑switch/rollback runbook | 6 |
| R2 | Support reply drafting | Sensitive data leakage in outbound response | customers | agent pastes ticket with PII; response echoes PII | 3 | 5 | 15 | DLP alerts; output scans; customer complaints | C‑D2, C‑D1, C‑L1, C‑I1 | IT/Sec | DLP logs; policy link | 8 |
| R3 | HR screening | Discrimination / bias in screening outcomes | candidates | model ranks candidates using proxies | 3 | 5 | 15 | audit samples; fairness metrics | C‑H2, C‑G1, C‑Q2, C‑L2 | HR/Risk | quarterly bias review + EDR if exception | 10 |
| R4 | HR screening | Lack of explainability / contestability | candidates | decision rationale missing or opaque | 3 | 4 | 12 | audit; complaints | C‑G1, C‑L2 | HR | decision logs | 6 |
| R5 | Engineering copilot | Secret leakage to vendor/tool | org/customers | keys pasted into prompts; repo contains secrets | 4 | 5 | 20 | secret scanning; egress monitoring | C‑D3, C‑D1, C‑A1, C‑L1 | Eng/IT | scan reports; key rotation tickets | 10 |
| R6 | Exec comms drafting | Misinformation / false claims | public/customers | unverified stats or quotes | 3 | 4 | 12 | approvals; fact‑check checklist | C‑H1, C‑Q3, C‑L1 | Comms/Legal | approval logs | 6 |
| R7 | Fraud / impersonation | AI used to generate phishing or impersonation | employees/public | malicious use or compromised account | 2 | 5 | 10 | SOC detection; user reports | C‑A2, C‑L1, C‑I2 | Security | incident tickets | 5 |
| R8 | Customer analytics | Privacy violation via re‑identification | customers | small cohort analysis reveals identity | 2 | 5 | 10 | privacy review | C‑G2, C‑D2 | Data Gov | DPIA / review doc | 5 |
| R9 | Knowledge base ops | IP / copyright exposure | creators/org | tool reproduces licensed text improperly | 2 | 3 | 6 | legal review; vendor term review | C‑V2, C‑G1 | Legal | vendor terms; guidance | 3 |
| R10 | Any | Over‑reliance / automation bias | customers/employees | humans stop checking; rubber‑stamping | 4 | 3 | 12 | QA; training checks | C‑T1, C‑Q1, C‑H1 | Risk/HR | training records; QA reports | 6 |
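If you maintain the register in a spreadsheet export, two consistency checks are worth automating: Rating should equal L×I, and the residual rating should not exceed the inherent one. A minimal sketch, assuming each row is a tuple of (ID, L, I, Rating, Residual) — the field order and sample rows are illustrative, taken from R1 and R5 above:

```python
rows = [
    # (id, likelihood, impact, rating, residual)
    ("R1", 3, 4, 12, 6),
    ("R5", 4, 5, 20, 10),
]

def validate(rows):
    """Return a list of human-readable problems; empty means consistent."""
    problems = []
    for rid, l, i, rating, residual in rows:
        if rating != l * i:
            problems.append(f"{rid}: rating {rating} != L*I ({l * i})")
        if residual > rating:
            problems.append(f"{rid}: residual {residual} exceeds inherent {rating}")
    return problems

assert validate(rows) == []  # the sample rows are consistent
```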
Notes
- Add risks specific to your industry (healthcare, fintech, logistics, public sector).
- Link each use‑case to at least one monitoring metric (hallucination rate, escalation rate, DLP hits, complaint volume).
- When a risk involves D3 data or C2 decisions, default to Prohibited unless governance has documented the exception (EDR).
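The metric-linking note above can be made concrete with a simple mapping from use-case to metric thresholds. Everything here is a hypothetical placeholder (metric names, limits, and use-case keys) to show the shape, not recommended values:

```python
# Hypothetical per-use-case alert thresholds.
monitoring = {
    "Support reply drafting": {"hallucination_rate": 0.02, "complaint_volume": 5},
    "Engineering copilot": {"dlp_hits": 0},
}

def breaches(use_case: str, observed: dict) -> list[str]:
    """Return the metrics whose observed value exceeds the threshold."""
    limits = monitoring.get(use_case, {})
    return [m for m, limit in limits.items() if observed.get(m, 0) > limit]

print(breaches("Support reply drafting", {"hallucination_rate": 0.05}))
# -> ['hallucination_rate']
```

A breach would then feed the Detection column of the register and trigger the cadence review described in "How to use".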