# How to use this resource
The AI Risk Knowledge Base is a practitioner reference, not a risk assessment. It helps you understand AI risks in depth, identify applicable controls, and engage with the topic at the level your role requires.
## Who this is for
This resource is designed for everyone involved in AI governance, deployment, and oversight, regardless of seniority or technical background.
- **Executives and board members** — use the Layer 1 executive cards to understand what a risk means, what the consequence of inaction is, and what question to ask about it.
- **Risk managers and compliance leads** — use Layer 2 to understand the risk mechanism, which controls apply, who owns each control, and what "done" looks like before a system goes live.
- **Project managers** — use the controls summary tables in Layer 2 to confirm what must be in place before go-live sign-off, and who is responsible for delivering each control.
- **Security analysts and technical practitioners** — use Layer 3 for actionable control descriptions and Layer 4 for technical implementation with code examples, tool references, and compliance mapping.
## How entries are structured
Every risk entry has four layers. You do not need to read all four — read only to the depth your role requires.
| Layer | Audience | What you get |
|---|---|---|
| 1 — Executive card | Board, executives | Plain English summary, severity, key question to ask, persona-specific hooks |
| 2 — Practitioner overview | Risk, compliance, PMs | Risk mechanism, likelihood drivers, controls summary with owner/effort/done criteria |
| 3 — Controls detail | Risk practitioners, audit | Full control descriptions, KPIs, jurisdiction-specific obligations |
| 4 — Technical implementation | Engineers, security analysts | Code examples, tool references, compliance implementation steps |
## How to navigate
The left sidebar organises all 26 risk entries across 7 domains (A through G). Within each entry, use the on-page table of contents on the right to jump directly to the layer you need.
## How to use this alongside a risk assessment
This taxonomy is a checklist and reference, not a risk assessment methodology. When conducting an AI risk assessment for a specific deployment:
- Use the 26 entries as a checklist — work through each entry and determine whether that risk is relevant to your deployment context.
- For relevant risks, use Layer 2 to understand likelihood drivers and assess which apply.
- Use the controls summary to identify what must be in place before go-live versus what can be addressed post-launch.
- Use Layers 3 and 4 to design and implement the specific controls.
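The triage flow above — mark entries relevant, then split controls into pre-go-live and post-launch work — can be sketched in code. This is a minimal illustration only: the entry IDs, field names, and control names below are hypothetical, not part of the taxonomy.

```python
from dataclasses import dataclass, field

@dataclass
class RiskAssessmentItem:
    """One taxonomy entry assessed against a specific deployment.

    All fields are illustrative — the taxonomy prescribes no schema.
    """
    entry_id: str                                  # e.g. a hypothetical "A1"
    relevant: bool = False                         # does this risk apply here?
    controls_pre_golive: list[str] = field(default_factory=list)
    controls_post_launch: list[str] = field(default_factory=list)

def golive_blockers(assessment: list[RiskAssessmentItem]) -> list[str]:
    """Collect controls that must be in place before go-live sign-off."""
    return [control
            for item in assessment if item.relevant
            for control in item.controls_pre_golive]

# Example triage: one relevant entry, one ruled out for this deployment
items = [
    RiskAssessmentItem("A1", relevant=True,
                       controls_pre_golive=["input filtering"],
                       controls_post_launch=["periodic red-team review"]),
    RiskAssessmentItem("B2", relevant=False),
]
print(golive_blockers(items))  # ['input filtering']
```

Keeping pre-go-live and post-launch controls in separate fields makes the sign-off gate a simple query, while deferred work stays recorded rather than forgotten.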
## Important caveats
Risk ratings in each entry are defaults — starting points for assessment, not prescribed values. A risk rated High in this taxonomy may be Low for a specific deployment, or vice versa, depending on context.
This resource is not legal, regulatory, or professional advice. Framework references are provided to support your own compliance work, not as a substitute for legal review.
## How to contribute
This is an open-source project. If you spot an error, know of a documented incident that should be included, or believe a control is missing or outdated, contributions are welcome.
See the Contributing guide for how to raise an issue or submit a pull request.
## Training module
An interactive scenario-based training module is under development. It will use the scenario seeds embedded in each entry to generate workplace situations across different personas — allowing practitioners to test their understanding of risks and controls in context. Watch the GitHub repository for updates.