Each entry covers four layers: a plain-English executive card; a practitioner overview with controls ownership and go-live criteria; a detailed set of actionable controls; and a technical implementation guide with code examples. Built for use alongside your own risk assessments — not as a substitute for them.
Hallucination, model drift, robustness failures, explainability gaps.
4 risk entries
B — Governance
Accountability gaps, regulatory compliance, lifecycle governance, supply chain.
4 risk entries
C — Security & Adversarial
Data poisoning, prompt injection, model theft, deepfakes, AI-enabled attacks.
5 risk entries
D — Data
Training data quality, privacy and data protection, intellectual property.
3 risk entries
E — Fairness & Social
Algorithmic bias and discrimination, harmful content, misinformation.
3 risk entries
F — HCI & Deployment
Automation bias, shadow AI, scope creep beyond intended use.
3 risk entries
G — Systemic & Macro
Concentration risk, environmental impact, workforce displacement, AI safety.
4 risk entries
Open source under the MIT licence. Content is provided for informational purposes only and is not legal, regulatory, or professional advice.
Basis: MIT AI Risk Repository · NIST AI RMF 1.0 & AI 600-1 · EU AI Act · OWASP LLM Top 10 · Documented AI incidents.