
Monitoring sources

The AI risk landscape changes rapidly. The sources below are used to keep this knowledge base current, and they are also recommended for practitioners maintaining their own AI risk programs.

Incident databases

| Source | URL | Cadence | Use |
| --- | --- | --- | --- |
| AI Incident Database (AIID) | incidentdatabase.ai | Weekly | Primary source for real-world AI incidents. Subscribe to monthly digest. |
| MIT AI Incident Tracker | airisk.mit.edu | Monthly | Severity-classified tracker with harm taxonomy linked to MIT Risk Repository. |
| OECD AI Incidents Monitor | oecd.ai/en/incidents | Monthly | Cross-jurisdiction incident monitoring with policy context. |
| Stanford HAI AI Index | aiindex.stanford.edu | Annually (March) | Comprehensive annual survey of AI incidents, regulation, and safety developments. |

Regulatory and standards bodies

| Source | URL | Cadence | Use |
| --- | --- | --- | --- |
| NIST AI RMF and AI 600-1 | nist.gov/itl/ai-risk-management-framework | As published | Foundational US voluntary framework. Monitor for profile updates and IR 8596 final release. |
| EU AI Office | digital-strategy.ec.europa.eu/ai-act | Monthly | Implementation guidance, GPAI Code of Practice, enforcement updates. Critical for EU AI Act tracking. |
| APRA | apra.gov.au | Monthly | CPS 230 guidance, AI-related speeches and supervisory statements. |
| ASIC Digital Finance | asic.gov.au/digital-finance | Monthly | AI governance reviews and enforcement actions in Australian financial services. |
| ISO/IEC JTC 1/SC 42 | iso.org/committee/6794475.html | As published | ISO 42001 (AI Management Systems), ISO 42005 (Impact Assessment), ISO 23894 (AI Risk). |
| DISR AI Safety | industry.gov.au/ai-safety | Monthly | Australian AI Safety Standards, VAISS guardrails, voluntary AI governance initiatives. |

Security and adversarial AI

| Source | URL | Cadence | Use |
| --- | --- | --- | --- |
| MITRE ATLAS | atlas.mitre.org | Quarterly | Adversarial threat landscape for AI/ML. Track new tactics and techniques. |
| OWASP LLM Top 10 | owasp.org/www-project-top-10-for-large-language-model-applications | Annually | LLM-specific vulnerability list. Current version: 2025. |
| NIST Cyber AI Profile IR 8596 | csrc.nist.gov | As published | AI-specific cybersecurity controls. December 2025 draft; monitor for final release. |
| SANS AI Security | sans.org | As published | Practical security guidance for AI systems. |

Academic and research

| Source | URL | Cadence | Use |
| --- | --- | --- | --- |
| MIT AI Risk Repository | airisk.mit.edu | Quarterly | Living database of 1,700+ categorised AI risks. Track version updates. |
| Anthropic / DeepMind / OpenAI Safety Research | Various | As published | Frontier AI safety research. Alignment Forum for broader community research. |
| ArXiv cs.AI / cs.LG | arxiv.org | Weekly (curated) | Pre-print research on AI risks, safety, alignment, and governance. |

Industry and professional bodies

| Source | URL | Cadence | Use |
| --- | --- | --- | --- |
| IAPP AI Governance Centre | iapp.org | Weekly | Privacy and AI governance practitioner community. Regulatory roundups. |
| ISACA AI Governance | isaca.org/topics/artificial-intelligence | Monthly | COBIT for AI, practitioner guidance, audit frameworks. |
| Partnership on AI | partnershiponai.org | Monthly | Multi-stakeholder responsible AI research and guidance. |

Australian-specific

| Source | URL | Cadence | Use |
| --- | --- | --- | --- |
| ACSC AI Security Guidance | cyber.gov.au | As published | ACSC guidance on AI security. Practical operational security. |
| OAIC AI and Privacy | oaic.gov.au | As published | OAIC guidance on AI and privacy obligations under the Privacy Act. |
| Tech Council of Australia | techcouncil.com.au | Monthly | Australian industry perspective on AI governance and workforce implications. |

Automation

The knowledge base automation engine checks these sources on a defined schedule and generates proposed content updates for human review. See the automation configuration for the full maintenance schedule.
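The per-source cadences above lend themselves to a simple due-date check. A minimal sketch of one way such a schedule could be represented, assuming a Python-based engine (the `Source` class, field names, and interval mapping here are illustrative assumptions, not the actual automation configuration):

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative cadence-to-interval mapping; the real configuration may differ.
CADENCE_DAYS = {"weekly": 7, "monthly": 30, "quarterly": 90, "annually": 365}

@dataclass
class Source:
    name: str
    url: str
    cadence: str        # a key of CADENCE_DAYS, or "as_published"
    last_checked: date

    def is_due(self, today: date) -> bool:
        # "As published" sources are event-driven (notifications/feeds),
        # so they are excluded from interval-based polling.
        if self.cadence == "as_published":
            return False
        return today - self.last_checked >= timedelta(days=CADENCE_DAYS[self.cadence])

sources = [
    Source("AI Incident Database", "incidentdatabase.ai", "weekly", date(2025, 1, 1)),
    Source("MITRE ATLAS", "atlas.mitre.org", "quarterly", date(2025, 1, 1)),
]

due = [s.name for s in sources if s.is_due(date(2025, 1, 20))]
# 19 days since last check: the weekly source is due, the quarterly one is not.
```

In this sketch, sources flagged as due would be fetched and diffed, with proposed updates queued for human review as described above.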