The AI Incident Database (AIID) and the OECD AI Incidents Monitor (AIM) now catalogue more than 3,000 real-world AI harms, making incident data one of the fastest inputs to responsible-AI risk assessment in 2026.
An AI incident is a situation in which the development, deployment, or use of an AI system results in actual harm to people, property, or the environment. The AI Incident Database (incidentdatabase.ai) was launched in 2020 by Sean McGregor and the Partnership on AI. The OECD AI Incidents Monitor (oecd.ai/en/incidents) launched in 2023 and harmonises its classification with the OECD AI Principles.
ISO/IEC TR 5469:2024 provides functional-safety guidance for AI systems, while Article 73 of the EU AI Act requires providers of high-risk AI systems to report serious incidents.
| Category | Example |
|---|---|
| Bias and discrimination | Amazon hiring AI down-weighting women (2018) |
| Autonomous-vehicle safety | Uber ATG fatal crash, Tempe AZ (2018) |
| Misidentification | Robert Williams wrongful arrest (2020, Detroit) |
| Content moderation failure | YouTube recommending extremist content |
| Healthcare AI error | UnitedHealth nH Predict denials (2023 lawsuit) |
| Financial AI discrimination | Apple Card gender disparities (2019) |
| Deepfake fraud | Arup HKD 200M deepfake transfer (2024) |
| LLM hallucination | Air Canada chatbot liability (2024) |
| Copyright infringement | Stability AI Getty litigation (2023-2025) |
| Privacy breach | ChatGPT title-history leak (2023) |
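AIID publishes downloadable snapshots of its incident data, so a quick way to see which harm categories dominate is to tally an export. A minimal sketch, assuming a CSV with a `category` column (the real snapshot schema differs, so adjust the field name to your export):

```python
import csv
from collections import Counter

def tally_by_category(path: str) -> Counter:
    # Count incidents per harm category in a CSV export.
    # Assumes a 'category' column; adapt to your snapshot's schema.
    with open(path, newline="", encoding="utf-8") as f:
        return Counter(row["category"] for row in csv.DictReader(f))
```

The same tally, grouped by year or deployer instead of category, is a cheap first pass when prioritising which failure modes your own risk assessment should cover.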
| Regulation | Trigger | Deadline |
|---|---|---|
| EU AI Act Art. 73 | Serious incident in high-risk AI | 15 days (2 days if widespread; 10 days for a death) |
| US state consumer-protection laws | Varies | Varies |
| India DPDP Act | Personal data breach | 72 hours to Data Protection Board |
| China Generative AI Measures | Illegal content | 24 hours |
| UK DPA 2018 | Personal data breach | 72 hours to ICO |
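These windows can be encoded as a simple deadline calculator. A minimal sketch; the regime keys are illustrative labels (not official identifiers), the baseline windows come from the table, and note that Art. 73 applies shorter clocks for widespread incidents and deaths:

```python
from datetime import datetime, timedelta

# Baseline reporting windows; regime keys are illustrative labels.
# EU AI Act Art. 73 shortens the window for widespread incidents
# and deaths, so treat the 15-day figure as the outer bound only.
REPORTING_WINDOWS = {
    "eu_ai_act_art73": timedelta(days=15),
    "india_dpdp": timedelta(hours=72),
    "uk_dpa_2018": timedelta(hours=72),
    "china_genai_measures": timedelta(hours=24),
}

def reporting_deadline(regime: str, detected_at: datetime) -> datetime:
    # Latest permissible notification time for the given regime.
    return detected_at + REPORTING_WINDOWS[regime]
```

Wiring this into an incident-triage workflow means the clock starts at detection time, not at the time legal review concludes, which is how most regimes count.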
Uber ATG (Tempe, 2018) — Self-driving prototype killed pedestrian Elaine Herzberg. NTSB investigation found operator and system design failures.
Robert Williams (Detroit, 2020) — Wrongful arrest after a facial-recognition misidentification. The ACLU case became a reference point for face-recognition moratoria.
nH Predict (UnitedHealth, 2023) — Class action alleges AI tool with 90%+ error rate was used to deny Medicare Advantage claims.
Air Canada Chatbot (BC, 2024) — Civil Resolution Tribunal held airline liable for misinformation about bereavement fares.
Arup deepfake (Hong Kong, 2024) — HKD 200M transferred after deepfake CFO video call.
In 2026, incident management is a core responsible-AI capability. Teams should monitor AIID and OECD AIM for incidents involving comparable systems, map incident severity to internal risk tiers, know which regulatory reporting clocks apply to them, and update their incident response plans after every event.
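A capability like this usually starts with a shared incident record. A minimal sketch as a dataclass; every field name here is hypothetical, not drawn from any standard schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AIIncident:
    # Minimal internal triage record; all field names are illustrative.
    system: str
    detected_at: datetime
    harm_category: str                  # e.g. "bias", "safety", "privacy"
    serious: bool = False               # starts regulator reporting clocks
    regulators_notified: list[str] = field(default_factory=list)
    public_entry_filed: bool = False    # e.g. an AIID submission
```

Keeping `detected_at` and `serious` as first-class fields makes it straightforward to compute which regulatory deadlines in the table above have been triggered.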
Q: What is a serious incident under the EU AI Act? A: An incident that leads to death, serious damage to a person's health, serious damage to property or the environment, or serious and irreversible disruption of critical infrastructure.
Q: Is AIID peer-reviewed? A: No; incidents are community-submitted and editorially reviewed by the Responsible AI Collaborative.
Q: Is OECD AIM government data? A: It is maintained by the OECD AI Policy Observatory with government and multistakeholder inputs.
Q: Can an incident report be confidential? A: Reports to regulators can be confidential, and public AIID entries can be submitted anonymously.
Q: How do incidents map to risk tiers? A: Use incident severity (fatality, financial loss, privacy breach) to inform the NIST AI RMF MEASURE and MANAGE functions.
Q: Does the FTC require incident reporting? A: There is no dedicated AI incident rule, but Section 5 enforcement often follows publicised incidents.
Q: How often should incident response plans (IRPs) be updated? A: At least annually, and after every incident.
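The severity-to-tier mapping described above can be sketched as a small classifier. Every threshold here is hypothetical and should be calibrated to your own risk appetite before feeding the result into MEASURE/MANAGE:

```python
def risk_tier(fatality: bool = False,
              financial_loss_usd: float = 0.0,
              privacy_breach: bool = False) -> str:
    # Coarse mapping from incident severity signals to a risk tier.
    # All thresholds are illustrative, not drawn from any standard.
    if fatality:
        return "critical"
    if privacy_breach or financial_loss_usd >= 1_000_000:
        return "high"
    if financial_loss_usd > 0:
        return "medium"
    return "low"
```

Applied to the case studies above, the Uber ATG fatality lands in the top tier, the Arup deepfake transfer in the next, which matches the intuition that fatalities dominate any financial threshold.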
Incident data is the cheapest risk-management input available. Read it, learn from it, and contribute to it.
Wire incident response into your AI stack with Misar AI's IRP template.