AI bias is systematic error that produces unfair outcomes for specific groups. Detection requires statistical fairness metrics (disparate impact, equalised odds, demographic parity), and mitigation spans pre-processing, in-processing, and post-processing techniques.
AI bias occurs when an AI system produces outputs that systematically favour or disadvantage certain groups. The NIST Special Publication 1270 ("Towards a Standard for Identifying and Managing Bias in Artificial Intelligence", March 2022) categorises AI bias into three types: systemic bias, statistical and computational bias, and human-cognitive bias.
Famous incidents include Amazon's scrapped hiring tool (2018), ProPublica's COMPAS investigation (2016), and Apple Card credit-limit disparities (2019).
| Metric | Formula | When to Use |
|---|---|---|
| Demographic Parity | P(pred=1 \| A=0) = P(pred=1 \| A=1) | When base rates should be equal |
| Disparate Impact | P(pred=1 \| A=0) / P(pred=1 \| A=1) >= 0.8 | EEOC "four-fifths rule" |
| Equalised Odds | Equal TPR and FPR across groups | When label accuracy matters |
| Equal Opportunity | Equal TPR across groups | When false negatives harm |
| Calibration | Predicted probability = actual outcome | Risk scoring (recidivism, credit) |
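The table's metrics can be computed directly from predictions. A minimal pure-Python sketch, using an illustrative toy dataset with two protected groups (A=0 and A=1); the `group_rates` helper and the data are assumptions for demonstration, not part of any library:

```python
def group_rates(y_true, y_pred, groups, g):
    """Selection rate, TPR and FPR for the subgroup where groups == g."""
    idx = [i for i, a in enumerate(groups) if a == g]
    sel = sum(y_pred[i] for i in idx) / len(idx)          # P(pred=1 | A=g)
    pos = [i for i in idx if y_true[i] == 1]
    neg = [i for i in idx if y_true[i] == 0]
    tpr = sum(y_pred[i] for i in pos) / len(pos) if pos else float("nan")
    fpr = sum(y_pred[i] for i in neg) / len(neg) if neg else float("nan")
    return sel, tpr, fpr

# Illustrative toy data: first four examples are group 0, last four are group 1.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]

sel0, tpr0, fpr0 = group_rates(y_true, y_pred, groups, 0)
sel1, tpr1, fpr1 = group_rates(y_true, y_pred, groups, 1)

print("demographic parity gap:", abs(sel0 - sel1))
print("disparate impact ratio:", min(sel0, sel1) / max(sel0, sel1))  # flag if < 0.8
print("equalised odds gaps (TPR, FPR):", abs(tpr0 - tpr1), abs(fpr0 - fpr1))
```

Note that on this toy data the demographic parity gap is zero while the equalised odds gaps are not, which illustrates why the two metrics answer different questions.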
| Tool | Maintainer | Best For |
|---|---|---|
| AIF360 | IBM / LF AI | 70+ fairness metrics, end-to-end pipeline |
| Fairlearn | Microsoft | Tabular data, disparity dashboards |
| What-If Tool | Google PAIR | Visual counterfactual analysis |
| Aequitas | University of Chicago | Bias audits for public policy |
| Facets | Google PAIR | Visual feature-distribution analysis |
| Themis-ML | Cornell | Integration with scikit-learn |
Amazon (2018) — Internal resume-screening AI down-weighted resumes containing "women's" (e.g., "women's chess club"); tool was scrapped.
Apple Card (2019) — New York DFS investigated alleged gender-based credit-limit disparities; Goldman Sachs (issuer) responded with process changes.
Dutch SyRI (2020) — The Hague District Court struck down the System Risk Indication welfare-fraud AI for violating ECHR Article 8.
UK A-level Algorithm (2020) — Ofqual's grading algorithm downgraded disadvantaged students; withdrawn after public outcry.
Every production AI system in 2026 needs a documented fairness assessment. Regulators from the FTC to the European Data Protection Board explicitly cite bias audits as evidence of compliance. The EEOC's 2023 technical assistance and OFCCP's 2024 AI hiring guidance treat disparate impact analysis as non-negotiable.
Q: Can all types of bias be eliminated? No — fairness metrics are often mathematically incompatible. Choose metrics tied to the harm you want to prevent.
Q: What is the "four-fifths rule"? EEOC's disparate-impact threshold: the selection rate for any group should be at least 80% of the rate for the group with the highest selection rate.
Q: Is AIF360 free? Yes — Apache 2.0 licensed, maintained by LF AI & Data.
Q: Does differential privacy reduce bias? Not directly — it protects privacy but can exacerbate bias for small subgroups.
Q: Is fairness regulated? Yes — EU AI Act Article 10, Colorado AI Act, EEOC guidance, UK Equality Act 2010, and India DPDP Act all apply.
Q: What is fairness-aware training? Training-time techniques (e.g., adversarial debiasing, reweighing) that constrain the model to reduce disparate outcomes.
Q: Should we use demographic parity or equalised odds? Demographic parity for allocation decisions where base rates should be equal; equalised odds when labels are trustworthy and base rates may legitimately differ between groups.
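Reweighing, mentioned above as a fairness-aware training technique, assigns each training example a weight so that group membership and label become statistically independent in the weighted data; the model then trains on these sample weights. A minimal sketch of the Kamiran-Calders weighting scheme, with illustrative toy data:

```python
from collections import Counter

def reweighing(groups, labels):
    """Kamiran-Calders reweighing: w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y)."""
    n = len(labels)
    count_a = Counter(groups)                  # marginal counts of group
    count_y = Counter(labels)                  # marginal counts of label
    count_ay = Counter(zip(groups, labels))    # joint counts
    return [
        (count_a[a] / n) * (count_y[y] / n) / (count_ay[(a, y)] / n)
        for a, y in zip(groups, labels)
    ]

# Toy data: group 0 is mostly labelled positive, group 1 mostly negative.
groups = [0, 0, 0, 1, 1, 1]
labels = [1, 1, 0, 0, 0, 1]
weights = reweighing(groups, labels)
# Over-represented (group, label) pairs get weights below 1,
# under-represented pairs get weights above 1.
```

AIF360 ships this technique as a ready-made pre-processing transformer, so in practice you would use the library rather than hand-rolling it.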
Bias audits are the new regulatory floor. Teams that embed fairness testing into CI/CD pipelines ship AI that courts, regulators, and customers trust.
Start your fairness audit with Misar AI's Bias Audit Kit — AIF360 and Fairlearn preloaded.