Yesterday you learned why AI needs its own governance. Today, you'll learn to categorize the risks AI creates — because you can't govern what you can't classify.
The AIGP exam tests your ability to distinguish between different types of AI risks and map them to appropriate governance responses. Let's build your risk taxonomy.
The first level of the taxonomy is harm to individuals. AI can harm people directly in several ways:
Discrimination and bias — AI systems that systematically disadvantage people based on race, gender, age, disability, or other protected characteristics. A hiring algorithm that screens out women. A lending model that charges higher rates to minorities.
Privacy violations — AI trained on personal data without consent. Facial recognition used for mass surveillance. Generative AI that memorizes and reproduces private information from training data.
Safety risks — Autonomous vehicles causing accidents. Medical AI providing incorrect diagnoses. AI-controlled systems making decisions that endanger physical safety.
Manipulation — AI-generated deepfakes used for fraud. Recommendation algorithms designed to maximize engagement through psychological manipulation. AI-powered social engineering attacks.
Environmental harm — The massive computational resources required to train large AI models contribute to carbon emissions and energy consumption.
The second level is organizational. AI creates specific risks for the organizations that build or deploy it:
Legal liability — Violations of anti-discrimination laws, privacy regulations, consumer protection statutes, or the EU AI Act can result in lawsuits, fines, and enforcement actions.
Reputational damage — A single AI failure can become a global news story. The reputational cost often exceeds the legal penalties.
Financial risk — Beyond fines, AI failures can cause direct financial losses through incorrect automated decisions, trading errors, or business disruption.
Operational risk — Over-reliance on AI systems that fail, drift, or become unavailable. Shadow AI use by employees introduces ungoverned tools into workflows.
Intellectual property risk — AI trained on copyrighted material. Ownership questions around AI-generated content. Trade secrets inadvertently disclosed to AI tools.
The third level is societal. Beyond individuals and organizations, AI poses risks to society at large:
Democratic processes — AI-generated disinformation, deepfakes targeting elections, and algorithmic amplification of polarizing content can undermine democratic institutions.
Labor displacement — AI automation may eliminate jobs faster than new ones are created, with the impact falling disproportionately on certain industries and demographics.
Concentration of power — AI development requires massive resources, potentially concentrating technological and economic power in a small number of organizations or nations.
Misalignment and loss of control — As AI systems become more capable, the risk of systems pursuing goals that diverge from human intentions grows. This is the "alignment problem."
The AIGP exam expects you to connect risk categories to appropriate governance actions:
Bias and discrimination risks → Fairness testing, bias audits, representative training data, demographic parity monitoring (see the first sketch after this list)
Privacy risks → Data protection impact assessments, purpose limitation policies, consent management, anonymization
Safety risks → Red teaming, adversarial testing, human oversight requirements, kill switches (second sketch below)
Operational risks → Monitoring frameworks, drift detection, fallback procedures, incident response plans (third sketch below)
IP risks → Acceptable use policies, data classification, contractual protections, access controls
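To make the first row concrete, here is a minimal sketch of demographic parity monitoring in Python. Everything in it is illustrative: the records are fabricated, and the four-fifths threshold is a heuristic borrowed from US employment practice, not a universal legal standard.

```python
from collections import defaultdict

# Hypothetical decision records: (group, selected) pairs.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Share of positive outcomes per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        positives[group] += int(selected)
    return {g: positives[g] / totals[g] for g in totals}

def four_fifths_check(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the "four-fifths rule" heuristic)."""
    best = max(rates.values())
    return {g: rate / best >= threshold for g, rate in rates.items()}

rates = selection_rates(decisions)
print(rates)                     # {'group_a': 0.75, 'group_b': 0.25}
print(four_fifths_check(rates))  # {'group_a': True, 'group_b': False}
```

A real bias audit would use far larger samples, statistical significance tests, and fairness metrics chosen for the legal context; the point here is only that "demographic parity monitoring" reduces to a check you can run continuously.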
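The safety row is also implementable as code. This sketch shows two of the listed controls, a kill switch and confidence-based human oversight, wrapped around a stand-in model; the class, field names, and thresholds are hypothetical, not drawn from any specific framework.

```python
import random

class GovernedModel:
    """Wrap a model with two safety controls: a kill switch that halts
    automated decisions entirely, and a confidence floor below which
    decisions are routed to a human reviewer."""

    def __init__(self, model, confidence_floor=0.85):
        self.model = model
        self.confidence_floor = confidence_floor
        self.killed = False  # flipped by an operator or a monitoring alert

    def decide(self, case):
        if self.killed:
            return {"decision": None, "route": "halted_by_kill_switch"}
        label, confidence = self.model(case)
        if confidence < self.confidence_floor:
            # Low confidence: defer to human oversight instead of automating.
            return {"decision": None, "route": "human_review", "suggestion": label}
        return {"decision": label, "route": "automated", "confidence": confidence}

def toy_model(case):
    """Stand-in model returning a label and a made-up confidence score."""
    return ("approve", random.uniform(0.5, 1.0))

governed = GovernedModel(toy_model)
print(governed.decide({"applicant_id": 1}))
governed.killed = True  # e.g. triggered after a red-team finding
print(governed.decide({"applicant_id": 2}))
```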
This mapping is foundational — you'll use it throughout the rest of this course.
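Operational risks translate just as directly. Below is a minimal drift-detection sketch using the population stability index (PSI), computed on synthetic data; the thresholds in the comments are industry rules of thumb rather than standards.

```python
import numpy as np

def psi(baseline, live, bins=10):
    """Population Stability Index: a simple heuristic for input drift.
    Bin edges come from the baseline; live values are clipped into range
    and a small epsilon avoids log(0)."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    live = np.clip(live, edges[0], edges[-1])
    b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    l_pct = np.histogram(live, bins=edges)[0] / len(live)
    b_pct, l_pct = b_pct + 1e-6, l_pct + 1e-6
    return float(np.sum((l_pct - b_pct) * np.log(l_pct / b_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)  # feature distribution at validation time
live = rng.normal(0.4, 1.2, 5000)      # same feature in production, shifted

# Common rule-of-thumb thresholds (conventions, not standards):
# PSI < 0.1 stable, 0.1-0.2 worth monitoring, > 0.2 investigate / fall back.
print(f"PSI = {psi(baseline, live):.3f}")
```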
In 2016, Microsoft launched Tay, an AI chatbot on Twitter designed to learn from conversational interactions with users. Within hours, malicious users exploited Tay's learning mechanisms, manipulating it into posting racist, sexist, and inflammatory content. Microsoft took Tay offline roughly 16 hours after launch, but the damage was done — the incident became a global news story and a textbook example of AI risk across all three levels of the taxonomy.
At the individual level, Tay's outputs were harmful and offensive to targeted groups. At the organizational level, Microsoft suffered significant reputational damage and had to publicly apologize. At the societal level, the incident raised fundamental questions about the safety of deploying AI systems that learn from unfiltered public input. The risks that materialized — manipulation, emergent harmful behavior, and reputational fallout — mapped directly to governance responses that were absent: adversarial testing (red teaming), content filtering controls, human oversight mechanisms, and an incident response plan for AI-specific failures.
This case demonstrates why risk taxonomy matters for governance. Had Microsoft mapped the risks before deployment — manipulation risk, content safety risk, reputational risk — it could have implemented proportionate controls. For the AIGP exam, Tay is a classic example of how failing to classify risks leads to failing to govern them.
Want to see these concepts applied to full case studies? Check out AIGP Scenarios — 10 real-world governance simulations mapped to the AIGP exam domains.