BlitzLearnAI
Day 2 of 30 · Domain I — Foundations of AI Governance

AI Risks and Harms — A Governance Taxonomy

Yesterday you learned why AI needs its own governance. Today, you'll learn to categorize the risks AI creates — because you can't govern what you can't classify.

The AIGP exam tests your ability to distinguish between different types of AI risks and map them to appropriate governance responses. Let's build your risk taxonomy.

[Image: AI risk taxonomy showing individual, organizational, and societal risk categories]
AI risks cascade across three levels: individual harms, organizational exposure, and societal impact.

Harms to Individuals and Groups

AI can harm people directly in several ways:

Discrimination and bias — AI systems that systematically disadvantage people based on race, gender, age, disability, or other protected characteristics. A hiring algorithm that screens out women. A lending model that charges higher rates to minorities.

Privacy violations — AI trained on personal data without consent. Facial recognition used for mass surveillance. Generative AI that memorizes and reproduces private information from training data.

Safety risks — Autonomous vehicles causing accidents. Medical AI providing incorrect diagnoses. AI-controlled systems making decisions that endanger physical safety.

Manipulation — AI-generated deepfakes used for fraud. Recommendation algorithms designed to maximize engagement through psychological manipulation. AI-powered social engineering attacks.

Environmental harm — The massive computational resources required to train large AI models contribute to carbon emissions and energy consumption.

Knowledge Check
A hiring algorithm systematically ranks candidates from one demographic lower. This is primarily an example of:
A. Operational risk
B. Privacy risk
C. Misalignment risk
D. Bias and discrimination risk
Correct answer: D.
This is a bias and discrimination risk — the algorithm produces systematically unfair outcomes based on demographic characteristics. While it could also create operational and legal risks for the organization, the primary harm category is bias and discrimination against individuals.

Organizational Risks

AI creates specific risks for the organizations that build or deploy it:

Legal liability — Violations of anti-discrimination laws, privacy regulations, consumer protection statutes, or the EU AI Act can result in lawsuits, fines, and enforcement actions.

Reputational damage — A single AI failure can become a global news story. The reputational cost often exceeds the legal penalties.

Financial risk — Beyond fines, AI failures can cause direct financial losses through incorrect automated decisions, trading errors, or business disruption.

Operational risk — Over-reliance on AI systems that fail, drift, or become unavailable. Shadow AI — employees adopting ungoverned tools in their workflows — compounds the exposure.

Intellectual property risk — AI trained on copyrighted material. Ownership questions around AI-generated content. Trade secrets inadvertently disclosed to AI tools.

Knowledge Check
An employee pastes confidential client contracts into a public generative AI tool to summarize them. Which organizational risk category is MOST directly implicated?
A. Reputational risk
B. Operational risk
C. Bias risk
D. Intellectual property and confidentiality risk
Correct answer: D.
The most direct risk is IP and confidentiality — confidential client information was disclosed to a third-party AI service. While this could also create reputational and operational risks, the primary and most immediate risk is the unauthorized disclosure of confidential information.

Societal Risks

Beyond individuals and organizations, AI poses risks to society at large:

Democratic processes — AI-generated disinformation, deepfakes targeting elections, and algorithmic amplification of polarizing content can undermine democratic institutions.

Labor displacement — AI automation may eliminate jobs faster than new ones are created, particularly affecting certain industries and demographics disproportionately.

Concentration of power — AI development requires massive resources, potentially concentrating technological and economic power in a small number of organizations or nations.

Misalignment and loss of control — As AI systems become more capable, the risk of systems pursuing goals that diverge from human intentions grows. This is the "alignment problem."

Knowledge Check
An AI-powered news recommendation system consistently amplifies sensational and divisive content because it maximizes user engagement. This is primarily an example of which societal risk?
A. Threat to democratic processes and social cohesion
B. Labor displacement
C. Concentration of power
D. Environmental harm
Correct answer: A.
Algorithmic amplification of divisive content directly threatens democratic processes and social cohesion. The system isn't displacing labor or concentrating power — it's actively undermining informed public discourse by optimizing for engagement over accuracy and balance.

Mapping Risks to Governance Responses

The AIGP exam expects you to connect risk categories to appropriate governance actions:

Bias and discrimination risks → Fairness testing, bias audits, representative training data, demographic parity monitoring

Privacy risks → Data protection impact assessments, purpose limitation policies, consent management, anonymization

Safety risks → Red teaming, adversarial testing, human oversight requirements, kill switches

Operational risks → Monitoring frameworks, drift detection, fallback procedures, incident response plans

IP risks → Acceptable use policies, data classification, contractual protections, access controls

This mapping is foundational — you'll use it throughout the rest of this course.
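As a concrete illustration of the bias-audit response above, one widely used screening heuristic is the "four-fifths rule" from US employment-selection guidelines: a group's selection rate below 80% of the highest group's rate is treated as preliminary evidence of disparate impact. The sketch below is a minimal, illustrative implementation; the group names, counts, and 0.8 threshold are assumptions for demonstration, not AIGP exam content.

```python
# Minimal disparate-impact screen using the four-fifths (80%) rule.
# All data below is hypothetical illustration data.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def disparate_impact_ratios(outcomes, threshold=0.8):
    """Compare each group's selection rate to the highest-rate group.

    A group is flagged when its rate falls below `threshold` times
    the reference (highest) rate -- the four-fifths rule heuristic.
    """
    rates = selection_rates(outcomes)
    reference = max(rates.values())
    return {
        g: {
            "rate": round(r, 3),
            "ratio": round(r / reference, 3),
            "flagged": (r / reference) < threshold,
        }
        for g, r in rates.items()
    }

if __name__ == "__main__":
    # Hypothetical lending approvals: (approved, applicants) per group.
    audit = disparate_impact_ratios({
        "group_a": (480, 600),  # 80% approval rate
        "group_b": (330, 600),  # 55% approval rate
    })
    for group, result in audit.items():
        print(group, result)
```

A real bias audit would go further — statistical significance testing, proxy-feature analysis, intersectional breakdowns — but this captures the core idea of the Final Check below: measure the actual disparity before choosing a remediation.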

Real-World Scenario

In 2016, Microsoft launched Tay, an AI chatbot on Twitter designed to learn from conversational interactions with users. Within hours, malicious users exploited Tay's learning mechanism to manipulate it into posting racist, sexist, and inflammatory content. Microsoft took Tay offline roughly 16 hours after launch, but the damage was done — the incident became a global news story and a textbook example of AI risk across all three levels of the taxonomy.

At the individual level, Tay's outputs were harmful and offensive to targeted groups. At the organizational level, Microsoft suffered significant reputational damage and had to publicly apologize. At the societal level, the incident raised fundamental questions about the safety of deploying AI systems that learn from unfiltered public input. The risks that materialized — manipulation, emergent harmful behavior, and reputational fallout — mapped directly to governance responses that were absent: adversarial testing (red teaming), content filtering controls, human oversight mechanisms, and an incident response plan for AI-specific failures.

This case demonstrates why risk taxonomy matters for governance. Had Microsoft mapped the risks before deployment — manipulation risk, content safety risk, reputational risk — it could have implemented proportionate controls. For the AIGP exam, Tay is a classic example of how failing to classify risks leads to failing to govern them.

Final Check
Your organization identifies that its AI lending model may have disparate impact across racial groups. Which governance response is MOST appropriate as the first step?
A. Remove race-related features from the model
B. Deploy the model with a disclaimer about potential bias
C. Shut down the lending model immediately
D. Conduct a bias audit to measure the actual disparate impact across demographic groups
Correct answer: D.
A bias audit is the correct first step — you need to measure the actual impact before deciding on a response. Deploying with a disclaimer doesn't address the harm. Removing features may not eliminate proxy discrimination. Shutting down immediately may be disproportionate if the bias is minor and correctable.
🎯 Day 2 Complete
"AI risks cascade across three levels — individual harms, organizational exposure, and societal impact. You can't govern risks you haven't classified, so build your taxonomy first."
Tomorrow — Day 3
Ethical, Responsible, and Trustworthy AI
Understand the differences between ethical AI, responsible AI, and trustworthy AI — and how principles translate into organizational commitments.

Go Deeper

Want to see these concepts applied to full case studies? Check out AIGP Scenarios — 10 real-world governance simulations mapped to the AIGP exam domains.