BlitzLearnAI
Day 4 of 30 · Domain I — Foundations of AI Governance

Building an AI Governance Program — Roles and Accountability

You've learned what AI governance is, why it matters, and the principles behind it. Now the question becomes: who does what?

The AIGP exam heavily tests governance roles, accountability structures, and operating models. Knowing who is responsible for what — and how escalation works — is fundamental to every domain on the exam.

[Figure: AI governance organizational chart showing board, committees, model owners, and teams.]
A well-designed governance structure has clear accountability from the board level down to individual model owners.

AI Governance Operating Models

Organizations typically adopt one of three operating models:

Centralized — A single AI governance team or office makes all governance decisions, sets all policies, and approves all AI deployments. Works well for organizations early in AI adoption or with limited AI use cases.

Federated — Individual business units have their own AI governance processes, guided by enterprise-wide principles and standards. Works for large organizations with diverse AI use cases across different domains.

Hybrid — A central AI governance office sets policies, standards, and minimum requirements, while business units implement and adapt them for their specific contexts. This is the model most commonly recommended by governance frameworks.

The hybrid model balances consistency (central standards) with agility (business unit autonomy). The AIGP exam tends to favor the hybrid model as best practice.

Knowledge Check
An AI model produces a harmful outcome in production. Under a well-designed governance framework, who is primarily accountable?
A. The data scientist who built the model
B. The model owner designated in the governance framework
C. The end user
D. The IT department
Correct answer: B. In a well-designed governance framework, the designated model owner bears primary accountability. While the data scientist and IT department have responsibilities, accountability is assigned through governance roles, not informal attribution. End users are not accountable for governance failures in the systems they use.

Key Governance Roles

Every AI governance program needs clearly defined roles:

AI Governance Officer / Head of AI Governance — Sets the overall governance strategy, owns the governance framework, reports to senior leadership. Comparable to a Chief Privacy Officer but focused on AI.

AI Ethics Committee / Board — Cross-functional advisory body that reviews AI principles, evaluates complex cases, and provides ethical guidance. Typically includes diverse perspectives: legal, technical, business, ethics, and external stakeholders.

AI Risk Committee — Focused specifically on risk assessment and management. Reviews risk assessments, approves risk tolerances, and monitors risk metrics. May overlap with or be separate from the ethics committee.

Model Owner — Accountable for a specific AI system throughout its lifecycle. Owns the deployment decision, ongoing monitoring, and incident response for their model. This is a critical role — the model owner is the single point of accountability.

Model Developer / Data Scientist — Builds and trains the model. Responsible for technical quality, testing, and documentation. Reports to the model owner on governance compliance.

Data Steward — Ensures data quality, governance, and compliance for training and operational data. Manages data lineage, access controls, and purpose limitations.

The RACI Matrix for AI Governance

The AIGP exam may test your ability to assign governance responsibilities. The RACI matrix is the standard tool:

R — Responsible — Who does the work?

A — Accountable — Who owns the outcome? (Only one person per decision)

C — Consulted — Who provides input before a decision?

I — Informed — Who is told after a decision?

Example for an AI deployment decision:

- Responsible: AI development team (prepares documentation, conducts testing)

- Accountable: Model owner (makes the go/no-go decision)

- Consulted: Legal, compliance, ethics committee, affected business units

- Informed: Executive leadership, risk committee, affected stakeholders
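The single-Accountable rule lends itself to an automated check. Here is a minimal sketch (the dictionary layout and function name are illustrative, not from any standard tool) that represents the deployment decision above as a RACI entry and flags any decision that does not have exactly one Accountable party:

```python
# Illustrative RACI matrix: one entry per governance decision.
# Names and structure are assumptions for this sketch, not a standard schema.
raci = {
    "ai_deployment_decision": {
        "Responsible": ["AI development team"],
        "Accountable": ["Model owner"],
        "Consulted": ["Legal", "Compliance", "Ethics committee", "Affected business units"],
        "Informed": ["Executive leadership", "Risk committee", "Affected stakeholders"],
    }
}

def find_accountability_gaps(matrix):
    """Return decisions that violate the 'exactly one Accountable' rule."""
    return [
        decision
        for decision, roles in matrix.items()
        if len(roles.get("Accountable", [])) != 1
    ]

print(find_accountability_gaps(raci))  # [] -> every decision has one Accountable owner
```

A check like this makes the diffusion-of-responsibility failure mode (zero or multiple Accountable parties) visible before a decision is executed, rather than after an incident.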

Knowledge Check
In a RACI matrix for AI governance decisions, which role should always be limited to exactly ONE person or entity per decision?
A. Informed
B. Responsible
C. Accountable
D. Consulted
Correct answer: C. The Accountable role must be limited to one person or entity per decision. If multiple parties are accountable, no one is truly accountable — this leads to diffusion of responsibility. Multiple people can be Responsible (doing the work), Consulted, or Informed.

Ethics Committee vs. Risk Committee

The AIGP exam distinguishes between these two bodies:

AI Ethics Committee:

- Advisory role — provides guidance, not binding decisions

- Reviews ethical implications of AI use cases

- Evaluates edge cases and novel scenarios

- Comprises diverse stakeholders including external voices

- Meets periodically to review principles and emerging issues

AI Risk Committee:

- Decision-making authority over risk acceptance

- Reviews and approves risk assessments

- Sets risk tolerances and thresholds

- Monitors risk metrics and escalations

- Comprises risk management professionals and business leaders

Some organizations combine these into a single body. The exam may test whether you can identify when they should be separate (large organizations with complex AI portfolios) versus combined (smaller organizations or early-stage programs).

Executive Accountability and Board Reporting

AI governance must have top-down support to be effective. The AIGP exam expects you to understand board-level responsibilities:

Board of Directors:

- Understands AI risks facing the organization

- Ensures AI governance is adequately resourced

- Reviews AI risk reports and incident summaries

- Holds executive leadership accountable for governance effectiveness

C-Suite:

- Champions AI governance across the organization

- Allocates budget and resources

- Removes organizational barriers to governance adoption

- Reports AI governance status to the board

Key exam point: If the board is not engaged with AI governance, the program is likely to fail regardless of how well-designed the framework is. Top-down commitment is a prerequisite, not a nice-to-have.

Knowledge Check
An organization's AI ethics committee has issued guidance recommending that a customer-facing AI system be paused due to fairness concerns. The product team disagrees. Under good governance, who resolves this conflict?
A. The data science team, since they understand the technical details
B. The product team, since they own the product
C. The AI ethics committee, since they identified the issue
D. An escalation authority defined in the governance framework (e.g., AI governance officer or risk committee)
Correct answer: D. Good governance includes predefined escalation paths for disagreements. Neither the ethics committee (advisory) nor the product team (operational) should unilaterally resolve conflicts. An escalation authority — such as the AI governance officer or risk committee — is designated in the framework to make binding decisions.

Real-World Scenario

In late 2020, Google faced a governance crisis surrounding the departure of Timnit Gebru, co-lead of its Ethical AI team, following a dispute over a research paper examining the risks of large language models. The incident exposed critical weaknesses in Google's AI governance structure — specifically, the lack of clear accountability boundaries between its AI ethics research team, product leadership, and executive management. The ethics team had an advisory role but no binding authority, and when their research conflicted with product strategy, there was no defined escalation path to resolve the disagreement.

The fallout was significant: several researchers resigned, public trust in Google's AI ethics commitments eroded, and the incident became a case study in governance failure. Google subsequently restructured its responsible AI organization, creating clearer reporting lines and expanding the team's mandate. The core lesson was that an ethics committee without decision-making authority or a defined escalation path is a governance vulnerability, not a governance control.

For the AIGP exam, this case highlights the importance of clearly defined roles, escalation authorities, and the distinction between advisory ethics committees and binding risk committees. When governance roles are ambiguous — particularly regarding who resolves conflicts between ethics recommendations and business priorities — the governance framework fails precisely when it is needed most.

Final Check
Which governance operating model BEST balances consistency of AI governance standards with the agility needed by diverse business units?
A. Federated — each business unit governs independently
B. Centralized — all decisions made by one governance team
C. Ad hoc — governance applied as needed on a case-by-case basis
D. Hybrid — central standards with business unit implementation
Correct answer: D. The hybrid model provides central standards (consistency) while allowing business units to implement those standards for their specific contexts (agility). Centralized is too rigid for diverse organizations. Federated risks inconsistency. Ad hoc is not a governance model — it's the absence of one.
🎯
Day 4 Complete
"Clear roles and accountability are the backbone of AI governance. The model owner is your single point of accountability, the RACI matrix prevents diffusion of responsibility, and board-level engagement is non-negotiable."
Tomorrow — Day 5
AI Governance Strategy — From Principles to Policy
Draft an AI governance charter, establish governance policies, and connect AI strategy to enterprise risk management.

Go Deeper

Want to see these concepts applied to full case studies? Check out AIGP Scenarios — 10 real-world governance simulations mapped to the AIGP exam domains.