BlitzLearnAI
Scenario 4 of 10 · AIGP Scenarios

Building an AI Risk Framework with NIST AI RMF

In January 2023, the National Institute of Standards and Technology released the AI Risk Management Framework (AI RMF 1.0), providing organizations with a voluntary, structured approach to managing AI risks. Unlike prescriptive regulations, NIST AI RMF is designed to be flexible and adaptable across industries. Yet implementing it across a complex organization with multiple AI systems remains a significant governance challenge. This scenario tests your ability to operationalize the framework.

[Diagram: the four NIST AI RMF functions — Govern, Map, Measure, Manage — connected in a continuous cycle across multiple AI systems]
The NIST AI RMF's four core functions must be implemented across every AI system in the organization's portfolio.

The Situation

You are the Head of AI Risk at Meridian Financial Services, a mid-size firm based in Charlotte, North Carolina. Meridian operates 15 AI systems across the organization:

- 3 credit scoring models (used in mortgage, auto, and personal lending)

- 2 fraud detection systems (real-time transaction monitoring and claims fraud)

- 4 customer service chatbots (across banking, insurance, wealth management, and mortgage divisions)

- 2 marketing personalization engines (email targeting and website recommendations)

- 1 anti-money laundering (AML) system (transaction pattern analysis)

- 1 HR resume screening tool (candidate filtering for open positions)

- 1 document processing system (automated extraction from loan applications)

- 1 internal knowledge management assistant (employee-facing LLM for policy queries)

The Chief Risk Officer has mandated NIST AI RMF adoption across all 15 systems within 12 months. You have a team of four people and a budget of $800,000. The board expects quarterly progress reports. Several business unit leaders view this as bureaucratic overhead and are resistant.

Implementation Strategy — The Four Functions

GOVERN — Establishing the organizational foundation:

The Govern function is the bedrock. You must establish governance structures before touching individual systems.

- Create an AI Risk Committee with representation from risk, legal, compliance, technology, and business units. Secure executive sponsorship from the CRO.

- Develop an AI risk policy that defines risk appetite and tolerance for each AI use case category. Credit scoring models will have far stricter tolerances than marketing personalization engines.

- Define roles and responsibilities: who owns each AI system's risk profile? At Meridian, you assign risk ownership to business unit leaders, with technical accountability resting with the data science teams.

- Establish an AI system inventory — Meridian has 15 known systems, but your audit discovers 3 additional shadow AI tools being used by business units without governance oversight.
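The AI system inventory lends itself to a lightweight data model. A minimal sketch in Python, where the field names and example systems are illustrative assumptions rather than anything the framework prescribes:

```python
from dataclasses import dataclass

# Hypothetical inventory record; field names are illustrative,
# not part of the NIST AI RMF itself.
@dataclass
class AISystemRecord:
    name: str
    business_unit: str
    purpose: str
    risk_owner: str               # business unit leader accountable for risk
    technical_owner: str          # data science team accountable for the model
    risk_tier: str = "unassessed" # e.g. high / medium / low
    shadow_ai: bool = False       # discovered outside governance oversight

inventory = [
    AISystemRecord("mortgage-credit-score", "Lending", "credit scoring",
                   "VP Lending", "Credit DS Team", risk_tier="high"),
    AISystemRecord("policy-qa-assistant", "Internal Ops", "policy Q&A",
                   "COO", "Platform Team", risk_tier="low"),
    AISystemRecord("budget-forecaster", "Marketing", "spend forecasting",
                   "CMO", "unknown", shadow_ai=True),  # found in the audit
]

# Shadow AI surfaces immediately once everything sits in one inventory.
shadow = [s.name for s in inventory if s.shadow_ai]
```

Keeping shadow AI tools in the same inventory, flagged rather than hidden, is what makes the audit finding actionable downstream in the Map function.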

MAP — Understanding context and identifying risks:

For each of the 18 AI systems (including the 3 shadow AI tools), you must:

- Document the intended purpose, deployment context, and affected stakeholders

- Identify potential harms: who could be harmed, how, and how severely?

- Assess the risk tier: credit scoring and AML systems are highest risk; the internal knowledge assistant is lowest

- Map interdependencies: the fraud detection system feeds into the AML system, creating cascading risk
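The risk-tier assessment above can be sketched as a simple severity-times-likelihood scoring rule. The 1-to-5 scales and the tier cutoffs below are illustrative assumptions, not values prescribed by NIST:

```python
# Illustrative risk-tiering rule for the MAP function.
# Scales and cutoffs are assumptions, not NIST-prescribed values.
def risk_tier(harm_severity: int, likelihood: int) -> str:
    """Map 1-5 severity and 1-5 likelihood scores to a risk tier."""
    score = harm_severity * likelihood
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Credit scoring: severe consumer harm, realistic likelihood -> high tier.
credit_tier = risk_tier(5, 4)
# Internal knowledge assistant: mild harm, moderate likelihood -> low tier.
assistant_tier = risk_tier(2, 2)
```

A rule this explicit also makes tier assignments auditable: two assessors scoring the same system should land in the same tier.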

MEASURE — Quantifying and tracking risks:

- Develop metrics for each identified risk: fairness metrics for credit scoring, false positive/negative rates for fraud detection, accuracy metrics for document processing

- Establish baselines and thresholds aligned with risk tolerance

- Implement automated monitoring where possible, manual audits where necessary

- Create a measurement cadence: weekly for high-risk systems, monthly for medium, quarterly for low
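A threshold check of the kind the Measure function calls for, here a demographic parity gap for a credit model, might look like the following sketch. The metric choice and the 0.05 tolerance are assumptions standing in for Meridian's actual risk-appetite policy:

```python
# Hedged sketch: check a fairness metric against a tolerance threshold.
# Metric and threshold are illustrative, not Meridian's actual policy.
def approval_rate(decisions: list[bool]) -> float:
    return sum(decisions) / len(decisions)

def parity_gap(group_a: list[bool], group_b: list[bool]) -> float:
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(group_a) - approval_rate(group_b))

THRESHOLD = 0.05  # assumed tolerance from the risk-appetite policy

# 80% vs 76% approval: a 0.04 gap, within the assumed tolerance.
gap = parity_gap([True] * 80 + [False] * 20, [True] * 76 + [False] * 24)
breach = gap > THRESHOLD
```

In automated monitoring, a `breach` of `True` would open an incident rather than just log a number.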

MANAGE — Prioritizing and responding to risks:

- Develop response plans for each risk category: mitigate, transfer, accept, or avoid

- Create escalation procedures: when does an AI risk become a board-level issue?

- Establish incident response procedures for AI-specific failures

- Plan for model retirement when risks become unmanageable
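Escalation procedures can be codified so that "when does this reach the board?" has a deterministic answer. The mapping below, from risk tier and time unresolved to an escalation level, is a hypothetical sketch of such a procedure:

```python
# Hypothetical escalation rule for the MANAGE function. The mapping from
# (risk tier, days unresolved) to escalation level is an assumption.
def escalation_level(risk_tier: str, days_unresolved: int) -> str:
    if risk_tier == "high":
        if days_unresolved > 14:
            return "board"           # board-level issue
        return "ai-risk-committee"
    if risk_tier == "medium" and days_unresolved > 30:
        return "ai-risk-committee"
    return "system-owner"
```

An unresolved fairness breach on a credit model, for instance, would reach the board after two weeks rather than lingering in a working group.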

Knowledge Check
During the NIST AI RMF implementation, the team discovers 3 AI tools being used by business units without governance oversight. According to the framework, the FIRST step is to:
A. Assign them the highest risk category by default
B. Report the unauthorized tools to regulators
C. Immediately shut down all unauthorized AI tools
D. Inventory and assess these systems through the MAP function to understand their context and risks

Correct answer: D. The NIST AI RMF MAP function is specifically designed to identify and understand AI systems and their contexts. Shadow AI tools should be inventoried, their purposes documented, and their risks assessed before determining the appropriate response. Shutting them down without assessment may disrupt critical business operations, and they may not require the highest risk category.

Overcoming Organizational Resistance

The biggest challenge is not technical — it is organizational. Here is how you address resistance:

Business unit leaders say: "This slows us down." Your response: prioritize by risk tier. The marketing personalization engines and internal knowledge assistant require lightweight governance — a documented risk assessment and quarterly review. Focus heavy governance on credit scoring, AML, and HR screening. Show that risk-proportionate governance does not apply the same burden to every system.

Data scientists say: "We already test our models." Your response: align NIST AI RMF with existing model validation processes. Meridian's credit models already undergo validation under SR 11-7, the Federal Reserve's model risk management guidance (adopted by the OCC as Bulletin 2011-12). Map existing controls to NIST AI RMF subcategories — you may find 60% of the Measure function is already being performed under different naming conventions.
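The control-mapping exercise reduces to set arithmetic once controls and subcategories are enumerated. The IDs below follow the RMF's "MEASURE 2.x" naming style, but the specific control-to-subcategory pairings and the required set are illustrative assumptions:

```python
# Sketch: map existing SR 11-7 controls to NIST AI RMF subcategories to
# find coverage and gaps. The pairings below are illustrative assumptions.
existing_controls = {
    "SR 11-7 conceptual soundness review": ["MEASURE 2.1"],
    "SR 11-7 outcomes analysis / backtesting": ["MEASURE 2.5"],
    "SR 11-7 ongoing monitoring": ["MEASURE 2.4"],
}

# Hypothetical set of subcategories required for a credit model.
required = {"MEASURE 2.1", "MEASURE 2.4", "MEASURE 2.5", "MEASURE 2.11"}

covered = {sub for subs in existing_controls.values() for sub in subs}
gaps = required - covered  # build only what is missing
coverage = len(covered & required) / len(required)
```

The output is exactly the artifact the resistant data scientists need to see: most of the work is already done, and only the gap set requires new process.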

Executives say: "What's the ROI?" Your response: quantify the cost of AI failures. Reference the Goldman Sachs/Apple Card case, where algorithmic bias allegations triggered a New York Department of Financial Services investigation and significant reputational damage, and Goldman later moved to unwind the partnership. Calculate the cost of a similar incident at Meridian versus the $800,000 governance investment.
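The ROI argument is a back-of-envelope expected-loss comparison. Every figure below except the $800,000 budget is a hypothetical placeholder that Meridian would replace with its own estimates:

```python
# Back-of-envelope expected-loss sketch for the ROI conversation.
# All inputs except the budget are hypothetical placeholders.
incident_cost = 25_000_000   # assumed cost of a bias incident: fines,
                             # remediation, reputational loss
annual_probability = 0.05    # assumed chance of such an incident per year
horizon_years = 3

expected_loss = incident_cost * annual_probability * horizon_years
governance_cost = 800_000    # the mandated budget

# Under these assumptions the governance spend is a small fraction
# of the expected loss it mitigates.
roi_multiple = expected_loss / governance_cost
```

Even with deliberately conservative inputs, framing governance as insurance against a quantified loss usually lands better with executives than framing it as compliance.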

Knowledge Check
Meridian's credit scoring models already undergo SR 11-7 model risk management validation. When implementing NIST AI RMF, the AI governance team should:

A. Defer NIST AI RMF implementation for credit models since SR 11-7 already covers them
B. Replace SR 11-7 processes with NIST AI RMF requirements entirely
C. Treat SR 11-7 and NIST AI RMF as completely separate compliance programs
D. Map existing SR 11-7 controls to NIST AI RMF subcategories to identify gaps and avoid duplication

Correct answer: D. The NIST AI RMF is designed to complement existing risk management frameworks, not replace them. The optimal approach is to map existing controls (like SR 11-7 model validation) to NIST AI RMF subcategories, identify gaps, and build only incremental processes. This avoids duplication and leverages existing organizational maturity.
Knowledge Check
Under NIST AI RMF, which function establishes the organizational culture, policies, and structures needed to manage AI risk?
A. Govern
B. Measure
C. Map
D. Manage

Correct answer: A. The Govern function is the foundational function that establishes organizational governance structures, policies, risk appetite, roles and responsibilities, and culture. It is cross-cutting and informs the other three functions (Map, Measure, Manage). Without Govern, the other functions lack organizational authority and direction.
🤖 ChatGPT
I need to implement NIST AI RMF across 15 AI systems in 12 months with a team of 4 and $800K budget. Business units are resistant. How should I prioritize and sequence the rollout to maximize impact while managing organizational resistance?
Great question — this is a resource-constrained implementation that requires strategic prioritization. Here's my recommended approach:

**Phase 1: Foundation (Months 1-3) — Focus on GOVERN**
- Establish the AI Risk Committee and get executive sponsorship documented
- Create the AI system inventory (all 15+ systems)
- Define risk tiers and appetite — this is critical for prioritization
- Draft the AI risk policy
- Budget: ~$150K (mostly staff time + external advisory)

**Phase 2: High-Risk Systems (Months 3-7) — MAP, MEASURE, MANAGE for top 5**
- Target: credit scoring (3), AML (1), HR screening (1)
- These carry the highest regulatory and reputational risk
- Leverage existing SR 11-7 and BSA/AML compliance — you're adding incremental AI-specific requirements, not starting from zero
- Budget: ~$350K (external audits, tooling, staff time)

**Phase 3: Medium-Risk Systems (Months 7-10) — MAP, MEASURE, MANAGE for next 5-7**
- Target: fraud detection (2), customer chatbots (4), document processing (1)
- These affect customers directly but have lower regulatory exposure
- Budget: ~$200K

**Phase 4: Low-Risk Systems + Continuous Improvement (Months 10-12)**
- Target: marketing engines (2), internal assistant (1), shadow AI (3)
- Lightweight assessments and documentation
- Establish ongoing monitoring cadences
- Budget: ~$100K

**Managing resistance:**
- Start with the high-risk systems where the business case is clearest (regulatory mandate)
- Show early wins: "We found 3 shadow AI tools and brought them under governance"
- Make governance proportionate — don't apply the same rigor to a marketing engine as a credit model
- Invite business unit leaders to the AI Risk Committee so they feel ownership, not burden
Scenario 4 Complete
"NIST AI RMF implementation succeeds when it is risk-proportionate, leverages existing controls, and secures organizational buy-in. The Govern function must be established first — without governance structures, the other three functions lack authority and direction."
Tomorrow — Day 5
Bias in a Lending Algorithm
An Apple Card-style lending scenario where an AI system approves different credit limits for spouses with identical financial profiles, triggering regulatory scrutiny and public outrage.