In January 2023, the National Institute of Standards and Technology released the AI Risk Management Framework (AI RMF 1.0), providing organizations with a voluntary, structured approach to managing AI risks. Unlike prescriptive regulations, NIST AI RMF is designed to be flexible and adaptable across industries. Yet implementing it across a complex organization with multiple AI systems remains a significant governance challenge. This scenario tests your ability to operationalize the framework.
You are the Head of AI Risk at Meridian Financial Services, a mid-size financial services company based in Charlotte, North Carolina. Meridian operates 15 AI systems across the organization:
- 3 credit scoring models (used in mortgage, auto, and personal lending)
- 2 fraud detection systems (real-time transaction monitoring and claims fraud)
- 4 customer service chatbots (across banking, insurance, wealth management, and mortgage divisions)
- 2 marketing personalization engines (email targeting and website recommendations)
- 1 anti-money laundering (AML) system (transaction pattern analysis)
- 1 HR resume screening tool (candidate filtering for open positions)
- 1 document processing system (automated extraction from loan applications)
- 1 internal knowledge management assistant (employee-facing LLM for policy queries)
The Chief Risk Officer has mandated NIST AI RMF adoption across all 15 systems within 12 months. You have a team of four people and a budget of $800,000. The board expects quarterly progress reports. Several business unit leaders view this as bureaucratic overhead and are resistant.
GOVERN — Establishing the organizational foundation:
The Govern function is the bedrock. You must establish governance structures before touching individual systems.
- Create an AI Risk Committee with representation from risk, legal, compliance, technology, and business units. Secure executive sponsorship from the CRO.
- Develop an AI risk policy that defines risk appetite and tolerance for each AI use case category. Credit scoring models will have far stricter tolerances than marketing personalization engines.
- Define roles and responsibilities: who owns each AI system's risk profile? At Meridian, you assign risk ownership to business unit leaders, with technical accountability resting with the data science teams that build and maintain each model.
- Establish an AI system inventory: Meridian has 15 known systems, but your audit discovers 3 additional shadow AI tools in use by business units without governance oversight (a minimal record structure is sketched below).
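One lightweight way to make the inventory actionable is a structured record per system. This is a minimal sketch, not a prescribed NIST format; the fields, the `RiskTier` values, and the example entry are assumptions invented for this scenario.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    HIGH = "high"      # e.g., credit scoring, AML, HR screening
    MEDIUM = "medium"  # e.g., fraud detection, document processing
    LOW = "low"        # e.g., marketing personalization, knowledge assistant


@dataclass
class AISystemRecord:
    """One row in the AI system inventory (fields are illustrative)."""
    system_id: str
    name: str
    business_owner: str      # business unit leader who owns the risk profile
    technical_owner: str     # data science team accountable for the model
    intended_purpose: str
    risk_tier: RiskTier
    shadow_ai: bool = False  # True for tools discovered outside governance
    upstream_dependencies: list[str] = field(default_factory=list)


# Hypothetical entry for one of the newly discovered shadow AI tools
inventory = [
    AISystemRecord(
        system_id="MER-016",
        name="Spreadsheet churn predictor",
        business_owner="Wealth Management",
        technical_owner="unassigned",
        intended_purpose="Client attrition scoring",
        risk_tier=RiskTier.MEDIUM,
        shadow_ai=True,
    ),
]
```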
MAP — Understanding context and identifying risks:
For each of the 18 AI systems (including the 3 shadow AI tools), you must:
- Document the intended purpose, deployment context, and affected stakeholders
- Identify potential harms: who could be harmed, how, and how severely?
- Assess the risk tier: credit scoring and AML systems are highest risk; the internal knowledge assistant is lowest
- Map interdependencies: the fraud detection system feeds into the AML system, creating cascading risk (see the dependency sketch after this list)
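A dependency graph over the inventory makes cascading risk concrete: if an upstream system degrades, every downstream consumer inherits the exposure. A minimal sketch, assuming the system IDs and edges below, which are invented for illustration.

```python
# Directed edges: upstream system -> systems that consume its output
# (IDs are hypothetical inventory identifiers)
downstream = {
    "MER-009": ["MER-011"],             # fraud detection feeds the AML system
    "MER-013": ["MER-001", "MER-002"],  # document processing feeds two credit models
}


def blast_radius(system_id: str) -> set[str]:
    """All systems transitively affected if `system_id` fails or drifts."""
    affected, stack = set(), [system_id]
    while stack:
        current = stack.pop()
        for consumer in downstream.get(current, []):
            if consumer not in affected:
                affected.add(consumer)
                stack.append(consumer)
    return affected


print(blast_radius("MER-009"))  # {'MER-011'}: a fraud-detection failure cascades into AML
```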
MEASURE — Quantifying and tracking risks:
- Develop metrics for each identified risk: fairness metrics for credit scoring, false positive/negative rates for fraud detection, accuracy metrics for document processing (a fairness-check sketch follows this list)
- Establish baselines and thresholds aligned with risk tolerance
- Implement automated monitoring where possible, manual audits where necessary
- Create a measurement cadence: weekly for high-risk systems, monthly for medium, quarterly for low
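For the credit scoring tier, one widely used fairness check is the adverse impact ratio, compared against a tolerance threshold from the AI risk policy. A minimal sketch; the 0.80 threshold reflects the conventional "four-fifths rule" from US fair lending practice, and the group labels and counts are illustrative assumptions, not Meridian data.

```python
def adverse_impact_ratio(approvals: dict[str, tuple[int, int]]) -> float:
    """Ratio of the lowest group approval rate to the highest.

    `approvals` maps group label -> (approved count, total applicants).
    A ratio below ~0.80 is a conventional red flag (the four-fifths rule).
    """
    rates = {g: approved / total for g, (approved, total) in approvals.items()}
    return min(rates.values()) / max(rates.values())


# Illustrative weekly monitoring check for a high-risk system
THRESHOLD = 0.80  # assumed tolerance from the AI risk policy
ratio = adverse_impact_ratio({
    "group_a": (412, 1000),
    "group_b": (318, 1000),
})
if ratio < THRESHOLD:
    print(f"ALERT: adverse impact ratio {ratio:.2f} breaches tolerance {THRESHOLD}")
```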
MANAGE — Prioritizing and responding to risks:
- Develop response plans for each risk category: mitigate, transfer, accept, or avoid
- Create escalation procedures: when does an AI risk become a board-level issue? (see the sketch after this list)
- Establish incident response procedures for AI-specific failures
- Plan for model retirement when risks become unmanageable
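Escalation criteria work best when they are mechanical rather than judgment calls made under pressure. A minimal sketch of such a rule, using plain strings for the risk tier; the specific triggers are illustrative assumptions, not NIST-mandated thresholds.

```python
def escalation_level(risk_tier: str, threshold_breaches: int,
                     customer_harm: bool) -> str:
    """Decide how far up the chain a risk event travels (illustrative rules)."""
    if customer_harm and risk_tier == "high":
        return "board"  # e.g., biased credit decisions reached customers
    if threshold_breaches >= 3 or customer_harm:
        return "ai_risk_committee"
    if threshold_breaches > 0:
        return "system_owner"
    return "routine_monitoring"


print(escalation_level("high", threshold_breaches=1, customer_harm=True))  # board
```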
The biggest challenge is not technical — it is organizational. Here is how you address resistance:
Business unit leaders say: "This slows us down." Your response: prioritize by risk tier. The marketing personalization engines and internal knowledge assistant require lightweight governance — a documented risk assessment and quarterly review. Focus heavy governance on credit scoring, AML, and HR screening. Show that risk-proportionate governance does not apply the same burden to every system.
Data scientists say: "We already test our models." Your response: align NIST AI RMF with existing model validation processes. Meridian's credit models already undergo validation under SR 11-7, the Federal Reserve's supervisory guidance on model risk management (the OCC issued the same guidance as Bulletin 2011-12). Map existing controls to NIST AI RMF subcategories; you may find 60% of the Measure function is already being performed under different naming conventions (the mapping sketch below makes this concrete).
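One way to make that 60% figure tangible: map each existing control to the AI RMF Measure subcategories it satisfies and compute coverage. The subcategory IDs below follow the AI RMF core's naming pattern, but the control names and the mapping itself are illustrative assumptions, not an official NIST crosswalk.

```python
# Ten Measure subcategories in scope for this exercise (illustrative subset)
measure_subcategories = [
    "MEASURE 1.1", "MEASURE 1.2", "MEASURE 1.3",
    "MEASURE 2.1", "MEASURE 2.2", "MEASURE 2.3", "MEASURE 2.4", "MEASURE 2.5",
    "MEASURE 3.1", "MEASURE 3.2",
]

# Hypothetical Meridian-internal controls mapped to the subcategories they cover
existing_controls = {
    "SR 11-7 conceptual soundness review": ["MEASURE 2.1", "MEASURE 2.3"],
    "SR 11-7 outcomes analysis / backtesting": ["MEASURE 2.5", "MEASURE 1.1"],
    "Annual independent model validation": ["MEASURE 1.2", "MEASURE 3.1"],
}

covered = {sub for subs in existing_controls.values() for sub in subs}
coverage = len(covered & set(measure_subcategories)) / len(measure_subcategories)
print(f"Measure function already covered by existing controls: {coverage:.0%}")  # 60%
```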
Executives say: "What's the ROI?" Your response: quantify the cost of AI failures. Reference the Apple Card case, where allegations of gender bias in Goldman Sachs's credit limit algorithm triggered a New York Department of Financial Services investigation and lasting reputational damage, even though the investigation ultimately found no fair lending violation; Goldman later moved to exit the partnership. Calculate the cost of a similar incident at Meridian against the $800,000 governance investment.
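The ROI argument reduces to a back-of-the-envelope expected-loss comparison. Every figure below is an illustrative planning assumption for this scenario, not real incident data.

```python
# All figures are hypothetical assumptions, not actual incident statistics.
p_major_incident = 0.10     # assumed annual probability of a serious AI failure
incident_cost = 25_000_000  # assumed cost: fines, remediation, lost business
risk_reduction = 0.60       # assumed share of that risk governance removes
governance_cost = 800_000   # the program budget from the CRO mandate

expected_loss_avoided = p_major_incident * incident_cost * risk_reduction
net_benefit = expected_loss_avoided - governance_cost
print(f"Expected annual loss avoided: ${expected_loss_avoided:,.0f}")  # $1,500,000
print(f"Net annual benefit of governance: ${net_benefit:,.0f}")        # $700,000
```

Even under conservative assumptions, the governance budget pays for itself if it meaningfully reduces the probability or severity of a single major incident.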