BlitzLearnAI
Scenario 2 of 10 · AIGP Scenarios

EU AI Act Compliance for a Chatbot Company

The EU AI Act entered into force in August 2024, creating the world's first comprehensive legal framework for artificial intelligence. For companies deploying AI in the European market, understanding the Act's risk-based classification system is not optional — it is a regulatory imperative with penalties reaching up to 35 million euros or 7% of global annual turnover. This scenario tests your ability to apply the Act's provisions to a real product.

[Figure: Flowchart of the EU AI Act's risk classification tiers, from unacceptable to minimal risk, with a chatbot being evaluated. The Act's four-tier risk framework determines which obligations apply to each AI system.]

The Situation

You are the newly appointed AI governance officer at ConversaAI, a Berlin-based SaaS company with 400 employees and 2,000 enterprise customers across Europe. ConversaAI's flagship product is Atlas, a customer service chatbot platform powered by a fine-tuned large language model. Atlas handles approximately 50 million customer interactions per month across industries including telecommunications, e-commerce, retail banking, and insurance.

The CEO has asked you to lead the EU AI Act compliance program. Your first task: classify Atlas under the Act's risk tiers and determine ConversaAI's obligations as both a provider and deployer of AI systems.

The complexity: Atlas is used across multiple industries, and its risk classification may vary depending on the specific deployment context. A chatbot answering questions about phone plans has very different risk implications than one providing insurance claims guidance or banking product recommendations.

Risk Classification Analysis

Under the EU AI Act, AI systems are classified into four tiers: unacceptable risk (prohibited outright), high risk (subject to strict obligations), limited risk (subject to transparency obligations), and minimal risk (no specific obligations).

Is Atlas a high-risk system? Under Annex III of the Act, AI systems used in certain sectors and use cases are classified as high risk. The critical question is whether Atlas falls under any of these categories:

- Access to essential private services (Annex III, Section 5): When Atlas is deployed by banks to assess creditworthiness (Section 5(b)) or by insurers for risk assessment relating to life and health insurance (Section 5(c)), it may trigger high-risk classification. The determining factor is whether the chatbot's outputs materially influence decisions about access to financial services (a classification sketch follows this list).

- General customer service: When Atlas simply answers FAQs about store hours or shipping policies, it falls under minimal risk, with no specific obligations beyond the Article 50 disclosure discussed below.
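To make the deployment-level logic concrete, here is a minimal sketch of a classification helper. Everything in it is illustrative: the Deployment fields, the trigger table, and the tier mapping are hypothetical simplifications of Annex III, not a complete legal analysis.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The EU AI Act's four risk tiers."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices (Article 5)
    HIGH = "high"                  # Annex III use cases
    LIMITED = "limited"            # transparency obligations (Article 50)
    MINIMAL = "minimal"            # no specific obligations


@dataclass(frozen=True)
class Deployment:
    """Hypothetical description of one customer deployment of Atlas."""
    industry: str    # e.g. "banking", "insurance", "retail"
    use_case: str    # e.g. "creditworthiness", "faq"
    interacts_with_natural_persons: bool = True


# Illustrative subset of Annex III triggers; a real mapping needs legal review.
HIGH_RISK_TRIGGERS = {
    ("banking", "creditworthiness"),        # Annex III, 5(b)
    ("insurance", "risk_assessment"),       # Annex III, 5(c)
    ("employment", "applicant_screening"),  # Annex III, 4
}


def classify(deployment: Deployment) -> RiskTier:
    """Classify one deployment, not the platform as a whole."""
    if (deployment.industry, deployment.use_case) in HIGH_RISK_TRIGGERS:
        return RiskTier.HIGH
    return RiskTier.MINIMAL


def requires_ai_disclosure(deployment: Deployment) -> bool:
    """Article 50: disclose the AI interaction regardless of risk tier,
    unless it is obvious from the circumstances."""
    return deployment.interacts_with_natural_persons


atlas_claims = Deployment("insurance", "risk_assessment")
print(classify(atlas_claims).value)          # "high"
print(requires_ai_disclosure(atlas_claims))  # True
```

Splitting classify() from requires_ai_disclosure() mirrors the point below: the Article 50 duty is cross-cutting rather than tied to a single tier.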

Transparency obligations apply regardless. Under Article 50, any AI system that interacts directly with natural persons must disclose that the person is interacting with AI, unless this is obvious from the circumstances. This applies to all Atlas deployments.
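As a sketch of how the disclosure could be wired into the product by default, the snippet below prepends a notice to every new chat session. The function name and message wording are assumptions; the Act mandates the disclosure, not any particular phrasing.

```python
AI_DISCLOSURE = (
    "You are chatting with Atlas, an AI assistant. "
    "You can ask to be transferred to a human agent at any time."
)


def open_session(first_bot_message: str,
                 ai_nature_obvious: bool = False) -> list[str]:
    """Start a session with the Article 50 disclosure shown up front.

    The notice may only be skipped when it is obvious from the
    circumstances that the user is interacting with an AI system.
    """
    messages = [] if ai_nature_obvious else [AI_DISCLOSURE]
    messages.append(first_bot_message)
    return messages
```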

General-Purpose AI Model (GPAI) considerations. Because Atlas is built on a fine-tuned LLM, ConversaAI must also consider Chapter V obligations for GPAI providers, including technical documentation, copyright policy compliance, and a sufficiently detailed summary of training data.
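As a sketch of the documentation artifact this implies, the stub below collects the Chapter V items named above into one structure. The field names are assumptions for illustration; the AI Office's template governs the actual format of the training-content summary.

```python
# Hypothetical documentation stub for Chapter V GPAI obligations.
# Field names are illustrative, not a prescribed schema.
GPAI_DOCUMENTATION = {
    "technical_documentation": {
        "base_model": "<upstream LLM name and version>",
        "fine_tuning": "<data sources, method, evaluation results>",
    },
    "copyright_policy": "<policy for complying with EU copyright law>",
    "training_data_summary": "<sufficiently detailed public summary>",
}
```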

Knowledge Check
ConversaAI's chatbot is deployed by an insurance company to guide customers through the claims process, including advising on coverage eligibility. Under the EU AI Act, this deployment is most likely classified as:
A. Unacceptable risk — it should be prohibited
B. High risk — it influences access to essential private services (insurance)
C. Limited risk — it only requires transparency obligations
D. Minimal risk — it is a simple customer service chatbot

Correct answer: B. Under Annex III, Section 5, AI systems tied to access to essential private services are classified as high risk: creditworthiness evaluation falls under 5(b), and risk assessment and pricing for life and health insurance under 5(c). A chatbot advising on coverage eligibility materially influences the customer's access to insurance benefits. This is distinct from a simple FAQ chatbot, which would be minimal or limited risk.

Compliance Roadmap

As the AI governance officer, you develop a phased compliance roadmap:

Phase 1 — Classification and gap analysis (Months 1-2):

- Audit all 2,000 customer deployments to identify which use cases trigger high-risk classification

- Segment customers by industry and use case: financial services, healthcare, employment, and general retail

- Map current practices against the requirements of Articles 9-15 for high-risk systems

Phase 2 — Technical compliance (Months 3-6):

- Implement Article 50 transparency requirements across all deployments (AI disclosure)

- For high-risk deployments: establish a quality management system (Article 17), implement risk management (Article 9), ensure data governance (Article 10), create technical documentation (Article 11), enable logging (Article 12; a record-keeping sketch follows this list), and ensure human oversight capabilities (Article 14)

- Develop a conformity assessment process for new customer deployments
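Article 12 requires high-risk systems to automatically record events over their lifetime. Below is a minimal sketch of what one log record might capture per chatbot interaction; the field names and JSON-lines format are assumptions, not the Act's prescribed structure.

```python
import json
from datetime import datetime, timezone


def log_interaction(deployment_id: str, risk_tier: str,
                    model_version: str, session_id: str) -> str:
    """Emit one Article 12-style event record as a JSON line.

    The schema is illustrative; the Act requires logging appropriate
    to the system's intended purpose, not this exact structure.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "deployment_id": deployment_id,   # which customer deployment
        "risk_tier": risk_tier,           # classification at time of use
        "model_version": model_version,   # needed to trace model updates
        "session_id": session_id,         # ties events to one interaction
    }
    return json.dumps(record)
```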

Phase 3 — Organizational compliance (Months 6-9):

- Train all customer success teams on EU AI Act obligations

- Update customer contracts to delineate provider vs. deployer responsibilities

- Establish a post-market monitoring system (Article 72)

- Register high-risk AI systems in the EU database (Article 49; the database is established under Article 71)

Phase 4 — Ongoing compliance (Continuous):

- Implement change management processes for model updates that may affect risk classification

- Conduct regular audits of customer deployments for classification drift (a sketch follows this list)

- Monitor regulatory guidance from national competent authorities
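The drift audit above could be as simple as periodically re-deriving each deployment's tier and flagging mismatches. A minimal sketch, reusing the hypothetical classify(), Deployment, and RiskTier definitions from the earlier classification sketch, and assuming records is a list of (Deployment, RiskTier) pairs captured at onboarding:

```python
def audit_classification_drift(records):
    """Flag deployments whose re-derived tier no longer matches the
    tier recorded at onboarding.

    records: iterable of (Deployment, RiskTier) pairs.
    Returns a list of (deployment, recorded_tier, current_tier) tuples.
    """
    drifted = []
    for deployment, recorded_tier in records:
        current_tier = classify(deployment)
        if current_tier is not recorded_tier:
            # e.g. a customer repurposed an FAQ bot for claims guidance
            drifted.append((deployment, recorded_tier, current_tier))
    return drifted
```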

Knowledge Check
Under the EU AI Act, ConversaAI must ensure that all users interacting with Atlas are informed they are communicating with an AI system. This obligation falls under:
A. Article 17 — Quality management system
B. Article 9 — Risk management system
C. Article 14 — Human oversight
D. Article 50 — Transparency obligations for certain AI systems

Correct answer: D. Article 50 of the EU AI Act establishes transparency obligations for AI systems that interact directly with natural persons. Providers must ensure the system is designed to inform users they are interacting with AI, unless this is obvious from the circumstances. This applies to all chatbot deployments regardless of risk classification.
Knowledge Check
ConversaAI is both building the AI platform and deploying it through enterprise customers. Under the EU AI Act, ConversaAI's primary role is:
A. Deployer — because ConversaAI operates the system for end users
B. Importer — because the underlying LLM may have been developed outside the EU
C. Distributor — because ConversaAI makes the system available to enterprise customers
D. Provider — because ConversaAI develops and places the AI system on the market

Correct answer: D. Under the EU AI Act, a "provider" is the entity that develops an AI system or has it developed and places it on the market or puts it into service under its own name or trademark. ConversaAI develops Atlas and offers it as a SaaS product, making it the provider. The enterprise customers who deploy Atlas for their end users are the "deployers." Providers carry the heaviest compliance obligations under the Act.
🤖 ChatGPT

You: I'm classifying our customer service chatbot under the EU AI Act. The same chatbot platform is used across industries — retail, banking, insurance, telecom. Does the risk classification apply to the AI system itself or to each specific deployment? This distinction is critical for our compliance strategy.

ChatGPT: This is one of the most nuanced aspects of the EU AI Act, and your instinct to question it is exactly right. **The risk classification applies to the intended purpose and context of use, not to the technology itself.** The same underlying AI system can have different risk classifications depending on how it is deployed. Here's how this works for your chatbot:

**Minimal risk deployments:**
- Retail FAQ chatbot answering "What are your store hours?"
- E-commerce chatbot tracking package delivery status
- These have no specific obligations beyond general AI literacy (Article 4)

**Limited risk (transparency obligations only):**
- Telecom chatbot helping users choose a phone plan
- Must disclose AI interaction under Article 50

**High-risk deployments (Annex III triggers):**
- Banking chatbot that assesses creditworthiness or recommends financial products (Annex III, 5(b))
- Insurance chatbot that evaluates claims or coverage eligibility (Annex III, 5(c) covers life and health insurance risk assessment)
- Employment chatbot that screens job applicants (Annex III, 4)

**Practical implications for your compliance strategy:**
1. You need a classification framework that evaluates each customer deployment, not just your product overall
2. Your customer onboarding process should include a risk classification assessment
3. Your contracts should clearly allocate provider vs. deployer obligations based on the use case
4. Consider building compliance features (logging, human oversight hooks, transparency notices) into the platform by default — this makes high-risk compliance easier for deployer customers

The EU AI Act was deliberately designed this way to be technology-neutral and context-sensitive.
Scenario 2 Complete
"The EU AI Act classifies risk based on intended purpose and deployment context, not the underlying technology. A single AI platform can have minimal, limited, and high-risk classifications depending on how each customer deploys it — making deployment-level risk assessment essential for compliance."
Tomorrow — Day 3
GDPR Data Subject Access Request for AI Training Data
A user submits a GDPR Article 15 request demanding to know if their data was used to train an AI model. Navigate the legal and technical complexities of AI training data transparency.