The EU AI Act entered into force in August 2024, creating the world's first comprehensive legal framework for artificial intelligence. For companies deploying AI in the European market, understanding the Act's risk-based classification system is not optional — it is a regulatory imperative with penalties reaching up to 35 million euros or 7% of global annual turnover. This scenario tests your ability to apply the Act's provisions to a real product.
You are the newly appointed AI governance officer at ConversaAI, a Berlin-based SaaS company with 400 employees and 2,000 enterprise customers across Europe. ConversaAI's flagship product is Atlas, a customer service chatbot platform powered by a fine-tuned large language model. Atlas handles approximately 50 million customer interactions per month across industries including telecommunications, e-commerce, retail banking, and insurance.
The CEO has asked you to lead the EU AI Act compliance program. Your first task: classify Atlas under the Act's risk tiers and determine ConversaAI's obligations as both a provider and deployer of AI systems.
The complexity: Atlas is used across multiple industries, and its risk classification may vary depending on the specific deployment context. A chatbot answering questions about phone plans has very different risk implications than one providing insurance claims guidance or banking product recommendations.
Under the EU AI Act, AI systems are classified into four tiers: unacceptable risk (prohibited), high risk, limited risk (transparency obligations), and minimal risk.
Is Atlas a high-risk system? Under Annex III of the Act, AI systems used in certain sectors and use cases are classified as high risk. The critical question is whether Atlas falls under any of these categories:
- Access to essential private services (Annex III, point 5): When Atlas is deployed by banks to assess creditworthiness or establish a credit score (point 5(b)), or by insurers for risk assessment and pricing in life and health insurance (point 5(c)), it may trigger high-risk classification. The determining factor is whether the chatbot's outputs materially influence decisions about access to these financial services.
- General customer service: When Atlas simply answers FAQs about store hours or shipping policies, it falls outside Annex III entirely. As a system that interacts with natural persons, it sits in the limited-risk tier, carrying only the Article 50 transparency obligation.
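The tier logic above can be sketched as a small decision routine for the deployment audit. The use-case labels, the `HIGH_RISK_CONTEXTS` set, and the "material influence" flag are illustrative assumptions, not terms defined by the Act:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high"
    LIMITED = "limited"    # transparency obligations (Article 50)
    MINIMAL = "minimal"

# Hypothetical mapping: deployment contexts that plausibly fall under
# Annex III, point 5 (access to essential private services).
HIGH_RISK_CONTEXTS = {
    "creditworthiness_assessment",  # point 5(b)
    "insurance_risk_pricing",       # point 5(c)
}

def classify_deployment(use_case: str, materially_influences_decision: bool) -> RiskTier:
    """Classify one Atlas deployment by its context.

    Simplification: because Atlas always interacts with natural persons,
    a deployment never sits below the limited-risk (transparency) tier.
    """
    if use_case in HIGH_RISK_CONTEXTS and materially_influences_decision:
        return RiskTier.HIGH
    return RiskTier.LIMITED
```

The design point mirrors the analysis: classification attaches to the deployment context, not the model, so the same Atlas instance yields different tiers for different customers.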
Transparency obligations apply regardless. Under Article 50, any AI system that interacts directly with natural persons must disclose that the person is interacting with AI, unless this is obvious from the circumstances. This applies to all Atlas deployments.
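A minimal sketch of the Article 50 disclosure at session start; the greeting text and the `obviously_ai` escape hatch are assumptions for illustration, not wording mandated by the Act:

```python
def open_session(obviously_ai: bool = False) -> str:
    """Return the opening message for a new Atlas session.

    Article 50(1): people must be informed they are interacting with an
    AI system unless this is obvious from the circumstances.
    """
    greeting = "Hello! How can I help you today?"
    if not obviously_ai:
        greeting = "You are chatting with Atlas, an AI assistant. " + greeting
    return greeting
```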
General-Purpose AI Model (GPAI) considerations. Because Atlas is built on a fine-tuned LLM, ConversaAI must also assess whether Chapter V applies: a company that fine-tunes or otherwise significantly modifies a general-purpose model may itself qualify as a GPAI provider. The core obligations (Article 53) include maintaining technical documentation, putting in place a policy to comply with EU copyright law, and publishing a sufficiently detailed summary of the content used for training.
As the AI governance officer, you develop a phased compliance roadmap:
Phase 1 — Classification and gap analysis (Months 1-2):
- Audit all 2,000 customer deployments to identify which use cases trigger high-risk classification
- Segment customers by industry and use case: financial services, healthcare, employment, and general retail
- Map current practices against Article 9-15 requirements for high-risk systems
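The Phase 1 audit above could be prototyped as a segmentation pass over the deployment inventory. The `Deployment` fields and the `REVIEW_SECTORS` set are assumed names for illustration:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Deployment:
    customer: str
    sector: str    # e.g. "retail_banking", "insurance", "ecommerce"
    use_case: str  # e.g. "creditworthiness_assessment", "faq"

# Sectors where Annex III use cases are plausible and need manual review.
REVIEW_SECTORS = {"retail_banking", "insurance"}

def gap_analysis(deployments: list[Deployment]) -> dict:
    """Segment deployments by sector and flag candidates for legal review."""
    by_sector = dict(Counter(d.sector for d in deployments))
    needs_review = [d for d in deployments if d.sector in REVIEW_SECTORS]
    return {"by_sector": by_sector, "needs_review": needs_review}

sample = [
    Deployment("AcmeBank", "retail_banking", "creditworthiness_assessment"),
    Deployment("ShopCo", "ecommerce", "faq"),
]
report = gap_analysis(sample)
```

The automated pass only narrows the field; whether a flagged deployment actually triggers Annex III remains a legal judgment made case by case.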
Phase 2 — Technical compliance (Months 3-6):
- Implement Article 50 transparency requirements across all deployments (AI disclosure)
- For high-risk deployments: establish a quality management system (Article 17), implement risk management (Article 9), ensure data governance (Article 10), create technical documentation (Article 11), enable logging (Article 12), and ensure human oversight capabilities (Article 14)
- Develop a conformity assessment process for new customer deployments
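For the Article 12 logging requirement in Phase 2, a sketch of an append-only, line-delimited audit record; the field names are illustrative, not prescribed by the Act:

```python
import io
import json
import time
import uuid

def log_event(record_file, deployment_id: str, event: str, detail: dict) -> dict:
    """Append one audit record as a JSON line.

    Article 12 requires high-risk systems to technically allow the
    automatic recording of events over the system's lifetime.
    """
    entry = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "deployment": deployment_id,
        "event": event,
        "detail": detail,
    }
    record_file.write(json.dumps(entry) + "\n")
    return entry

# Usage demo with an in-memory buffer standing in for durable storage.
buf = io.StringIO()
entry = log_event(buf, "dep-001", "escalation_to_human", {"reason": "low_confidence"})
```

In production this would write to durable, access-controlled storage with retention periods matching the Act's record-keeping rules.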
Phase 3 — Organizational compliance (Months 6-9):
- Train all customer success teams on EU AI Act obligations
- Update customer contracts to delineate provider vs. deployer responsibilities
- Establish a post-market monitoring system (Article 72)
- Register high-risk AI systems in the EU database (Articles 49 and 71)
Phase 4 — Ongoing compliance (Continuous):
- Implement change management processes for model updates that may affect risk classification
- Conduct regular audits of customer deployments for classification drift
- Monitor regulatory guidance from national competent authorities
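The classification-drift audit in Phase 4 reduces to comparing each deployment's current tier against the tier recorded at the previous audit. A minimal sketch, assuming tiers are stored as strings keyed by deployment ID:

```python
def detect_drift(previous: dict[str, str], current: dict[str, str]) -> list[str]:
    """Return deployment IDs whose risk tier changed since the last audit.

    New deployments (absent from `previous`) are also flagged, since
    they have never been through classification review.
    """
    return [dep for dep, tier in current.items() if previous.get(dep) != tier]
```

Any flagged deployment would re-enter the Phase 1 classification workflow before the change takes effect.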