
    AI Risk Classification Under the EU AI Act

    By Veratrace Research · Research Team
    February 3, 2026 | 6 min read

    The EU AI Act uses a risk-based approach to regulation. Understanding how to classify your AI systems is the first step toward compliance—and getting classification wrong has significant consequences.

    01 The Risk-Based Framework

    A European medical device manufacturer discovered the complexity of risk classification firsthand. Their AI-powered diagnostic imaging assistant analyzed radiology scans and highlighted potential abnormalities for physician review. The company initially classified the system as "limited risk" because physicians made all final diagnoses—the AI only provided suggestions. During pre-market consultation with their notified body, they learned their classification was wrong. Because the AI output materially influenced diagnostic decisions in healthcare, the system fell squarely within Annex III high-risk categories, regardless of human oversight. The reclassification triggered requirements for conformity assessment, quality management systems, technical documentation, and logging infrastructure they hadn't built. A system months from launch required fundamental re-architecture because of a classification error made early in development.

    This is why getting risk classification right matters from the start.

    02 Understanding the Risk Tiers

    The EU AI Act doesn't regulate all AI equally. Instead, it imposes requirements proportionate to risk. Systems that pose greater risks to health, safety, and fundamental rights face more stringent obligations. This approach is practical, but it creates a threshold question: how do you know which category your AI system falls into?

    Unacceptable Risk (Prohibited)

    Certain AI applications are banned entirely. The Act prohibits social scoring by public authorities that evaluates people based on social behavior and leads to detrimental treatment; AI that exploits vulnerabilities of specific groups (due to age, disability, or social or economic situation) to materially distort behavior in ways that cause harm; subliminal manipulation techniques beyond a person's consciousness that harmfully distort behavior; and real-time remote biometric identification in public spaces for law enforcement, subject to narrow exceptions.

    If your AI system falls into these categories, stop. The activity is prohibited regardless of safeguards.

    High-Risk

    High-risk AI systems face the Act's most stringent requirements. The classification comes through two paths.

    Safety-component path: AI systems that are safety components of products covered by Union harmonization legislation (medical devices, machinery, toys, aviation, etc.) and require third-party conformity assessment under that legislation are automatically high-risk.

    Annex III path: AI systems used in specified high-risk domains are classified as high-risk. These include biometric identification and categorization, management and operation of critical infrastructure, education and vocational training (access, assessment), employment (recruitment, task allocation, monitoring), essential services access (credit, public benefits, emergency services), law enforcement (risk assessment, profiling, crime analytics), migration and border control (application assessment, monitoring), and administration of justice.

    Limited Risk

    AI systems in this tier face specific transparency obligations but lighter overall requirements: emotion recognition systems must disclose their nature, biometric categorization systems must inform the people they categorize, deepfakes must be labeled as AI-generated, and AI systems that interact with natural persons must disclose that they are AI.

    Minimal Risk

    AI systems not falling into higher categories face no specific EU AI Act obligations, though general product safety and liability rules still apply. Most AI systems fall here.
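The four tiers can be summarized as a small enum. This is a minimal sketch; the names and string values are our own shorthand, not language taken from the Act:

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers, from most to least regulated."""
    UNACCEPTABLE = "prohibited"   # banned practices (e.g., social scoring)
    HIGH = "high-risk"            # safety-component path or Annex III domains
    LIMITED = "limited-risk"      # transparency obligations only
    MINIMAL = "minimal-risk"      # no AI Act-specific obligations
```

Ordering the tiers explicitly makes it easy to assert, in governance tooling, that a reclassification only ever moves a system between known tiers rather than into an ad hoc label.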

    03 Classification Methodology

    Effective classification requires systematic analysis.

    First, assess whether the AI is a safety component of a product covered by Union harmonization legislation. If yes and that legislation requires third-party conformity assessment, the system is high-risk through the safety-component path.

    Second, determine whether the system is used in an Annex III domain. This requires examining intended use, not just technical capability. An AI system capable of analyzing images isn't automatically high-risk—but the same system used for employment screening likely is.

    Third, apply the Annex III exception. Even within Annex III domains, a system may be excluded from high-risk classification if it performs a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns without influencing decisions, or performs preparatory tasks for Annex III assessments.

    These exceptions are narrow. Don't assume they apply without careful analysis.

    Fourth, evaluate whether transparency obligations apply independent of high-risk classification.
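The four steps above can be sketched as a single decision function. This is a simplified illustration, not a compliance tool: the field names are hypothetical, and a real assessment requires legal analysis of intended use, not boolean flags:

```python
def classify(system: dict) -> str:
    """Apply the four-step classification methodology in order.

    Hypothetical keys on `system` (all default to False if absent):
      is_safety_component, requires_third_party_assessment,
      annex_iii_domain, narrow_exception_applies, transparency_obligation
    """
    # Step 1: safety-component path under Union harmonization legislation
    if (system.get("is_safety_component")
            and system.get("requires_third_party_assessment")):
        return "high-risk"
    # Steps 2-3: Annex III domain, unless a narrow exception applies
    if (system.get("annex_iii_domain")
            and not system.get("narrow_exception_applies")):
        return "high-risk"
    # Step 4: transparency obligations independent of high-risk status
    if system.get("transparency_obligation"):
        return "limited-risk"
    return "minimal-risk"
```

Note the ordering: a system that qualifies as high-risk through either path stays high-risk even if it also carries transparency obligations, which mirrors how the steps are sequenced in the methodology.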

    04 Common Classification Mistakes

    Over-reliance on human oversight: Organizations assume that human review prevents high-risk classification. The Act doesn't work this way. If the AI system materially influences the human decision, high-risk classification may still apply.

    Technical versus use-based classification: Organizations classify based on what the AI does technically rather than how it's used. Classification depends on intended use in specified domains.

    Ignoring downstream use: Developers may not consider how deployers will use their systems. If a general-purpose system is predictably used in Annex III domains, that affects classification.

    Narrow reading of Annex III: Organizations read Annex III categories narrowly to avoid high-risk classification. Regulators are likely to read them broadly.

    05 Requirements by Risk Level

    High-risk systems must implement a risk management system throughout the lifecycle; data governance covering training-data quality and representativeness; technical documentation of design and operation; automatic logging of system operations; transparency that enables users to interpret outputs; human oversight measures; accuracy, robustness, and cybersecurity; a quality management system; conformity assessment (self-assessment or third-party, depending on domain); EU database registration; and post-market monitoring.

    Limited-risk systems require transparency obligations specific to system type, labeling of AI-generated content, and disclosure of AI nature to users.

    Minimal-risk systems have no specific EU AI Act requirements, though general obligations under other law continue to apply.
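As a quick reference, the tier-to-requirements mapping described above can be captured in a lookup table. The labels are abbreviated and illustrative, not the Act's own terminology:

```python
# Abbreviated requirement checklists per risk tier (illustrative only)
REQUIREMENTS = {
    "high-risk": [
        "risk management system",
        "data governance",
        "technical documentation",
        "automatic logging",
        "transparency for users",
        "human oversight",
        "accuracy, robustness, cybersecurity",
        "quality management system",
        "conformity assessment",
        "EU database registration",
        "post-market monitoring",
    ],
    "limited-risk": [
        "type-specific transparency",
        "AI-generated content labeling",
        "disclosure of AI nature",
    ],
    "minimal-risk": [],  # no AI Act-specific obligations
}
```

A table like this is useful in compliance tracking: once a system's tier is decided, its open obligations can be generated mechanically rather than assembled by hand for each system.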

    06 Classification Documentation

    Maintain documentation supporting classification decisions. This includes analysis of whether safety-component path applies, assessment of intended use against Annex III categories, evaluation of any applicable exceptions, reasoning behind classification conclusion, and periodic review as use cases evolve.

    This documentation supports regulatory inquiry and internal governance.
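A minimal record structure for the classification documentation described above might look like the following. The field names are our own, not mandated by the Act:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ClassificationRecord:
    """One system's classification decision and its supporting analysis."""
    system_name: str
    intended_use: str
    safety_component_analysis: str   # does the safety-component path apply?
    annex_iii_assessment: str        # intended use vs. Annex III categories
    exceptions_considered: str       # narrow exceptions evaluated, with reasoning
    conclusion: str                  # e.g. "high-risk"
    decided_on: date
    next_review: date                # periodic review as use cases evolve
```

Keeping `next_review` as a first-class field operationalizes the "periodic review" requirement: a governance platform can surface every record whose review date has passed.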

    07 Platform Support for Classification

    AI governance platforms support classification through inventory systems that track AI systems and their uses, classification workflows that apply consistent methodology, documentation management that maintains classification records, monitoring that detects use changes affecting classification, and compliance tracking aligned with classification-based requirements.

    08 Conclusion

    EU AI Act risk classification determines regulatory obligations. Getting classification wrong triggers either over-investment in unnecessary compliance or under-investment that creates regulatory exposure.

    Classification requires understanding the risk framework, applying it systematically to your AI systems, documenting the analysis, and reviewing as systems and uses evolve.

    EU AI Act compliance engineering details implementation requirements once classification is determined. AI governance for enterprises provides the broader framework within which classification operates.

    Cite this work

    Veratrace Research. "AI Risk Classification Under the EU AI Act." Veratrace Blog, February 3, 2026. https://veratrace.ai/blog/ai-risk-classification-eu-ai-act


    Veratrace Research

    Research Team

    Contributing to research on verifiable AI systems, hybrid workforce governance, and operational transparency standards.
