EU AI Act deep dive #2
Let's continue our deep dive into the EU AI Act with a focus on high-risk AI systems!
Hello everyone,
Here is the 2nd article of my series of 4 articles on the AI Act!
The 1st one was an introduction to the text and the prohibited practices; the 2nd one (this one) focuses on high-risk AI systems; the 3rd will cover all the other requirements and main information; and the last one will talk about innovation, governance and penalties, as well as the timeline of its entry into force (first elements in February 2025).
Disclaimer: the legislation includes some abstract areas that still need clarification. These will be further defined through delegated acts, guidelines from EU institutions, and standards developed by European Standardization Organizations. Businesses can expect more detailed guidance soon.
Introduction
The EU AI Act places significant emphasis on regulating AI systems, particularly those classified as high-risk due to their potential impact on public safety, health, and fundamental rights. These high-risk AI systems are used in critical sectors such as healthcare, transportation, and law enforcement.
The regulation aims to ensure that high-risk AI systems operate safely, ethically, and transparently by enforcing stringent requirements throughout their lifecycle, from design and development to deployment and post-market monitoring.
Which AI systems are high-risk?
High-risk AI systems fall into one of two groups. The first group covers AI systems that meet both of the following conditions:
Safety-critical role: The AI system is either used as a safety component of a product or is itself a product that falls under specific EU regulations covering areas like machinery, medical devices, motor vehicles, civil aviation, and more (see the full list in the Source section below).
Third-party assessment: The product that incorporates the AI system must undergo a third-party conformity assessment to ensure it meets the necessary safety and regulatory standards under these EU laws.
The second group covers AI systems in the following categories:
Non-banned biometrics: systems for remote biometric identification (excluding those used solely to verify a person’s identity), biometric categorization systems that deduce sensitive or protected traits, and systems for recognizing emotions.
Critical infrastructure: intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity.
Education and vocational training: used for determining admission or placement in educational and vocational programs, evaluating learning outcomes, assessing the appropriate level of education, and monitoring prohibited behaviour during exams.
Employment, workers management and access to self-employment: systems used for recruitment or selection, particularly targeted job ads, analysing and filtering applications, and evaluating candidates; for decisions on promotion and termination of contracts; for allocating tasks based on personality traits, characteristics or behaviour; and for monitoring and evaluating performance.
Access to and benefit from essential public and private services: systems used by public authorities to assess eligibility for benefits and services, as well as systems that evaluate creditworthiness (excluding fraud detection), classify emergency calls, and determine risk and pricing for health and life insurance.
Law enforcement: systems that assess a person's risk of becoming a crime victim, use polygraphs, evaluate evidence reliability, predict reoffending, or profile suspects in criminal investigations; these require strict oversight due to potential risks to justice and fairness.
Migration, asylum and border control management: Polygraphs, health and migration risk assessments, asylum and visa applications, and related eligibility complaints. Detecting, identifying, or recognizing individuals, except for verifying travel documents.
Administration of justice and democratic processes: AI systems used for legal research, applying the law, or alternative dispute resolution. Influencing elections, referenda, or voting behaviour, except for tools that organize or optimize political campaigns without direct interaction with people.
However, an AI system in those categories is not considered high-risk when it does not pose a significant risk to the health, safety or fundamental rights of persons, and it fulfils at least one of these conditions:
it is intended to perform a narrow procedural task;
it is intended to improve the result of a previously completed human activity;
it is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review;
it is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed above.
By contrast, an AI system is always considered high-risk when it performs profiling of natural persons.
To conclude, High-risk AI systems are those used in safety-critical products or sectors, such as machinery, medical devices, and motor vehicles, which must undergo third-party assessments to meet EU safety standards. Additionally, AI systems involved in sensitive areas like biometrics, critical infrastructure, education, employment, public services, law enforcement, migration, and justice are also considered high-risk due to their potential impact on individuals' rights and safety. However, systems that perform narrow tasks without significant risks to health or rights may not be classified as high-risk, except when they involve profiling individuals.
Requirements for high-risk AI systems
Now that we have identified which AI systems are classified as high-risk, let's explore the general requirements associated with them.
1. Risk management system
A continuous risk management system must be established for high-risk AI systems throughout their lifecycle. Key components of the system include:
Identifying and analysing risks to health, safety, and fundamental rights from intended use and foreseeable misuse.
Evaluating and mitigating risks based on post-market monitoring data.
Adopting targeted measures to address identified risks.
Risk management measures should:
Focus on eliminating or reducing risks through design and development.
Implement mitigation and control measures for risks that cannot be fully eliminated.
Provide technical information and training to deployers.
High-risk AI systems must undergo testing to ensure consistent performance and compliance, potentially in real-world conditions. Special consideration is required for vulnerable groups, especially those under 18. Furthermore, risk management processes must be combined with existing Union law requirements.
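The Act mandates the process (identify, evaluate, mitigate, monitor) but not any particular tooling. Purely as an illustrative sketch, assuming a team keeps a machine-readable risk register, one entry could look like the following (all field names are invented for the example):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """One entry in an illustrative risk register for a high-risk AI system.
    The AI Act requires the risk management process, not this data format."""
    risk_id: str                    # internal identifier, e.g. "RISK-012"
    description: str                # risk to health, safety or fundamental rights
    source: str                     # "intended use" or "foreseeable misuse"
    affected_groups: list[str]      # e.g. ["patients", "minors under 18"]
    severity: str                   # e.g. "low" / "medium" / "high"
    likelihood: str                 # estimated probability of occurrence
    mitigation: str                 # design change, control measure or deployer training
    residual_risk: str              # risk remaining after mitigation
    post_market_evidence: list[str] = field(default_factory=list)  # monitoring data
    last_reviewed: date = field(default_factory=date.today)

# Hypothetical entry for a risk identified during design
example = RiskEntry(
    risk_id="RISK-012",
    description="Triage model under-performs for patients under 18",
    source="intended use",
    affected_groups=["minors under 18"],
    severity="high",
    likelihood="medium",
    mitigation="Age-stratified validation set and mandatory clinician review",
    residual_risk="low",
)
```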
2. Data governance
High-risk AI systems using data-driven training must utilize training, validation, and testing datasets that meet specific quality standards. Data governance and management practices must be applied, addressing:
Design choices.
Data collection processes and origin.
Data preparation (e.g., annotation, cleaning, labeling…).
Bias detection and mitigation, as well as addressing data gaps.
Datasets must be relevant, representative, as error-free as possible, and context-appropriate for the intended use of the AI system.
Bias detection may require processing sensitive personal data, but only with strict safeguards under existing EU GDPR rules and related laws:
Limiting data reuse and ensuring privacy protections.
Securing access, ensuring confidentiality, and deleting data once bias is corrected or after its retention period.
For non-trained AI systems, these rules apply only to testing datasets.
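The Act does not prescribe specific tools or metrics for these checks. Purely as a sketch, assuming a tabular training set held in pandas (with invented column names such as age_group and label), a provider might start its data governance documentation from a simple quality report like this:

```python
import pandas as pd

def dataset_quality_report(df: pd.DataFrame, group_col: str, label_col: str) -> dict:
    """Illustrative checks only; the AI Act requires documented data governance,
    not this particular report."""
    return {
        # Missing values point at data gaps that must be addressed or justified
        "missing_values": df.isna().sum().to_dict(),
        # Duplicate rows inflate the apparent dataset size and can bias evaluation
        "duplicate_rows": int(df.duplicated().sum()),
        # Share of each group, to spot under-represented populations
        "group_shares": df[group_col].value_counts(normalize=True).to_dict(),
        # Positive-label rate per group, a first hint of sampling or labelling bias
        "label_rate_by_group": df.groupby(group_col)[label_col].mean().to_dict(),
    }

# Hypothetical usage on a small recruitment-screening training set
df = pd.DataFrame({
    "age_group": ["18-25", "26-40", "26-40", "41-65"],
    "label": [1, 0, 1, 0],
})
print(dataset_quality_report(df, group_col="age_group", label_col="label"))
```

Such checks would feed the bias detection and data-gap items above; they do not by themselves demonstrate compliance.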
3. Technical documentation
Technical documentation for a high-risk AI system must be prepared before it goes to market, kept up to date, and clearly show compliance with legal requirements for national authorities to review. It must include these elements:
General description of the AI system, including:
Intended purpose, provider, version history
Interaction with external hardware/software
Software versions and update requirements
Forms in which the system is available (e.g., APIs, downloads)
Hardware specifications and product component details (pictures and illustrations)
User-interface and instructions for the deployer
Detailed description of the AI system’s elements and development process, covering:
Development methods, including use of pre-trained systems
System design logic, algorithms, and key design choices
System architecture and computational resources
Data requirements, including data sources and preparation
Human oversight measures and compliance with legal requirements
Pre-determined system changes and solutions to ensure compliance
Validation and testing procedures
Cybersecurity measures
Information on system performance, including:
Capabilities, limitations, accuracy levels for target users
Foreseeable risks and unintended outcomes
Human oversight measures and input data specifications
Appropriate performance metrics.
Risk management system description (as seen before).
Description of system changes throughout its lifecycle.
List of harmonised standards applied, or alternative compliance solutions.
EU declaration of conformity (cf. details later in Obligations).
Post-market performance evaluation system, including a monitoring system based on the risks of their high-risk AI systems.
This system should track and analyze the AI's performance over time to ensure compliance, following a standardized plan set by the Commission by February 2026.
Providers can integrate this plan with existing monitoring systems under EU laws.
SMEs and start-ups can submit simplified technical documentation using a form that will be created by the Commission. And, if the AI system relates to a product covered by Union harmonization legislation (listed in the Source section below), a single set of documentation must include both the AI and product-related information.
4. Record-keeping
High-risk AI systems must allow for automatic event logging throughout their lifetime. Logging capabilities should ensure traceability for:
Identifying risky situations or substantial modifications
Facilitating post-market monitoring
Monitoring the operation of high-risk AI systems
For specific high-risk AI systems, logging must include:
Recording the period of each use (start and end times)
Reference database used for input data checks
Input data that matched during searches
Identification of individuals involved in result verification
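The Act specifies which events must be recorded for these systems, but not a schema or storage format. As a minimal illustrative sketch (the record structure and field names are assumptions), the required fields could be captured in a structured, machine-readable log entry like this:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class UseLogEntry:
    """Illustrative log record covering the fields required for specific
    high-risk AI systems; the schema itself is not prescribed by the Act."""
    session_start: datetime        # start of the period of use
    session_end: datetime          # end of the period of use
    reference_database: str        # database used to check input data against
    matched_input_data: list[str]  # input data that produced a match
    verified_by: list[str]         # persons who verified the results

entry = UseLogEntry(
    session_start=datetime(2025, 3, 1, 9, 0, tzinfo=timezone.utc),
    session_end=datetime(2025, 3, 1, 9, 12, tzinfo=timezone.utc),
    reference_database="reference-db-v3",
    matched_input_data=["frame_0451"],
    verified_by=["reviewer_a", "reviewer_b"],  # cf. the human oversight requirements
)

# Append-only, machine-readable storage supports traceability and post-market monitoring
print(json.dumps(asdict(entry), default=str, indent=2))
```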
5. Transparency and provision of information to deployers
High-risk AI systems must be designed with sufficient transparency to allow deployers to interpret and use the system's output appropriately, ensuring compliance with obligations for both providers and deployers. These systems must also come with clear, complete, and accessible instructions for use, covering:
Provider information: identity and contact details of the AI provider and authorized representative.
System characteristics, capabilities and limitations:
Its intended use
Accuracy, robustness, and cybersecurity metrics
Known or foreseeable risks that could affect safety, health, or fundamental rights
Explanation capabilities of the system’s output (if applicable)
System performance regarding specific user groups (if applicable)
Specifications about the input/training data used, considering the system's purpose
Guidance to help users interpret and appropriately use system output
Pre-determined changes: Describe any planned changes to the system's performance as set during the initial assessment.
Human oversight: Include oversight measures and technical provisions to help users understand system outputs.
Resources and maintenance: Specify the computational/hardware needs, expected lifespan, and necessary maintenance or software updates.
Logging mechanisms: If relevant, describe how deployers can collect, store, and interpret system logs.
6. Human oversight
High-risk AI systems must be designed with tools enabling effective human oversight during their use to prevent or reduce risks to health, safety, or fundamental rights, even in cases of misuse.
Oversight measures should fit the system's risks, autonomy, and context, using built-in and deployable options provided by the developer. The AI system must empower overseers to:
Understand the system’s capabilities and monitor it for issues.
Be aware of automation bias (over-reliance on AI outputs).
Interpret the system’s output correctly.
Choose to ignore or override AI outputs when necessary.
Intervene or stop the system safely.
For remote biometric identification systems, decisions based on AI outputs must be confirmed by at least two qualified individuals. Exceptions apply for law enforcement, migration, border control, and asylum cases.
7. Accuracy, robustness and cybersecurity
AI systems must be designed for consistent accuracy, robustness, and cybersecurity throughout their lifecycle. The system’s accuracy levels and metrics must be provided in the user instructions.
AI systems should be resilient to errors, faults, or inconsistencies caused by interactions with humans or other systems. Redundancy solutions, such as backups or fail-safe mechanisms, should be used. Systems that continue learning must minimize biased feedback loops.
AI systems must resist unauthorized tampering or exploitation of vulnerabilities. Cybersecurity measures should address AI-specific risks like data or model poisoning, adversarial examples, and confidentiality attacks.
Conclusion on requirements
High-risk AI systems must meet a comprehensive set of requirements to ensure safety, transparency, and compliance throughout their lifecycle. These include establishing robust risk management and data governance systems, maintaining clear technical documentation, and enabling effective human oversight. Additionally, strong safeguards must be in place to ensure accuracy, resilience, and cybersecurity, while post-market monitoring ensures ongoing adherence to regulatory standards. By fulfilling these obligations, providers can ensure that their AI systems operate safely and ethically in sensitive and high-stakes environments.
Obligations for high-risk AI systems
Now that we have seen the requirements, let's see how they translate into obligations for the various players in the AI value chain. But first, let’s define those.
Definition of the roles
Provider: a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.
Deployers: a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.
Distributor: a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market.
Importer: a natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country.
The roles can evolve depending on the involvement of the operator:
Third parties (distributors, importers...) are considered providers of a high-risk AI system if:
They rebrand a high-risk AI system already on the market.
They make substantial modifications to an existing high-risk system.
They modify a non-high-risk AI system, turning it into a high-risk one.
Once a third party assumes provider obligations, the original provider is no longer responsible but must cooperate by sharing necessary technical information and assistance unless they explicitly prohibited turning their system into a high-risk AI system.
Manufacturers of products that include high-risk AI systems are considered providers if the AI system is sold or serviced under the manufacturer's name or trademark.
Providers and third-party suppliers of AI tools or services must agree in writing on the necessary technical support to ensure compliance with regulations. This doesn't apply to open-source tools except general-purpose AI models.
Obligations for providers
Providers must comply with the following obligations.
→ Be compliant with the previously seen requirements.
Be able to demonstrate conformity with the requirements to national competent authorities.
Cooperate with the competent authority and provide it, upon request, with all the information, documentation and logs necessary to demonstrate the conformity of the high-risk AI system with the requirements set out above.
→ AI systems, or their packaging or documentation, must display the provider's name, trade name or trademark, and contact address.
→ Have a quality management system that, at least, includes:
Regulatory compliance strategy: covering conformity assessment and managing modifications.
Design procedures: techniques for design control and verification.
Development processes: ensuring quality control and assurance.
Testing procedures: for examination and validation throughout development, with defined frequencies.
Technical specifications: applying relevant standards or alternative methods for compliance when needed.
Data management systems: covering data handling from acquisition to retention, including analysis and storage.
Risk management: as seen in requirements.
Post-market monitoring system.
Incident reporting procedures.
Communication protocols: for interaction with authorities, operators, and customers.
Record-keeping systems: to maintain all relevant documentation.
Resource management: including supply security measures.
Accountability framework: defining responsibilities of management and staff for the above aspects.
→ Documentation must be kept for national authorities for 10 years after the high-risk AI system is placed on the market, including:
Technical documentation (cf. requirements).
Quality management system documentation (see above).
Documentation on changes approved by notified bodies (if applicable).
Decisions and documents from notified bodies (if applicable).
EU declaration of conformity (as detailed below).
→ Logs automatically generated must be kept: providers must retain logs for at least six months or longer, depending on the system’s purpose and applicable laws, particularly regarding data protection.
→ Undergo a conformity assessment procedure: either internal control, a procedure involving a notified body, or the conformity assessment procedure required by Union harmonisation legislation.
→ Draw up an EU declaration of conformity, containing the following information:
AI system name, type, and identification details.
Provider's (or authorized representative's) name and address.
Statement confirming the declaration is under the provider's responsibility.
Confirmation that the AI system complies with the Regulation and other relevant Union laws.
Compliance statement if the AI system processes personal data (GDPR and other regulations).
References to harmonized standards or specifications for conformity.
Details of the notified body, conformity assessment, and certification (if applicable).
Place, date, and signature of the authorized person issuing the declaration.
→ CE marking must be placed on the AI system, or on its packaging or documentation:
For high-risk AI systems provided digitally, the CE marking must be accessible via the user interface or a machine-readable code.
It must be visible, legible, and permanent. If not possible, it should be on the packaging or accompanying documentation.
If applicable, the CE marking must include the notified body's identification number, and this number must also appear in any promotional materials that reference the system's CE compliance.
If high-risk AI systems are subject to other EU laws requiring CE marking, the CE mark indicates compliance with those laws as well.
→ Comply with the registration obligations: before marketing or using a high-risk AI system, the provider or authorized representative must register themselves and the system in the EU database (except for “Critical infrastructure” systems, which must be registered at national level).
→ Take the necessary corrective actions and provide information. If providers of high-risk AI systems find their system non-compliant, they must quickly take corrective actions, such as fixing, disabling, or recalling it, and notify relevant parties (distributors, deployers, etc.).
If the system poses a significant risk to health, safety, or rights, they must investigate, collaborate with deployers, and inform market authorities and the notified body about the issue and actions taken.
→ The system must comply with accessibility requirements of EU directives on accessibility of the websites and mobile applications of public sector bodies and on accessibility for products and services.
→ Before introducing high-risk AI systems to the EU market, providers from third countries must appoint an authorized representative within the EU. This representative is tasked with:
Verifying and maintaining records: ensuring the provider has drawn up necessary documents (EU declaration of conformity, technical documentation) and keeping these records for 10 years.
Cooperating with Authorities: providing information to competent authorities upon request and cooperating in actions to mitigate risks posed by the AI system.
Registration obligations: ensuring the provider's registration information is correct.
Compliance communication: acting as a point of contact for authorities regarding compliance with regulations.
Mandate termination: the representative can terminate the mandate if the provider acts contrary to its obligations, informing relevant authorities of the termination and reasons.
Obligations for deployers
Deployers must ensure transparency, human oversight, and legal safeguards:
Technical and organizational measures: deployers must use high-risk AI systems according to the accompanying instructions.
Human oversight: competent personnel must be assigned to supervise these AI systems.
Compliance with other laws: the obligations complement existing deployer obligations under Union or national laws.
Control of input data: deployers must ensure input data is relevant and representative for the AI system's purpose.
Monitoring and risk reporting: deployers must monitor AI system operations, and inform providers and authorities if risks arise. If serious incidents occur, deployers must notify relevant parties. Financial institutions may fulfill these obligations under existing governance laws.
Log retention: logs generated by high-risk AI systems must be kept for at least six months, or longer if required by Union/national law (including data protection laws).
Workplace notification: before using high-risk AI systems at work, deployers must inform affected workers and their representatives according to labor laws.
Registration: public authorities, or agencies or persons acting on their behalf, must ensure their AI systems are registered in the EU database. Unregistered systems must not be used.
Data protection impact assessment: deployers must use the information from AI systems to conduct data protection impact assessments as described in the GDPR or the Law Enforcement Directive.
Post-remote biometric identification: when using high-risk AI for biometric identification in criminal investigations, deployers must seek judicial or administrative authorization within 48 hours. Systems cannot be used indiscriminately for law enforcement, and decisions cannot be solely based on AI outputs. Data must be deleted if authorization is denied, and annual reports must be submitted.
Informing affected individuals: deployers must notify individuals when high-risk AI systems (in the areas listed above: biometrics, critical infrastructure, education and vocational training, employment and workers’ management, access to essential private and public services, law enforcement, migration, and justice) are used to make decisions affecting them. Law enforcement must follow the corresponding EU Directive.
Cooperation with authorities: deployers must cooperate with competent authorities to ensure compliance with this Regulation.
Furthermore, public or private entities deploying AI systems in areas like biometrics, education, employment, public services, law enforcement, migration, justice, credit scoring (excluding fraud detection), or life and health insurance must assess the impact these systems may have on fundamental rights. This assessment should include:
A description of how the system will be used according to its intended purpose.
The timeframe and frequency of use.
Categories of people likely to be affected by its use.
Potential risks of harm to those affected.
Implementation of human oversight measures.
Actions to take if risks materialize, including internal governance and complaint mechanisms.
Obligations for distributors
Distributors must meet the following obligations before making a high-risk AI system available on the market:
Verify the system has the CE marking, the EU declaration of conformity, and that providers/importers comply with their obligations.
If the system doesn't meet requirements, the distributor should not release it until it is brought into compliance. If it poses a risk, they must inform the provider/importer.
Ensure proper storage and transport conditions to maintain system compliance.
If a system is found non-compliant after being released, distributors must take corrective action, withdraw/recall it, and inform authorities if it poses a risk.
Provide authorities with information/documentation on compliance upon request.
Cooperate with authorities to mitigate any risks posed by the system.
Obligations for importers
Before placing a high-risk AI system on the market, importers must ensure:
Conformity and documentation:
The provider has conducted the relevant conformity assessment.
The provider has prepared technical documentation.
The system has CE marking, EU declaration of conformity, and instructions for use.
The provider has appointed an authorized representative.
Non-conformity actions: if the system is not in conformity or is falsified, importers must not place it on the market and inform the provider, representative, and market surveillance authorities if it presents a risk ("product presenting a risk").
Importer identification: importers must indicate their name, trade name/mark, and contact address on the system, packaging, or documentation.
Storage and transport: importers must maintain conditions that do not compromise the system's compliance with requirements.
Record keeping: importers must keep copies of the certificate, instructions, and EU declaration of conformity for 10 years.
Information provision: upon request, importers must provide authorities with information and documentation to demonstrate conformity, ensuring technical documentation is available.
Cooperation with authorities: importers must cooperate with authorities to reduce and mitigate risks posed by the system.
Conclusion on obligations
High-risk AI systems’ obligations require providers, deployers, distributors, and importers to take on specific responsibilities. Providers must ensure compliance through documentation, quality control, and risk monitoring. Deployers oversee proper system use and reporting, while distributors and importers ensure compliance and work with authorities to address risks. Each party plays a key role in ensuring AI systems meet legal and safety standards, protecting both individuals and society.
General conclusion
The obligations for high-risk AI systems under the EU AI Act set out responsibilities for providers, deployers, distributors, and importers to ensure compliance with safety, transparency, and performance standards. These regulations are designed to manage potential risks posed by AI systems in sensitive sectors, while aiming to balance innovation with the need for oversight. How these measures will impact industries remains to be seen, as they navigate both regulatory demands and the evolving AI landscape.
Source
Machinery (Directive 2006/42/EC), Toy Safety (Directive 2009/48/EC), Recreational craft and personal watercraft (Directive 2013/53/EU), Lifts and safety components (Directive 2014/33/EU), Equipment for potentially explosive atmospheres (Directive 2014/34/EU), Radio equipment (Directive 2014/53/EU), Pressure equipment (Directive 2014/68/EU), Cableway installations (Regulation EU 2016/424), Personal protective equipment (Regulation EU 2016/425), Appliances burning gaseous fuels (Regulation EU 2016/426), Medical devices (Regulation EU 2017/745), In vitro diagnostic medical devices (Regulation EU 2017/746), Civil aviation security (Regulation EC No 300/2008), Two- or three-wheel vehicles and quadricycles (Regulation EU 168/2013), Agricultural and forestry vehicles (Regulation EU 167/2013), Marine equipment (Directive 2014/90/EU), Interoperability of the rail system within the EU (Directive EU 2016/797), Motor vehicles and their trailers (Regulation EU 2018/858), Type-approval of motor vehicles (Regulation EU 2019/2144), Civil aviation and unmanned aircraft (Regulation EU 2018/1139).