EU AI Act deep dive #3
Let's now focus on the obligations for non-high-risk AI systems and general purpose AI systems.
Hello everyone,
Here is the 3rd article of my series of 4 articles on the EU AI Act!
The 1st one was an introduction to the text and the prohibited practices, and the 2nd one focused on high-risk AI systems. This one covers the remaining requirements for non-high-risk AI systems and general-purpose AI models, and the last one will cover innovation, governance and penalties, as well as the timeline of entry into force (first provisions applying in February 2025).
Introduction
Both providers and deployers of AI, whether dealing with general-purpose models or non-high-risk systems, face a set of obligations designed to ensure responsible use. This article explores these requirements, focusing on the need for transparency, voluntary best practices, and how AI systems must be documented and registered to meet regulatory standards.
Transparency obligations for all providers and deployers of AI systems
ℹ️ A quick reminder of provider and deployer definitions:
Provider: a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.
Deployer: a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.
Obligations for providers
They must ensure that people interacting with AI systems are informed that they are dealing with AI, unless it's obvious to a reasonably informed person.
This rule doesn't apply to AI systems used by law enforcement for criminal detection or investigation.
Providers must mark AI-generated synthetic content (audio, image, video or text), including output from general-purpose AI systems, in a machine-readable format so that it is detectable as artificially generated.
This obligation does not apply if the AI only assists with standard editing, doesn't substantially alter the input, or is used for authorized purposes like crime detection or investigation.
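What counts as a machine-readable marking is left to the state of the art; the Act's recitals mention techniques such as watermarks and metadata identification. As a minimal sketch of the idea, assuming PNG output and the Pillow library, a provider could embed a provenance tag in the generated image's metadata:

```python
# Illustrative sketch only, not a compliant implementation: it embeds a
# simple provenance tag in a PNG text chunk. Real systems would rely on
# more robust techniques such as watermarking or C2PA content credentials.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(in_path: str, out_path: str, generator: str) -> None:
    """Copy a PNG image, adding machine-readable 'AI-generated' metadata."""
    image = Image.open(in_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")   # key names are hypothetical
    metadata.add_text("generator", generator)
    image.save(out_path, pnginfo=metadata)

def is_tagged_ai_generated(path: str) -> bool:
    """Detect the illustrative provenance tag on a PNG image."""
    return Image.open(path).text.get("ai_generated") == "true"
```

A plain metadata tag is trivially stripped when the file is re-encoded, which is why robust, standardized marking techniques matter in practice; the sketch only shows where a machine-readable marker could live.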
Obligations for deployers
Deployers of emotion recognition or biometric categorization systems must inform individuals about the system's operation, and must process personal data in accordance with the GDPR and related EU data protection laws.
This obligation doesn't apply to systems used legally for crime detection or investigation.
Deployers of AI systems that generate or manipulate text published to inform the public on matters of public interest must disclose that the content is AI-generated.
This rule doesn't apply if the use is authorized for crime detection or investigation, or if the content has undergone human review under editorial responsibility.
Requirements for non-high-risk AI systems
Non-high-risk AI systems
As a reminder, an AI system is not considered high-risk when it does not pose a significant risk to the health, safety or fundamental rights of persons, and it is fulfilling at least one of these conditions:
it is intended to perform a narrow procedural task;
it is intended to improve the result of a previously completed human activity;
it is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review;
it is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III.
Conversely, an AI system is always considered high-risk when it performs profiling of natural persons.
By February 2, 2026, the Commission, after consulting the European Artificial Intelligence Board, will provide guidelines with a detailed list of practical examples of AI systems that are high-risk and not high-risk.
Documentation and registration obligations
A provider determining that an AI system is not high-risk must document the assessment before launching or deploying the system.
The provider, or their authorized representative if applicable, must also register both themselves and the system in the EU database.
Upon request from national authorities, the provider must submit the assessment documentation.
General public safety obligations
As a reminder, non-high-risk AI systems must still be safe when placed on the market or put into service, and the EU Regulation on general product safety applies as a safety net to such systems.
Non-binding practices
Providers of non-high-risk AI systems are encouraged to create voluntary codes of conduct, adapting some high-risk AI requirements to their lower-risk systems, using best practices like model and data cards.
Providers and deployers of all AI systems, high-risk or not, should also voluntarily follow additional guidelines on ethics, sustainability, diversity, and inclusivity, ensuring stakeholder participation. To make these voluntary codes effective, they should have clear goals, measurable outcomes, and involve relevant stakeholders.
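Model and data cards are an industry practice rather than something the Act defines. Purely as a hedged illustration, a minimal model card kept as a machine-readable record could look like the following; every field name and value here is hypothetical:

```python
# A minimal, illustrative model card as a machine-readable record.
# Field names are my own; the AI Act does not prescribe a card format.
import json

model_card = {
    "model_name": "example-spam-classifier",   # hypothetical model
    "intended_use": "Filtering unsolicited email for internal mailboxes",
    "out_of_scope_uses": ["Decisions producing legal effects on a person"],
    "training_data": "Labelled internal email corpus, 2020-2023",
    "evaluation": {"metric": "F1", "value": 0.91, "split": "held-out 10%"},
    "known_limitations": ["Accuracy degrades on non-English text"],
    "contact": "ml-team@example.com",
}

print(json.dumps(model_card, indent=2))
```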
General-purpose AI models
We are speaking here about generalist models, like the generative AI models and LLMs developed by OpenAI, Anthropic, Meta, Google, Microsoft, Mistral…
Classification of general-purpose AI models
A model is classified as a general-purpose AI model with systemic risk, rather than one without, if it meets one of the following conditions:
It has high-impact capabilities assessed using relevant technical tools, methodologies, indicators, and benchmarks (e.g. the cumulative amount of computation used for its training, measured in floating point operations, is greater than 10^25; see the back-of-the-envelope sketch after this list).
Following a Commission decision or a qualified alert from the scientific panel, it is determined to have capabilities or impact equivalent to those outlined in the previous point, based on these criteria:
The number of its parameters.
The quality or size of the data set, for example measured through tokens.
The amount of computation used for training, measured in floating point operations or estimated from training cost, time, or energy consumption.
The model's input and output modalities (e.g. text-to-text, text-to-image), with state-of-the-art thresholds for determining high-impact capabilities for each modality, and the specific types of inputs and outputs (e.g. biological sequences).
Evaluations of the model's capabilities: the number of tasks it can perform without additional training, its adaptability to learn new and distinct tasks, its level of autonomy and scalability, and the tools it has access to.
Its high impact on the internal market due to its reach, presumed when it is made available to at least 10 000 registered business users established in the EU.
The number of registered end-users.
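To get a feel for the 10^25 FLOP threshold, a widely used rule of thumb estimates the training compute of a dense transformer as roughly 6 × parameters × training tokens. Here is a back-of-the-envelope sketch (an approximation for intuition, not the official measurement methodology):

```python
# Rough estimate of training compute with the common rule of thumb
# FLOPs ~= 6 * parameters * training tokens (dense transformers).
# An approximation for intuition, not the Act's measurement method.

THRESHOLD_FLOPS = 1e25  # the Act's presumption threshold

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

# Hypothetical example: a 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"~{flops:.1e} FLOPs")                              # ~6.3e+24
print("above 10^25 threshold:", flops > THRESHOLD_FLOPS)  # False
```

By this estimate, even a large present-day model can sit below the threshold; the presumption targets only the very largest training runs.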
Procedure and notification to the Commission
Providers of general-purpose AI models that meet the systemic-risk criteria (the high-impact capabilities condition above) must notify the Commission within two weeks and provide evidence that the requirement is met. If the Commission identifies systemic risks that were not reported, it can designate the model as having systemic risk.
Providers may submit arguments to show their AI model, despite meeting the criteria, does not pose systemic risks due to specific characteristics. If the Commission finds the provider's arguments unconvincing, the AI model will be classified as having systemic risk.
Obligations for general-purpose AI models
Providers of general-purpose AI models must:
Maintain and update technical documentation, including training/testing processes, for the AI Office and authorities, containing at minimum:
A general description of the general-purpose AI model including:
the tasks that the model is intended to perform and the type of AI systems in which it can be integrated;
the acceptable use policies applicable;
the date of release and methods of distribution;
the architecture and number of parameters;
the modality and format of inputs and outputs;
the licence.
A detailed description of the elements referred to in the previous point, and relevant information on the process of development, including:
The technical tools and infrastructure required for integrating the model into AI systems.
Design specifications and training process, including methodologies, design choices, and optimization goals.
Information on training, testing, and validation data, such as data type, source, characteristics, and bias detection methods.
Computational resources used for training, including floating point operations and training time.
Known or estimated energy consumption during training.
Provide up-to-date documentation to AI system providers integrating the model, ensuring they understand its capabilities and limitations, while protecting intellectual property and trade secrets. This documentation must include:
A general description of the general-purpose AI model including:
The tasks that the model is intended to perform and the type of AI systems into which it can be integrated.
The acceptable use policies.
The date of release and methods of distribution.
How the model interacts with external hardware or software, if applicable.
The versions of relevant software related to the use of the general-purpose AI model, if applicable.
The architecture and number of parameters.
The modality and format of inputs and outputs.
The licence of the model.
A description of the elements of the model and of the process for its development, including:
The technical requirements, such as instructions, infrastructure, and tools, needed to integrate the AI model into an AI system.
The modality and format of the inputs and outputs and their maximum size.
Information on the data used for training, testing and validation, if applicable, including the type and provenance of data and curation methodologies.
Implement a policy to comply with EU copyright law, under the Directive on copyright and related rights in the Digital Single Market.
Publish a detailed summary of the training content used for the AI model based on the AI Office template.
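The Act prescribes what this technical documentation must contain, not the format it is kept in. Purely as an illustration of how the minimum items listed above could be held as a structured, machine-readable record, with every key name being my own invention:

```python
# Illustrative only: the AI Act lists the required content of the
# technical documentation but not its file format. Keys are my own.
technical_documentation = {
    "general_description": {
        "intended_tasks": "...",
        "integrable_ai_system_types": "...",
        "acceptable_use_policies": "...",
        "release_date_and_distribution": "...",
        "architecture_and_parameter_count": "...",
        "input_output_modalities": "...",
        "licence": "...",
    },
    "development_process": {
        "integration_tools_and_infrastructure": "...",
        "design_specs_and_training_methodology": "...",
        "data_type_provenance_and_bias_checks": "...",
        "training_compute_flops_and_time": "...",
        "estimated_energy_consumption": "...",
    },
}
```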
Providers releasing AI models under free/open-source licenses with publicly available parameters are exempt from documentation obligations, except for models with systemic risks.
Providers can follow approved codes of practice or European harmonized standards to demonstrate compliance. Otherwise, they must demonstrate alternative means of compliance for the Commission to assess.
Additional obligations for general-purpose AI models with systemic risk
Besides previous obligations, providers of general-purpose AI models with systemic risk must:
Complete the technical documentation with:
A detailed description of the evaluation strategies, including results, based on public evaluation protocols or other methods, with criteria, metrics, and identification of limitations.
A description of internal or external adversarial testing (e.g. red teaming) and any model adaptations, such as alignment or fine-tuning, if applicable.
A detailed explanation of the system architecture, showing how software components interact and integrate into the overall processing system, if applicable.
Conduct model evaluations using standardized protocols, including adversarial testing to identify and mitigate systemic risks.
Assess and mitigate systemic risks at the EU level, addressing risks from development, market placement, or use.
Track, document, and promptly report serious incidents and corrective measures to the AI Office and national authorities.
Ensure cybersecurity protection for the model and its physical infrastructure.
Providers can rely on codes of practice to meet these obligations until harmonized standards are published. Compliance with these standards presumes conformity, but providers not following an approved standard must demonstrate alternative compliance for Commission assessment.
Authorised representatives of providers of general-purpose AI models
Providers from non-EU countries must appoint an EU-based authorized representative via a written mandate. The provider must enable the authorized representative to fulfill their mandated tasks. Its tasks include:
Verifying the technical documentation and ensuring the provider meets their obligations.
Keeping technical documentation available for 10 years and providing the provider’s contact details.
Supplying necessary compliance information to the AI Office upon request.
Cooperating with the AI Office and authorities in relation to the model.
The authorized representative must be empowered to handle compliance issues on behalf of or in place of the provider.
If the authorized representative believes the provider is violating obligations, they must terminate the mandate and inform the AI Office immediately.
These obligations do not apply to providers of open-source AI models unless the models pose systemic risks.
Conclusion
The path to responsible AI use is complex, requiring a balance between regulatory obligations and voluntary practices. Providers and deployers have a pivotal role in ensuring transparency, but how these standards are implemented can vary depending on the nature of the AI system. While compliance with established regulations is necessary, embracing broader ethical practices and industry best standards can further support the development of trustworthy AI systems, contributing to a more nuanced and accountable AI ecosystem.