Scope and target groups of the AIA
The EU Regulation 2024/1689 (Artificial Intelligence Act, AIA) creates a uniform and horizontally applicable legal framework for the development, marketing, and use of artificial intelligence (AI) in the EU. In connection with the scope of application (Art. 2 AIA), the term "provider" is defined as "a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge" (Art. 3 (3) AIA). Furthermore, the "deployer" is defined as "a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity" (Art. 3 (4) AIA).
Distributors, importers, deployers, or other third parties may also become providers if, for example, they make significant changes to a high-risk AI system that has already been placed on the market or put into service. Further details are set out in Art. 25 AIA.
Gradual entry into force defines the roadmap
The various parts of the AIA will apply gradually in the EU (Art. 113 AIA):
- Chapter I (General provisions) and Chapter II (Prohibited practices) from February 2, 2025,
- Section 4 (Notifying authorities and notified bodies) of Chapter III (High-risk AI systems), Chapter V (General-purpose AI models), Chapter VII (Governance), Chapter XII (Penalties), and Art. 78 AIA (Confidentiality) from August 2, 2025, with the exception of Art. 101 AIA (Fines for providers of general-purpose AI models),
- most other provisions from August 2, 2026, and
- Art. 6(1) AIA (classification rules for high-risk AI systems) and the associated obligations from August 2, 2027.
High-risk AI systems (HR AI) that were placed on the market or put into service before August 2, 2026 must comply with the provisions of the AIA only if "those systems are subject to significant changes in their designs" (Art. 111 (2) AIA).
What is an AI system?
According to Art. 3 (1) AIA, an AI system is defined as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments". This definition is explained by the EU Commission in the document "Commission Guidelines on the definition of an artificial intelligence system established by Regulation (EU) 2024/1689 (AI Act)".
Risk-based AIA approach
AI systems are classified as follows in the AIA:
- Unacceptable risk pursuant to Art. 5 AIA, covering eight prohibited practices explained in the document "Commission Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act)"
- High risk pursuant to Art. 6 AIA, subject to a conformity assessment procedure; example: a medical device
- Limited risk with specific transparency requirements (Art. 50 AIA); example: a chatbot
- Minimal or no risk, for which providers should comply with voluntary codes of conduct (Art. 95 AIA); examples: video games or spam filters
Special case: General-Purpose AI System
A General-Purpose AI System (GPAI system) is "an AI system which is based on a general-purpose AI model, and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems" (Art. 3 (66) AIA). Section 2 of Chapter V of the AIA contains general obligations for providers of general-purpose AI models, namely the creation of technical documentation, the provision of information to providers who wish to integrate the model into their own AI systems, the establishment of a policy to comply with Union copyright law, and the publication of a sufficiently detailed summary of the content used for training. In addition, providers of general-purpose AI models with systemic risk must perform model evaluations including adversarial testing, ensure an adequate level of cybersecurity, assess and mitigate possible systemic risks, and report serious incidents. Moreover, these providers may rely on codes of practice to demonstrate compliance with the obligations in Art. 55 (1) AIA.
High-risk AI systems
The classification of an AI system as HR AI is subject to the following conditions:
- Art. 6 (1) AIA: The AI system is itself a product, or is a safety component of a product, within the scope of the EU harmonization legislation listed in Annex I and is required to undergo a third-party conformity assessment procedure under those acts, or
- Art. 6 (2) AIA: The AI system falls within the scope of the applications listed in Annex III and does not constitute an exception under Art. 6 (3).
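To make this two-pronged test easier to follow, the following minimal Python sketch encodes the classification logic. The class, field names, and boolean flags are illustrative assumptions standing in for a legal assessment of Annex I, Annex III, and the Art. 6 (3) exceptions; they are not an official scheme.

```python
# Hypothetical sketch of the Art. 6 AIA high-risk test; the flags are
# placeholders for the outcome of a legal assessment, not a mechanical check.
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    is_annex_i_product_or_safety_component: bool  # Art. 6 (1), Annex I
    requires_third_party_assessment: bool         # under the Annex I acts
    falls_under_annex_iii_use_case: bool          # Art. 6 (2), Annex III
    qualifies_for_art_6_3_exception: bool         # Art. 6 (3) derogation

def is_high_risk(p: AISystemProfile) -> bool:
    """Return True if either prong of the Art. 6 AIA test is met."""
    prong_1 = (p.is_annex_i_product_or_safety_component
               and p.requires_third_party_assessment)      # Art. 6 (1)
    prong_2 = (p.falls_under_annex_iii_use_case
               and not p.qualifies_for_art_6_3_exception)  # Art. 6 (2)
    return prong_1 or prong_2

# Example: an Annex III recruitment-screening system with no exemption
print(is_high_risk(AISystemProfile(False, False, True, False)))  # True
```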
AI literacy as the basis for the safe development and use of high-risk AI systems
Since February 2, 2025, providers and deployers of AI systems, and thus also of HR AI, have been required to ensure a sufficient level of AI literacy among their staff (Art. 4 AIA). AI literacy is defined in Art. 3 (56) AIA as "skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause". In particular, deployers should ensure that "the persons assigned to implement the instructions for use and human oversight […] have the necessary competence, in particular an adequate level of AI literacy, training and authority to properly fulfil those tasks" (Recital 91). The EU AI Office's Living Repository of AI Literacy Practices provides examples from various sectors on the scope and nature of competence building.
What do providers of high-risk AI systems need to do now?
Providers of HR AI are generally subject to the provisions of Art. 16 AIA. The following table lists the requirements in detail:
| Requirement | Reference AIA |
| --- | --- |
| Risk management system | Art. 9 |
| Data and data governance | Art. 10 |
| Technical documentation | Art. 11 |
| Record-keeping | Art. 12 |
| Transparency and provision of information to deployers | Art. 13 |
| Human oversight | Art. 14 |
| Accuracy, robustness and cybersecurity | Art. 15 |
| Labeling requirements | --- |
| Quality management system | Art. 17 |
| Documentation keeping | Art. 18 |
| Automatically generated logs | Art. 19 |
| Conformity assessment | Art. 43 |
| EU declaration of conformity | Art. 47 |
| CE marking | Art. 48 |
| Registration | Art. 49 (1) |
| Corrective actions and duty of information | Art. 20 |
| Proof of compliance to the competent national authority | --- |
| Obligations of deployers | Art. 26 |
| Market surveillance and control of AI systems on the Union market (market surveillance authorities) | Art. 74 |
| Post-market monitoring and vigilance (providers) | Art. 72, 73 |
As a basis for the safety and performance of HR AI, a risk management system and a quality management system must be established, continuously developed, and maintained (Art. 9 and 17 AIA).
The technical documentation proving compliance with the legal requirements must meet the requirements set out in Annex IV AIA and be kept at the disposal of the competent national authorities for ten years after the HR AI has been placed on the market or put into service (Art. 11 and 18 AIA). For SMEs and start-ups, a simplified form of technical documentation in accordance with Annex IV is provided (Art. 11 (1) AIA).
Providers of HR AI have the option of integrating the provisions of the AIA into their existing quality management systems and technical documentation (Art. 8 (2) AIA, Art. 11 (2) AIA).
Art. 10 AIA sets out data and data governance requirements, while Art. 15 AIA essentially addresses the development and technical assessment of AI systems. Providers must pay particular attention to the risk of "possibly biased outputs influencing input for future operations (feedback loops)" and must "ensure that any such feedback loops are duly addressed with appropriate mitigation measures" (Art. 15 (4) AIA).
To ensure the traceability of system functions (processes and events) and post-market monitoring, the "automatic recording of events (logs) over the lifetime of the system" must be implemented (Art. 12 AIA).
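As a rough illustration of what such recording could look like in practice, the following Python sketch writes machine-readable event entries using the standard logging module. The event schema (timestamp, input reference, output, model version) and the file name are assumptions; the AIA prescribes the logging capability itself rather than a specific format.

```python
# Minimal sketch of automatic event logging in the spirit of Art. 12 AIA;
# the JSON event schema is an illustrative assumption.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("hr_ai_event_log")
handler = logging.FileHandler("hr_ai_events.log")  # assumed log location
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_event(input_ref: str, output: str, model_version: str) -> None:
    """Record one system event as a machine-readable log entry."""
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_ref": input_ref,          # reference to the input data
        "output": output,                # prediction / recommendation
        "model_version": model_version,  # supports traceability
    }))

log_event("case-0042", "recommendation: review manually", "1.3.0")
```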
In Art. 13 AIA, the legislator provides for extensive transparency obligations. For "AI systems intended to interact directly with natural persons", it is also required that "the natural persons concerned are informed that they are interacting with an AI system" (Art. 50 (1) AIA). Mentioning AI technology as an operating principle in the intended purpose should fulfill this requirement. A procedure for fulfilling the transparency obligations was recently published (Prinz, 2024).
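As a minimal illustration of this disclosure obligation, the following Python sketch shows one conceivable way a chatbot could inform users; the message wording and the function are illustrative assumptions, since the AIA does not prescribe specific phrasing.

```python
# Hypothetical sketch of an Art. 50 (1) AIA disclosure for a chatbot;
# the wording is an assumption, not text prescribed by the AIA.
AI_DISCLOSURE = "Please note: you are interacting with an AI system."

def start_chat_session(user_name: str) -> str:
    """Open a session with the disclosure shown before any AI output."""
    return f"Hello {user_name}. {AI_DISCLOSURE} How can I help you?"

print(start_chat_session("Ms. Example"))
```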
The human oversight provided for in Art. 14 AIA refers to the technical implementation by the provider, whereas the associated deployer obligations are regulated in Art. 26 AIA.
The legislator stipulates specific retention periods for the provider for all documentation (Art. 18 AIA). This also applies to automatically generated logs (Art. 19 (1) AIA).
The provider's obligations for AI systems do not end with the placing on the market but continue within the framework of post-market monitoring and vigilance. Accordingly, Art. 72 AIA requires the provider to draw up a post-market monitoring plan, and Art. 73 AIA requires serious incidents to be reported to the competent authorities.
Conformity assessment as gatekeeper
The conformity assessment of HR AI is regulated in Art. 43 AIA and its form depends mainly on the use case:
| Use case | Reference in Art. 6 | Reference in Art. 43 | Conformity assessment procedure |
| --- | --- | --- | --- |
| Biometrics | Art. 6 (2), Annex III No. 1 | Paragraph 1 | Either internal control¹ (Annex VI AIA) or conformity assessment by a third party (Annex VII AIA) |
| Critical infrastructure; education and vocational training; employment, workers' management and access to self-employment; access to and enjoyment of essential private services and essential public services and benefits; law enforcement; migration, asylum and border control management; administration of justice and democratic processes | Art. 6 (2), Annex III No. 2-8 | Paragraph 2 | Internal control (Annex VI AIA) |
| Machinery; toys; recreational craft and personal watercraft; lifts; equipment and protective systems intended for use in potentially explosive atmospheres; radio equipment; pressure equipment; cableway installations; personal protective equipment; appliances burning gaseous fuels; medical devices; in vitro diagnostic medical devices | Art. 6 (1) AIA, Annex I Section A | Paragraph 3 | Conformity assessment in accordance with the relevant harmonization legislation, including proof of conformity with the AIA |

¹ Conformity assessment procedures based on internal control (Annex VI AIA) do not require the involvement of a notified body.
The goal: EU Declaration of Conformity
HR AI providers issue a written, machine-readable, physical or electronically signed EU declaration of conformity and thereby assume responsibility for compliance with the AIA (Art. 47 (1) and (4) AIA). For HR AI pursuant to Art. 6 (1) AIA that is also subject to other Union harmonization legislation, the provider issues a single EU declaration of conformity covering "all Union law applicable to the high-risk AI system". The declaration contains "all the information required to identify the Union harmonisation legislation to which the declaration relates" (Art. 47 (3) AIA).
Standards as guidance
The application of harmonized standards triggers a presumption of conformity with the relevant legal requirements (Art. 40 (1) AIA). In addition, the EU Commission reserves the right to adopt common specifications (Art. 41 AIA). The published "Standardisation request to CEN and CENELEC in support of Union policy on artificial intelligence" is intended to ensure the development of the relevant standards. Under the leadership of Joint Technical Committee 21 (JTC 21) of CEN and CENELEC, standards for use under the AIA are currently being developed in five working groups. The most important standards under development include AI Trustworthiness Framework, Risk Management, Quality Management, and Conformity Assessment.
What are the responsibilities of deployers of high-risk AI systems?
The obligations for deployers of HR AI are summarized in Art. 26 AIA. They must "take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions for use accompanying the systems [...]" (paragraph 1). This also includes the responsibility to ensure that "input data is relevant and sufficiently representative in view of the intended purpose", insofar as the input data is under the control of the deployer (paragraph 4). Building on the technical implementation by the provider, deployers must implement the requirements for human oversight (paragraph 2). As part of post-market monitoring and vigilance, deployers have reporting obligations regarding serious incidents to providers, importers, distributors, and market surveillance authorities (paragraph 5). They are also obliged to store automatically generated logs, insofar as such logs are under their control, for a period of at least six months (paragraph 6). Further requirements relate, among others, to the provision of information to employee representatives and affected employees on the use of HR AI (paragraph 7) and the obligation to carry out a data protection impact assessment in accordance with Art. 35 of Regulation (EU) 2016/679 or Art. 27 of Directive (EU) 2016/680 (paragraph 9).
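To illustrate the log-retention duty in paragraph 6, here is a small Python sketch of a deployer-side retention check. The directory name, file layout, and the use of file modification times are assumptions, and six months is only the minimum period required by the AIA; other Union or national law may require longer retention.

```python
# Sketch of a deployer-side retention check for Art. 26 (6) AIA: logs must
# be kept for at least six months. File layout and naming are assumptions.
from datetime import datetime, timedelta, timezone
from pathlib import Path

MIN_RETENTION = timedelta(days=183)  # at least six months

def may_be_deleted(log_file: Path, now: datetime) -> bool:
    """A log file may be considered for deletion only after the minimum
    retention period has elapsed since its last modification."""
    mtime = datetime.fromtimestamp(log_file.stat().st_mtime, timezone.utc)
    return now - mtime > MIN_RETENTION

log_dir = Path("hr_ai_logs")  # assumed location of the deployer's logs
if log_dir.is_dir():
    for f in log_dir.glob("*.log"):
        if may_be_deleted(f, datetime.now(timezone.utc)):
            print(f"{f} has passed the minimum retention period")
```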
Deployers of HR AI pursuant to Art. 6 (2) AIA that are bodies governed by public law or private entities providing public services, with the exception of HR AI intended for the area listed in Annex III No. 2, shall "perform an assessment of the impact on fundamental rights that the use of such system may produce" (Art. 27 (1) AIA).
AI systems that continue to learn are not a no-go in the EU
HR AI shall "undergo a new conformity assessment procedure in the event of a substantial modification, regardless of whether the modified system is intended to be further distributed or continues to be used by the current deployer" (Art. 43 (4) AIA). It further states: "For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV, shall not constitute a substantial modification". The determination of changes to the HR AI and its performance at the time of the original conformity assessment, as well as the identification of associated risks, could be carried out in a manner similar to the FDA's Predetermined Change Control Plan (PCCP).
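The following Python sketch illustrates, under stated assumptions, how pre-determined change bounds could be monitored in the spirit of such a plan. The metric names and bound values are purely illustrative; in practice they would come from the technical documentation referred to in point 2(f) of Annex IV.

```python
# Hypothetical sketch of checking a continuously learning HR AI against
# pre-determined change bounds (Art. 43 (4) AIA); the metrics and bounds
# below are illustrative assumptions, not values from the AIA.
PREDETERMINED_BOUNDS = {
    "accuracy": (0.90, 1.00),             # declared at initial assessment
    "false_positive_rate": (0.00, 0.05),  # declared at initial assessment
}

def within_predetermined_change(metrics: dict[str, float]) -> bool:
    """True if all monitored metrics stay inside the declared bounds;
    leaving the bounds would point to a substantial modification and
    hence a new conformity assessment."""
    return all(lo <= metrics[name] <= hi
               for name, (lo, hi) in PREDETERMINED_BOUNDS.items())

current = {"accuracy": 0.93, "false_positive_rate": 0.03}
print(within_predetermined_change(current))  # True: no new assessment
```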
Medical engineering as a leading sector for the AIA
Experience with placing AI systems on the market is particularly extensive in medical engineering, which can therefore be regarded as a leading sector for the AIA. More than 100 CE-marked AI systems for radiology are available on the EU market (as of March 2025), and the US Food and Drug Administration (FDA) has even approved more than 1,000 AI-based medical devices for the US market (as of December 2024).
The requirements set out in the joint questionnaire "Artificial Intelligence (AI) in Medical Devices" by IG-NB and Team-NB are currently decisive for the EU market. Providers of HR AI from other sectors should therefore also consistently implement the requirements of the questionnaire now in order to be prepared for the AIA (AIA Ready). Furthermore, the Medical Device Coordination Group (MDCG) has published Guideline 2025-6 with FAQs on the interplay between MDR/IVDR and AIA.
The highly regulated medical engineering industry already has extensive experience in risk management, quality management, and technical documentation for AI systems.
AI regulatory sandboxes as a "playground"
The establishment of AI regulatory sandboxes ("real-world laboratories") is intended to enable providers to develop, train, test, and validate AI systems for a limited period before they are placed on the market (Art. 57 AIA). This is intended to promote innovation and competitiveness and to facilitate market access for start-ups and small and medium-sized enterprises (SMEs). In addition, under certain conditions, providers can test their systems under real-world conditions outside AI regulatory sandboxes. These conditions include an authorization granted by the competent national authority on the basis of a test plan submitted in advance by the provider. Future implementing acts of the European Commission are expected to lay down the detailed arrangements for the establishment and operation of AI regulatory sandboxes in the individual member states (Art. 58 AIA).
Model Contractual Clauses for the public procurement of AI
Public organizations wishing to procure an AI system that is developed or will be developed by an external supplier may use model contractual clauses in order to comply with their obligations.
How providers and deployers are preparing for the AIA
Both providers and deployers of AI systems must already start building AI literacy among their staff, taking into account their technical knowledge, experience, education, and training.
The safety of AI systems is the joint responsibility of providers and deployers. Therefore, early and intensive cooperation is recommended.
Providers of HR AI must quickly consider the increased effort required for technical documentation and quality management in terms of both organizational and financial resources. Preferably, the provider should apply the compliance-by-design approach, i.e., the regulatory requirements are integrated into the development process at an early stage. Relevant guidelines and standards that appear in the future should be continuously analyzed and implemented.
Deployers of HR AI should focus primarily on establishing internal processes for application, post-market monitoring, and vigilance of HR AI.