2026-02-25 expert contribution

Obligations for deployers of high-risk AI systems

The EU Artificial Intelligence Act (AI Act) clearly defines the responsibilities of those deploying high-risk AI systems, ensuring their use is safe, compliant and transparent. It sets out detailed requirements, ranging from the proper use of systems and human oversight to transparency, monitoring and data protection. The following overview outlines the main obligations that deployers must fulfil to comply with the Act and protect fundamental rights.


Key obligations

The AI Act defines a deployer as “a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity” (Art. 3 (4)).

The responsibilities of deployers of high-risk AI systems are summarized in Art. 26:

  • Technical and organizational measures must be implemented to ensure that the AI system is used exclusively in accordance with its instructions for use (Art. 26 (1)).
  • Human oversight (implemented technically by the provider) must be assigned to natural persons who have the necessary competence (in line with Art. 4), training, and authority (Art. 26 (2)).
  • Input data under the control of the deployer must be relevant and sufficiently representative with regard to the intended purpose of the AI system (Art. 26 (4)).
  • AI system operation must be monitored on the basis of the instructions for use and, where relevant, the provider informed in accordance with Art. 72. In the event of risks to the health, safety, or fundamental rights of persons, the provider or distributor and the relevant market surveillance authority must be informed immediately and use of the system suspended (Art. 26 (5)).
    Any serious incident must be reported first to the provider, and then to the importer or distributor and the relevant market surveillance authority. If the provider cannot be reached, Art. 73 applies mutatis mutandis and the report must be sent directly to the market surveillance authority (Art. 26 (5)). For deployers that are law enforcement authorities, this obligation does not extend to sensitive operational data.
  • Insofar as the logs automatically generated by the AI system are under the deployer’s control, they must be stored for a period of at least six months, in compliance with legal provisions on the protection of personal data (Art. 26 (6)); a minimal retention-check sketch follows this list.
  • Transparency in the use of high-risk AI systems must be ensured by informing workers’ representatives and affected workers that they will be subject to the use of such systems (Art. 26 (7)).
  • A data protection impact assessment under Art. 35 of Regulation (EU) 2016/679 (General Data Protection Regulation) or Art. 27 of Directive (EU) 2016/680 (data protection in criminal matters) must be carried out, where applicable, on the basis of the transparency information in the instructions for use (Art. 26 (9)).
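
To make the retention obligation more concrete, the following is a minimal Python sketch that flags log files old enough to be purged without undercutting the six-month floor of Art. 26 (6). The directory path, the file-per-log layout, and the use of file modification times are illustrative assumptions, not requirements of the Act.

    # Minimal sketch: identify AI system logs that have passed the
    # six-month retention floor of Art. 26 (6) before any clean-up.
    # Paths and storage layout are illustrative assumptions.
    from datetime import datetime, timedelta, timezone
    from pathlib import Path

    RETENTION_FLOOR = timedelta(days=183)  # "at least six months"

    def deletable_logs(log_dir: str) -> list[Path]:
        """Return log files old enough that deleting them would not
        undercut the minimum retention period."""
        now = datetime.now(timezone.utc)
        deletable = []
        for log_file in Path(log_dir).glob("*.log"):
            modified = datetime.fromtimestamp(
                log_file.stat().st_mtime, tz=timezone.utc)
            if now - modified >= RETENTION_FLOOR:
                deletable.append(log_file)
        return deletable

    # Only logs past the retention floor may be purged; Union or
    # national law may require keeping them longer.
    for path in deletable_logs("/var/log/ai-system"):
        print(f"eligible for deletion: {path}")

Note that data protection law may also limit how long personal data in such logs may be kept, so a real retention policy has to reconcile the floor in the AI Act with any ceilings elsewhere.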

Special obligations for individual sectors and AI system use cases

Financial institutions acting as deployers are deemed to fulfil the monitoring and logging requirements of Arts. 26 (5) and 26 (6), respectively, provided they comply with the rules on internal governance arrangements, processes, and mechanisms under the relevant financial services law.

Public authorities and Union institutions, bodies, offices, or agencies must register as deployers of high-risk AI systems listed in Annex III (apart from those listed in point 2) in the EU-wide database, in accordance with Art. 49 (3). If a high-risk AI system has not been registered in the EU database referred to in Art. 71, such a deployer must not use it and must inform the provider or distributor accordingly (Art. 26 (8)).

The use of an AI system for post-remote biometric identification in the context of a targeted search for a person suspected or convicted of having committed a criminal offense requires the authorization of a judicial or administrative authority (Art. 26 (10)). In addition, this paragraph requires that the use of such systems be documented.

Deployers of high-risk AI systems listed in Annex III, which are used to make or assist in decisions concerning natural persons, must inform them about the use of these systems (Art. 26 (11)).

Public bodies and private institutions that provide public services are required to conduct a fundamental rights impact assessment as deployers of high-risk AI systems. The same applies to banks and insurance companies if they use such systems for creditworthiness checks or for risk assessment and pricing in relation to health and life insurance (Art. 27 (1)).

Conclusion

Overall, the AI Act imposes significant responsibility on deployers to ensure the safe, lawful, and transparent use of high-risk AI systems. Beyond adhering to the system’s intended use and ensuring qualified human oversight, deployers must monitor operations, manage data carefully, and take action when risks arise. The European Commission is expected to publish guidelines in 2026 clarifying the practical application of certain obligations of deployers.

Sector-specific duties and notification requirements further reinforce accountability, while obligations such as transparency towards affected individuals and the requirement to conduct data protection and fundamental rights impact assessments highlight the Act’s strong focus on protecting people. These measures aim to ensure that high-risk AI systems are deployed responsibly and with respect for fundamental rights.

Get in touch with us!


We offer consulting projects and in-house workshops, and we would be happy to provide you with more information about our services and answer any questions.

AI Projects & Services: aips@vde.com