Expert contribution, 2026-02-18

Human oversight for high‑risk AI systems

Human oversight is a mandatory risk control for all high‑risk AI systems under Art. 14 EU AI Act. The article lays down concrete design and information duties for providers and operational duties for deployers. The forthcoming European standard EN 18229‑1 will provide structured methods for translating these legal duties into engineering and governance measures.


Scope of human oversight

High‑risk AI systems must allow effective human oversight throughout their lifecycle.

According to Art. 14 (3), oversight measures must either be built into the system by the provider or implemented by the deployer (as foreseen by the provider). The required level of human oversight depends on the inherent risks, the degree of autonomy, and the specific use context of the high-risk AI system.

Human oversight must ensure the following:

  • Understanding the relevant capacities and limitations.
  • Ability to monitor in operation.
  • Prevention of automation bias.
  • Correct interpretation of outputs.
  • Ability to decide not to use, disregard, override, or reverse the output.
  • Intervention in operation (e.g., pressing the “stop button”).
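The last three capabilities on the list are the ones most directly reflected in software design. Purely as an illustration (the Act prescribes outcomes, not implementations), a thin oversight wrapper could expose the disregard/override/reverse powers and a "stop button"; all names here are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Decision:
    output: str            # raw system output
    accepted: bool         # whether the human overseer accepted it
    final: Optional[str]   # value actually acted upon (may differ after override)


class OversightWrapper:
    """Hypothetical wrapper giving a human overseer the Art. 14 (4) powers:
    disregard, override, or reverse the output, and stop the system."""

    def __init__(self, model: Callable[[str], str]):
        self.model = model
        self.stopped = False  # "stop button" state

    def stop(self) -> None:
        self.stopped = True

    def run(self, x: str, review: Callable[[str], Optional[str]]) -> Decision:
        if self.stopped:
            raise RuntimeError("system stopped by human overseer")
        raw = self.model(x)
        # review returns None to accept, or a replacement value to override
        overridden = review(raw)
        if overridden is None:
            return Decision(raw, True, raw)
        return Decision(raw, False, overridden)
```

The point of the sketch is the separation of concerns: the model produces an output, but a human decision function sits between the output and any downstream action, and a stop flag halts operation entirely.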

Special case: remote biometric identification

The AI Act adds enhanced oversight for the high-risk biometric identification systems listed in No. 1 (a) of Annex III: no action or decision based on an identification may be taken unless it has been separately verified by at least two competent individuals (Art. 14 (5)). This requirement does not apply in the areas of law enforcement, migration, border control, and asylum where Union or national law considers its application disproportionate.
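The Act itself does not prescribe a mechanism for the separate verification. As a hypothetical sketch only, a gate that releases an action on a match only after two distinct competent persons have confirmed it could look like this:

```python
class TwoPersonRule:
    """Hypothetical gate: an identification match may only trigger an
    action after at least two different competent persons confirm it
    (cf. Art. 14 (5) EU AI Act, remote biometric identification)."""

    REQUIRED = 2

    def __init__(self) -> None:
        # match id -> set of verifiers who confirmed it
        self.confirmations: dict = {}

    def confirm(self, match_id: str, verifier: str) -> None:
        self.confirmations.setdefault(match_id, set()).add(verifier)

    def may_act(self, match_id: str) -> bool:
        # separate verification: count distinct persons, not confirmations
        return len(self.confirmations.get(match_id, set())) >= self.REQUIRED
```

Counting distinct verifiers rather than raw confirmations is the essential design choice: the same person confirming twice must not satisfy the rule.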

Provider‑implemented oversight measures

Art. 14 requires providers to design and develop high‑risk AI systems so that natural persons can effectively oversee them in use, including through appropriate human‑machine interface (HMI) tools.

Providers must build in constraints the system cannot override and ensure overseers have proper skills and authority. The system should guide humans on when and how to intervene to prevent risks.

Human oversight is one of the external risk controls that must be implemented and verified within the provider’s risk management system (Art. 9) and quality management system (Art. 17). By laying down the respective information in the instructions for use (Art. 13), the provider enables deployers to perform their human oversight tasks.

The assessment of the human oversight measures including the related technical measures shall be part of the technical documentation of the AI system (No. 2 (e) of Annex IV).

Deployer‑implemented oversight measures

Deployers must designate individuals with the competence, training, and authority to oversee operation, and take appropriate technical and organizational measures to ensure that the high-risk AI system is used in accordance with the instructions for use (Art. 26 (2), Art. 14 (3)).

Oversight from the deployer’s point of view includes the following tasks:

  • Operation of the system according to the instructions for use (Art. 26 (1))
  • Implementation of oversight measures (Art. 26 (3))
  • Ensuring that input data is relevant and sufficiently representative in view of the intended purpose (Art. 26 (4))
  • Suspending the use if risks or malfunction arise (Art. 26 (5))
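These duties are largely organizational, but the last two can be supported in software. A minimal sketch with hypothetical names, using a deliberately simple completeness check as a crude proxy for input relevance:

```python
from typing import Callable, Optional


class DeployerControls:
    """Hypothetical deployer-side controls: a schema check on input data
    (cf. Art. 26 (4)) and a suspension switch for malfunction (cf. Art. 26 (5))."""

    def __init__(self, required_fields: set):
        self.required_fields = required_fields
        self.suspended = False

    def check_input(self, record: dict) -> bool:
        # crude proxy for "relevant and sufficiently representative":
        # all fields the intended purpose relies on must be present
        return self.required_fields.issubset(record)

    def suspend(self, reason: str) -> None:
        # in practice: also inform the provider and, where required, authorities
        self.suspended = True

    def process(self, record: dict, model: Callable[[dict], str]) -> Optional[str]:
        if self.suspended or not self.check_input(record):
            return None  # do not feed unsuitable input to the system
        return model(record)
```

A real representativeness check would compare input distributions against the training population described in the instructions for use; the field check above only illustrates where such a gate sits in the data flow.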

How a new standard helps you to comply with human oversight requirements

The standard EN 18229‑1 “AI Trustworthiness Framework - Part 1: Logging, transparency and human oversight” is currently under development and is planned to be harmonized under the EU AI Act. It will contain guidance on the following topics:

  • Interaction of human oversight with risk management
  • Technical implementation into the AI system
  • Deployer-implemented oversight measures
  • Specific requirements for remote biometric identification (RBI) systems
  • Information to be provided in instructions for use (e.g., technical and organizational measures as well as competency requirements for designated individuals of the deployer)

Summary

Human oversight should be viewed as the link between technical performance and the protection of fundamental rights. Providers and deployers of high-risk AI systems must cooperate closely when implementing human oversight measures.

Providers must consider the technical implementation from the outset of the development process. Deployers must ensure that designated individuals are competent and aware of their high level of responsibility for the safe application of the AI system.

Biometric identification systems require stricter oversight: no action may be taken based on an identification until at least two competent persons have verified it separately; exceptions apply where Union or national law considers this requirement disproportionate.


Get in touch with us!


We offer consulting projects and in-house workshops and would be happy to provide you with more information and answer any questions.

AI Projects & Services: aips@vde.com