2026-02-25 expert contribution

Risk Management for High‑risk AI Systems

Article 9 of the EU AI Act requires continuous, life‑cycle risk management for high-risk AI systems. This includes the ongoing identification, assessment, and reduction of all foreseeable risks to health, safety, and fundamental rights, as well as the regular updating of measures based on new findings and usage experience.


Risk Management Process

The newly developed EN 18228 standard specifies the requirements of the AI Act and translates them into a clearly structured process within the quality management system (QMS) for the systematic identification, assessment, and control of risks to health, safety, and fundamental rights. The AI Act thus broadens the focus of traditional risk management systems such as the medical device standard ISO 14971: in addition to physical safety risks, possible violations of fundamental rights – such as discrimination or invasions of privacy – must now also be assessed, and criteria must be defined for when such risks are considered acceptable.

Users also play an important role: they must ensure that controls, monitoring mechanisms and the instructions for use (IFU) are followed during operation. This means that both providers and users are responsible for risk management throughout the entire life cycle of the AI system.

01. Risk analysis

The process begins with the identification of all known and reasonably foreseeable risks, including those arising from interaction with the operating environment. The provider takes particular account of risks that arise both when the system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse.

At the same time, potential impacts on affected fundamental rights are assessed. Together, these aspects form the core of the risk analysis according to Art. 9 and the newly developed EN 18228.

02. Risk assessment

The identified risks are assessed based on defined criteria, both for intended use and for foreseeable misuse. Unacceptable risks must be reduced before the product is placed on the market.
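Neither the AI Act nor EN 18228 prescribes a specific scoring scheme, but acceptance criteria are often operationalized as a severity-by-probability matrix with a defined threshold. The following minimal sketch illustrates that idea; the scales, the multiplicative score, and the threshold value are all hypothetical project choices, not regulatory requirements.

```python
from enum import IntEnum

class Severity(IntEnum):
    NEGLIGIBLE = 1
    MINOR = 2
    SERIOUS = 3
    CRITICAL = 4

class Probability(IntEnum):
    RARE = 1
    OCCASIONAL = 2
    FREQUENT = 3

# Hypothetical acceptance threshold: scores above this value mark
# risks that must be reduced before placing the system on the market.
ACCEPTANCE_THRESHOLD = 6

def risk_score(severity: Severity, probability: Probability) -> int:
    """Simple multiplicative risk score (severity x probability)."""
    return int(severity) * int(probability)

def is_acceptable(severity: Severity, probability: Probability) -> bool:
    """True if the scored risk meets the project's acceptance criterion."""
    return risk_score(severity, probability) <= ACCEPTANCE_THRESHOLD
```

In this sketch a serious harm occurring occasionally scores 3 × 2 = 6 and is still acceptable, whereas a critical, frequent harm scores 12 and must be reduced first.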

03. Testing activities

Tests serve to identify critical risks at an early stage and to verify that the AI system—especially in the case of learning components—is stable and operates within defined limits.
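One common way to verify that a component "operates within defined limits" is an output-bounds test over a representative input batch. The sketch below assumes a placeholder `model()` function and illustrative limits; it stands in for whatever component and operating envelope a real system defines.

```python
# Illustrative defined operating limits for the component's output.
LOWER, UPPER = 0.0, 1.0

def model(x: float) -> float:
    """Placeholder for the AI component under test (hypothetical)."""
    return max(0.0, min(1.0, x * 0.5))

def within_limits(inputs: list[float]) -> bool:
    """Check that every output of the test batch stays inside the limits."""
    return all(LOWER <= model(x) <= UPPER for x in inputs)
```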

04. Risk control

Article 9 does not describe any prioritization of risk reduction measures. However, standards such as EN 18228 specify this in a three‑stage hierarchy:

  • Inherently safe design
  • Protective measures
  • User information and training

This sequence ensures that risks are reduced primarily through technical and design measures and only as a last resort through user information and training.
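The fixed priority order can be made explicit in a planning tool. This is a hedged sketch only: the tier names follow EN 18228's hierarchy, but the numeric risk levels and per-tier reduction values are hypothetical inputs a project would have to supply.

```python
# Control tiers in the priority order mandated by the hierarchy.
CONTROL_TIERS = [
    "inherently safe design",
    "protective measures",
    "user information and training",
]

def reduce_risk(initial_risk: int, reduction_per_tier: dict[str, int],
                acceptable_level: int) -> tuple[int, list[str]]:
    """Apply tiers in fixed order, stopping once risk is acceptable."""
    risk = initial_risk
    applied: list[str] = []
    for tier in CONTROL_TIERS:
        if risk <= acceptable_level:
            break  # later tiers (e.g. training) used only as a last resort
        risk -= reduction_per_tier.get(tier, 0)
        applied.append(tier)
    return risk, applied
```

With an initial risk of 10, an acceptable level of 4, and reductions of 4/3/2 per tier, only the first two tiers are applied; user information is never reached because design and protective measures already suffice.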

05. Assessment of residual risk

After implementing all measures, the provider assesses the overall residual risk. If it is acceptable, residual risks are documented and communicated in the instructions for use (IFU). If it is not acceptable, the system may not be placed on the market.

06. Review of risk management

Risk management is reviewed regularly and prior to placing the system on the market; the results are documented.

07. Post‑market activities

The provider monitors the AI system after it has been placed on the market using a documented procedure for collecting and evaluating relevant information. In particular, knowledge gained from logging system events is evaluated here.

Towards acceptable residual risks

To implement the requirements of Art. 9 effectively, the legal framework, harmonized standards such as EN 18228, and other technical standards interact: together they define the areas in which risk control measures are mandatory.

Data governance

The quality of the data used is a key factor in determining the safety of an AI system. Training, validation, and test data must be relevant, representative, and largely error-free. Particular emphasis is placed on identifying and minimizing bias in order to avoid discrimination.
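Bias checks are typically quantified with group-fairness metrics. The sketch below computes one of the simplest, the demographic parity difference (the gap in positive-outcome rates between two groups); the metric choice and any acceptance threshold would be project-specific assumptions.

```python
def positive_rate(outcomes: list[int]) -> float:
    """Share of positive (1) outcomes in a group's results."""
    return sum(outcomes) / len(outcomes)

def parity_difference(group_a: list[int], group_b: list[int]) -> float:
    """Absolute gap in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Illustrative data: group A receives 75% positive outcomes, group B 50%.
group_a = [1, 1, 0, 1]
group_b = [1, 0, 0, 1]
```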

Logging

Adequate logging is necessary in order to analyze malfunctions and reconstruct decisions. It forms the basis for subsequent corrective measures and increases the transparency of system behavior.
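A minimal sketch of structured event logging that would support such reconstruction, assuming a hypothetical inference call; the field names are illustrative, not prescribed by the AI Act or EN 18228.

```python
import json
import logging
import sys
import time

logger = logging.getLogger("ai_system.events")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(sys.stdout))

def log_inference(input_ref: str, output: str, confidence: float) -> dict:
    """Record one decision as a structured JSON event and return it."""
    event = {
        "timestamp": time.time(),
        "input_ref": input_ref,    # a reference, not raw input data
        "output": output,
        "confidence": confidence,
        "model_version": "1.4.2",  # illustrative version tag
    }
    logger.info(json.dumps(event))
    return event
```

Logging a reference to the input rather than the raw data keeps the trail reconstructable while limiting the amount of personal data held in the logs.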

Transparency and instructions for use

The instructions for use must enable users to operate the system safely. This includes clear information on the intended purpose, limitations, residual risks, and requirements for input data and operating environments.

Human oversight

The system must be designed in such a way that effective human oversight remains possible at all times. This reduces the risk of “automation bias” and ensures that users can critically evaluate outputs and intervene if necessary—for example, through clear intervention options or a stop mechanism.
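The stop mechanism mentioned above can be reduced to a very small design pattern: automated processing is gated by an operator-controlled flag. The class and method names in this sketch are purely illustrative.

```python
from typing import Optional

class OversightController:
    """Hypothetical sketch of an operator-controlled stop mechanism."""

    def __init__(self) -> None:
        self.stopped = False

    def stop(self) -> None:
        """Operator intervention: halt all automated decisions."""
        self.stopped = True

    def process(self, item: str) -> Optional[str]:
        """Produce an automated output only while oversight allows it."""
        if self.stopped:
            return None  # no automated output after the stop is triggered
        return f"decision for {item}"
```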

Accuracy, robustness and cybersecurity

High-risk AI systems must achieve a defined level of accuracy and stability on an ongoing basis. This includes risk control measures against errors, manipulation, and attacks. The aim is to ensure safe operation even under difficult conditions.

Conclusion

Risk management in accordance with the AI Act and EN 18228 requires systematic and technically robust implementation of all regulatory requirements. Only if the hierarchy of measures is consistently applied and risk controls are fully implemented in key areas can legally compliant and trustworthy high-risk AI systems be placed on the market.

Get in touch with us!


We offer consulting projects and in-house workshops and would be happy to provide more information about our services and answer any questions.

AI Projects & Services: aips@vde.com