Towards acceptable residual risks
To implement the requirements of Art. 9 effectively, the legal framework, harmonized standards such as EN 18228, and other technical standards work together: they define the areas in which risk control measures must be established on a mandatory basis.
Data governance
The quality of the data used is a key factor in determining the safety of an AI system. Training, validation, and test data must be relevant, representative, and as free of errors as possible. Particular emphasis is placed on identifying and minimizing bias in the data in order to avoid discriminatory outcomes.
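Neither the AI Act nor EN 18228 prescribes specific tooling for such checks, but a minimal sketch can illustrate what a representativeness and bias screening of training data might look like in practice. The column names, the example data, and the threshold are purely illustrative assumptions.

```python
import pandas as pd

# Hypothetical training data with a protected attribute and a target label;
# column names and values are illustrative, not prescribed by the AI Act or EN 18228.
df = pd.DataFrame({
    "gender": ["f", "m", "m", "f", "m", "m", "m", "f"],
    "label":  [1,   0,   1,   0,   1,   1,   0,   1],
})

# Representativeness check: share of each group in the training data.
group_share = df["gender"].value_counts(normalize=True)

# Simple bias indicator: positive-label rate per group and the gap between groups.
positive_rate = df.groupby("gender")["label"].mean()
rate_gap = positive_rate.max() - positive_rate.min()

print(group_share)
print(positive_rate)
if rate_gap > 0.2:  # illustrative threshold; would need justification in the risk analysis
    print(f"Potential label bias: positive-rate gap of {rate_gap:.2f} between groups")
```

In a real project, such checks would be run systematically on every data version and documented as evidence of the data governance measures taken.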
Logging
Adequate logging is necessary in order to analyze malfunctions and reconstruct decisions. It forms the basis for subsequent corrective measures and increases the transparency of system behavior.
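One way to support this reconstruction is to write one structured, timestamped record per automated decision, including the model version and a summary of the inputs and outputs. The following sketch shows such a record format; the function name, field names, and example values are assumptions for illustration only.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_decision_log")
logging.basicConfig(level=logging.INFO)

def log_decision(request_id: str, model_version: str, input_summary: dict, output: dict) -> None:
    """Write one structured, timestamped record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "request_id": request_id,
        "model_version": model_version,
        "input_summary": input_summary,   # e.g. feature hash or redacted features
        "output": output,                 # e.g. score, class, confidence
    }
    logger.info(json.dumps(record))

# Example call with hypothetical values
log_decision(
    request_id="req-0001",
    model_version="credit-scoring-2.3.1",
    input_summary={"feature_hash": "a1b2c3"},
    output={"score": 0.82, "decision": "refer_to_human"},
)
```

Structured records of this kind can later be filtered and correlated, which is what makes the subsequent analysis of malfunctions feasible at all.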
Transparency and instructions for use
The instructions for use must enable users to operate the system safely. This includes clear information on the intended purpose, limitations, residual risks, and requirements for input data and operating environments.
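The required information can also be kept in a machine-readable form alongside the human-readable documentation. The sketch below is one possible structure, assuming hypothetical field names and content; it is not a schema defined by the AI Act or EN 18228.

```python
from dataclasses import dataclass, field

@dataclass
class InstructionsForUse:
    """Illustrative, machine-readable summary of the information accompanying the system."""
    intended_purpose: str
    known_limitations: list[str] = field(default_factory=list)
    residual_risks: list[str] = field(default_factory=list)
    input_data_requirements: list[str] = field(default_factory=list)
    operating_environment: str = ""

# Hypothetical example for a credit pre-screening system
ifu = InstructionsForUse(
    intended_purpose="Pre-screening of loan applications for human review",
    known_limitations=["Not validated for applicants under 18"],
    residual_risks=["Reduced accuracy for rare income profiles"],
    input_data_requirements=["Income data no older than 12 months"],
    operating_environment="On-premise deployment with a human reviewer in the loop",
)
print(ifu.intended_purpose)
```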
Human oversight
The system must be designed in such a way that effective human oversight remains possible at all times. This reduces the risk of “automation bias” and ensures that users can critically evaluate outputs and intervene if necessary—for example, through clear intervention options or a stop mechanism.
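How such intervention options might look in code depends entirely on the system, but a minimal sketch can make the idea concrete: a gate that routes low-confidence outputs to a human and offers a global stop switch. Class and method names, the confidence threshold, and the example values are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float

class OversightGate:
    """Routes model outputs through human oversight controls: a global stop
    switch and a confidence threshold below which a human must decide."""

    def __init__(self, confidence_threshold: float = 0.9):
        self.confidence_threshold = confidence_threshold
        self.stopped = False

    def stop(self) -> None:
        """Operator-triggered stop mechanism: no further automated decisions."""
        self.stopped = True

    def decide(self, prediction: Prediction) -> str:
        if self.stopped:
            return "system_stopped"
        if prediction.confidence < self.confidence_threshold:
            return "refer_to_human"   # human reviews and takes the decision
        return prediction.label       # automated decision, still logged and auditable

gate = OversightGate()
print(gate.decide(Prediction(label="approve", confidence=0.95)))  # approve
print(gate.decide(Prediction(label="approve", confidence=0.70)))  # refer_to_human
gate.stop()
print(gate.decide(Prediction(label="approve", confidence=0.99)))  # system_stopped
```

The design point is that the automated path is never the only path: the human reviewer and the stop mechanism remain reachable regardless of how confident the model is.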
Accuracy, robustness, and cybersecurity
High-risk AI systems must achieve and maintain an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle. This includes risk control measures against errors, manipulation, and attacks. The aim is to ensure safe operation even under adverse conditions.
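Two basic measures in this direction are validating inputs against the operating range the system was tested for, and probing how sensitive the output is to small input perturbations. The sketch below illustrates both under strong simplifying assumptions: the model, the feature ranges, and the noise level are hypothetical stand-ins.

```python
import random

def model(features: dict) -> float:
    """Stand-in for the real model; returns a score in [0, 1]."""
    return min(1.0, max(0.0, 0.5 + 0.01 * features["income_k"] - 0.02 * features["debt_ratio"]))

def validate_input(features: dict) -> None:
    """Reject inputs outside the validated operating range (a simple hardening measure)."""
    if not 0 <= features["income_k"] <= 500:
        raise ValueError("income_k outside validated range")
    if not 0 <= features["debt_ratio"] <= 10:
        raise ValueError("debt_ratio outside validated range")

def perturbation_test(features: dict, noise: float = 0.05, trials: int = 100) -> float:
    """Rudimentary robustness check: how far the score moves under small input noise."""
    base = model(features)
    max_shift = 0.0
    for _ in range(trials):
        noisy = {k: v * (1 + random.uniform(-noise, noise)) for k, v in features.items()}
        max_shift = max(max_shift, abs(model(noisy) - base))
    return max_shift

sample = {"income_k": 60, "debt_ratio": 1.5}
validate_input(sample)
print(f"Max score shift under ±5% input noise: {perturbation_test(sample):.3f}")
```

Checks like these do not replace a proper security and robustness assessment, but they show how the abstract requirement can be translated into concrete, testable controls.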
Conclusion
Risk management in accordance with the AI Act and EN 18228 requires systematic and technically robust implementation of all regulatory requirements. Only when the hierarchy of measures is applied consistently and risk controls are implemented in full across these key areas can high-risk AI systems be placed on the market in a legally compliant and trustworthy manner.