Why transparency matters for high‑risk AI systems
In the context of AI systems, transparency is defined as the “level of accessibility to the algorithm and data used” (ISO/IEC TR 29119-11) and is a fundamental regulatory principle of the EU AI Act. Providers must design high-risk systems so that “their operation is sufficiently transparent to enable deployers to interpret a system’s output and use it appropriately” (Art. 13 (1)).
Providers and deployers of high-risk AI systems must ensure that systems intended to interact directly with natural persons clearly disclose their AI nature (Art. 50 (1)), no later than the time of the first interaction (Art. 50 (5)).
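As a minimal sketch of how this disclosure obligation might be operationalised in software, the hypothetical wrapper below prepends a disclosure notice to the first reply of a conversational system. The class name, callable interface, and disclosure text are illustrative assumptions, not wording or mechanisms prescribed by the AI Act.

```python
class DisclosingChatSession:
    """Hypothetical chat wrapper that discloses the system's AI nature
    no later than the first interaction (in the spirit of Art. 50)."""

    DISCLOSURE = "Notice: you are interacting with an AI system."

    def __init__(self, generate_reply):
        self._generate_reply = generate_reply  # any callable str -> str
        self._disclosed = False

    def respond(self, user_message: str) -> str:
        reply = self._generate_reply(user_message)
        if not self._disclosed:
            # First interaction: prepend the disclosure exactly once.
            self._disclosed = True
            return f"{self.DISCLOSURE}\n{reply}"
        return reply
```

The point of the sketch is architectural: disclosure is enforced by the session object itself, so no downstream component can forget it.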
Transparency promotes safety, accountability and the protection of fundamental rights, while also enabling Notified Bodies and competent authorities to conduct conformity assessments and market surveillance effectively.
Transparency documentation helps to ensure alignment with other regulatory obligations, including Art. 9 (risk management), Art. 14 (human oversight), and Art. 15 (accuracy, robustness, cybersecurity).
Instructions for use as transparency core element
High‑risk AI systems must be accompanied by concise, complete, correct, and clear instructions for use in a digital or otherwise accessible format (Art. 13 (2)). These instructions must be tailored to the competence and needs of those deploying the system and support its appropriate operation throughout its lifecycle.
Required content
The AI Act outlines a detailed list of information categories that must be included in the instructions for use (Art. 13 (3)):
- Provider identity and contact information (and, where applicable, those of the authorized representative)
- Characteristics, capabilities, and limitations
- Predetermined changes (if applicable)
- Human oversight measures (acc. to Art. 14)
- Operational requirements (incl. hardware) and maintenance
- System logging mechanisms (acc. to Art. 12)
These requirements collectively ensure that deployers have the information necessary to operate the system safely, mitigate risks, and maintain ongoing compliance.
How explainability supports transparency
According to ISO/IEC TR 29119-11, “explainability” is to be understood as the “level of understanding how the AI-based system came up with a given result”. It supports transparency by making the internal logic and decision‑making processes of a high‑risk AI system understandable to deployers.
Note that the General Data Protection Regulation (GDPR) also imposes explainability requirements for certain automated decision-making systems.
Best practices in EN 18229-1
Practical implementation of Art. 13 benefits significantly from the newly developed standard EN 18229-1, which specifies requirements on the following topics:
- Design and development for transparency
- Format, quality, and content of the instructions for use
Conclusion
Art. 13 establishes one of the most rigorous transparency frameworks for AI worldwide. By requiring clear explanations, detailed accompanying documentation and disclosure of limitations and risks, the AI Act ensures that high-risk AI systems are deployed responsibly and safely. Providers should therefore integrate transparency workflows into the design and development process from the outset, maintain documentation throughout the system's lifecycle and tailor instructions to the technical capacity of the deployers.