Legal QMS requirements
The QMS requirements are set out primarily in Art. 17, with cross-references to Art. 9 (risk management), Art. 72 (post-market monitoring), and Art. 73 (incident reporting). The aim is to establish a documented QMS that ensures compliance with the regulation by covering design, development, testing, risk management, data governance, and post-market monitoring. At its core, the QMS shall include the following:
- Regulatory compliance strategy: Documented plan for conformity assessment and for managing modifications to the AI system.
- Design and development controls: Procedures for design, design verification, and systematic quality assurance.
- Testing and validation: Procedures for testing and validation applied throughout all phases of development.
- Data governance: Systems and procedures for data management, covering acquisition, collection, analysis, labeling, storage, and retention of data.
- Risk management system: The risk management framework shall ensure that risks to health, safety, and fundamental rights are identified and mitigated.
- Post-market monitoring: A system for post-market monitoring must be established and maintained, ensuring continuous oversight of deployed AI systems.
- Incident reporting: Procedures must be in place for reporting serious incidents involving AI systems to the competent authorities.
- Communication procedures: Procedures for communication with stakeholders such as national competent authorities, Notified Bodies, and customers shall be established.
- Record-keeping: Traceability of the AI system shall be ensured through integrated logging capabilities, and relevant documentation shall be retained (see the logging sketch after this list).
- Resource management: Adequate personnel and technical resources shall be provided.
- Accountability framework: Responsibilities of management and staff must be clearly defined across all QMS aspects.
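To make the record-keeping element more concrete, the following Python sketch shows one way an integrated, structured audit log might look in practice. It is purely illustrative and not prescribed by the AI Act or prEN 18286:2025; the function `log_inference`, the field names, and the file `audit_trail.jsonl` are hypothetical.

```python
# Illustrative sketch only: a minimal structured event log supporting the
# record-keeping element above. All identifiers are hypothetical, not taken
# from the AI Act or any standard.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("hr_ai_audit")
logger.setLevel(logging.INFO)
handler = logging.FileHandler("audit_trail.jsonl")  # retained per the QMS retention policy
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)

def log_inference(system_id: str, model_version: str, input_ref: str, output_ref: str) -> None:
    """Append one structured record per inference event for later review."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,          # identifies the deployed high-risk AI system
        "model_version": model_version,  # links the event to a controlled release
        "input_ref": input_ref,          # reference to stored input data, not raw data
        "output_ref": output_ref,        # reference to the produced output
    }
    logger.info(json.dumps(record))

log_inference("credit-scoring-v2", "2.3.1", "s3://inputs/abc", "s3://outputs/abc")
```

Each record links an inference event to a controlled model version, which is the kind of traceability evidence an auditor could sample during conformity assessment.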
Compliance framework EN 18286
Providers are encouraged to apply harmonized standards where possible. The draft standard prEN 18286:2025, which is intended for harmonization, translates the Art. 17 quality management obligations into a structured, auditable framework. It emphasizes that QMS implementation shall start with a clear quality policy and compliance strategy, assigning responsibility to top management and defining roles and accountability throughout the organization. Providers are required to embed risk management processes that align with Art. 9 of the AI Act, ensuring that risks to health, safety, and fundamental rights are systematically identified, assessed, and mitigated.

The QMS must cover design and development controls, including verification, validation, and change management procedures, particularly for continuously learning systems, where predetermined changes must be documented and controlled. It requires the establishment of data governance processes for acquisition, labeling, storage, analysis, and retention, ensuring that datasets are reliable, representative, and managed responsibly.

Beyond development, prEN 18286:2025 mandates post-market monitoring systems to track performance and detect emerging risks once AI systems are deployed, alongside strict incident reporting procedures for serious events within defined timelines. The QMS must also include supplier and resource management, ensuring that third-party components and services are controlled and integrated into compliance processes. Technical documentation and instructions for use must be maintained to support the safe use of high-risk AI systems and regulatory oversight. Finally, the standard requires mechanisms for handling non-conformities and driving continuous improvement, reinforcing the QMS as a living system that evolves with technological and regulatory developments. A QMS built to this standard can be integrated with existing frameworks (e.g., ISO 13485 for medical devices).
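As a similarly hedged illustration of the post-market monitoring element, the sketch below compares an observed error rate against a documented acceptance threshold and escalates a breach into the non-conformity and incident-assessment path described above. The 5 % threshold and all names (`MonitoringWindow`, `check_performance`) are assumptions for illustration, not values from the standard.

```python
# Illustrative sketch only, not prescribed by prEN 18286:2025: a minimal
# post-market monitoring check against a documented performance threshold.
from dataclasses import dataclass

@dataclass
class MonitoringWindow:
    predictions: int  # number of predictions observed in the monitoring window
    errors: int       # number of confirmed erroneous outputs in that window

def check_performance(window: MonitoringWindow, max_error_rate: float = 0.05) -> bool:
    """Return True if the observed error rate stays within the accepted limit."""
    observed = window.errors / window.predictions
    return observed <= max_error_rate

window = MonitoringWindow(predictions=10_000, errors=620)
if not check_performance(window):
    # Escalate per the QMS: record a non-conformity and assess whether the
    # deviation qualifies as a serious incident triggering Art. 73 reporting.
    print("Threshold breached: open non-conformity and assess reporting duty")
```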
Take-aways
The QMS provides the basis for compliance of high-risk AI systems with the EU AI Act. Compared with sectoral legislation, the EU AI Act also contains AI-specific requirements that demand additional resources from the provider. The draft standard prEN 18286:2025 provides a blueprint for operationalizing compliance, integrating regulatory requirements into auditable processes that authorities will rely on during conformity assessment of AI systems under the EU AI Act.