(Frankfurt am Main, November 26, 2025) The partners of the MISSION AI project – National Initiative for Artificial Intelligence and Data Economy – presented a voluntary quality standard for low-risk AI systems at the AI Quality & Governance Day in Berlin. A digital testing portal is also being provided to make practical application easier for companies.
As artificial intelligence (AI) systems play an ever greater role in everyday life, the challenges facing operators and providers of AI systems are growing. Given increasing regulation and criticism of AI products, it is clear that only those who take legal requirements and implicit customer expectations regarding transparency and trustworthiness into account from the outset will achieve a rapid market launch, improved product quality, and higher user acceptance.
Expertise in standardization, testing, consulting, and research
As part of the MISSION AI project, VDE has collaborated with leading organizations from the AI technology sector (PwC Germany, TÜV AI.Lab, AI Quality & Testing Hub, and Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS) to develop a voluntary quality standard for low-risk AI applications. The standard combines expertise from standardization, testing, consulting, and research. With its extensive experience in product testing and standardization, VDE has made a significant contribution to ensuring that the MISSION AI quality standard puts the abstract requirements for trustworthy AI systems into practice.
Nora Dörr, MISSION AI project manager at VDE, explains: "Because of their limited human and financial resources, start-ups and SMEs in particular do not have unlimited opportunities to bring their AI systems to market. The voluntary quality standard is particularly helpful to them in both development and marketing. Practical assistance and clear guidelines show where their own AI systems can be improved in a targeted way. The principles of trustworthy AI addressed in the standard make the quality of their AI systems verifiable and demonstrable. With the quality standard, trustworthy AI becomes verifiable."
Providers receive structured proof of quality
For AI systems with limited risk, the EU AI Act currently imposes only transparency requirements. The MISSION AI quality standard summarizes key principles of trustworthy AI, including transparency, non-discrimination, and reliability, in six quality dimensions. These have been translated into clear criteria and verifiable measures and form the basis of a testing procedure that companies can carry out on their own. The procedure guides companies step by step through the evaluation of their AI system: a protection needs analysis determines which requirements are the focus in the given application context, and the final test report shows the degree of compliance with the quality requirements and highlights areas where the AI system can be specifically improved.
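To illustrate the flow described above, from protection needs analysis to a graded test result, the following Python sketch shows one possible way such an assessment could be structured. All dimension names, weights, criteria, and thresholds are hypothetical placeholders chosen for illustration; they are not taken from the MISSION AI quality standard or the testing portal.

```python
# Illustrative sketch only: dimensions, weights, criteria, and the 80 % threshold
# are hypothetical and do not reflect the actual MISSION AI quality standard.
from dataclasses import dataclass, field


@dataclass
class Criterion:
    name: str
    fulfilled: bool


@dataclass
class DimensionResult:
    dimension: str
    weight: int  # assumed output of a protection needs analysis (1 = low focus, 3 = high focus)
    criteria: list = field(default_factory=list)

    @property
    def score(self) -> float:
        # Share of fulfilled criteria within this quality dimension.
        if not self.criteria:
            return 0.0
        return sum(c.fulfilled for c in self.criteria) / len(self.criteria)


def compliance_report(results: list) -> None:
    """Print a weighted degree of compliance and flag dimensions needing improvement."""
    total_weight = sum(r.weight for r in results)
    overall = sum(r.score * r.weight for r in results) / total_weight
    print(f"Overall degree of compliance: {overall:.0%}")
    for r in results:
        flag = "  -> improvement recommended" if r.score < 0.8 else ""
        print(f"  {r.dimension:<20} weight {r.weight}  score {r.score:.0%}{flag}")


if __name__ == "__main__":
    # Example run with made-up criteria and answers.
    results = [
        DimensionResult("transparency", weight=3, criteria=[
            Criterion("user_is_informed_of_ai_use", True),
            Criterion("model_limitations_documented", False),
        ]),
        DimensionResult("non_discrimination", weight=2, criteria=[
            Criterion("bias_testing_performed", True),
        ]),
        DimensionResult("reliability", weight=1, criteria=[
            Criterion("fallback_behaviour_defined", True),
            Criterion("monitoring_in_place", True),
        ]),
    ]
    compliance_report(results)
```

The sketch only mirrors the described sequence of weighting requirements by application context, checking criteria, and reporting a degree of compliance; the standard itself defines its own dimensions, criteria, and evaluation method.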
The standard offers companies several advantages. AI providers receive structured proof of quality that can serve as a basis for dialogue with investors, customers, and regulatory stakeholders. Start-ups and SMEs in particular benefit from the opportunity to systematically document quality measures and demonstrate them in procurement and tendering procedures. For public contracting authorities and procurement agencies, the standard creates uniform criteria for comparing AI solutions in a structured manner. At the same time, it supports companies in addressing the requirements of the EU AI Act early and systematically.
The quality standard and the testing portal can be found here.