(Frankfurt a. M., April 10, 2024) The AI Quality & Testing Hub (AIQ), which enables companies to measure and improve the quality of artificial intelligence (AI), is a member of the newly founded US AI Safety Institute Consortium. In coordination with other US authorities and together with international partners, the consortium aims to reduce risks in AI development and improve the quality of AI systems. It is based at the National Institute of Standards and Technology (NIST), which reports to the US Department of Commerce, and brings together more than 200 leading AI players committed to advancing the use of safe, trustworthy AI. Most of the member companies are currently based in the USA or the UK. Like comparable NIST activities, the consortium is designed as a public-private partnership in which experts, partners, companies and research institutions contribute their input to the development of risk-management guidelines.
Using AI as a key technology in a safe and innovation-friendly way
Dr. Beate Mand, Deputy Chairwoman of the VDE Executive Board, emphasizes: "The VDE has been driving innovation and sustainable technologies for more than 100 years. To ensure these succeed, we define the highest safety standards and test the safety of products and systems. Within AIQ, we contribute our expertise and experience in the field of artificial intelligence so that this key technology can be used safely and, at the same time, in an innovation-friendly manner. AIQ's collaboration with NIST and the AI Safety Institute Consortium is an important contribution to achieving this goal."
Hesse's Digital Minister Prof. Dr. Kristina Sinemus: "Germany must not miss the boat when it comes to regulating artificial intelligence. Through the participation of our AIQ, we are actively involved in a global network and benefit from the exchange with experts, which can be of great value for our own product development and market positioning. The AIQ can also position itself as a potential AI safety center for Germany." The AI Quality & Testing Hub is jointly owned by the Hessian state government and the VDE.
Regulation of artificial intelligence - international alliances
Dr. Michael Rammensee, Managing Director of the AI Quality & Testing Hub: "We are delighted to be able to contribute our expertise to the US AI Safety Institute Consortium. Our expertise in the development and application of guidelines for auditing AI systems is in demand, and we are contributing to the development of testing tools and environments. We also support the development of safety tests for AI systems, known as AI red teaming." Future research results from the consortium will be shared between NIST and the contract partners, with the aim of subsequent publication.
The founding of the AI Safety Institute was announced by US Vice President Kamala Harris at the AI Safety Summit in the UK in early November 2023. It is intended to cooperate with the British AI Safety Institute, whose founding was also announced at the summit. Shortly afterwards, the establishment of the AI Safety Institute Consortium was announced. On October 30, 2023, US President Joe Biden had issued the "Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence", which sets out regulations for AI-based technologies. Its aim is to enable the greatest possible benefit from AI while minimizing the risks. In addition to the USA, Japan, France and the UK have already taken steps to set up national AI safety centers.