The EU AI Act: Progress or Failure?
In Brussels, one debate currently dominates: the regulation of artificial intelligence. In June, the European Parliament adopted its final position on the AI Act, a crucial step toward putting Europe at the forefront of the safe and trustworthy use of AI technology. Johannes Kröhnert, head of the Brussels office of the TÜV association, underlines the potential of the measure, but also highlights areas where improvements are needed.
Risk classification of AI systems: a high level of uncertainty
The EU institutions are planning a risk-based approach to classifying AI systems. This is a sound approach, but in its current form it falls short. Kröhnert points out that, under the current draft, only AI systems integrated into physical products that are already subject to mandatory testing by independent bodies will be classified as high-risk. These are mainly industrial products such as elevators or pressure vessels.
A number of everyday products such as toys or smart home devices that contain AI systems are not subject to this testing requirement. It is therefore likely that a large number of AI-based consumer products will not be classified as high-risk and will thus be exempt from strict safety requirements.
In addition, Kröhnert sees the possibility of misjudgments when it comes to classifying stand-alone AI systems - i.e. AI systems that come onto the market as pure software for certain areas of application. The current plan is for providers to carry out the risk assessment themselves and ultimately decide for themselves whether their product should be classified as a high-risk product or not.
Strengthen trust through independent verification
Kröhnert sees a need to increase people's trust in AI technology by introducing stricter and more transparent testing. The self-declaration by providers, which the EU legislator currently requires, may not be sufficient for high-risk systems. Kröhnert recommends a comprehensive obligation to provide evidence that includes independent bodies.
A recent representative survey by the TÜV Association showed that 86 percent of Germans support mandatory testing of the quality and safety of systems that use artificial intelligence. Such an approach could help “AI Made in Europe” become a real quality standard and a global competitive advantage.
AI real-world laboratories: a step in the right direction
Kröhnert praises the establishment of AI real-world laboratories (“regulatory sandboxes”) as a way to promote the development and testing of AI systems, especially for SMEs. However, he cautions that the use of such a laboratory alone should not give rise to a presumption of compliance with AI regulatory requirements.
It is therefore important that providers still need to go through a full conformity assessment process before bringing their AI system to market. Kröhnert points out that the involvement of independent bodies could be essential for this.
Regulation of artificial intelligence must also include ChatGPT & Co
In light of recent developments and advances in AI technology, Kröhnert emphasizes the need to also regulate generative AI systems such as ChatGPT in the AI Act. These systems must meet basic safety requirements, and it should be assessed which foundation models qualify as highly critical.
Improvements to the AI Act are particularly important with a view to combating fraud and fake news.
Artificial intelligence (AI) has the potential to contribute to both the creation and combating of disinformation and fraud.
- Creating disinformation: AI can be used to create convincing disinformation or fake news. For example, deepfake technologies based on AI can create realistic but completely fake images or videos. These can then be used to spread misinformation or commit fraud. There are also AI-controlled bots that can spread false news or propaganda on social media.
- Spreading disinformation: AI can also play a role in spreading disinformation. Algorithmic news feeds on social media can leave people trapped in echo chambers where they only receive information that confirms their existing views - even if those views are based on disinformation.
To mitigate these risks, the AI Act could provide clearer and stricter rules for the classification and control of AI systems that could be used to create or spread disinformation. This could include independent reviews and strict sanctions for the misuse of AI for such purposes.
At the same time, AI can also be an important tool for combating disinformation and fraud. For example, AI can be used to detect and flag misinformation or identify suspicious activity that could indicate fraud. The AI Act could also contain provisions that encourage the use of AI for such positive purposes.
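The detection side described above can be illustrated with a toy sketch. Everything here is a simplifying assumption for illustration: the phrase list, the threshold, and the function name are invented, and real detection systems rely on trained machine-learning classifiers rather than keyword matching:

```python
# Toy illustration of automated content flagging (hypothetical heuristic,
# NOT a real detection system; production tools use trained ML models).

SUSPICIOUS_PHRASES = [
    "shocking truth",
    "they don't want you to know",
    "share before it's deleted",
    "100% proof",
]

def flag_for_review(text: str, threshold: int = 1) -> bool:
    """Return True if the text matches enough suspicious phrases to
    warrant a human fact-check - the final judgment stays with people."""
    lowered = text.lower()
    hits = sum(phrase in lowered for phrase in SUSPICIOUS_PHRASES)
    return hits >= threshold

print(flag_for_review("The SHOCKING truth they don't want you to know!"))  # True
print(flag_for_review("The EU Parliament adopted its position in June."))  # False
```

Even in this caricature, the design point carries over: the system only flags content for human review rather than deciding truth on its own, which is the kind of human oversight the AI Act envisages for high-risk applications.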
The task is therefore not only to improve the Act, but to recognize AI's dual potential and regulate intelligently: promoting the beneficial uses while minimizing the harmful ones.
Conclusion
The AI Act marks a crucial step for Europe to become a global leader in the trustworthy and secure use of artificial intelligence. However, there is still room for improvement. It is necessary to establish clear and unambiguous classification criteria to ensure the effectiveness of the mandatory requirements. In addition, there is a need for mandatory, independent audits of high-risk systems to increase people's trust in the technology and avoid potential conflicts of interest on the part of providers. This is the only way to make AI regulation ambitious and future-proof.

Source:
Press portal
Notes:
1) This content reflects the current state of affairs at the time of publication. The reproduction of individual images, screenshots, embeds or video sequences serves to discuss the topic. 2) Individual contributions were created with machine assistance and were carefully checked by the Mimikama editorial team before publication.

