Survey by the TÜV Association: concerns among the population about incalculable risks, a flood of fake news and job losses. 84 percent demand legal requirements for AI applications. Europe is a pioneer in AI regulation: leading committees of the EU Parliament are discussing their position on the “AI Act”.

Applications with generative artificial intelligence (AI) such as ChatGPT are spreading rapidly, but they are also causing considerable concern among the population: since its introduction in November 2022, 83 percent of German citizens have heard of ChatGPT, and almost one in four (23 percent) have already used it for professional or private purposes.

This was the result of a representative Forsa survey of 1,021 people aged 16 and over, commissioned by the TÜV Association. Young people are leading the way: 43 percent of 16- to 35-year-olds have already used ChatGPT, compared with 20 percent of 36- to 55-year-olds and only 7 percent of 56- to 75-year-olds. Men use the AI application somewhat more often (27 percent) than women (19 percent).

“Since its introduction, ChatGPT has impressively demonstrated the enormous potential that artificial intelligence has,” said Dr. Joachim Bühler, Managing Director of the TÜV Association, presenting the study results. “ChatGPT also shows the risks associated with the use of artificial intelligence.”

Concerns are widespread among the population: 80 percent of those surveyed agree that the use of AI currently involves incalculable risks. Almost two out of three fear that the technology could slip out of human control (65 percent) or that people could be manipulated by it without their knowledge (61 percent). And 76 percent are concerned that AI does not adequately protect personal data.

“It must be ensured that AI applications do not physically harm or disadvantage people,” said Bühler. “The planned AI regulation offers the opportunity to create a legal framework for the ethical and safe use of artificial intelligence in the EU.”

In the study, 84 percent of German citizens support legal requirements for AI applications.

Generative AI systems such as ChatGPT, Midjourney, DALL-E or DeepL are used to create text, images, videos and other content. AI is also increasingly being used in critical areas such as automated vehicles, medical diagnostics and robotics. And AI applications serve as automated decision-making systems, for example in personnel selection or credit scoring. Opinions are divided on the opportunities and risks of the technology: every second person (50 percent) believes that, on balance, the opportunities outweigh the risks, 39 percent disagree, and 12 percent are undecided.

“Artificial intelligence will have far-reaching consequences for the labor market,” said Bühler.

87 percent of those surveyed assume that artificial intelligence will fundamentally change the world of work, and almost half (48 percent) believe that the use of AI will cost many people their jobs. However, only 15 percent currently worry that AI systems will replace their own professional activities. Many even expect benefits: every second person (50 percent) agrees that AI has the potential to support them in their job. “A likely scenario is that AI applications, as digital assistants, will support employees in completing a wide variety of tasks,” said Bühler. This applies both to office work and to the planning and execution of manual activities.

Dangers for the media system and democracy

The population is very concerned about the impact of ChatGPT and other generative AI applications on the media system and the political system. A good half of respondents (51 percent) consider the technology a threat to democracy.

Screenshot TÜV Association: “Security of applications with generative artificial intelligence (AI) such as ChatGPT”

“Citizens fear a wave of fake news, propaganda and manipulated images, texts and videos,” said Bühler.

According to the survey, 84 percent of respondents assume that AI will massively accelerate the spread of “fake news”. 91 percent believe it will hardly be possible to tell whether photos or videos are real or fake. And a good two out of three respondents (69 percent) fear that AI will massively accelerate the spread of state propaganda.

Screenshot TÜV Association: “Security of applications with generative artificial intelligence (AI) such as ChatGPT”

Bühler: “Content created with artificial intelligence will present democratic societies with enormous challenges.”

Population demands legal framework for artificial intelligence

The survey results are equally clear on the question of legal requirements: 91 percent demand that lawmakers create a legal framework for the safe use of AI, and 83 percent say that the regulation and use of AI in the EU should be based on European values. Only 16 percent, by contrast, believe that AI should not be regulated for the time being and that ethical development should be left to the tech companies.

Those surveyed also have clear ideas about possible requirements: 94 percent demand a labeling obligation for content generated automatically or with AI support, 88 percent want products and applications to be labeled when AI is “included”, and 86 percent even demand mandatory testing of the quality and security of AI systems by independent testing organizations.

From the TÜV Association's perspective, this results in a clear mandate for politicians to take action.

“With the AI Act, the EU is a global legislative pioneer among democratically organized economic blocs,” said Bühler. “With intelligent regulation, we can set an international standard for innovative and value-based AI.”

The AI Act, which is being discussed today in the EU Parliament's committees, envisages dividing AI applications into four risk classes. The planned rules range from a ban on AI applications with an “unacceptable risk”, such as social scoring, to no requirements at all for applications with “minimal risk”, such as spam filters or games. AI systems with “limited risk”, such as simple chatbots, must meet certain transparency and labeling requirements. Strict security requirements apply to “high-risk” AI applications, for example in critical infrastructure, human resources software or certain AI-based robots. In addition to transparency obligations, these must also meet requirements such as the explainability of their results and freedom from discrimination.
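Purely as an illustration (this is not part of the TÜV Association's material and not legal text), the four-tier structure described above could be sketched as a simple data structure. The names RiskClass and AI_ACT_RISK_CLASSES, as well as the shortened example and obligation wording, are hypothetical:

```python
# Illustrative sketch of the AI Act's four risk classes as described in the
# article above; class names and obligations are simplified, not legal text.
from dataclasses import dataclass


@dataclass
class RiskClass:
    name: str
    examples: list[str]      # example applications named in the article
    obligations: list[str]   # simplified obligations; empty means "no requirements"


AI_ACT_RISK_CLASSES = [
    RiskClass("unacceptable risk", ["social scoring"], ["prohibited"]),
    RiskClass(
        "high risk",
        ["critical infrastructure", "human resources software", "certain AI-based robots"],
        ["strict security requirements", "transparency", "explainability of results",
         "freedom from discrimination"],
    ),
    RiskClass("limited risk", ["simple chatbots"], ["transparency and labeling requirements"]),
    RiskClass("minimal risk", ["spam filters", "games"], []),
]

if __name__ == "__main__":
    # Print a compact overview of the four classes and their obligations.
    for rc in AI_ACT_RISK_CLASSES:
        print(f"{rc.name}: examples={rc.examples}; obligations={rc.obligations or ['none']}")
```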

Bühler: “Before they are brought onto the market, all high-risk AI systems should be checked by an independent body. This is the only way to ensure that the applications meet the security requirements.”

From the perspective of the TÜV Association, the practical implementation of the AI requirements should be prepared now. An important basis for this is generally applicable norms, standards and quality criteria. In addition, suitable test procedures must be developed. The TÜV Association has long advocated setting up interdisciplinary “AI Quality & Testing Hubs” at state and federal level. The TÜV companies are currently preparing to test AI systems and have founded the “TÜV AI Lab” for this purpose.

Results of the survey for download: Presentation for the press conference “Security of applications with generative artificial intelligence (AI) such as ChatGPT”

Methodological note: The information is based on a representative Forsa survey commissioned by the TÜV Association among 1,021 people aged 16 and over. The survey was conducted in April and May 2023. Further information at www.tuev-verband.de/digitalisierung/kuenstliche-intelligenz



Notes:
1) This content reflects the state of affairs at the time of publication. The reproduction of individual images, screenshots, embeds or video sequences serves to discuss the topic.
2) Individual contributions were created with the use of machine assistance and were carefully checked by the Mimikama editorial team before publication.