As artificial intelligence (AI) continues to find its way into everyday life, a new project by researchers is shedding light on the safety of AI. By creating an AI worm, scientists have shown that even the most advanced generative AI systems such as OpenAI's ChatGPT or Google's Gemini are not immune to cyberattacks. This discovery raises important questions about the safety and reliability of AI technologies and forces us to reconsider the potential risks of their proliferation.

Emergence of a new cyber threat

The AI ​​worm developed by a research group represents a previously unknown form of cyberattack. By moving from one AI system to the next, the worm can steal data or introduce malware. This experiment highlights the growing security risks associated with the increasing autonomy and interconnectedness of AI systems. Ben Nassi, a researcher at Cornell Tech in New York and one of the lead authors of the study , emphasizes the uniqueness of this threat, which represents a completely new attack method.

Technical weak points as a gateway

Research shows that text-based prompts used in most generative AI systems are potential entry points for attacks. Through so-called jailbreak or prompt injection attacks, these systems can be manipulated to bypass their security measures and reveal sensitive data or spread unwanted content. These vulnerabilities reveal fundamental flaws in the systems' architectural design and require urgent review and improvement of security protocols.

YouTube

By loading the video, you accept YouTube's privacy policy.
Learn more

Load video

Reactions and measures

The discovery of the AI ​​worm has already provoked reactions from major technology companies. OpenAI, for example, has increased its efforts to make its systems more resilient to such attacks. Google has not yet commented on the findings, but the research team plans to meet with both companies to discuss the security risks and possible solutions.

Questions and answers about the AI ​​worm:

Question 1: What is an AI worm?
Answer 1: The AI ​​worm is a cyberattack developed by researchers that can move through AI systems and steal data or deploy malware.

Question 2: Why is the AI ​​worm an important discovery?
Answer 2: It represents a completely new form of cyberattack and highlights the security risks associated with the increasing autonomy and interconnectedness of AI systems.

Question 3: How does the AI ​​worm attack work?
Answer 3: The worm exploits vulnerabilities in text-based prompts from AI systems to bypass security measures and perform malicious actions.

Question 4: How did the affected companies react to the discovery?
Answer 4: OpenAI has announced that it will strengthen its systems. Google has not yet commented publicly, but discussions between the researchers and the companies are planned.

Question 5: What does this discovery mean for the future of AI security?
Answer 5: It highlights the need to review and strengthen security protocols to ensure the protection of sensitive data and the reliability of AI systems.

Conclusion

The creation of the AI ​​worm by researchers serves as an urgent wake-up call to take security seriously in the AI ​​era. While AI technologies have the potential to enrich our world, this experiment highlights the critical need to strengthen their security measures. Research and industry must work together to ensure that AI systems are not only powerful but also safe. The discovery of the AI ​​worm provides an opportunity to learn from the vulnerabilities and take preventative measures to ward off future threats.

We invite you to subscribe to the Mimikama newsletter at Newsletter attend our online lectures and workshops at Online Lecture It's time to be proactive and create a safer digital environment for everyone.

Source: t3n.de

Also read:

Notes:
1) This content reflects the current state of affairs at the time of publication. The reproduction of individual images, screenshots, embeds or video sequences serves to discuss the topic. 2) Individual contributions were created through the use of machine assistance and were carefully checked by the Mimikama editorial team before publication. ( Reason )