A recent study from the University of Notre Dame led by researcher Paul Brenner has painted an alarming picture: In conversations about political issues, many people can no longer distinguish between contributions from real people and AI bots. The experiment, in which participants were engaged in discussions with AI-controlled bots and humans, revealed that in 58 percent of cases, participants misjudged the nature of their counterparts. This finding raises serious questions about the role of AI bots in spreading misinformation and their potential impact on democracy.

The experiment: bots versus humans

The study used large language models (LLMs), including OpenAI's GPT-4, Meta's Llama-2-Chat, and Anthropic's Claude 2, to simulate discussions about global political issues. The AI bots were equipped with ten different, realistic-looking identities, complete with their own opinions and personal profiles. These identities were designed to comment on world events, contributing succinct opinions and connections to personal experiences. Interestingly, which LLM a bot was based on made little difference in participants' ability to identify it.
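To make the setup concrete, the following is a minimal sketch of how a persona-conditioned discussion bot can be built: a short identity text is injected as the system prompt, so every reply stays "in character". It assumes the OpenAI Python SDK and an API key in the environment; the persona, model name, and example prompt are invented for illustration and are not the study's actual configuration.

```python
# Minimal sketch of a persona-conditioned discussion bot.
# Assumptions: the OpenAI Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable. The persona below is invented for
# illustration; it is NOT one of the ten identities used in the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A realistic-looking identity: a short profile plus fixed opinions.
PERSONA = (
    "You are 'Dana', 42, a logistics manager from Ohio. You follow world "
    "politics closely, are sceptical of international institutions, and "
    "like to relate news to your own work experience. Answer in one or two "
    "short, conversational sentences, like a forum comment."
)

def persona_reply(discussion_message: str) -> str:
    """Generate an in-character reply to one message in a political discussion."""
    response = client.chat.completions.create(
        model="gpt-4",  # the study also tested Llama-2-Chat and Claude 2
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": discussion_message},
        ],
        temperature=0.9,  # some variation makes replies look less mechanical
        max_tokens=120,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(persona_reply("What do you think about the new trade sanctions?"))
```

The point of the sketch is only that a handful of lines suffice to give a model a consistent, believable identity, which helps explain why participants struggled to tell the bots apart from humans.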

The danger of misinformation

When AI bots cannot be told apart from humans, misinformation spreads more easily and is more readily viewed as credible. The bots' design was modeled on human-assisted bot accounts that had previously been successful at spreading misinformation online. Paul Brenner underlines the importance of these findings, warning that such bots could influence elections by operating silently on social networks and manipulating public opinion.

Questions and answers about AI bots and humans:

Question 1: How often were participants unable to recognize the AI bots?
Answer 1: In 58 percent of cases, participants could not distinguish AI bots from humans.

Question 2: Which language models were used in the study?
Answer 2: GPT-4 from OpenAI, Llama-2-Chat from Meta, and Claude 2 from Anthropic.

Question 3: What was the goal of the AI bot identities in the study?
Answer 3: The AI bot identities aimed to have a significant impact on society by spreading misinformation.

Question 4: Why is the difficulty of distinguishing AI bots from humans a problem?
Answer 4: Because it increases the risk that misinformation spreads unchecked, with potential consequences for democracy.

Question 5: What potential dangers does Brenner see in the spread of misinformation by AI bots?
Answer 5: Brenner warns that such bots could influence elections by manipulating public opinion.

Conclusion

The University of Notre Dame study casts a disturbing light on how indistinguishable AI bots have become from real people in discussions, particularly when political topics are at stake. That 58 percent of participants misjudged the nature of their interlocutors underlines the risk that such technologies will be used to spread misinformation. These findings point to the need to educate the public about the risks and to develop strategies that protect the integrity of democratic processes. Developing tools to detect AI-generated content and promoting media literacy are critical steps toward curbing the spread of misinformation and protecting democratic society.
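As an illustration of what such detection tools can look like, here is a minimal sketch of one widely discussed but unreliable heuristic: scoring a comment's perplexity under a small language model, since machine-generated text often scores lower than human writing. The model choice ("gpt2") and the example comment are assumptions for illustration; real detectors combine many signals and still make mistakes, so a single score is a hint, never proof.

```python
# Minimal sketch of a perplexity-based detection heuristic.
# Assumes the Hugging Face transformers library and torch are installed.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower values are weak evidence of machine text."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels == input_ids, the model returns the mean cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

comment = "The sanctions will hurt ordinary people more than any government."
print(f"perplexity = {perplexity(comment):.1f}")
```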

We invite you to sign up for the Mimikama newsletter and to register for our online lectures and workshops to learn more about the critical use of information in the digital age and how you can effectively counter misinformation.

Sources: news.nd.edu; arxiv.org


Notes:
1) This content reflects the current state of affairs at the time of publication. The reproduction of individual images, screenshots, embeds or video sequences serves to discuss the topic. 2) Individual contributions were created with the use of machine assistance and were carefully checked by the Mimikama editorial team before publication.