The texts created by the artificial intelligence (AI) ChatGPT are currently still highly error-prone, as a study from the University of Southern California led by Mayank Kejriwal and engineering student Zhisheng Tang shows. The two examined ChatGPT and other AI-based systems for their ability to reason rationally.

Huge data sets as a basis

ChatGPT draws on existing building blocks when formulating texts. It “learns” from huge data sets spread across the internet and returns what is statistically most likely. “Despite their impressive capabilities, large language models don’t really think. They tend to make elementary mistakes and even invent things. However, because they produce fluent speech, people tend to believe that they can think,” says Kejriwal.

This, Kejriwal and Tang say, is what led them to investigate the supposed cognitive abilities of the models - work that has become more important now that such text-generation models are widely available. They defined computational rationality as the ability, when faced with several possible solutions, to choose the one that is exactly right or comes closest to the truth. In many cases, the researchers say, ChatGPT did not show this rationality.

Innocent professor pilloried

A particularly blatant case was uncovered by the Washington Post. As part of a research project, a lawyer in California asked ChatGPT to compile a list of legal scholars who had sexually harassed someone. The name of law professor Jonathan Turley also appeared on the list. According to the chatbot, he had made sexually suggestive comments and attempted to inappropriately touch a student during a class trip to Alaska. As its source, the AI “cited” a March 2018 article in the Washington Post. But no such article exists, and the class trip never took place either. Where ChatGPT got the information from could not be reconstructed.

“It's a very specific combination of facts and untruths that makes these systems quite dangerous,” says Kate Crawford, a professor at the University of Southern California who has herself been affected. She said she was recently contacted by a journalist who was using ChatGPT to research sources for a story. The bot suggested Crawford and offered examples of her relevant work, including an article title, publication date, and citations. Everything sounded plausible - and everything was fake.

Source: pte


Related to the topic:
ChatGPT FAQ: The most important questions and answers
Abuse of ChatGPT: AI model used to create malware
ChatGPT - How fraudsters and criminals use artificial intelligence for fraud, misinformation and cybercrime
GPT-4 & Co: Why your glorified AI loves to lie
Fraud with ChatGPT fake: Sensitive data is at play

