In recent years, artificial intelligence (AI) has made enormous progress and has become an important tool in many areas of our daily lives. The possibilities that AI offers are impressive, ranging from automating processes to personalized recommendations.

Limitations of today's algorithms

However, AI also has its limitations, and one of them is that language models like ChatGPT tend to make things up. These hallucinations, or confabulations, often sound very convincing and are difficult to spot. As Aljoscha Burchardt from the Center for Artificial Intelligence in Berlin explains to the Tagesanzeiger, ChatGPT sometimes answers like a "precocious child" who responds to questions confidently and convincingly, but often asserts incorrect facts.

Language models work by mimicking the connections between neurons in the human brain. Thanks to ever cheaper computing power, artificial neural networks can grow larger and larger and be trained on more and more data. This lets them recognize subtle patterns in the data, but it does not mean they learn consistent world knowledge. As Jonas Andrulis, head of the Heidelberg company Aleph Alpha, puts it, ChatGPT has no concept of truth and often hallucinates.

Language models like ChatGPT can only model word sequences statistically: they calculate which word is most likely to come next. How factual their output is depends on how frequently similar word sequences occur in the training data. What distinguishes ChatGPT is that it takes a great deal of preceding text into account when guessing the next word, which makes its output seem coherent and human.
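The "most likely next word" idea can be illustrated with a toy sketch. The following is a minimal bigram model over a made-up corpus; it is a deliberately simplified stand-in, not how ChatGPT actually works internally (which uses a deep neural network over long contexts), but it shows the statistical principle the article describes: count which words follow which, then predict the most frequent successor.

```python
from collections import Counter, defaultdict

# Tiny made-up training corpus (assumption: illustrative text only).
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word` in the training text."""
    return follows[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on" always follows "sat" in this corpus
```

Note how the prediction reflects only frequencies in the training text, not truth: if the corpus contained a false but common phrase, the model would happily continue it, which is the root of the hallucination problem described above.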

The AI is practiced in making associations and "hallucinates"

People have a tendency to add things when they write texts. This additional information is based on background knowledge that may not have been asked for, but which adds value to the text as a whole. When trained with human text, language models learn the human tendency to go beyond the given context, which can lead to AI “hallucinations.”

However, despite advances in technology in recent years, language models still have their limitations and cannot replace human values and judgment. It is important to be aware of this and understand the risks and challenges associated with using AI.

Where is the journey heading?

However, there are also positive developments in AI research that aim to make the technology safer and more trustworthy. One possibility is to design AI systems so that the processes inside the algorithm become transparent and explainable, which means users can understand how an AI system arrives at its decisions.

Another option is to train AI systems to instill human-like values and ethics. Only by equipping AI systems with approximations of the values and beliefs of human societies can we expect them to act in harmony with human values and norms: virtues such as not making false statements, and preferring to admit when they cannot answer something.

There is still much work to be done to make AI safer and more effective. It is therefore important that we rise to this challenge and advocate for the responsible use of AI. Only in this way can we reap the benefits the technology offers while minimizing its potential risks. If the tools are truly tailored to the needs and requirements of society, AI technology can benefit both people and the environment.

Sources: Tagesanzeiger
Images: Lexica (CC0)

More about artificial intelligence:
Midjourney: Image AI can no longer be used for free
Experts are calling for a temporary stop to the large AI systems
ChatGPT is taking the work off of Americans

Notes:
1) This content reflects the current state of affairs at the time of publication. The reproduction of individual images, screenshots, embeds or video sequences serves to discuss the topic. 2) Individual contributions were created through the use of machine assistance and were carefully checked by the Mimikama editorial team before publication. (Reason)