Artificial intelligence (AI) has found its way into almost every area of our lives, and it is tempting to use the technology in medicine as well. But a recent study conducted at Brigham and Women's Hospital, a teaching affiliate of Harvard Medical School, has produced alarming results. OpenAI's chatbot ChatGPT, known for its fluent handling of text, is proving to be a risky guide to cancer treatments. The researchers found that alongside sound advice, the chatbot also spread dangerous misinformation that could undermine trust between patients and doctors.

Dangerous ChatGPT advice despite supposed expertise

The study found that of the 104 cancer treatment plans generated by ChatGPT, a full 98 percent included at least one treatment recommendation consistent with the guidelines of the renowned National Comprehensive Cancer Network (NCCN). However, 34 percent of the plans also contained recommendations that did not conform to NCCN guidelines. These discrepancies show how challenging it is to develop cancer treatment plans and point to ChatGPT's limited capabilities in this area.

Only 62 percent of the plans created by ChatGPT complied fully with NCCN guidelines. The complexity of the guidelines and ChatGPT's limited capabilities were cited as the main reasons for these discrepancies. Particularly concerning: in 12.5 percent of cases, the chatbot made recommendations that did not appear in the NCCN guidelines at all. Such misinformation can raise false hopes of effective treatment, even for incurable cancers.

Deceptively human

What makes these results even more disturbing is that ChatGPT's answers sound human and can therefore seem convincing even when they are incorrect or dangerous. Oncologist Danielle Bitterman emphasizes how easily patients, and even doctors, could be taken in by the misinformation because of its persuasive wording. This not only risks creating false hope; it could also strain the relationship between patients and doctors.

The role of AI in healthcare

Despite these troubling results, interest in using AI in healthcare is growing, particularly for streamlining administrative tasks. A recent study suggests that AI could soon assist in the early detection of breast cancer, significantly reducing radiologists' workload. It is important to emphasize, however, that AI systems such as ChatGPT should at most supply suggestions that are then verified by experienced physicians. They can serve as valuable support, but not as a replacement for medical expertise.

Conclusion

The tantalizing idea that artificial intelligence could take over the complexity of medical decision-making is significantly tempered by the results of the study at Brigham and Women's Hospital. ChatGPT's apparent expertise in advising on cancer treatment hides dangerous pitfalls that can raise false hopes and jeopardize the trust between patients and physicians. While AI can undoubtedly play a promising role in healthcare, it is essential to understand its limitations and to leave final responsibility in the hands of experienced medical professionals.

Source:

JAMA Network
