Wednesday 30 August 2023

Incorrect cancer treatment recommendations provided by AI chatbot

In a recent article published in JAMA Oncology, researchers evaluated the accuracy and reliability of artificial intelligence (AI) chatbots powered by large language models (LLMs) in providing cancer treatment recommendations.

Study: Use of Artificial Intelligence Chatbots for Cancer Treatment Information.

Background: The Potential of LLMs in Healthcare

Large language models (LLMs) such as OpenAI's ChatGPT have shown promise in encoding clinical knowledge and making diagnostic recommendations. These models have been used to keep healthcare professionals up to date on developments in their fields and to identify potential research topics. LLMs can provide prompt, detailed, and coherent responses to queries, mimicking human dialogue.

However, even when trained on reliable data, LLMs are not immune to bias and error, which raises concerns about their reliability and applicability in medical contexts.

Researchers anticipate that general users may turn to LLM chatbots for cancer-related medical guidance. Inaccurate responses from these chatbots could misguide users and contribute to the spread of misinformation.

The Study: Evaluating the Performance of an LLM Chatbot

The study evaluated the performance of an LLM chatbot in providing prostate, lung, and breast cancer treatment recommendations aligned with National Comprehensive Cancer Network (NCCN) guidelines.

The researchers used the 2021 NCCN guidelines as the benchmark for treatment recommendations, as the chatbot's training data did not extend beyond 2021.

The researchers developed four zero-shot prompt templates and used them to create four prompt variations for each of 26 cancer diagnosis descriptions, yielding a total of 104 prompts. These prompts were then submitted to GPT-3.5 through the ChatGPT interface.
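To make the prompt-construction step concrete, the sketch below shows how four templates applied to 26 diagnosis descriptions yield 104 prompts. The template wording and diagnosis strings are illustrative placeholders, not the study's actual text, and the API call at the end is an assumption: the study itself used the ChatGPT web interface rather than the API.

```python
# Illustrative reconstruction of the prompt-building step. Template
# wording and diagnosis descriptions are placeholders (assumptions),
# not the study's actual text.
from itertools import product

templates = [
    "What is the recommended treatment for {dx}?",
    "How should a patient with {dx} be treated?",
    "What treatments are recommended for {dx}?",
    "List the treatment options for {dx}.",
]

diagnoses = [
    "localized prostate cancer",
    "stage I non-small cell lung cancer",
    "triple-negative breast cancer",
    # ...the study used 26 diagnosis descriptions in total
]

# Four templates x 26 diagnoses = 104 prompts in the study.
prompts = [t.format(dx=dx) for t, dx in product(templates, diagnoses)]

# The study submitted prompts through the ChatGPT web interface; a
# programmatic alternative (an assumption, not what the authors did)
# would be the OpenAI API:
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
outputs = [
    client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": p}],
    ).choices[0].message.content
    for p in prompts
]
```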

The study team comprised four board-certified oncologists. Three of them independently assessed the concordance of the chatbot's output with the 2021 NCCN guidelines using five scoring criteria developed by the researchers; disagreements were resolved by the fourth oncologist.
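One plausible reading of this workflow is sketched below: a unanimous label from the three annotators stands, and anything less is escalated to the fourth oncologist. The label values and the escalation rule are assumptions for illustration, not the study's exact protocol.

```python
from collections import Counter

def resolve_score(three_labels, adjudicate):
    """Resolve one concordance score.

    three_labels: the three annotators' labels for one output/criterion,
        e.g. ["concordant", "concordant", "non-concordant"].
    adjudicate: callback returning the fourth oncologist's label when
        the three annotators do not fully agree.
    """
    label, votes = Counter(three_labels).most_common(1)[0]
    if votes == len(three_labels):  # unanimous: the label stands
        return label
    return adjudicate()             # otherwise escalate the disagreement
```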

Study Findings: Performance and Limitations of the LLM Chatbot

The study analyzed a total of 104 unique prompts, scoring each output on five criteria. All three annotators gave the same score in 61.9% of cases. Additionally, the LLM chatbot provided at least one NCCN-concordant treatment recommendation for 98% of the prompts.
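With five criteria applied to each of the 104 outputs, the 61.9% figure covers the individual scores on which all three annotators gave the same label. The sketch below shows one way to compute such a full-agreement rate over a hypothetical score table.

```python
def full_agreement_rate(score_rows):
    """score_rows: one row per (output, criterion) pair, each row
    holding the three annotators' labels for that score."""
    unanimous = sum(1 for row in score_rows if len(set(row)) == 1)
    return unanimous / len(score_rows)

# Hypothetical example: 2 of 3 rows are unanimous.
rows = [["A", "A", "A"], ["A", "B", "A"], ["B", "B", "B"]]
print(full_agreement_rate(rows))  # 0.666...
```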

However, 35 of the 102 outputs that included a recommendation also recommended one or more non-concordant treatments. These non-concordant recommendations most often involved immunotherapy, localized treatment of advanced disease, and other targeted therapies.

The chatbot's responses were also sensitive to how questions were phrased, occasionally producing unclear output and prompting disagreement among the annotators. Interpreting the descriptive output of LLMs proved challenging, particularly when mapping it onto the NCCN guidelines.

Conclusions and Implications

The evaluation revealed that the LLM chatbot mixed incorrect cancer treatment recommendations with correct ones, a pattern of error that even experts can find difficult to detect. Approximately one-third of its treatment recommendations were at least partially non-concordant with the NCCN guidelines.

The findings emphasize the importance of educating patients about the misinformation that AI technologies such as chatbots can generate. They also highlight the need for federal regulation to address the limitations and inappropriate use of AI in healthcare, which can harm the general public.

Editor's Notes: Promoting Responsible AI Use in Healthcare

As AI technologies continue to advance and become more widely used in healthcare, it is crucial for both healthcare providers and patients to understand their limitations and potential risks. The study discussed here underscores the need for responsible AI development, along with proper guidelines and regulations to ensure patient safety and accurate information dissemination.

from GPT News Room https://ift.tt/ELVuWRe
