Monday 12 June 2023


Why You Can’t Trust ChatGPT with Your Urology Questions: Study Finds High Rate of Incorrect Responses

A new study published in Urology Practice has found that ChatGPT, a large language model chatbot, gave a significant number of incorrect responses when tested on the American Urological Association's 2022 Self-Assessment Study Program (SASP). The exam is widely regarded as a valuable test of clinical knowledge for urologists in training and for practicing specialists preparing for board certification.

According to the study, ChatGPT not only produced a low rate of correct answers to clinical questions in urologic practice, but also made certain types of errors that risk spreading medical misinformation. Christopher M. Deibert, MD, MPH, a urologist at the University of Nebraska Medical Center in Omaha, warned that ChatGPT is not a reliable source for clinical urology information.

The SASP questions were presented to ChatGPT in two formats, open-ended and multiple-choice; 15 questions that included visual components were excluded from the test. Responses were graded by three independent researchers and reviewed by two physician adjudicators.

Overall, ChatGPT provided correct responses to only 36 of 135 open-ended questions (26.7%). Another 40 responses (29.6%) in this section were indeterminate.

The study authors noted that the responses given by ChatGPT were long and repetitive, even when the chatbot was given feedback. They added that ChatGPT often gave vague justifications with broad statements and rarely commented on specifics.

On the multiple-choice section, the chatbot fared slightly better, answering 38 of 135 questions correctly (28.2%). Only 4 responses (3.0%) in this section were indeterminate.

Even when given the opportunity to regenerate answers that had been coded as indeterminate, ChatGPT did not increase its proportion of correct responses. The investigators found that ChatGPT “provided consistent justifications for incorrect answers and remained concordant between correct and incorrect answers.”

The authors concluded that “Given that LLMs are limited by their human training, further research is needed to understand their limitations and capabilities across multiple disciplines before it is made available for general use.” As it stands, using ChatGPT in urology carries a high likelihood of facilitating medical misinformation for untrained users.

In light of this study’s findings, patients should be cautious when seeking advice from ChatGPT or any AI chatbot. While the technology behind these chatbots is constantly improving, it’s not quite advanced enough to be relied upon for critical medical information. For now, it’s best to consult with a trusted urologist or medical professional for accurate answers to your questions.

Editor Notes:

While AI chatbots like ChatGPT have shown promise as a tool to help us with many aspects of our lives, it’s clear that there are still substantial limitations to this technology. As this study shows, we should be careful not to rely too heavily on AI when it comes to our health. Instead, we should seek out the expertise of trained medical professionals who can provide us with accurate and reliable information.

If you’re interested in more news and research related to AI and technology, be sure to check out GPT News Room. It’s a great resource for staying up-to-date with the latest developments in these fields.




from GPT News Room https://ift.tt/PlXLsZ3
