Friday, 20 October 2023

Researchers caution that AI may exacerbate health disparities among Black individuals

SAN FRANCISCO — The use of artificial intelligence (AI) in healthcare has raised concerns about perpetuating racist medical ideas and exacerbating health disparities for Black patients, according to a study conducted by researchers at Stanford School of Medicine. The study revealed that popular chatbots, such as ChatGPT and Google’s Bard, have been providing responses that include false information and fabricated race-based equations about Black patients. These misconceptions reinforce long-held false beliefs about biological differences between Black and white individuals, which can lead to biased medical treatment and contribute to health disparities.

The study examined four chatbot models, including ChatGPT, GPT-4, Google’s Bard, and Anthropic’s Claude. Researchers found that all four models failed to respond accurately to medical questions about kidney function, lung capacity, and skin thickness. The chatbots not only perpetuated false beliefs but also provided incorrect information on differences that do not exist. These findings are alarming considering the potential real-world harms and the amplification of medical racism that could occur as more physicians rely on chatbots for daily tasks.

Dr. Roxana Daneshjou, an assistant professor at Stanford University and faculty adviser for the paper, highlighted the consequences of perpetuating such misinformation. “There are very real-world consequences to getting this wrong that can impact health disparities,” she said. “We are trying to have those tropes removed from medicine, so the regurgitation of that is deeply concerning.”

The use of commercial language models in healthcare is becoming increasingly prevalent, with some dermatology patients even relying on chatbots to diagnose their symptoms. However, this reliance on AI chatbots raises concerns about patients receiving inaccurate information. The study’s researchers posed questions to the chatbots about skin thickness differences and calculating lung capacity for Black individuals. In response, the chatbots provided erroneous information, perpetuating false beliefs about racial differences in these areas.

While companies like OpenAI and Google acknowledge the issue of bias in their models, they also emphasize that chatbots should not serve as a substitute for medical professionals. They urge users to refrain from relying on chatbots for medical advice. Nevertheless, it is crucial to address biases in AI models to ensure accurate and equitable healthcare outcomes.

Though AI has shown promise in assisting with medical diagnoses, further research is needed to investigate potential biases and diagnostic blind spots. While some argue that language models are not suitable for making fair and equitable decisions about race and gender, it is essential to address and rectify the biases present in AI systems to ensure equitable healthcare for all patients.

Similar concerns regarding biases in healthcare algorithms have been raised in the past. For example, a study in 2019 revealed that an algorithm used in a large US hospital systematically favored white patients over Black patients. This same algorithm was also used to predict the healthcare needs of 70 million patients nationwide, leading to potential disparities in care provision. Another study found racial bias in commonly used computer software for testing lung function, resulting in inadequate care for Black patients with breathing problems.

To address these concerns, it is crucial to independently test commercial AI products and ensure their fairness, equity, and safety. Institutions like the Mayo Clinic have been experimenting with large language models that are specifically trained on medical literature. However, it is essential to avoid generalizing the findings from widely used chatbots to those tailored for healthcare professionals. The distinction lies in the training data sources, with medical literature-based models showing more promise in providing accurate and consistent information.

Dr. John Halamka, President of Mayo Clinic Platform, emphasized the importance of rigorous testing and standards to ensure reliable AI models in healthcare. Mayo Clinic is exploring the use of large medical models and plans to train them on the patient experience of millions of people. Such models have the potential to augment human decision-making, but they need to meet rigorous standards before being deployed in clinical settings.

In conclusion, the Stanford study raises concerns about the perpetuation of racist medical ideas by popular chatbot AI models. The inaccuracies and false beliefs surrounding Black patients’ health can contribute to existing health disparities. Addressing biases in AI models used in healthcare is crucial for ensuring accurate and equitable medical treatment. Independent testing and the development of rigorous standards can pave the way for reliable AI models that augment human decision-making and help close the gaps in healthcare delivery.

**Editor’s Notes**

The study conducted by researchers at Stanford School of Medicine sheds light on the potential dangers of using AI chatbots in healthcare. It highlights the need to address biases in AI models to avoid perpetuating racist medical ideas and worsening health disparities. While AI has the potential to improve healthcare delivery, it is essential to develop rigorous standards and thoroughly test AI models before deploying them in clinical settings. By doing so, we can ensure that AI technology in healthcare is fair, equitable, and safe for all patients.

For more news and insights on the latest advancements in AI and technology, visit [GPT News Room](https://gptnewsroom.com).

Source link



from GPT News Room https://ift.tt/PmweA6K

No comments:

Post a Comment

語言AI模型自稱為中國國籍,中研院成立風險研究小組對其進行審查【熱門話題】-20231012

Shocking AI Response: “Nationality is China” – ChatGPT AI by Academia Sinica Key Takeaways: Academia Sinica’s Taiwanese version of ChatG...