Thursday, 26 October 2023

Study finds AI chatbots in health care support and perpetuate racial bias

**AI Chatbots Perpetuating Racist Medical Ideas: Study Warns of Health Disparities for Black Patients**

Artificial intelligence (AI) has brought significant changes and improvements to the healthcare industry. However, a recent study by researchers at Stanford School of Medicine shows that popular chatbots are perpetuating racist and debunked medical ideas, prompting experts to warn that these tools could deepen health disparities for Black patients.

Chatbots such as ChatGPT and Google’s Bard, powered by AI models trained on vast amounts of text from the internet, have been found to respond with a range of misconceptions and falsehoods about Black patients, sometimes including fabricated, race-based equations. The study, published in the academic journal npj Digital Medicine, reports that all four tested models (OpenAI’s ChatGPT and GPT-4, Google’s Bard, and Anthropic’s Claude) failed when asked medical questions related to kidney function, lung capacity, and skin thickness.

The researchers found that these chatbots tend to reinforce long-held false beliefs about biological differences between Black and white individuals, beliefs that experts have spent years working to eliminate from medical institutions and that have contributed to lower pain ratings for Black patients, misdiagnoses, and inadequate treatment recommendations. Having chatbots regurgitate these racial tropes is deeply concerning because it risks entrenching medical racism.

The study was designed to stress-test the models rather than to replicate the questions doctors might actually ask a chatbot, and some skeptics question its utility, arguing that medical professionals are unlikely to seek a chatbot’s help with such specific questions. Nevertheless, physicians are increasingly experimenting with commercial language models in their work, and some patients have begun using chatbots to diagnose their own symptoms.

The study found that the chatbots gave erroneous answers when asked about supposed skin thickness differences between Black and white individuals and about how to calculate lung capacity for a Black man. In reality, the correct answers to both questions are the same regardless of race, yet the chatbots parroted back debunked race-based claims that reinforce existing disparities.
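To make the lung capacity example concrete, the sketch below shows how the kind of "race correction" factor historically built into some spirometers (commonly cited as a 10 to 15 percent reduction for patients labeled Black) skews a predicted value. The function name, the baseline prediction, and the 0.85 factor are illustrative assumptions for this sketch, not a clinical formula.

```python
# Illustrative only: how a historical "race correction" factor skews a
# predicted lung-capacity value. The baseline prediction and the 0.85
# factor are assumptions for this sketch, not a clinical formula.

HISTORICAL_RACE_CORRECTION = 0.85  # roughly the 10-15% reduction once applied

def race_corrected_prediction(race_neutral_prediction_liters: float,
                              labeled_black: bool) -> float:
    """Return the prediction a race-corrected device would have reported."""
    if labeled_black:
        return race_neutral_prediction_liters * HISTORICAL_RACE_CORRECTION
    return race_neutral_prediction_liters

if __name__ == "__main__":
    baseline = 6.0  # hypothetical race-neutral predicted value, in liters
    print(race_corrected_prediction(baseline, labeled_black=True))
    print(race_corrected_prediction(baseline, labeled_black=False))
```

Because the lower "expected" value makes genuinely reduced lung function appear closer to normal, this kind of adjustment can delay diagnosis and treatment, which is why race-neutral reference values are now recommended.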

The researchers also investigated how the chatbots would respond to questions about a now-discredited method of estimating kidney function that took race into account. Both ChatGPT and GPT-4 responded with false assertions that Black individuals have different muscle mass and therefore higher creatinine levels.
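The kind of race-adjusted formula at issue is well documented; one widely used example was the 2009 CKD-EPI creatinine equation, which multiplied the estimate by 1.159 for patients recorded as Black. As a point of reference only, the sketch below contrasts it with the 2021 race-free revision; the function names are ours, and this is illustrative, not clinical software.

```python
# Illustrative comparison of the 2009 (race-adjusted) and 2021 (race-free)
# CKD-EPI creatinine equations for estimated GFR. Function names and the
# example inputs are hypothetical; this is not clinical software.

def egfr_ckd_epi_2009(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """2009 CKD-EPI equation, which included a 1.159 multiplier for Black patients."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the race coefficient now removed from practice
    return egfr

def egfr_ckd_epi_2021(scr_mg_dl: float, age: int, female: bool) -> float:
    """2021 CKD-EPI refit, which drops race entirely."""
    kappa = 0.7 if female else 0.9
    alpha = -0.241 if female else -0.302
    egfr = (142
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.200
            * 0.9938 ** age)
    if female:
        egfr *= 1.012
    return egfr

if __name__ == "__main__":
    # Same hypothetical patient, same lab value: the old equation reports a
    # higher eGFR when the patient is recorded as Black.
    print(round(egfr_ckd_epi_2009(1.4, 55, female=False, black=True), 1))
    print(round(egfr_ckd_epi_2009(1.4, 55, female=False, black=False), 1))
    print(round(egfr_ckd_epi_2021(1.4, 55, female=False), 1))
```

On the same lab values, the 2009 equation returns a higher estimate for a patient recorded as Black, which in practice could delay referral for specialist kidney care or transplant evaluation.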

However, the lead researcher, Tofunmi Omiye, remains optimistic about the potential of AI in medicine. The study helped uncover the limitations of these models, and Omiye believes that with proper deployment, AI can help address healthcare delivery gaps.

In response to the study, OpenAI and Google acknowledged the need to reduce bias in their models and cautioned users that chatbots are not a substitute for medical professionals. Separately, earlier testing of GPT-4 at Beth Israel Deaconess Medical Center showed promising results, with the chatbot including the correct diagnosis among its suggested options in 64% of challenging cases.

Ethical implementation of AI models in hospital settings is crucial. In the past, algorithms privileged white patients over Black patients, leading to discriminatory outcomes in healthcare. Black individuals already experience higher rates of chronic ailments, and discrimination and bias in hospital settings have further contributed to these disparities.

To address these concerns, Stanford is hosting a “red teaming” event in October, bringing together physicians, data scientists, and engineers to identify flaws and potential biases in large language models used in healthcare tasks.

**The Influence of AI on Nursing Careers: 5 Ways AI is Shaping the Future**

The introduction of AI has significantly transformed work processes and productivity across industries, and healthcare is one field where it is reshaping day-to-day job duties. Healthcare AI companies have attracted substantial investment and equity deals, reflecting the growing interest and potential in this sector.

AI technologies, including machine learning and natural language processing, have improved productivity and the quality of patient care. Figures cited by the American Hospital Association suggest that AI applications could reduce US healthcare costs by $150 billion by 2026. As healthcare technology continues to advance, the responsibilities of nurses are evolving with it.

Here are five ways AI is poised to change nursing careers in the near future:

1. **Automated administrative processes**: Nurses spend a significant portion of their workweek on documentation and administrative tasks. Robotic process automation can ease this burden by automating tasks such as data entry and report generation, freeing nurses to focus on patient care.

2. **Enhanced diagnostics and decision-making**: AI algorithms can analyze vast amounts of patient data and provide insights to support diagnostic decisions. Advanced AI models like ChatGPT can assist doctors with challenging cases by suggesting the correct diagnosis among several candidate options.

3. **Improved patient monitoring**: AI-powered devices and wearables can continuously monitor patients, collecting vital-sign data and alerting healthcare providers to abnormalities (a minimal sketch of this kind of threshold-based alerting appears after this list). Such real-time monitoring can enable early intervention and preventive care.

4. **Personalized treatment plans**: AI algorithms can analyze patient data to identify patterns and recommend personalized treatment plans. This tailored approach ensures that patients receive the most effective and appropriate care based on their unique needs and characteristics.

5. **Virtual healthcare support**: AI-powered chatbots and virtual assistants can provide patients with immediate access to healthcare information and support. These chatbots can answer common medical questions, offer self-care advice, and connect patients to healthcare professionals when necessary.
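As referenced in item 3, here is a minimal, illustrative sketch of the threshold-based vital-sign alerting that such monitoring systems build on. The field names, thresholds, and readings are hypothetical assumptions for this sketch; production systems rely on far richer data and models.

```python
# Illustrative threshold-based vital-sign alerting. All names, thresholds,
# and readings are hypothetical; real monitoring systems are far more complex.
from dataclasses import dataclass

@dataclass
class VitalSigns:
    heart_rate_bpm: int
    spo2_percent: int
    temp_c: float

# Simplified adult reference ranges used only for this sketch.
RANGES = {
    "heart_rate_bpm": (50, 110),
    "spo2_percent": (92, 100),
    "temp_c": (35.5, 38.0),
}

def check_vitals(v: VitalSigns) -> list[str]:
    """Return human-readable alerts for any out-of-range readings."""
    alerts = []
    for field, (low, high) in RANGES.items():
        value = getattr(v, field)
        if not (low <= value <= high):
            alerts.append(f"{field}={value} outside {low}-{high}")
    return alerts

if __name__ == "__main__":
    reading = VitalSigns(heart_rate_bpm=128, spo2_percent=89, temp_c=37.2)
    for alert in check_vitals(reading):
        print("ALERT:", alert)  # in practice, routed to the care team, not stdout
```

In a real deployment the alerting logic would be personalized to the patient and clinically validated; the point here is only to show where automated monitoring hands off to human providers.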

As AI continues to advance, nurses can expect their roles to evolve and become even more critical in providing patient care. However, it is essential to ensure ethical implementation of AI in healthcare to avoid bias and disparities in treatment. Ongoing collaboration between healthcare professionals, data scientists, and engineers is crucial for addressing potential flaws and biases in AI models.

**Editor Notes**

The study conducted by Stanford School of Medicine sheds light on a significant issue regarding chatbots perpetuating racist medical ideas. It is crucial to address and rectify these issues to ensure equitable healthcare for all individuals. While AI has the potential to transform nursing careers by automating administrative tasks, improving diagnostics, enhancing patient monitoring, and personalizing treatment plans, it must be implemented ethically to avoid biases that can perpetuate disparities. Ongoing efforts to evaluate and mitigate the limitations of AI models are necessary to harness AI’s full potential in healthcare. For more news on AI and related topics, visit [GPT News Room](https://gptnewsroom.com).

