Research Claims That AI Chatbots Provide Racist Medical Data

Artificial intelligence (AI) chatbots like ChatGPT can prove useful in a variety of situations. People are already taking advantage of their features both at home and in the workplace, in sectors such as education, music, and freelance work. But just how safe is it to rely on these large language models (LLMs) for information? A recent report suggests that AI produces inaccurate information, including false stereotypes, far more often than most people might think.

A review published in Digital Medicine, an academic journal, states that chatbots, namely ChatGPT, Claude, and Bard, pull text from the internet, including misconceptions about black people that can perpetuate real-world harm. When researchers ran questions through the AI platforms, the responses included debunked medical claims about key organs, including the lungs and kidneys. The chatbots also provided incorrect information about muscle mass, stating that black people differ from white people in this regard.

Questions that should have yielded the same answer regardless of race instead returned debunked, race-based information. Notably, these disparities raised concerns that such output could cause real harm in the medical field. For example, the data might lead to less relief for black patients if the systems prompt physicians to underestimate their pain levels or misdiagnose their health concerns.

Tofunmi Omiye, a post-doctoral researcher who co-led the study, tried a variety of methods, including using an encrypted laptop to keep earlier queries from influencing new questions. While he did discover limitations in the LLMs, he said he was fortunate to uncover them early on. He also believes AI has a place in the future of medicine, but that it's important to “close the gaps we have in health care delivery,” per The Associated Press.

Speaking with Fox News Digital, Phil Siegel, co-founder of the Center for Advanced Preparedness and Threat Response Simulation, said there needs to be more regulation of AI, particularly under Pillar 3, which calls for ensuring fairness and eliminating biases. While AI has developed by leaps and bounds, it still has a long way to go before it’s completely reliable.

Copyright 2023,