Deepfakes Could Increase The Disinformation Problem In Health Care

Over the last few years, concerns around deepfakes have risen to the surface. These fakes are digitally altered videos that make people appear to say or do things they never did. Understandably, they could have a serious impact if taken at face value. Now, with the emergence of artificial intelligence (AI), there are even more concerns, particularly within the healthcare industry.

According to Axios, the technology around deepfakes is only getting better, which experts say could cause challenges when it comes to public health emergencies, like the recent pandemic. The spread of misinformation could raise the stakes at a time when there’s already much politicization in the healthcare industry.

Chris Doss of the RAND Corporation, who conducted a study on the use of the technology in scientific communication, noted that it’s more important than ever “to be vigilant about it and try to get a hold of it now when it’s still” showing signs of unrealized potential.

The concern is that if the videos appear to come from top experts and recognized officials, the spread of misinformation becomes harder to stop. Deepfakes could also be used, in image and audio form, to conduct convincing phishing attacks via phone calls and messages, a risk many Americans already face.

Then there’s the possibility that such fakes could be used to access a hospital’s information systems, for instance by impersonating an employee on a call to reset a password, and then gather sensitive data that could be sold on the dark web or used to steal someone’s identity.

Consistent exposure to deepfakes seems to hinder one’s ability to recognize the fraudulent material. According to the RAND study, experience with them doesn’t necessarily make them easier to detect; rather, “the opposite might be true.”

Some are hesitant to cut AI out of the medical industry entirely because it does have its uses. But does the good outweigh the bad at this point?

Copyright 2023