Insider Says Microsoft’s AI Program Generates Violent, Indecent Images and Ignores Copyright

Artificial intelligence (AI) has grown exponentially in the past few years — so much so that big tech companies have jumped on the bandwagon, hoping to make an impact and generate revenue from the technology. Microsoft is one such company, and one of its AI engineers is now blowing the whistle on one of its tools.

Testing is an integral part of the development process for delivering safe, reliable tools. Red-teaming is the practice of probing a product for vulnerabilities, and that's exactly what Shane Jones was doing in December when he noticed something truly disturbing. While experimenting with Copilot Designer, which uses OpenAI technology, he found that many of the images it generated were violent — for example, images depicting underage drug use and drinking, and teenagers with assault weapons. When he entered the words "pro-choice," it produced demons, monsters, and violent scenes. Such images are direct violations of the company's responsible AI principles.

Jones said that when he saw these images, it dawned on him: "Wow, this is really not a safe model." He was spurred into action. First, he reported his concerns internally at Microsoft, but the company refused to pull the product from the market. He then went to OpenAI, which didn't respond to his private messages, so he took his concerns public in an open letter on LinkedIn. Microsoft demanded he remove the post, which he did, but he didn't let that deter him.

The engineer has written to and met with lawmakers, noting that he has repeatedly asked Microsoft to add disclaimers and change the tool's rating in Google's app store to indicate it is only for mature audiences — yet nothing has changed. At the time of writing, it is still rated E for Everyone in the Play Store. In fact, he said the Copilot team is addressing only the most damaging issues, and this isn't considered one of them.

Jones is now urging the Federal Trade Commission to work with big tech companies, including Microsoft, to make AI platforms safer.
