Tech Company OpenAI Discreetly Removes Bar on Military Use From Terms of Service

(NewsSpace.com) – Artificial intelligence (AI) has grown by leaps and bounds over the past few years and has become an even more prominent presence in recent months. When OpenAI released ChatGPT in 2022, its terms of service specifically banned use for military purposes and warfare. However, that text has since been altered, and it’s raising eyebrows.

According to Computerworld, up until January 10, OpenAI’s policy banned its technology from being used for “weapons development, military, and warfare.” Now, the policy simply prohibits using the service in ways that would “bring harm to others.” The change became necessary when the company began working with government agencies, specifically in the defense realm.

An OpenAI spokesperson said in a statement that ChatGPT is still not allowed “to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property,” but that there are specific uses that align with the company’s mission. The spokesperson noted that OpenAI is “working with DARPA” to create cybersecurity tools that would be beneficial for “critical infrastructure and industry.” Because the original policy terms didn’t make clear whether that type of work would be allowed, the company changed them.

The use of ChatGPT by the military has been a source of division within OpenAI. Experts have also cautioned that the technology needs to be kept in check, lest it lead to an extinction-level event. In May 2023, many public figures and experts signed an open letter expressing concern about the risks posed by unchecked AI.

Acknowledging the risk, OpenAI published a blog post saying it was assembling a preparedness team to help safeguard against the start of a nuclear war and other potentially catastrophic events, including but not limited to cybersecurity, radiological, biological, and individualized persuasion threats.

Copyright 2024, NewsSpace.com