OpenAI has dropped its prohibition on providing its technology to military entities, according to an executive
ChatGPT creator OpenAI is working with the US military on several artificial intelligence projects after dropping a prohibition on the use of its technologies for “military and warfare” purposes, a company executive told Bloomberg on Tuesday at the World Economic Forum in Davos.
The AI pioneer is developing “open-source cybersecurity software” and discussing with the US government how to prevent suicides among military veterans, OpenAI vice president of global affairs Anna Makanju said.
While Makanju did not elaborate on either project, she explained that OpenAI’s decision to remove a blanket prohibition on the use of its AI tech for “military and warfare” applications was in line with a broader policy update “to adjust to new uses of ChatGPT and its other tools,” according to Bloomberg.
“Because we previously had what was essentially a blanket prohibition on military, many people thought that would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world,” she explained.
Despite the ban’s repeal, Makanju insisted OpenAI continues to prohibit the use of its technology to “develop weapons, destroy property, or harm people.”
Meanwhile, Microsoft, which holds a major stake in OpenAI and enjoys unrestricted use of its advanced AI technologies, has long contracted with the US military and other branches of the government, and lacks any inbuilt prohibition on weapons development, according to Bloomberg.
In addition to partnering with the Pentagon for military applications, OpenAI is expanding its operations in the realm of “election security,” according to CEO Sam Altman, who also spoke to Bloomberg during the Davos conclave.
“Elections are a huge deal,” he said, declaring it “good” that “we have a lot of anxiety” about the process.
His company is reportedly working to prevent the use of its generative AI tools to spread “political disinformation,” such as deepfakes and other artificially generated media that could be used to attack or prop up candidates during the 2024 voting cycle.
Last month, OpenAI and Microsoft were sued by the New York Times for copyright infringement, with the self-anointed paper of record declaring their generative AI capabilities to be unfair competition and an existential threat to press freedom. The lawsuit seeks “billions of dollars in statutory and actual damages” for “unlawful copying” and use of the NYT’s intellectual property.
Susman Godfrey, the law firm representing the NYT, also proposed a class action lawsuit against the AI titans in November for “rampant theft” of authors’ works, alleging the companies illegally used nonfiction authors’ writings without their permission to “train” their blockbuster chatbot ChatGPT.