ChatGPT: OpenAI eases restrictions on military use


OpenAI has announced an update to its usage policy, relaxing restrictions on the use of its technology in military and warfare applications.

The updated policy language maintains a strict prohibition on using OpenAI's services for specific purposes such as developing weapons, harming others, or destroying property, a spokesperson for the organization clarified.

The spokesperson emphasized OpenAI's aim of establishing a set of universally applicable principles that are both easy to remember and practical, particularly now that everyday users around the world can build their own GPTs.

OpenAI introduced the GPT Store on January 10. The marketplace allows users to share and explore personalized versions of ChatGPT, referred to as "GPTs."

In the updated usage policy, OpenAI has adopted overarching principles such as "Don't harm others," aiming for a framework that is broad, easy to understand, and relevant across diverse contexts.

Additionally, the spokesperson highlighted explicit prohibitions on specific applications, including the development and use of weapons, providing users with more concrete guidance.

Concerns

Some AI experts have expressed concern about the generality of OpenAI's policy revision, particularly in light of the current use of AI technology in the Gaza conflict. Notably, the Israeli military has disclosed its use of AI for target identification in airstrikes within the Palestinian territory, prompting apprehension about the ramifications of OpenAI's broadened policy in real-world conflict scenarios.


“The language that is in the policy remains vague and raises questions about how OpenAI intends to approach enforcement,” Sarah Myers West, managing director of the AI Now Institute and a former AI policy analyst at the Federal Trade Commission, told The Intercept.

Military collaborations

OpenAI did not offer many specifics about its future plans. However, the change in the policy's language appears poised to create opportunities for potential collaborations with military entities.

According to an OpenAI representative, the policy adjustments are driven in part by the identification of national security applications that align with the company's goals. Notably, OpenAI has begun collaborations, such as one with the Defense Advanced Research Projects Agency (DARPA), aimed at catalyzing the development of innovative cybersecurity tools.

The spokesperson added that these tools are specifically designed to enhance the security of open-source software, which plays a pivotal role in supporting critical infrastructure and various industries.
