OpenAI reviews Pentagon contract — explicit ban on mass surveillance and why it matters

Sam Altman has announced that OpenAI's contract with the U.S. Department of Defense will include a clause explicitly prohibiting the use of OpenAI systems for mass domestic surveillance. We examine the legal and geopolitical consequences for civil rights and for countries watching AI development, Ukraine in particular.


What happened

OpenAI CEO Sam Altman announced his intention to revise the contract with the U.S. Department of Defense (the Pentagon). The text will be amended to include a clause that would explicitly prohibit the use of OpenAI systems for mass domestic surveillance of U.S. citizens and residents.

What exactly is proposed

Under the stated principles, the document will specify that the AI may not be deliberately used for domestic monitoring, tracking, or the collection of personal data from commercial sources in any manner that meets the definition of mass surveillance. The restrictions are to be aligned with the U.S. Constitution and applicable federal law.

"I will not carry out orders that I deem unconstitutional."

— Sam Altman, CEO of OpenAI

What this means for intelligence agencies

Altman also said that OpenAI services should not automatically become tools for intelligence agencies, including the National Security Agency (NSA), without separate amendments to the agreement. In practice, this means that using the company's technology for domestic surveillance would require additional legal and procedural steps.

Industry context

Other market players are also staking out positions on the use of AI for military and law-enforcement purposes. Anthropic, for example, previously announced that it would not relax its models' restrictions to enable mass surveillance or the creation of fully autonomous weapons. Against this backdrop, the market is paying growing attention to transparent rules and corporate accountability.

The market is reacting as well: the Claude app topped the App Store's free-apps ranking, while ChatGPT uninstalls increased. OpenAI, meanwhile, has pointed to its financial position — a reported $110 billion raised at a $730 billion valuation — and references to a new $100 "Pro Lite" plan have been spotted in the service's code.

Why this matters for Ukraine

OpenAI's move to set explicit bans on using its systems for mass surveillance is not only a matter of U.S. domestic law: it sets a precedent for global AI governance. For Ukraine, which is fighting for information security and the protection of its citizens in wartime, it matters that AI providers commit to norms that constrain tools of repression and mass control.

Summary and forecast

If these provisions are enshrined in the contract, they will create a strong precedent: major AI suppliers will be pushed to declare the boundaries of their cooperation with state bodies more clearly. That increases transparency and provides legal grounds for challenging potential abuses. The key question, however, remains practical: who will monitor compliance with these restrictions, by what mechanism, and whether they will become the norm for the rest of the industry.

Now the ball is in the court of lawmakers, regulators, and the providers themselves: will declarations turn into concrete, verifiable rules — and will they prevent the export of technologies that could threaten freedoms in countries where authoritarian practices prevail?
