OpenAI has revised its agreement with the US Department of Defense to restrict the use of its AI in domestic surveillance. The update bars tracking Americans, even through commercially purchased data. CEO Sam Altman announced the change on X.
“We have been working with the DoW to make some additions in our agreement to make our principles very clear,” Altman wrote. He detailed new clauses that align with the Fourth Amendment and other laws.
The deal now blocks use by intel agencies like the NSA without fresh approval. Altman stood firm on rights. “If I received what I believed was an unconstitutional order, of course I would rather go to jail than follow it,” he stated.
This follows a hasty Friday rollout that sparked outrage. Critics blasted loose rules on surveillance and weapons. The original tied limits to existing laws, which some view as flexible.
Altman admitted the error. “One thing I think I did wrong: we shouldn’t have rushed to get this out on Friday,” he said. “The issues are super complex, and demand clear communication.”
The timing raised eyebrows, coming after the government dropped rival Anthropic. President Trump ordered a halt to its AI use over security concerns. The Pentagon then flagged Anthropic as a supply chain risk, banning military partners from dealing with it.
Anthropic stuck to no mass spying or fully autonomous weapons. CEO Dario Amodei refused to budge. “Allowing current models to be used in this way would endanger America’s warfighters and civilians,” Anthropic posted.
Defense Secretary Pete Hegseth fired back. “Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” he declared on X.
Anthropic vowed a court challenge. “We will challenge any supply chain risk designation in court,” it said. The firm claims the tag hurts US innovation.
OpenAI’s deal, worth up to $200 million, allows AI in classified environments. But the initial terms seemed vague. A source told The Verge the contract boiled down to “any lawful use,” potentially enabling broad data collection.
Past US spying scandals fuel doubts. Leaks like Snowden’s showed legal loopholes in surveillance. OpenAI first defended its stance. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman posted earlier.
Experts poked holes. AI scholar Lawrence Chan noted ambiguities. “The first paragraph doesn’t say ‘no autonomous weapons’! It says ‘AI can’t control autonomous weapons as long as existing law (that doesn’t exist) or the DoD says so,'” he tweeted.
Critic Gary Marcus contrasted firms. “Dario did what was right. Sam did what was lucrative. We will all suffer the consequences,” he wrote.
OpenAI added a ban on high-risk automated decisions, such as social scoring. It touts layered safeguards. “In our agreement, we protect our red lines through a more expansive, multi-layered approach,” the company blogged. “We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections.”
The Trump admin rebranded the Pentagon as Department of War, evoking old school grit. It signals a hard line on AI rules. Some chuckle at the name swap, but it underscores a push for fewer ethics hurdles in defense tech.
Altman pushed for parity. “In my conversations over the weekend, I reiterated that Anthropic should not be designated as a SCR, and that we hope the DoW offers them the same terms we’ve agreed to,” he said.
Industry watchers fear the blacklist chills bold ethics stands. A tech employee letter to Congress demanded scrutiny. “This sets a dangerous precedent,” it argued.
X reactions poured in. User Tenobrus questioned the fix. “This does seem like a good addition to your contract, if you can in fact get them to agree to the amendment. But you have to know that any polite requests to de-designate are paper thin justifications, and the timing clearly was opportunistic. This reads damage control,” he posted.
Another, Nikhil Sharma, called it reactive. “Adding principles language to a military contract after public backlash is not safety culture. It’s damage control. The contract was signed without it. That tells you everything about where priorities actually were,” he tweeted.
Skeptic ostyn asked point blank. “So if the DoW agreed to this, why is Anthropic a supply chain risk?” he replied.
User ElloSunsh1ne piled on trust issues. “You lied about 4o retirement, lied about routing, broke transparency promises. This contract language is worth exactly what your other promises were: nothing,” she wrote.
Brian Krassenstein urged more. “Good start. Now call out President Trump for what he’s really doing, picking and choosing winners,” he said.
Altman followed up with reflections. “(I also would like to share this, which I wrote after thinking a little more.) There is a lot we will talk about in the coming days, but since this is one of the first ‘real deal’ decisions we have faced, I wanted to share a few things that have been heavily on my mind the past few days,” he began.
He outlined core values. “These are the principles I care most about for this decision: alignment, democratization, empowerment, and individual agency,” Altman continued.
On democracy, he stressed balance. “The democratic process must stay in control, and we must democratize AI. OpenAI should not decide the fate of the world; no private company should. We need to work with governments, but also we need to make sure individuals get increasing power,” he explained.
He highlighted education needs. “Things are moving so fast that we need to urgently educate the world so that the democratic process has time to catch up. I think one of our most important strategic decisions ever was the principle of iterative deployment,” Altman added.
Privacy got a nod. “In particular, the key element required for democracy, such as protection of privacy, must be defended by all of society,” he said.
Altman claimed a role in debate. “I believe that, as some of the creators of this new technology, we deserve to and are obligated to have a loud voice about the risks, pitfalls, and benefits we see,” he asserted.
He eyed government ties. “I think we are heading towards a world where the relationship between governments and AI efforts is critical. This will be difficult but it has to happen; I do not see any good future where we don’t get there. There should not be games and fights in the press like this; drastic government action should be avoided,” Altman warned.
He evoked threats. “I think there are real dangers coming to the world, and maybe pretty soon; I tried to put myself in the mindset of how I’d feel the day after an attack on the US or a new bioweapon we could have helped prevent,” he shared.
OpenAI eased its no-military rule in 2024. The shift targets lucrative defense contracts amid intensifying AI competition, but it courts backlash from privacy advocates.
Polls show Americans want tight limits on domestic surveillance technology. This deal may cement OpenAI’s place in defense, but it risks eroding user trust.
Featured image via YouTube screengrab.