In his denunciation of Anthropic, Secretary of Defense (or Secretary of War, if you prefer) Pete Hegseth posted on X that “The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.” In the same post, he added that “Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.”
He also characterized Anthropic’s stance of holding firm against hypothetical future uses of its products for mass surveillance or fully autonomous weaponry as “duplicity” and a “betrayal,” in the course of declaring the company a supply-chain risk and banning the use of its products among military contractors.
When someone’s actions rise to the level of betrayal against me—a dagger in the back, in other words—I tend to be done with them. But the start of a major new war was only hours away when Hegseth said that, and according to reporting from the Wall Street Journal and Axios, he seems to have yanked the proverbial dagger out and kept right on working with Anthropic.
The Journal claims that the Pentagon’s Central Command (CENTCOM) uses Anthropic’s Claude in some capacity for “intelligence assessments, target identification and simulating battle scenarios.”
As I noted yesterday, the consumer-facing Claude mobile app has been climbing the charts, reaching new heights of popularity ever since Donald Trump declared the team at Anthropic a bunch of “leftwing nut jobs.” At the time, Claude was the number 2 ranked free app in Apple’s App Store, but Ryan Donegan, an Anthropic spokesman, emailed Gizmodo late yesterday to say it “just hit #1 on the US App Store, an all-time high. Surpassing ChatGPT as the most downloaded app.”
It’s not clear how much of this newfound popularity is connected to Anthropic’s conflict with the government, however: according to Donegan, daily signups for Claude have tripled over the past four months.
OpenAI, meanwhile, has trumpeted a deepening bond with the Pentagon thanks to a new agreement covering military applications of OpenAI products in classified use cases. Anthropic “may have wanted more operational control than we did,” OpenAI CEO Sam Altman has since said.
In any case, Anthropic and OpenAI are both dealing in hypotheticals about the future. There aren’t ChatGPT-powered killbots suddenly operating in Iran because of OpenAI’s new agreement with the government. But there have, apparently, been operations informed in some way or another by Claude-based modeling and research.
And all indications are that such uses meet with Anthropic CEO Dario Amodei’s approval. “We are still interested in working with them as long as it is in line with our red lines,” Amodei said yesterday of the Pentagon.

