None of the big AI labs are your friends, so don't get too excited, but the Pentagon is, according to an anonymously sourced Axios story, threatening to quit using Anthropic's AI tools because of the company's "insistence on maintaining some limitations on how the military uses its models."
The Pentagon source who spoke to Axios apparently said that of all the AI companies the department deals with, Anthropic is the most "ideological."
Keep in mind that Anthropic created Claude and Claude Code, and an unsettling pattern has emerged: it releases a tweak to its vibe-coding systems, and Wall Street obediently sells off stock in whatever kind of business its latest tools seem poised to replace. This is a company that clearly wants to conquer the world, so it might not be wise to take comfort in the many stories about how the people who work there are kinda uncomfortable with what conquering the world entails.
Last year, Anthropic trumpeted its excitement about winning its $200 million Pentagon contract, calling it “a new chapter in Anthropic’s commitment to supporting U.S. national security.”
But at any rate, one interesting thing about Axios' story is that the two areas of concern cited by the Anthropic representative it quotes, "hard limits around fully autonomous weapons" and "mass domestic surveillance," were also raised by co-founder and CEO Dario Amodei on Ross Douthat's Interesting Times podcast, released three days ago by the New York Times.
Much of this is also spelled out in Amodei’s hand-wringing essay from last month, titled “The Adolescence of Technology.”
On that podcast, Amodei suggested that any president, not just Donald Trump, could hand too much power to AI defense tech. That's why, he says, he's "worried about the autonomous drone swarm," noting that, "The constitutional protections in our military structures depend on the idea that there are humans who would—we hope—disobey illegal orders. With fully autonomous weapons, we don't necessarily have those protections."
Amodei also sketched what an abuse of AI for mass domestic surveillance might entail:
“It is not illegal to put cameras around everywhere in public space and record every conversation. It’s a public space—you don’t have a right to privacy in a public space. But today, the government couldn’t record that all and make sense of it. With A.I., the ability to transcribe speech, to look through it, correlate it all, you could say: This person is a member of the opposition.”
Losing Claude would actually matter, according to Axios: the Defense official who leaked the story claimed that "the other model companies are just behind" Claude.
Exactly what the bone of contention is remains a bit muddled in the Axios story, which (deep breath) claims that the Pentagon claims Anthropic sought information from Palantir about whether its tech was part of the January 3 U.S. attack on Venezuela. Anthropic denies expressing concerns that “relate to current operations.” Nonetheless, the Pentagon official says the issue was raised “in such a way to imply that they might disapprove of their software being used, because obviously there was kinetic fire during that raid, people were shot.”
Also, how exactly something like Claude Code, which lets anyone so inclined make software, might be incorporated into the real-world puncturing of human bodies with bullets remains similarly unclear.