After a weeks-long standoff with the Pentagon, Anthropic has won a milestone: A judge granted the company a preliminary injunction in its lawsuit, which seeks to reverse its government blacklisting while the judicial process plays out.
“The Department of War’s records show that it designated Anthropic as a supply chain risk because of its ‘hostile manner through the press,’” Judge Rita F. Lin, a district judge in the northern district of California, wrote in the order, which will go into effect in seven days. “Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation.”
A final verdict could be weeks or months out.
Anthropic spokesperson Danielle Cohen said in a Thursday statement, “We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”
“I do think this case touches on an important debate,” Judge Lin said during the Tuesday hearing. “On the one hand, Anthropic is saying that its AI product, Claude, is not safe to use for autonomous lethal weapons and domestic mass surveillance. Anthropic’s position is that if the government wants to use its technology, the government has to agree not to use it for those purposes. On the other hand the Department of War is saying that military commanders have to decide what is safe for its AI to do.”
On Tuesday, Judge Lin went on to say, “It’s not my role to decide who’s right in that debate… The Department of War decides what AI product it wants to use and buy. And everyone, including Anthropic, agrees that the Department of War is free to stop using Claude and look for a more permissive AI vendor.” She added, “I see the question in this case as being … whether the government violated the law when it went beyond that.”
It all started with a memo sent by Defense Secretary Pete Hegseth on Jan. 9, calling for “any lawful use” language to be written into every AI services procurement contract within 180 days, including existing contracts with companies like Anthropic, OpenAI, xAI, and Google. Anthropic’s negotiations with the Pentagon stretched on for weeks, hinging on two “red lines,” uses the company refused to permit for its AI: domestic mass surveillance and lethal autonomous weapons (AI systems with the power to kill targets without human involvement in the decision-making process). The rollercoaster series of events that followed has included a barrage of social media insults, a formal “supply chain risk” designation with the potential to significantly handicap Anthropic’s business, competing AI companies swooping in to make deals, and the ensuing lawsuit.
With its lawsuit, Anthropic argues that it was punished for speech protected under the First Amendment, and it’s seeking to reverse the supply chain risk designation.
It’s rare, and perhaps unheard of until now, for a US company to be named a supply chain risk, a designation typically reserved for non-US companies with potential links to foreign adversaries. Anthropic’s designation raised eyebrows nationwide and stirred bipartisan controversy over concerns that disagreeing with a presidential administration could bring outsized retribution on a business in any sector.
Anthropic’s own business has been significantly affected by the designation, according to its court filings, which say that it has “received outreach from numerous outside partners … expressing confusion about what was required of them and concern about their ability to continue to work with Anthropic” and that “dozens of companies have contacted Anthropic” for guidance or information about their rights to terminate usage. Depending on how broadly the government prohibits its contractors from working with Anthropic, the company alleged, revenue ranging from hundreds of millions to multiple billions of dollars could be at risk.
During Tuesday’s hearing, both parties had a chance to respond to Judge Lin’s questions, which were released in a document the day prior and hinged on matters like whether Hegseth lacked authority to issue certain directives and why Anthropic was named a supply chain risk. The judge also asked, in her pre-released questions, about the circumstances under which a government contractor could face termination for using Anthropic’s technology in their work — for instance, “if a contractor for the Department uses Claude Code as a tool to write software for the Department’s national security systems, would that contractor face termination as a result?”
On Tuesday, the judge also seemed to admonish the Department of War over Hegseth’s X post, which, according to Anthropic’s earlier court filings, caused widespread confusion by stating that “effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
“You’re standing here saying, ‘We said it but we didn’t really mean it,’” Judge Lin said during the hearing, later pressing the question of why Hegseth wrote the post barring contractors from working with Anthropic instead of simply designating Anthropic as a supply chain risk.
In a series of questions on Tuesday, Judge Lin asked whether the Department of War plans to terminate contractors on the basis of their work with Anthropic if it’s separate from their work with the department, and a representative for the Department of War responded, “That is my understanding.”
Judge Lin asked, “Let’s say I’m a military contractor. I don’t provide IT to the military. I provide toilet paper to the military. I’m not going to be terminated for using Anthropic — is that accurate?” The representative for the Department of War responded, “For non-DoW work, that is my understanding.” But when the judge asked whether a military contractor providing IT services to the Department of War, but not for national security systems, could be terminated for using Anthropic, the representative for the Department of War did not give a concrete answer.
During the hearing, Judge Lin cited one of the amicus briefs, which she said used the term “attempted corporate murder.” She said, “I don’t know if it’s ‘murder,’ but it looks like an attempt to cripple Anthropic.”
“We are continuing to be irreparably injured by this directive,” a lawyer for Anthropic said during the hearing, citing Hegseth’s nine-paragraph X post.
In a recent court filing, the Department of War alleged that Anthropic could ostensibly “attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations” in the event it felt the military was crossing its red lines — a theoretical situation that the Pentagon said it deemed an “unacceptable risk to national security.” The judge’s pre-released questions seem to challenge that statement, or at least request more information on it, stating, “What evidence in the record shows that Anthropic had ongoing access to or control over Claude after delivering it to the government, such that Anthropic could engage in such acts of sabotage or subversion?”