Washington — President Trump announced Friday that he is ordering all federal agencies to “immediately” stop using Anthropic’s artificial intelligence technology, as the company nears a Pentagon deadline to drop its push for guardrails over the military’s use of its AI.
“I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology,” Mr. Trump wrote on Truth Social. “We don’t need it, we don’t want it, and will not do business with them again!”
The president said he will give agencies six months to phase out their use of Anthropic's products and threatened to take additional action against the company if it does not assist during that period.
“Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow,” he wrote.
The Defense Department and Anthropic have been at odds over the military’s use of its AI model, Claude, and the safeguards that the company, led by CEO Dario Amodei, wants in place over the use of the technology. The Pentagon has insisted Anthropic agree to give the military unrestricted access to its AI model, and Defense Secretary Pete Hegseth set a deadline of 5 p.m. Friday for it to agree to drop its guardrails.
Hegseth has warned that if the two parties failed to reach a deal, he’d designate Anthropic as a “supply chain risk.” Two sources told CBS News that the Pentagon argues that a contractor that believes it has a say in government policy decisions cannot be relied upon to work with other U.S. partners and contractors.
Anthropic was awarded a $200 million contract from the Pentagon last July to develop AI capabilities that would advance national security. But the company has asked the Defense Department to agree to certain limits, including a restriction against using Claude to conduct mass surveillance of Americans, sources told CBS News.
The company also seeks to ensure Claude isn't used by the Pentagon for final targeting decisions in military operations without any human involvement, one source familiar with the matter said. Without human judgment in the loop, Claude is not immune from hallucinations and is not reliable enough to avoid potentially lethal mistakes, such as unintended escalation or mission failure, the source said.
In an interview with CBS News on Thursday, the Pentagon’s chief technology officer, Emil Michael, said the military has “made some very good concessions” in order to reach a deal with Anthropic. The Pentagon offered to “put it in writing that we’re specifically acknowledging” federal laws that restrict the military from surveilling Americans, he said, and offered language “specifically acknowledging these policies that have been in place for years at the Pentagon regarding autonomous weapons.”
“At some level, you have to trust your military to do the right thing,” Michael said.
But an Anthropic spokesperson said Thursday that the new contract language it received from the Pentagon “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.”
“New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will,” the company said.
In his own statement Thursday, Amodei said that the threats from the Defense Department would do nothing to change the company's position on the need for guardrails around the use of its AI systems.
“Our strong preference is to continue to serve the Department and our warfighters — with our two requested safeguards in place,” he said. “Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.”
Caitlin Yilek, Jennifer Jacobs and Jo Ling Kent contributed to this report.