The U.S. government and Anthropic, the maker of the Claude AI assistant, are now facing off in federal court over a Pentagon decision that could cost the company billions of dollars.
Defense Secretary Pete Hegseth designated Anthropic a national security supply chain risk on March 3. The move came after months of negotiations between the Pentagon and Anthropic broke down.
The dispute centers on Anthropic’s refusal to remove restrictions on how its AI can be used. Specifically, the company would not agree to let its technology be used in autonomous weapons or for domestic surveillance.
The Pentagon said those limits were unacceptable. In a court filing, it argued that allowing Anthropic continued access to military systems would introduce “unacceptable risk” into defense supply chains.
The government also raised concerns about what it called Anthropic’s ability to “disable its technology or preemptively alter the behavior of its model” during active military operations if the company believed its own policies had been violated.
The Justice Department, filing on behalf of the Trump administration, pushed back on Anthropic’s First Amendment claims. It said the dispute was about contract negotiations and national security, not free speech.
The filing said it was Anthropic’s refusal to lift restrictions — which the government called “conduct, not protected speech” — that led President Trump to direct all federal agencies to cut ties with the company.
Anthropic filed its main lawsuit in California federal court on March 9. The company called the designation “unprecedented and unlawful” and said it violated both free speech and due process rights.
A second lawsuit was filed in a Washington, D.C. appeals court challenging a separate Pentagon designation under a different law — one that could extend the blacklisting to the entire federal government.
Microsoft, which both uses Anthropic’s Claude model and supplies the U.S. military, filed an amicus brief supporting Anthropic last week. The company warned the designation could harm the broader AI sector.
Anthropic said it was reviewing the government’s latest filing. The company said the lawsuit was “a necessary step to protect our business, our customers, and our partners.”
Anthropic has also disputed claims that its technology poses a danger. The company said AI is not yet safe enough for use in autonomous weapons and that it opposes domestic surveillance on principle.
The White House did not respond to a request for comment.
The company’s executives have warned the blacklisting could cost it billions of dollars in losses in 2026. Such designations are typically reserved for organizations from foreign adversary nations, such as the Chinese firm Huawei.
The post U.S. Government Defends Pentagon Blacklisting of Anthropic in Federal Court appeared first on CoinCentral.