
Federal Judge Raises Alarm Over Pentagon's Move to Designate Anthropic a Security Risk
A federal judge has expressed serious concern that the Pentagon's actions against AI firm Anthropic may constitute illegal retaliation and a First Amendment violation.
A federal judge has voiced sharp criticism over the U.S. Department of Defense's decision to label AI company Anthropic a supply-chain security risk, suggesting the move may be an unlawful attempt to silence a company that dared to challenge the government publicly.
"It looks like an attempt to cripple Anthropic," said Judge Lin during a hearing held Tuesday in San Francisco. "It looks like the department is punishing Anthropic for trying to bring public scrutiny to this contract dispute, which would of course be a violation of the First Amendment."
The Heart of the Dispute
The conflict stems from Anthropic's efforts to restrict how its artificial intelligence tools can be used by the U.S. military. In response, the Trump administration designated the company a supply-chain security risk, a move Anthropic argues amounts to illegal retaliation.
The AI firm has since filed two separate federal lawsuits challenging the designation. Tuesday's proceedings were part of the case being heard in San Francisco, while a separate ruling from a federal appeals court in Washington, D.C., is expected shortly without a formal hearing.
At the heart of Anthropic's request is a temporary injunction to halt the security designation while litigation proceeds. The company hopes such relief would reassure nervous clients and convince them to maintain their business relationships in the interim. Judge Lin can only grant the pause if she concludes Anthropic has a strong likelihood of prevailing in the broader case. Her decision on the injunction is anticipated within days.
Pentagon's Justification Under Scrutiny
The Department of Defense — which has rebranded itself as the Department of War — defended its actions, claiming it followed proper procedures in determining that Anthropic's AI tools could not be trusted to perform reliably in high-stakes military operations.
Attorney Eric Hamilton, representing the Trump administration, argued during the hearing that the government's concern was that Anthropic might go beyond raising objections and actually manipulate its own software to behave in ways the department did not intend or sanction.
However, Judge Lin drew a clear distinction between the Pentagon's right to choose its vendors and the limits of its authority. She acknowledged that Defense Secretary Pete Hegseth holds the power to decide which contractors the department works with — but stressed that it falls within her jurisdiction to determine whether Hegseth overstepped legal boundaries by taking punitive actions beyond simply terminating Anthropic's government contracts.
The judge described the security designation as "troubling," noting that the measures taken against Anthropic and its AI model Claude did not appear to be proportionate to or directly targeted at the national security concerns the department cited.
Hegseth's Sweeping Social Media Order Raises Eyebrows
Adding fuel to the controversy, Defense Secretary Hegseth posted on X last month declaring that, effective immediately, no contractor, supplier, or partner doing business with the U.S. military could engage in any commercial activity with Anthropic.
When Judge Lin pressed Hamilton on the legal basis for such a sweeping directive, he conceded that Hegseth has no legal authority to bar military contractors from using Anthropic's services for work unrelated to the Department of Defense. When asked why Hegseth made the post at all, Hamilton offered a blunt response: "I don't know."
A Powerful Tool Used Against an Unusual Target
Judge Lin also questioned whether the Pentagon had explored less severe alternatives before resorting to the supply-chain-risk designation — a tool she noted is typically reserved for foreign adversaries, terrorist organizations, and other clearly hostile actors.
Michael Mongan, an attorney from WilmerHale representing Anthropic, called it extraordinary for the government to weaponize such a powerful designation against what amounted to a "stubborn" negotiating partner in a commercial contract dispute.
The Road Ahead for Anthropic
The Pentagon has announced plans to phase out Anthropic's technology over the coming months, replacing it with AI solutions from Google, OpenAI, and Elon Musk's xAI. Officials also stated that safeguards have been put in place to prevent Anthropic from making any unauthorized changes to its AI models during the transition period.
Hamilton conceded he was unsure whether Anthropic can even update its AI models without the Pentagon's approval, a claim Anthropic flatly disputes.
The case has ignited a broader national debate about the growing role of artificial intelligence in military operations and whether Silicon Valley companies should defer to government preferences when it comes to deploying technology they have built.