
Microsoft Joins Tech Giants in Backing Anthropic's Fight Against Pentagon Supply-Chain Label
Microsoft has filed a court brief supporting Anthropic's legal challenge against a Pentagon designation that threatens to cut the AI firm off from government contracts.
Microsoft Sides With Anthropic in High-Stakes Pentagon Dispute
Microsoft has stepped into a high-profile legal showdown between AI company Anthropic and the US Department of Defense, filing a formal court brief in support of Anthropic's bid to overturn a controversial government designation that effectively locks it out of federal contracts.
The tech giant submitted an amicus brief to a federal court in San Francisco, arguing that a temporary restraining order was essential to prevent widespread disruption to vendors and suppliers whose products are built on Anthropic's AI technology. Microsoft was not alone in its support: Google, Amazon, Apple, and OpenAI jointly filed a separate brief backing Anthropic's legal position.
Deep Pentagon Ties Make Microsoft's Move Significant
Microsoft's involvement carries considerable weight given how deeply embedded it is within US military infrastructure. The company holds a stake in the Pentagon's $9 billion Joint Warfighting Cloud Capability contract — a Biden-era agreement shared with Amazon, Google, and Oracle — along with additional software and enterprise service deals valued at several billion dollars. Its government relationships span defense, intelligence, and civilian agencies alike.
Under the Trump administration, Microsoft further expanded its federal footprint in September, signing another multibillion-dollar agreement to accelerate cloud services and artificial intelligence adoption across federal institutions.
How the Dispute Unfolded
The conflict traces back to failed contract negotiations last month over a proposed $200 million agreement that would have deployed Anthropic's AI systems on classified military infrastructure — talks that collapsed just as the United States was preparing for military action against Iran.
Negotiations broke down after Anthropic drew firm boundaries around how its technology could be used, specifically objecting to its deployment for mass surveillance of American citizens or as a component in autonomous lethal weapons systems. Defense Secretary Pete Hegseth responded by classifying Anthropic as a supply-chain risk — a designation that has historically been reserved for companies with ties to foreign adversaries, particularly China.
The Pentagon formally notified Anthropic of the decision last week, and the company reports that its government contracts have already begun to be terminated. Pentagon chief technology officer Emil Michael stated flatly in a CNBC interview that there was "no chance" the agency would reopen negotiations with Anthropic.
Anthropic Defends Its AI Safety Stance
In its legal complaint, Anthropic was candid about the boundaries of its own technology and the reasoning behind its restrictions. The company stated directly that it lacked confidence its Claude AI model could "function reliably or safely if used to support lethal autonomous warfare," framing its usage limits as grounded in a thorough understanding of the system's risks and current limitations.
Beyond the contractual dispute, Anthropic has raised constitutional concerns, arguing that the Pentagon's use of the supply-chain risk label amounts to ideological retaliation against the company for its publicly stated positions on AI safety — a potential violation of its First Amendment rights.
Broader Context: Pentagon Strike Investigation
Separately, a Pentagon investigation is underway into a Tomahawk missile strike on the Shajarah Tayyebeh elementary school, which Iranian officials say killed at least 175 people. Preliminary findings reportedly indicate US responsibility for the deaths, with the strike believed to be a targeting error stemming from outdated intelligence data. It remains unclear whether AI systems played any role in the strike.
