A federal appeals court in Washington D.C. has refused to block the Pentagon’s “supply chain risk” designation of Anthropic, ruling that the company failed to meet the stringent requirements for emergency relief. The decision leaves the designation in place while Anthropic’s underlying lawsuit against the Department of Defense and other agencies proceeds: a significant setback in what has become one of the most consequential AI policy battles of 2026.

What the Court Decided

Anthropic had sought emergency injunctive relief to pause the Pentagon’s designation while the lawsuit worked through the courts. Emergency relief requires meeting a high legal bar: plaintiffs must show, among other things, that they are likely to succeed on the merits of their case and that they face irreparable harm without immediate intervention. The court found that Anthropic had not met those requirements, declining to grant temporary relief.

The decision does not rule on the merits of Anthropic’s underlying claims. The lawsuit itself, which challenges the legality of the “supply chain risk” designation, continues. But the ruling means the designation remains active, with all its practical consequences, for the foreseeable future.

What’s at Stake

The Pentagon’s designation goes significantly further than simply ending a business relationship. A “supply chain risk” label is a formal designation that signals to other government agencies and federal contractors that Anthropic poses a potential risk in procurement chains. The practical consequences include:

  • Direct exclusion from government contracts across all federal agencies, not just the Department of Defense
  • A chilling effect on private sector companies that sell into government-adjacent markets and may avoid Anthropic products to reduce their own procurement risk
  • Reputational implications that extend beyond the immediate contract loss

Industry groups representing hundreds of companies had urged the court to pause the designation, arguing it was being used as a policy instrument rather than a genuine security assessment, and that allowing it to stand creates a blueprint for sidelining AI companies through procurement decisions rather than legislation.

The Broader Precedent Question

The appeals court’s refusal to grant emergency relief doesn’t resolve the central legal question: can the government apply a “supply chain risk” designation, originally designed for hardware and physical infrastructure, to an AI software vendor, and does doing so without clear criteria or due process violate Anthropic’s rights?

That question will play out in the full litigation. But the longer the designation remains in force, the more normalized it becomes as a mechanism for government influence over the AI industry. If other agencies follow the Pentagon’s lead while the lawsuit proceeds, Anthropic’s effective exclusion from federal procurement could extend well beyond its original scope.

What It Means for Anthropic’s Business

For Anthropic’s day-to-day operations and its revenue trajectory, which recently crossed $30 billion in annualized run rate, the immediate impact is limited. The company’s growth is driven almost entirely by private enterprise customers, not government contracts. Claude is available on all three major cloud platforms and serves over 1,000 business customers each spending $1 million or more annually.

The longer-term concern is more strategic. A company that the U.S. government has formally designated as a supply chain risk faces friction in any government-adjacent market: healthcare systems, financial institutions under federal oversight, defense contractors, and infrastructure providers that work with federal agencies. None of those customers is prohibited from using Anthropic products, but the designation creates a due diligence burden that didn’t exist before.

The Anthropic vs. Pentagon Fight in Context

The case began in March 2026 when the Pentagon designated Anthropic as a supply chain risk, triggering a lawsuit and an emergency request for relief. A hearing on temporary relief was initially set for March 24. The case has since moved to the appeals court, which has now declined emergency intervention. The litigation will continue, but without the protection Anthropic sought to limit the designation’s impact while the merits are argued.

Conclusion

The appeals court decision is a setback but not a final ruling. The fundamental question of whether the government can use procurement designations as an unaccountable tool to influence which AI companies can operate in federal markets remains unresolved and will be answered over the coming months. Browse our directory to follow Claude and the broader Anthropic ecosystem as this case develops.