On April 9, the U.S. Court of Appeals for the Federal Circuit in Washington, D.C. upheld the Department of Defense's "supply chain risk" designation for Anthropic and denied the company's request for a stay; the legal battle over AI ethical red lines and the definition of national security is far from over.
(Background: The judge sided with Anthropic and barred the U.S. Department of Defense from penalizing Claude with a "supply chain risk" label.)
(Background addendum: What is Claude? Full analysis of pricing, features, Claude Code, Cowork — the most detailed guide for Anthropic in 2026)
On April 9, the U.S. Court of Appeals for the Federal Circuit in Washington, D.C. rejected AI giant Anthropic's request to stay enforcement, upholding the Department of Defense's decision to list the company as a "supply chain risk."
The court's reasoning was blunt: the government's national security interest in managing AI supply chains outweighs the financial losses Anthropic would bear. The designation has historically been applied to companies from adversarial countries or to entities deemed potential threats; landing it on a U.S.-based AI unicorn carries unmistakable symbolism.
The dispute began in July 2025, when Anthropic and the Pentagon signed a $200 million contract to integrate Anthropic's AI model Claude into the Maven Smart System to support intelligence analysis and target identification.
Negotiations broke down in September 2025, however. Anthropic insisted on two ethical red lines: Claude would not be used in fully autonomous weapons systems, and it would not be used for domestic surveillance. Those positions fundamentally conflicted with the Trump administration's expectations, and Trump subsequently ordered federal agencies via social media to stop using Anthropic products, setting a six-month phase-out period.
Between late 2025 and early 2026, the Department of Defense formally placed Anthropic on its supply chain risk list, a designation that cut off the company's eligibility for government defense contracts.
The legal battle has so far produced conflicting rulings. In late March, a federal court in San Francisco granted a preliminary injunction allowing Anthropic to continue working with non-defense government agencies; the Federal Circuit's April 9 ruling, however, reinforced the Department of Defense's ban and declined to grant any stay.
Anthropic is thus caught in an awkward legal gray zone: it can work with some government agencies but remains barred from defense contracts. The company maintains that the designation amounts to political retaliation and violates constitutional protections, and says it will keep appealing; securing an accelerated trial timeline is its key next step.
According to a report in EE Times (Electronic Engineering Times), however, although Anthropic has taken a major commercial blow and cannot compete for large-scale defense contracts, its image as a company that holds to its ethical positions has resonated with the broader market, attracting more enterprise and individual users who are concerned about AI safety.
The impact of this ruling extends beyond Anthropic alone. It exposes a deeper structural contradiction: when an AI developer's ethical framework clashes with the government's definition of national security, which way the current legal system tips already has a preliminary answer. The final ruling in this case will serve as a far-reaching reference for how the entire tech industry negotiates the boundaries of AI use with the government.