The NSA Is Secretly Using Anthropic's Mythos While the Pentagon Blocks It: Washington Permits With One Hand and Prohibits With the Other


The U.S. National Security Agency (NSA) is reportedly using Anthropic's top-tier AI cybersecurity tool, Mythos. Yet its parent organization, the Department of Defense (DoD), has simultaneously designated Anthropic a "supply chain risk" and is fighting in federal court to keep Anthropic models out of government systems. Axios exclusively revealed this tug-of-war, in which the government's right hand contradicts its left, on 4/19, making the relationship between Anthropic and the U.S. government even more tangled.
(Background: Did Anthropic thaw relations with the Trump administration? Treasury Secretary and White House Chief of Staff meet CEO Dario Amodei)
(Additional background: Anthropic's new model Mythos is so powerful that even its own team hesitated to release it; within a few hours, it can autonomously break into Linux systems worldwide and chain together complete exploit sequences)

Table of Contents


  • Mythos: The AI locked in a safe by Anthropic
  • Didn’t win the lawsuit, but the business is thawing
  • Where is the red line: Can surveillance agencies use tools that “cannot be used for surveillance”?
  • The three-way AI power struggle inside the U.S. government

The Pentagon says Anthropic is a supply chain risk, yet the NSA has quietly been using Anthropic's most powerful AI tool. This is not satire.

According to Axios' report on 4/19, a DoD unit has formally classified Anthropic as a "supply chain risk," on the grounds that its AI tools could endanger U.S. national security. However, the NSA, which also sits under the DoD, is currently testing Anthropic's newest model, Mythos Preview, the most tightly restricted version of all. Two subordinate units of the same organization are taking completely opposite stances toward the same company.

Mythos: The AI locked in a safe by Anthropic

Mythos is not a normal Claude model. Through the Project Glasswing alliance, Anthropic limits access permissions to about 40 organizations; the list includes tech and financial giants such as Amazon, Apple, Google, Cisco, CrowdStrike, JPMorgan, Microsoft, and Nvidia.

The reason is simple: the model is just too effective. Based on available information, Glasswing members mainly use Mythos to scan their own environments for exploitable security vulnerabilities, and it has already found thousands of high-severity flaws across all major operating systems and browsers. Given that capability, Anthropic chose to control access via a whitelist rather than release the model publicly.

How did the NSA obtain Mythos access? Has it formally joined the Glasswing alliance? At present, it's unclear. Axios' report acknowledges that how the NSA actually uses Mythos is also unknown. That information gap alone signals how sensitive the issue is.

Didn’t win the lawsuit, but the business is thawing

Let's rewind to March. Anthropic filed lawsuits in two federal courts against the Department of Defense, directly challenging the Trump administration's decision to list it as a "supply chain risk." On 4/8, a federal appellate court denied Anthropic's emergency request for a stay, and the company lost the first round of the legal battle.

But the plot outside the courtroom goes in a completely different direction. On 4/17, Anthropic CEO Dario Amodei entered the White House and met with Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent—the topic was exactly how Mythos is used within government agencies. The very existence of this meeting is a signal: the White House does not intend for the DoD’s “supply chain risk” label to be the final answer.

The White House is actively pushing to have federal agencies obtain access to Claude Mythos. The Department of Defense has requested that Claude be made available for “all legitimate purposes,” and a report on 4/16 showed that the White House had stepped in to mediate negotiations between Anthropic and government agencies over usage terms.

Where is the red line: Can surveillance agencies use tools that “cannot be used for surveillance”?

Anthropic's core position has never changed: Mythos cannot be used for large-scale domestic surveillance, nor for autonomous weapons development. These two restrictions are written in black and white in its usage policy; they are not vague moral declarations.

The problem is that the NSA's core business is, by its nature, large-scale signals intelligence collection and monitoring. That contradiction cannot be talked away by rephrasing it. Anthropic is clearly aware of this, which is why Glasswing's entry requirements are so strict and why each member's usage scenarios are limited.

If the NSA truly gains access to Mythos, the biggest issue isn't whether the tool is being used but where. Inside a national intelligence organization, the boundary between vulnerability scanning and intelligence collection is often no wider than an internal memo.

The three-way AI power struggle inside the U.S. government

What’s truly worth observing in this incident isn’t what tool the NSA used, but the fact that the U.S. government has already shown a clear internal split on AI national security policy:

  • Department of Defense (DoD): blocking Anthropic, arguing supply chain risk, and fighting in court.
  • White House (Chief of Staff + Treasury): actively mediating and pushing to give federal agencies access to Mythos.
  • Intelligence agencies (NSA): whatever the higher-ups are arguing about, use it first.

Three directions, three logics, all operating at the same time. This isn’t a policy failure; it’s the real state of affairs until the U.S. government finds a shared framework for AI capabilities and national security control.

From a broader perspective, this conflict reveals a structure that is about to become commonplace: the capabilities of AI tools are already outpacing how quickly existing policy frameworks can absorb and process them. The DoD can say in court that Anthropic is a risk, and the NSA can simultaneously be using Anthropic’s models—these two things don’t exclude each other because they operate at different decision-making levels.

For Anthropic, this absurd spectacle may actually preview the best possible ending: losing the lawsuit hardly matters when the White House is already negotiating terms and the intelligence agencies are already using the tools. Political thaws rarely begin with court rulings.
