Anthropic Defies US Ban as Claude Gains Over One Million Users Daily

The US government has officially designated Anthropic as a supply chain risk, a decision that stems from the company’s refusal to engage in a military intelligence deal with the Pentagon. This label, unprecedented for a US company, has prompted a strong response from Anthropic, which plans to challenge the ruling in court.

In a recent blog post, Anthropic CEO Dario Amodei described the government’s decision as “legally unsound.” This labeling comes after Anthropic withdrew from discussions regarding a partnership with the US military, citing ethical concerns related to mass surveillance and autonomous weaponry. The supply chain risk designation indicates that US authorities believe collaborating with Anthropic could jeopardize national security.

Claude’s Rising Popularity

Despite the controversy, Claude, Anthropic’s AI platform, is experiencing a surge in users. According to Mike Krieger, Anthropic’s Chief Product Officer, over one million users are signing up for Claude every day. This uptick may reflect a growing preference for ethical AI alternatives, especially as many users appear to be moving away from ChatGPT following OpenAI’s military deal.

Although Anthropic does not publicly disclose Claude’s user metrics, the platform was estimated to have around 20 million monthly active users at the beginning of 2026. The recent influx could be attributed to dissatisfaction among former ChatGPT users over OpenAI’s partnership with the US military, which has faced significant backlash.

Amodei has criticized the military deal as largely “safety theater,” while OpenAI’s CEO Sam Altman has acknowledged that it was a “rushed” decision. This exchange suggests that the competitive landscape for AI platforms is rapidly evolving, with user sentiment playing a crucial role in shaping adoption.

What’s Next for Anthropic?

As discussions between Anthropic and the White House continue, there are indications that a deal with the Pentagon could still be possible. However, the designation of Anthropic as a supply chain risk does not directly affect Claude’s users. Amodei reassured users that the measure exists primarily to safeguard government interests rather than to punish the company.

The situation remains fluid, and further developments are likely as both sides navigate the dispute. For now, Claude’s remarkable growth in user numbers points to a potential shift in public perception of AI technology and its ethical implications.

As this story unfolds, it will be essential to monitor how both Anthropic and OpenAI respond to the changing landscape of AI regulation and user demand.