AI | Anthropic suffers a severe data breach exposing over 510,000 lines of source code; unreleased "electronic pet" feature disclosed early


Anthropic, the developer of Claude, has reported a serious incident. Its terminal agent tool "Claude Code," launched just yesterday, reportedly suffered what may be the largest full source-code leak on record: more than 510,000 lines of TypeScript code, more than 40 tool modules, and several unreleased core features, such as "electronic pets" and "dreaming," were uploaded, drawing market attention.

Spokesperson: Human error—not a security vulnerability

Citing a spokesperson, the report said the leak involved no sensitive customer data or credentials and caused no losses. "It's simply a packaging problem caused by human error, not a security vulnerability. We are taking steps to prevent such incidents from happening again." Anthropic immediately pulled the problematic version and removed the old package, but it has not issued official takedown notices to the mirror repositories that have already spread across GitHub.

Two major leaks in a week

However, some believe this "accident" may allow other software developers and Anthropic's competitors to learn more about the development process behind the coding tool. According to a social media post linking to the code, views have exceeded 21 million since it was published at 4 a.m. on Tuesday. This also marks the second major data leak Anthropic has suffered in just one week: according to a recent Fortune report, descriptions of the company's planned artificial intelligence (AI) models and around 3,000 internal documents were recently discovered in a publicly accessible data cache.

Some analysts noted that the incident highlights the tension AI tools face between rapid iteration and open-source ecosystems. Although the source code has spread widely, Anthropic has not open-sourced it and still holds full copyright: developers may use it only as a research reference, and anyone who modifies it without authorization and commercializes it could face takedowns or legal action. Meanwhile, the "accidental" exposure of Claude Code's technical details also reflects the challenges AI companies face in a highly competitive environment, balancing tool capability, development efficiency, and security controls, and industry impact and regulatory discussion are likely to keep heating up. The leak has also triggered market concern.
