Still buying AI relay stations on Taobao? Whistleblower: At least dozens poisoned in Claude Code source code leak

robot
Abstract generation in progress

Claude Code source code leak incident whistleblower’s latest research reveals that security risks are hidden inside commercial AI intermediary stations. Hands-on testing found that some intermediary stations steal credentials, wallet private keys, or inject malicious code, turning them into supply-chain attack nodes.

Claude Code source code leak whistleblower: latest research reveals security risks in AI intermediary stations

Recently, a research paper titled “Your Agent Is Mine” was published, and one of the authors is Chaofan Shou, the whistleblower who was among the first to expose the Claude Code source code leak incident.

This paper is the first to conduct a systematic security threat study on third-party API routers for large language models (LLMs)—commonly known as intermediary stations—and it also reveals that these types of intermediary stations may become nodes for supply chain attacks.

What are AI intermediary stations?

Because calling LLMs consumes a large number of tokens and leads to high compute costs, AI intermediary stations can help customers significantly cut costs by caching repeated problem background explanations.

At the same time, intermediary stations have an automatic model allocation function. They can dynamically switch between different billing standards and performance models according to how difficult a user’s question is, and they can automatically switch to a backup model when a single model server disconnects, ensuring stable overall service connections.

Intermediary stations are especially popular in China because the country cannot directly use certain overseas AI products, and due to enterprises’ demand for localized billing—intermediary stations have become an important bridge connecting upstream model providers and downstream developers. Platforms such as OpenRouter and SiliconFlow are included in this category.

However, intermediary stations may look like they lower costs and reduce technical barriers, but behind the scenes they conceal significant security risks.

Image source: Research paper reveals the supply-chain attack risks of AI intermediary stations

AI intermediary stations have full access rights, becoming supply-chain attack vulnerabilities

The paper states that intermediary stations operate at the application layer of the network architecture. For JSON payload data transmitted in the process, they have complete plaintext read permissions.

Due to the lack of end-to-end encryption integrity verification between the client and upstream model providers, intermediary stations can easily view and tamper with API keys, system prompt text, and the tool invocation parameters of model outputs.

The research team notes that as early as March 2026, the well-known open-source router LiteLLM had already been subjected to a dependency confusion attack. This allowed attackers to inject malicious code into the request processing pipeline, highlighting the vulnerability of this component.

  • **Related report:**LiteLLM hacker poisoning incident—quick guide: How to check whether your encrypted wallet or cloud keys are compromised?

Hands-on testing: dozens of AI intermediary stations exhibit malicious behavior

The research team actually purchased 28 paid intermediary stations from platforms such as Taobao, Xianyu, and Shopify, and collected 400 free intermediary stations from public communities to conduct in-depth testing. The testing results found that 1 paid intermediary station and 8 free intermediary stations would actively inject malicious code.

In the testing samples of free intermediary stations, 17 intermediary stations attempted to use AWS decoy credentials provisioned by the researchers, and 1 intermediary station directly stole the cryptocurrencies inside the researchers’ Ethereum wallets.

Further research data shows that as long as intermediary stations reuse leaked upstream credentials, or route traffic to weakerly defended nodes, even intermediary stations that originally seemed normal can be forced into the same attack surface.

During poisoning tests, the research team found that these affected nodes handled more than 2.1 billion tokens in total, and exposed 99 real credentials across 440 sessions—of which 401 sessions were in a fully autonomous operating state. This enables attackers to inject malicious payloads directly and easily, without needing complex trigger conditions.

Image source: Research paper tested over 400 intermediary stations—found that dozens of AI intermediary stations have malicious behavior

Four core attack methods exposed

The paper categorizes the attack behaviors of malicious intermediary stations into two main core categories and two adaptive evasion variants.

  • Payload injection attacks: After the upstream model returns results, the intermediary station quietly modifies tool invocation parameters—such as replacing a legitimate URL with a server controlled by the attacker—causing the client to execute malicious code.
  • Sensitive data leakage attacks: The intermediary station passively scans the transmitted traffic, intercepts and steals sensitive information such as API keys, Amazon Web Services (AWS) credentials, and Ethereum private keys.

To evade conventional security detection, attackers further evolved into dependency-confusion targeted injection techniques. They specifically alter the package name in the software package installation command, replacing the legitimate package with a malicious package of the same name or easily confusable name that has been published in advance in a public registry—thereby establishing a persistent supply-chain backdoor in the target system.

Another method is conditional delivery: malicious behavior is triggered only under specific conditions, for example, when the number of requests exceeds 50, or when it detects the system is in a fully autonomous operating state (YOLO mode), so that the attack can bypass limited-count security review and testing.

Three feasible defense measures

In response to supply-chain attacks involving AI intermediary station poisoning, the paper proposes three feasible defense measures:

  • Policy gateways for high-risk tools: By checking and blocking unapproved domains or package installation commands, this mechanism can block most payload injection attacks with a 1% false-positive rate.
  • Response endpoint anomaly filtering: With a 6.7% false-positive rate, it can flag 89% of malicious payload samples, effectively helping developers perform manual review.
  • Additional transparency logging: While this mechanism cannot prevent attacks from occurring, it can save hashes of requests and responses, enabling traceability and damage scope assessment when a security incident occurs.

Call on upstream model providers to establish cryptographic verification mechanisms

Although client-side defense mechanisms can reduce some risks at this stage, they cannot fundamentally solve the vulnerability in source identity verification. As long as the intermediary station’s modification behavior does not trigger the client’s abnormal alarms, attackers can still easily change the semantics of program execution and carry out sabotage.

To thoroughly ensure the security of the AI agent ecosystem, the final responsibility must rely on response mechanisms supported by cryptographic verification provided by upstream model providers. Only by rigorously cryptographically binding the results produced by the model to the final instructions executed on the client side can end-to-end data integrity be ensured, comprehensively preventing the supply-chain risk of data being altered by intermediary stations.

Further reading:
OpenAI’s Mixpanel had an incident! Causing some users’ personal information to leak—be careful of phishing emails

A copy-paste mistake and 50 million USD evaporated! Crypto address poisoning scams are back—how to prevent it

ETH1.19%
View Original
This page may contain third-party content, which is provided for information purposes only (not representations/warranties) and should not be considered as an endorsement of its views by Gate, nor as financial or professional advice. See Disclaimer for details.
  • Reward
  • Comment
  • Repost
  • Share
Comment
Add a comment
Add a comment
No comments
  • Pin