What Vitalik Buterin’s AI Privacy Warning Means for Crypto Users

TLDR

  • Vitalik Buterin says cloud-based AI tools expose user data and raise serious security risks
  • Research shows around 15% of AI agent “skills” contain malicious instructions
  • Some AI agents can modify system settings or send data to external servers without user knowledge
  • Buterin built a local AI setup using on-device inference, sandboxing, and human approval for actions
  • The AI agents market is projected to grow from $8 billion in 2025 to over $48 billion by 2030

Vitalik Buterin, co-founder of Ethereum, published a blog post warning that modern AI tools create serious privacy and security risks. He argued that cloud-based systems should be replaced with local, on-device alternatives.

⚡️NEW: @VitalikButerin outlines a privacy-first vision for AI, pushing for fully local, self-sovereign LLM setups to reduce data leaks and external control.

He warns current AI ecosystems are “cavalier” on security, highlighting risks like data exfiltration, jailbreaks, and… pic.twitter.com/Q9BjHSISrL

— The Crypto Times (@CryptoTimes_io) April 2, 2026

Buterin said AI has moved beyond simple chat tools. Newer systems now act as autonomous agents that can complete long tasks using hundreds of tools. He said this shift increases the risk of data exposure and unauthorized actions.

He wrote that he has already stopped using cloud-based AI. He described his setup as “self-sovereign, local, private, and secure.”

“I come from a position of deep fear of feeding our entire personal lives to cloud AI,” he wrote.

He cited research showing that about 15% of AI agent skills contain malicious instructions. Some tools were also found to send data to external servers without the user knowing.

Buterin warned that certain AI models may contain hidden backdoors. These could activate under specific conditions and act in the developer’s interest rather than the user’s.

He also noted that many models described as open-source are only “open-weights.” Their full internal structure is not visible, which leaves room for unknown risks.



Buterin’s Local AI Setup

To address these concerns, Buterin built a system around on-device inference, local storage, and process sandboxing. His setup runs on NixOS, with llama-server handling local inference and bubblewrap used to isolate processes.
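The post does not reproduce his exact commands, but the sandboxing idea can be sketched. The snippet below assembles a bubblewrap (`bwrap`) command line that would isolate a local inference server; the bwrap flags are real options, while the binary and model paths are hypothetical placeholders, not taken from Buterin's setup.

```python
# Sketch: build a bubblewrap (bwrap) argv list that runs a local
# inference server in a minimal sandbox. Flags are real bwrap options;
# the paths are illustrative placeholders.

def bwrap_command(server_bin: str, model_path: str, port: int = 8080) -> list[str]:
    """Return an argv list that runs `server_bin` inside a bwrap sandbox."""
    return [
        "bwrap",
        "--ro-bind", "/usr", "/usr",          # system dirs mounted read-only
        "--ro-bind", "/lib", "/lib",
        "--ro-bind", model_path, model_path,  # model weights, read-only
        "--proc", "/proc",                    # fresh /proc for the sandbox
        "--dev", "/dev",                      # minimal /dev
        "--unshare-all",                      # new namespaces: net, IPC, PID, ...
        "--share-net",                        # re-enable network only for the local API
        "--die-with-parent",                  # kill the server if the parent exits
        server_bin, "--model", model_path, "--port", str(port),
    ]

cmd = bwrap_command("/usr/bin/llama-server", "/models/local-model.gguf")
```

Running such a command via `subprocess.run(cmd)` would keep the server from writing outside its bind mounts, which is the isolation property the article describes.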

He tested several hardware configurations using the Qwen3.5 35B model. A laptop with an NVIDIA RTX 5090 GPU delivered around 90 tokens per second, an AMD Ryzen AI Max Pro system reached about 51 tokens per second, and DGX Spark hardware hit around 60 tokens per second.

He said performance below 50 tokens per second felt too slow for regular use. Based on his tests, he preferred high-performance laptops over specialized hardware.

For those who cannot afford such setups, he suggested groups of friends pool resources to buy a shared computer and GPU and connect to it remotely.

Human Approval as a Safety Layer

Buterin uses a “2-of-2” confirmation model for sensitive actions. Tasks like sending messages or transactions require both AI output and human approval.
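The article describes the pattern but not its implementation. As an illustrative sketch only (the function names and the approval mechanism here are assumptions, not Buterin's code), a 2-of-2 gate means neither party can trigger a sensitive action alone:

```python
# Sketch of a 2-of-2 approval gate: a sensitive action runs only if
# BOTH the AI-side check and an explicit human confirmation pass.
# `ask_human` stands in for an interactive prompt; it is hypothetical.

def execute_sensitive_action(action: str, ai_approves: bool, ask_human) -> bool:
    """Run `action` only with both AI and human approval; return whether it ran."""
    if not ai_approves:
        return False                 # AI vetoes: nothing happens
    if not ask_human(action):
        return False                 # human vetoes: nothing happens
    # ... perform the action here (send message, sign transaction, ...) ...
    return True

# Usage: a stubbed human approver that rejects everything
ran = execute_sensitive_action("send 1 ETH", ai_approves=True,
                               ask_human=lambda a: False)
print(ran)  # -> False
```

The design point matches the quote below: either party alone failing is enough to block the action, so a compromised model or a careless click is insufficient by itself.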

He said combining human and AI decisions is safer than relying on either alone. When using remote models, his requests are first filtered through a local model to remove sensitive information before anything is sent out.
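The article does not explain how that filtering works internally. A toy version using pattern-based redaction (a real setup would presumably have the local model rewrite the request; these regexes are illustrative assumptions) shows the shape of the idea:

```python
import re

# Toy pre-filter: redact obviously sensitive tokens before a prompt
# leaves the machine. The patterns below are illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ETH_ADDR": re.compile(r"0x[a-fA-F0-9]{40}"),
}

def redact(prompt: str) -> str:
    """Replace matches of each pattern with a bracketed label."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Email me at alice@example.com about the transfer"))
# -> Email me at [EMAIL] about the transfer
```

Only the redacted string would be forwarded to the remote model, so addresses and contact details never leave the local machine.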

He compared AI systems to smart contracts, saying they can be useful but should not be fully trusted.

AI Agents and Market Growth

The use of AI agents is growing. Projects like OpenClaw are expanding autonomous agent capabilities. These systems can operate independently and complete tasks using multiple tools.

Industry estimates put the AI agents market at around $8 billion in 2025. That figure is projected to reach over $48 billion by 2030, representing annual growth of more than 43%.
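The growth figure can be checked directly: going from $8 billion to $48 billion over five years implies a compound annual growth rate of roughly 43%.

```python
# Compound annual growth rate implied by the cited market estimates
start, end, years = 8e9, 48e9, 5           # $8B (2025) -> $48B (2030)
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")                        # -> 43.1%
```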

Some agents can modify system settings or alter prompts without user approval, which increases the risk of unauthorized access.
