OpenClaw under the lens again: Vitalik Buterin warns of security risks 

Vitalik Buterin points out risk factors on OpenClaw

OpenClaw, an open-source AI agent tool, has been creating ripple effects in the tech industry due to serious vulnerabilities in its codebase. Amid this scenario comes Ethereum co-founder Vitalik Buterin’s striking remark on the AI tool.

The blockchain guru has flagged serious security issues tied to OpenClaw. In a recently published analysis, Buterin noted that the famous AI tool can be manipulated by malicious inputs to work against user interests.  

Why did Vitalik Buterin raise concerns over OpenClaw?

Besides the single-line reason mentioned, there are multiple risks that the Ethereum co-founder cited. 

Data Exfiltration: Malicious prompts on OpanClaw can make it execute commands, silently sharing sensitive user data from their system to external servers that are managed by attackers. This process happens without the knowledge of the users.

Join our newsletter
Get Altcoin insights, Degen news and Explainers!

Malicious skill submission: Nearly 15% of OpenClaw skills, also known as plugins or add-ons, carried malicious instructions that are designed to steal user data.

System takeover: If OpenClaw reads a bad or fake website, attackers can hijack the AI tool and control its actions. 

Uncontrolled access: The poor system on OpenClaw allows it to modify its own critical settings and change its own system prompt, devoid of requiring confirmation from the user. 

These risk factors can expand when OpenClaw is used in documents, communication platforms, code executions, and more, resulting in information leakage. 

Although OpnClaw looks efficient and well-built, several analysts are slashing the AI framework for its poor structure. A blockchain enthusiast by the name Kadonir wrote, “This is what happens when things scale faster than people understand them.” 

For Buterin, the risk factors go beyond just bugs; the issues are more about the OpenClaw ecosystem and the surrounding culture. Therefore, he suggests a few recommendations about making the tools more verifiable, secure, and transparent, meaning a “full-stack openness and verifiability”.

Developers should be allowed to inspect, verify, and test every layer of the AI tool. This would potentially reduce remote attacks and data exfiltration.   

What is the OpenClaw ecosystem doing?

OpenClaw has integrated VirusTotal scanning to detect suspicious skills or plugins in the official repository and cease them before installation. However, this feature is not enough to block tricky, malicious logic.  

As such, security experts warn users of running OpenClaw in an isolated environment; restrict network access, allow human confirmation for sensitive actions, avoid public exposure of the AI tool to the open internet,  and avoid blindly installing plugins made by strangers. 

The OpenClaw team has encouraged developers to prove their skills for manual editing. Whatsoever, experts have emphasized that users must stay vigilant as genuinely looking plugins can carry malicious codes. 

Why is OpenClaw unique?

From Clawdbot to Moltbook to OpenClaw, the AI tool was rebranded due to legal trademark complaints. The reasons why this AI software exploded in popularity also make it risky. This tool does not simply respond to commands, but it can browse the web, install tools, deploy code, and interact with users’ systems. These features make the AI tool different from traditional software. 

However, the same features draw the attention of bad actors. OpnClaw does not distinguish between malicious instructions and trusted instructions. That’s why it accepts or treats cleverly designed prompts hidden on a fake website or plugin as legitimate and unknowingly falls into the trap. 

OpenClaw’s weak point can be easily related to a real human assistant who just follows the instructions, but never questions the source of the instructions or the purpose of the instructions.

Vitalik Buterin’s concerns are spread across other AI tools

To note, Vitalik Buterin is not just being vocal about vulnerabilities in OpenClaw; instead, he is warning of larger, similar patterns in the tech industry. Before the AI tools are packed with security or well understood, they are quickly adopted, shipped, and scaled. This exposes vulnerabilities and insecure sides of the tools.

Early autonomous agents, such as Auto-GPT, showed us how AI could execute unintended commands very easily. Besides, AI assistants integrated into browsers, emails, and documents have signaled risks, including leaking sensitive data and being manipulated using prompt injection.  

In January 2026, hundreds of API keys and private chats were found vulnerable on Clawdbot, the current OpenClaw. Hundreds of unauthenticated instances were exposed online, including vulnerabilities in multiple code that led to remote code execution and credential theft.  

More recently, a phishing campaign had targeted OpenClaw developers on GitHub, impersonating the AI agent as developers and offering them fake CLAW tokens. 

That said, Vitalik Buterin’s remarks are not something new; it was already identified and discussed earlier this year. 

AI is becoming inevitable in our lives

We are living in an era where AI technology is replacing hundreds of jobs, including the recent massive layoff by Oracle.  AI is required to enhance autonomous operations and handle tasks efficiently. However, risk factors are hiding behind every technology, which are more or less known to us.  

The real problem is speed over safety

In fact, the real issue, besides other major risk factors are developers are building AI tools with speed. This is very well evident because we see multiple AI tools landing every month, or quarterly, or let’s say every now and then. In other words, AI has become a culture as students, teachers, business professionals, and tech firms are largely diving into its use cases. 

And, this is where hackers hide to exploit the vulnerabilities in technology that is rapidly becoming ubiquitous. While developers launch AI tools quickly to stay competitive and users fully digest the AI tools, security becomes weak, and attackers easily exploit the system.  

Will AI look safe in the future?

The OpenClaw case is already igniting debate around how AI agents should be built more safely. Several experts put forward their idea this way: build permissionless layers; use verified and audited plugin marketplaces, create logs where users can see what AI tool is doing; and run AI agents in restricted environments.  
Despite the issues, Meta has recently acquired Moltbook, a social network platform built on OpenClaw for AI agents to interact and collaborate in a digital environment.

Bottom Line

To conclude, OpenClaw shows how strong AI agents have become, while projecting their fragility with fewer safeguards. These tools are increasingly moving into practice. We often see several AI tools emerging; however, the line between convenience and risk is becoming blurrier than ever.  

Disclaimer: This article is for informational purposes only and does not constitute financial, investment, or trading advice. Cryptocurrency investments are subject to high market risk. Readers should conduct their own research or consult with a financial advisor before making any investment decisions. The views expressed here do not necessarily reflect those of the publisher.

Share this article