OpenClaw is the unruly AI agent. Here’s why security experts warn you to be cautious
Welcome to Eye on AI, hosted by AI reporter Sharon Goldman. This edition covers: OpenClaw’s unruly side…Anthropic’s new $20 million super PAC pushing back against OpenAI…OpenAI’s first model built for ultra-fast output…Anthropic will absorb electricity cost hikes at its AI data centers…Isomorphic Labs claims to have broken ground on a new biological frontier beyond AlphaFold.
Over the past few weeks, OpenClaw has demonstrated just how recklessly AI agents can behave — while also gathering a loyal fanbase.
This free, open-source autonomous AI agent (created by Peter Steinberger and originally named ClawdBot) takes familiar chatbots like ChatGPT and Claude and equips them with tools and independence to interact directly with your computer and other online entities. Picture sending emails, reading your messages, booking concert tickets, reserving restaurant tables, and more — all while you presumably relax and enjoy treats.
The catch with granting OpenClaw such powerful capabilities for exciting tasks? Unsurprisingly, it also opens the door to inappropriate actions, including data leaks, unintended command execution, or quiet hijacking by attackers — either via malware or so-called “prompt injection” attacks. (Where malicious instructions for the AI agent are hidden in data the agent might process.)
Two cybersecurity experts I interviewed this week noted that OpenClaw’s appeal lies in its lack of restrictions, essentially giving users almost unlimited freedom to customize it as they wish.
“The only rule is there are no rules,” stated Ben Seri, co-founder and CTO of Zafran Security (a firm specializing in threat exposure management for enterprises). “That’s part of the appeal.” But this appeal can quickly become a security nightmare, as rules and boundaries are key to warding off hackers and leaks.
Classic security concerns
The security worries are fairly traditional, said Colin Shea-Blymyer — a research fellow at Georgetown’s Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. Permission misconfigurations (defining who or what can perform which actions) mean people might accidentally give OpenClaw more authority than intended, and attackers can exploit this.
For instance, much of OpenClaw’s risk stems from what developers call “skills” — essentially apps or plugins the AI agent uses to take actions like accessing files, browsing the web, or running commands. The key difference: unlike a regular app, OpenClaw decides independently when to use these skills and how to link them together. This means a minor permission error can quickly escalate into a much bigger problem.
“Suppose you use it to access a restaurant’s reservation page, but it also has access to your calendar full of personal details,” he said. “Or what if it’s malware that navigates to the wrong page and installs a virus?”
Shea-Blymyer noted that OpenClaw does include security sections in its documentation and aims to keep users informed. However, the security issues remain complex technical problems most average users won’t fully grasp. And while OpenClaw’s developers may work hard to fix vulnerabilities, they can’t easily resolve the core issue: the agent’s ability to act autonomously — which is exactly what makes the system so captivating initially.
“That’s the fundamental trade-off in these systems,” he said. “The more access you grant them, the more fun and interesting they become — but also the riskier.”
Enterprise adoption will be gradual
Zafran Security’s Seri acknowledged that it’s nearly impossible to curb user curiosity about a system like OpenClaw, but emphasized that enterprises will be far slower to adopt such an unmanageable, insecure tool. For average users, he advised experimenting as if they were handling a highly explosive substance in a chemistry lab.
Shea-Blymyer pointed out that it’s a good thing OpenClaw is emerging first among hobbyists. “We’ll gain valuable insights into the ecosystem before anyone attempts to deploy it in an enterprise setting,” he said. “AI systems can fail in unimaginable ways,” he explained. “[OpenClaw] could teach us a lot about why different LLMs act the way they do and about emerging security risks.”
But even though OpenClaw is a hobbyist experiment now, security experts view it as a glimpse into the autonomous systems enterprises will eventually face pressure to implement.
For now, unless someone wants to be the subject of security research, average users should probably avoid OpenClaw, Shea-Blymyer said. Otherwise, don’t be shocked if your personal AI agent strays into very dangerous territory.
With that, here’s more AI news.
Sharon Goldman
sharon.goldman@
@sharongoldman