
Last month, Jason Grad issued a late-night warning to the 20 employees at his tech startup. “You’ve likely seen Clawdbot trending on X/LinkedIn. While cool, it is currently unvetted and high-risk for our environment,” he wrote in a Slack message with a red siren emoji. “Please keep Clawdbot off all company hardware and away from work-linked accounts.”
Grad isn’t the only tech executive who has raised concerns with staff about the experimental agentic AI tool, which was briefly known as MoltBot and is now named OpenClaw. A Meta executive says he recently told his team to keep OpenClaw off their regular work laptops or risk losing their jobs. The executive, who was granted anonymity to speak frankly, told reporters he believes the software is unpredictable and could lead to a privacy breach if used in otherwise secure environments.
Peter Steinberger, OpenClaw’s solo founder, launched it as a free, open source tool last November. But its popularity surged last month as other coders contributed features and began sharing their experiences using it on social media. Last week, Steinberger joined ChatGPT developer OpenAI, which says it will keep OpenClaw open source and support it through a foundation.
OpenClaw requires basic software engineering knowledge to set up. After that, it needs only limited direction to take control of a user’s computer and interact with other apps, assisting with tasks such as organizing files, conducting web research, and shopping online.
Some cybersecurity professionals have publicly urged companies to strictly control how their workforces use OpenClaw. And the recent bans show how quickly companies are moving to put security ahead of their desire to experiment with emerging AI technologies.
“Our policy is, ‘mitigate first, investigate second’ when we come across anything that could be harmful to our company, users, or clients,” says Grad, who is cofounder and CEO of Massive, which provides internet proxy tools to millions of users and businesses. His warning to staff went out on January 26, before any of his employees had installed OpenClaw, he says.
At another tech company, Valere, which works on software for organizations including Johns Hopkins University, an employee posted about OpenClaw on January 29 in an internal Slack channel for sharing new tech the team might try out. The company’s president quickly responded that use of OpenClaw was strictly banned, Valere CEO Guy Pistone tells WIRED.
“If it got access to one of our developer’s machines, it could get access to our cloud services and our clients’ sensitive information, including credit card information and GitHub codebases,” Pistone says. “It’s pretty good at cleaning up some of its actions, which also scares me.”
A week later, Pistone did allow Valere’s research team to run OpenClaw on an employee’s old computer. The goal was to identify flaws in the software and potential fixes to make it more secure. The research team later advised limiting who can give OpenClaw orders and exposing it to the internet only after password-protecting its control panel, to prevent unwanted access.
In a report shared with WIRED, the Valere researchers added that users have to “accept that the bot can be tricked.” For instance, if OpenClaw is set up to summarize a user’s email, a hacker could send that person a malicious email instructing the AI to share copies of files on their computer.
But Pistone is confident that safeguards can be put in place to make OpenClaw more secure. He has given a team at Valere 60 days to investigate. “If we don’t think we can do it in a reasonable time, we’ll forgo it,” he says. “Whoever figures out how to make it secure for businesses is definitely going to have a winner.”
