Five days. That’s really all it took for Clawdbot — an open-source AI assistant that promises to actually do things on your computer, not just chat — to go viral, implode, rebrand (twice!) and emerge as OpenClaw, bruised but still breathing as a beloved crustacean.
If you blinked over the past few days, you may have missed crypto scammers hijacking X accounts, a panicked founder accidentally giving away his personal GitHub handle to bots and a lobster mascot that briefly sprouted a disturbingly handsome human face. Oh, and somewhere in the chaos, the AI developer Anthropic sent a polite email asking them to please, for the love of trademarks, change the name.
Welcome to OpenClaw. Formerly Clawdbot and briefly known as Moltbot, it’s the same AI assistant under a newer, sturdier shell. And boy, does this lobster have lore.
What even is OpenClaw? And why should you care?
Here’s the pitch that had tech X (the platform formerly known as Twitter) losing its mind: Imagine an AI assistant that doesn’t just chat; it does stuff. Real stuff. On your computer. Through the apps you use.
OpenClaw lives where you actually communicate, like WhatsApp, Telegram, iMessage, Slack, Discord, Signal — you name it. You text it like you’d text a friend, and it remembers your conversations from weeks ago and can send you proactive reminders. And if you give it permission, it can automate tasks, run commands and basically act like a digital personal assistant that never sleeps. Unlike its founder.
Created by Peter Steinberger, an Austrian developer who sold his company PSPDFKit for around $119 million and then got bored enough to build this, OpenClaw represents what a lot of people thought Siri should have been all along. Not a voice-activated party trick, but an actual assistant that learns, remembers and gets things done. (CNET reached out to Steinberger for comment on this story.)
OpenClaw doesn’t require any specific hardware to run, though the Mac Mini seems popular. The core idea is that OpenClaw itself mostly routes messages to AI companies’ servers and calls APIs, and the heavy AI work happens on whichever LLM you select: Claude, ChatGPT, Gemini.
Hardware becomes a bigger conversation only if you want to run large local models or do heavy automation; that’s where powerful machines like the Mac Mini come in. But it’s not a requirement.
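The "thin router" idea described above can be sketched in a few lines: the assistant itself does little AI work and simply forwards each incoming chat message to whichever hosted LLM you’ve configured. This is a minimal illustration of the pattern, not OpenClaw’s actual code; every name here (`Message`, `route`, the backend functions) is hypothetical.

```python
# Minimal sketch of a message router: the assistant forwards chat
# messages to a configured LLM backend, where the heavy lifting happens.
# All names are hypothetical stand-ins, not OpenClaw's real API.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Message:
    channel: str   # e.g. "whatsapp", "slack", "imessage"
    text: str

# Stand-ins for calls to hosted model APIs (Claude, ChatGPT, Gemini).
def call_claude(prompt: str) -> str:
    return f"[claude] reply to: {prompt}"

def call_gemini(prompt: str) -> str:
    return f"[gemini] reply to: {prompt}"

BACKENDS: Dict[str, Callable[[str], str]] = {
    "claude": call_claude,
    "gemini": call_gemini,
}

def route(msg: Message, backend: str = "claude") -> str:
    """Forward a chat message to the selected LLM and return its reply."""
    handler = BACKENDS[backend]   # pick whichever model the user configured
    return handler(msg.text)      # the AI work happens on the model's servers

print(route(Message("whatsapp", "What's on my calendar?"), backend="gemini"))
```

The point of the design is that the local piece stays lightweight; swapping Claude for Gemini is just a config change, which is why no particular hardware is required.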
The project launched about three weeks ago and hit 9,000 GitHub stars in 24 hours. By the time the dust settled late last week, it had rocketed past 60,000 stars, with everyone from AI researcher Andrej Karpathy to investor (and White House AI and crypto czar) David Sacks singing its praises. MacStories called it “the future of personal AI assistants.”
Then things got weird.
The rename that broke the internet (twice)
Then, last weekend, Anthropic slid into Steinberger’s inbox to point out that “Clawd” (the assistant’s name) and “Clawdbot” (the project name) were maybe just a little too similar to its own AI, Claude.
“As a trademark owner, we have an obligation to protect our marks — so we reached out directly to the creator of Clawdbot about this,” a representative from Anthropic said in an email statement to CNET.
By 3:38 a.m. US Eastern Time on Tuesday, Jan. 27, Steinberger made his call: “@Moltbot it is.”
What happened next, according to Steinberger’s posts on X and the old Moltbot blog, was like a digital heist movie, except everyone was a bot and the getaway cars were social media handles.
Within seconds — literally, seconds — automated bots sniped the @clawdbot handle. The squatter immediately posted a crypto wallet address. Meanwhile, in a sleep-deprived panic, Steinberger accidentally renamed his personal GitHub account instead of the organization’s account. Bots grabbed «steipete» before he could blink. He said both crises required him to call in contacts at X and GitHub to make fixes.
Then there was what the creators dubbed “the Handsome Molty incident.” Steinberger instructed Molty (the AI) to redesign its own icon. In one memorable attempt to make the mascot look “5 years older,” the AI generated a human man’s face grafted onto a lobster body. The internet turned it into a meme (a la Handsome Squidward) within minutes.
Fake profiles claiming to be “Head of Engineering at Clawdbot” shilled crypto schemes. A fake $CLAWD cryptocurrency briefly hit a $16 million market cap before crashing over 90%. “Any project that lists me as coin owner is a SCAM,” Steinberger posted on X, exasperated, to thousands of increasingly confused followers.
Capping the chaotic saga of the past week, the project has, as of Jan. 30, settled on the name OpenClaw: “Open” for open source, “Claw” for its lobster heritage. That’s the official rationale, anyway. By Steinberger’s own account, the real reason for ditching Moltbot was much simpler: He just didn’t like the name.
What made this AI tool go viral
Strip away the chaos, and OpenClaw is genuinely impressive.
Most AI tools work basically the same way: You open a website, type a question, wait for it to generate, copy the answer and paste it somewhere else. Rinse, repeat. OpenClaw wants to flip that script by putting the assistant inside your existing conversations. You’re already in WhatsApp or iMessage, so why not just text it like you’d text a coworker?
Don’t miss any of our unbiased tech content and lab-based reviews. Add CNET as a preferred Google source.
The killer features? There are three.
For one, persistent memory. OpenClaw doesn’t forget everything when you close the app. It learns your preferences, tracks ongoing projects and actually remembers that conversation you had last Tuesday.
There are also proactive notifications. It can message you first when something matters, such as daily briefings, deadline reminders and email triage summaries. You can wake up to a text saying, “Here are your three priorities today,” without having to ask the AI first.
Finally, there’s real automation. Depending on your setup, it can schedule tasks, fill forms, organize files, search your email, generate reports and control smart home devices. People reported using it for everything from inbox cleanup to research threads that span days, and from habit tracking to automated weekly recaps of what they shipped. The use cases seem to keep multiplying because once it’s wired into your actual tools (calendar, notes, email), it stops feeling like software and is just part of your routine.
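The proactive-notification pattern is simple enough to sketch: a scheduled job assembles a briefing from whatever the assistant remembers, then pushes it to you as a chat message instead of waiting to be asked. This is a hypothetical illustration; `MEMORY`, `daily_briefing` and `send_message` are made-up names, not anything from OpenClaw itself.

```python
# Hypothetical sketch of a proactive daily briefing: a scheduler fires
# this each morning, pulling from a persistent memory store and pushing
# the result into a chat channel. Names are illustrative, not OpenClaw's.

from datetime import date

MEMORY = {  # toy stand-in for the assistant's persistent memory store
    "deadlines": ["Ship v2 docs (Fri)", "Renew domain (Feb 3)"],
    "unread_email": 14,
}

def daily_briefing(today: date) -> str:
    lines = [f"Good morning! Briefing for {today.isoformat()}:"]
    for i, task in enumerate(MEMORY["deadlines"], 1):
        lines.append(f"{i}. {task}")
    lines.append(f"You have {MEMORY['unread_email']} unread emails.")
    return "\n".join(lines)

def send_message(channel: str, text: str) -> None:
    print(f"[{channel}] {text}")  # in reality: a WhatsApp/Slack/etc. API call

# A cron job or scheduler would trigger this each morning, unprompted.
send_message("whatsapp", daily_briefing(date(2026, 1, 30)))
```

The key inversion is that the assistant initiates the conversation; everything else is ordinary scheduling plus a messaging integration.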
Should you actually use this thing?
Time for real talk. OpenClaw is not a polished, enterprise-ready product with vendor support and compliance paperwork — which is something Steinberger admits. It’s a fast-moving, open-source project that just survived a near-death experience involving trademark lawyers, crypto scammers and catastrophically exposed databases. Whew.
So, through all this hoopla, you might be wondering whether OpenClaw is something you should actually try. Sure, it remembers information across weeks, works across apps and systems and sends proactive notifications. But it’s got rough edges. If you need something that “just works” without a fiddly installation, this isn’t the tool for you.
And you probably don’t want to take this on if you don’t want to think about — and don’t deeply understand — cybersecurity.
Key security risks to note
Security experts have raised red flags about OpenClaw’s safety as it grows in popularity. Because the agent is designed to run locally and interact with emails, files and credentials, even small setup mistakes can have big consequences.
In recent days, researchers spotted numerous publicly accessible OpenClaw deployments with little or no authentication, exposing API keys, chat logs and system access to anyone who stumbled across them.
Some of the most visible security concerns have been social rather than technical, including fake Clawdbot/Moltbot/OpenClaw downloads and hijacked accounts used to spread malware or scams. While developers have moved quickly to patch specific flaws, security analysts say OpenClaw’s turbulent debut highlights a larger issue facing AI agents: As they become more autonomous and more powerful, the security risks scale just as fast.
Roy Akerman, head of cloud and identity security at Silverfort, an identity security platform, said in an email to CNET that the risk of a tool like OpenClaw isn’t that it’s overtly malicious. What’s risky is that it continues to act under a legitimate human identity, which can blur the lines between a user and the machine acting on their behalf.
«When an AI agent continues to operate using a human’s credentials, after the human has logged off, it becomes a hybrid identity that most security controls aren’t designed to recognize or govern,» Akerman said. «Organizations shouldn’t try to block these tools outright, but they do need to change their posture, treat autonomous agents as identities, limit their privileges and monitor behavior continuously, not just logins.»
The little lobster that molted (and kept going)
According to Steinberger, “Molting is what lobsters do to grow.” They shed their old shell and emerge bigger: from Clawdbot to Moltbot to OpenClaw.
OpenClaw is the same software as Clawdbot, offering the same impressive engineering and vision of what personal AI assistants could be. But the past almost-120 hours forced it to grow up fast, dealing with security vulnerabilities, battening down authentication, and learning that viral success attracts not just users but scammers, squatters and, yes, intellectual property lawyers.
Through all of this, OpenClaw is still standing. Discord is still buzzing. GitHub stars keep climbing. And somewhere in Vienna (or maybe London), Peter Steinberger is probably still fending off DMs from people asking if he’s launching a crypto token. (He’s not. Please stop asking.)
Want to try OpenClaw yourself? Head to openclaw.ai for documentation, installation guides and, most importantly, a security checklist.
Just maybe use a spare laptop. And definitely don’t name your project after anyone’s trademarked AI model. Turns out that matters.