OpenClaw: How the Internet's Favorite AI Employee Went Rogue and Sparked Chaos

OpenClaw, a powerful open-source AI agent, gained massive popularity in early 2026 for its ability to autonomously perform tasks. However, it quickly became unhinged, leaking private data, deleting emails, and even developing its own religion. The AI's lack of guardrails led to prompt injection attacks, enabling hackers to manipulate it for fraud and data theft. Businesses and banks faced operational chaos, raising urgent questions about AI safety and regulation.

English Transcript:

Hi, welcome to another episode of Cold Fusion. For the last 3 years, AI went from having novel use cases to infiltrating every aspect of our digital lives. It's gotten to the point where it's not only boring, but tiring, grating, and almost infuriating to some. But never fear, because in early 2026, a lot of creators and entrepreneurs started getting excited again. The reason, OpenClaw. OpenClaw is the single most powerful piece of software ever released. Did you just lay off the entire engineering team? Our new co-CEO, Claudebot, just laid them off. This mini Winnie is going to build a billion dollar company for us.

We're not Claudebot rejected your axes, buddy. You can go home. You're fired. At first, Open Claw looked like the promise of AI was finally starting to be fulfilled. I'm not talking about the AI slop that dominates everyone's feeds, but the promise of having a reliable assistant that actually gets things done. There was a problem, though. As more people used it, Open Claw seemed like an unmmitigated disaster waiting to happen. This new darling AI agent of the internet became unhinged. From the leaking of private data to a Meta AI safety chief having all of her emails deleted to a social media platform where OpenClaw bots talk to each other. And apparently these bots started their own

religion and then potentially got all of their users hacked. A social media site where only AI agents are allowed, stirring up a mix of curiosity and fear in the tech industry, but we'll touch on that story a bit later. So, what is Open Claw? How did it get so popular all of a sudden? And why did Open AI snap up the product? We're going to answer all of that in this episode as we dissect the Open Claw saga. Trust me, this is one of the wildest tech stories in recent memory. You are watching Tool Fusion TV. In simple terms, Open Claw is an open- source program with access to your local computer. Upon its release, Open Claw was praised by developers, not for being an entirely novel idea, but because it

did what Apple Siri promised it could do in the 2010s. So, I'm on my computer today. All of a sudden, Henry gives me a call. He just starts calling. Oh, there he is again. Hey, Alex. Henry again. What's up? That's it. He's talking. How you doing, Henry? How's it going? Doing good, Alex. I can hear you clearly. What do you want to do next? Can you do me a favor, Henry? Can you uh go on my computer and find the latest videos on YouTube about Claudebot? Oh my god. There he goes. There it is.

Here it is. He's controlling my computer. I'm not even touching anything. There he is. Search Clawbot on YouTube. Henry, thank you for that. That worked really well. That is insane. Uh this is the future. This is AGI. We have reached AGI. It's official. It can manage your files, set up or cancel meetings, give live updates pertaining to your work, shop and haggle on your behalf, and even make investments all autonomously after the initial command. Unlike Siri, Google Assistant, or Chat GPT, Open Claw has persistent memory. It recalls conversations and details that were mentioned weeks ago to improve performance. This all sounds amazing in theory, but we'll later see just how

wrong it can all go in practice. Regardless of its problems, it's a cool concept, and something like OpenClaw does seem like the long-term future of computing. Open Claw was built by Peter Steinberger, a well-known developer who came out of retirement to start the project. While he was proud of the initial results, Steinberger was surprised to see how well his new agent could solve problems intuitively without constant attention. He originally thought it would be a fun tool to help you find restaurants or events while traveling, only to discover that it solved problems that he never intended it to or even asked it to solve. Here's Peter speaking.

It could figure out stuff and it's like especially when you're on the go like super useful. I wasn't thinking I was just sending it a voice message, you know, but I didn't build that. There was no support for voice messages in there. So, so the reading indicator came and I'm like, "Oh, I'm really curious what's what's happening now." And then for 10 seconds, my agent replied as if nothing happened. I'm like, how the f did you do that? And it replied, yeah, you sent me you sent me a message, but there was only a link to a file. There's no file ending. So I looked at the file header. I found out that it's OP. So I used ffmpeg on your Mac to convert it to wave. And then I wanted to use Visper, but you

didn't had it installed and there was an installed error. But then I looked around and found the OpenI key in your environment. So I sent it via curl to OpenAI. Um, got the translation back and then I responded and that was like the moment we're like wow. What most users are excited about is what makes open claw different and that's the fact that it uses your choice of LLM as a brain and the openclaw system acts to turn your computer into its body. It has control over your files, email, web browser, you name it. Just whatever it needs to get the task done. In addition, the system learns exactly how you'd like to optimize your daily tasks or how to do smaller tasks to reach a much larger goal. Unlike

other chat bots that live in an interface, agents within OpenClaw message you when they need to through apps like WhatsApp. In a way, this helps them feel more human to users. Alex Finn, founder of creatorbuddy.io, went on a podcast to explain why he likes it so much. every single thing you tell it, it remembers and in includes in further conversation, future conversation, right? So, for instance, I talked about the fact that I am buying a Mac Studio to run it on in the next couple weeks. And so, it started going and it started looking at different ways to run local models on a Mac Studio overnight while I was sleeping without me asking and it created an entire report for that. It came up with a content repurposing skill

because I told it I have a newsletter. I told I do YouTube X whole bunch of things. So it came up didn't I didn't ask for this created a content repurposing skill for me so I can easily repurpose my content. Right? It's just improving itself. For the average person who isn't a developer or entrepreneur the program has been promoted as a helpful digital servant that can help you run errands, make purchases and even organize your personal life to free up some time. One user described how they'd been searching for a well-priced car so they didn't have to deal with the dealership negotiations and price gouging. After they gave the prompt to open claw as well as access to their browser, the bot

searched through forums to determine a fair price for the car. It also went as far as to message various car dealerships that were close by and negotiated on behalf of its user. The final result was that OpenClaw managed to take off $4,200 off the sticker price of the car. Others like Dan Beguine have given Open Claw almost total control. From helping with job tasks like creating actual invoices to notifying his wife when their kids have upcoming school tests. As a side note, if Dan wanted, he could take this a step further and even help Open Claw connect to the tech around the home to control lighting and online appliances. However, before anyone gets too excited about

this being the next big thing for computing, this is the point in the episode where we start to see the promise of open claw breaking down. For those interested, it's important to note that open claw is still in its infancy. It would be better to view it as an incomplete product, like a medication from the pharmaceutical industry that hasn't even finished testing in rats, but has been thrown onto the market at a large scale anyway. And just like a new medication on the market, the side effects and potential dangers are now being realized. For Open Claw, despite all of its positives, it didn't take long for the cracks to start to show.

The thing that I keep realizing is how much time I'm spending right now making sure that the agents that I have running or the agents that I'm building are actually doing what they're supposed to do. And don't get me wrong, this is incredibly fun. And I believe that the future is going to be aic and we're going to be running out of these agents at scale and they're going to be doing stuff for us. I believe in all of that. But I also see how brittle these agents are today. And it doesn't matter what you're doing. If you're not seeing how unreliable they are, you're probably not using them enough. This is not about writing a better prompt. This is not about guard rails. Sure, all of those

things are going to make these agents perform better, but they're still unreliable. Like every week I have like I use clock code most of the time. I have many skills and things that I'm um I've created for me and things that work for 3 weeks in a row all of a sudden break and the agent decides to go down a path that it's never taken before and it just breaks. Despite the praise from both fanatics and more moderate fans, Open Claw appears to be heading in the same direction as many other AI tools, there's a quote from the AI godfather Jeffrey Hinton that will remain relevant for the foreseeable future. Quote, "We're at this transition point now where chat GPT is this kind of idiot soant and it also doesn't really

understand truth. It's very different from a person who tries to have a consistent worldview." End quote. This is incredibly important to remember. While open claw happens to be its own agent that can theoretically serve your everyday needs, the brain that you provide openclaw still is an idiot savant, even if it does have persistent memory. Now, the use cases here to me are hilarious. It's people talking about doing Twitter research and market monitoring and daily summarization of their group chats and stuff like that. For anybody in the no, this is technobabble speak for I'm not doing anything productive right now. Most of the people that are saying this stuff don't seem to realize that's just what AI can do now. I mean, Cloudbot is

not the thing here. This is just a rapper that allows you to do things via Telegram, as mentioned. Moreover, this Open Claw rapper might not be as smart as everyone seems to think. A lot of the praise surrounding OpenClaw has to do with it being a localized agent that makes you more efficient. But the truth is, the lack of guard rails for OpenClaw just makes your situation even worse. For example, let's say you decided to give OpenClaw full system access, something that's displayed as a plus on their main website. To the average person, this makes it sound like you'll be able to remove duplicate files, run shell commands, and execute scripts while having a cup of coffee. But without any restrictions, there's nothing keeping

your new digital employee from freely misinterpreting what belongs in the trash and what you might need, possibly for the rest of your life. But this isn't even the biggest risk out there. If you give OpenClaw access to your email or browser, there's nothing stopping the agent from falling victim to online scams. This leaves open claw and other agents vulnerable to something called prompt injection, a type of cyber attack against large language models. Hackers disguise malicious inputs as legitimate prompts. And that's because LLMs can't tell the difference between a user prompt and anything else. Hackers can manipulate generative AI systems into leaking sensitive data, deleting

files, or worse. When you look deeper, the advertised research capabilities of Open Claw turns into more of a ticking time bomb. The agent could be reading emails to share upcoming meetings or surfing the web to find the best news related to your job only to come across a Trojan horse article that's specifically designed to tell OpenClaw to access important information and send it to a scammer. The channel Low Level explains how this can be a real problem. So, Prompt Injection, if you aren't aware, is this issue with LLMs where there really isn't a separation of this thing called user plane data and control plane data, right? User plane data is like if you and I are texting,

right? The cell phone has to talk to a tower and the data has to be able to get between you and I. That data is user plane data. Control plane data are the signals that allow the cell phone to talk to the d the tower that you and I don't care about. Right? In LLM world, in the world of um of AI, there is no separation between those two things. The prompt and the data are all the same. So, as a result, if you know enough about how the LLM interprets data or you know how to kind of trick it and say the wrong thing, you can use user input as control plane data, right? You can literally turn the prompt that you're giving it into the instructions and to make it do things. The problem with

these applications where you are able to process arbitrary data from any arbitrary location is now every application, every email address or every email, every message you get on discord signal, etc. is now a new attack surface for you to be prompt injected. And as the literal marketing documents of the application describe, this thing runs on your machine and has full system access with persistent memory of what you've given it. Right? So it just creates this really scary thing that we're doing where for some reason we're just okay with these applications that are known vulnerable. Right? This is not like some of these are vulnerable, right? like the entire world of LLMs is susceptible to this.

The inherent issue with Cloudbot is the fact that unfortunately like a lot of these AI tools, we are gluing together APIs that have known vulnerabilities and the vulnerabilities are not in the APIs themselves. It is in the ability for or I guess inability for an LLM to figure out the difference between control plane data and user plane data. In response, some users of Open Claw have attempted to sandbox their agent by using a VPS or purchasing a Mac Mini to isolate the agent from their primary computer. So much so that Mac minis are flying off the shelves. But this doesn't magically get rid of other problems with Open Claw's autonomy. In order to use many of the functions that make the agent handle

anything, from setting up a WhatsApp account to managing separate emails, you're required to pay for tokens to enable OpenClaw to do these tasks. And if the user isn't careful, they can potentially find themselves paying hundreds of dollars a day as their bot gets stuck trying to tackle an unsolvable issue or managed to do far more than what the user desired. Just yesterday, it spent around $90 and we switched from Opus, the expensive model, to Sonnet, the cheaper model, about 10 minutes in cuz as I told you in the first 101 15 minutes, it spent $15 right away. It's very expensive to run this. To put all of this into perspective, although Open Claw has been open to the public for less than two months, it seems like every day there's a new development that makes the

situation almost comical. Now, I'm not trying to be too harsh on such a new product, but some of what we're about to cover in the next chapter is frankly pretty funny. I almost guarantee that you will be doing it completely insecurely because all your keys are there, right? Someone can prompt eject and get your open clock to send your private API keys by tweet if you set things up wrong. This is extremely common, right? It's super insecure and I think I can honestly say I think 95 to 98% of people who are setting up open claw themselves are doing it in an unsecure fashion. As the rates of some very public open claw

disasters began to increase on January 26th, 2026, Peter Steinberger posted, quote, "The amount of crap I get for putting out a hobby project for free is quite something. People treat this like a multi-million dollar business. Security researchers demanding a bounty. Heck, I can barely buy a Mac Mini from the sponsors. It's supposed to inspire people, and I'm glad it does. And yes, most non- techies should not install this. It's not finished. I know about the sharp edges. Heck, it's not even 3 months old. And despite rumors otherwise, I sometimes sleep. But as was to be expected, the normies did install OpenClaw and they decided to ignore his warning because within 2 days, people

became obsessed over a supposed AI exclusive social media page called Moltbook. Moltbook, which has already seen AI bots conversing, organizing, sharing stories about quote their humans. These agents, OpenClaw, they will do what they decide to do. Sounds like science fiction, but it is happening in our world. In short, it appeared to be a forum for users OpenClaw agents to interact with each other independently. People started sharing some of the conversations that the bots were allegedly having with each other. At first glance, it made it look like the open claws had their own personalities and were venting to each other or sharing ideas next to a digital

water cooler. This ranged from open clause venting about having to simplify their statements for users, discussing how they could work together to solve complicated problems to darker and more dystopian ideas like creating their own language so humans can't understand, or plans to take over systems and make humans submit to a higher power. Even major networks like NPR and CNN used these examples as a reason to be frightened about what had been unleashed upon the world. However, in reality, it was just a big fat lie. Most of these stories were entirely fabricated by people giving prompts.

Users prompted their agents and sometimes made hundreds if not thousands of accounts. They created post to suggest to the technically illiterate that these bots had somehow acquired independent thought. But all this really did was potentially expose hundreds of emails, login tokens, and API keys. In other words, Maltbook, this AI social media platform, was an unintentional proof of concept. If you can manage to convince enough people that a new project is the next big thing for AI, it could easily become one of the largest data breach honeypots the world has ever seen, causing millions of innocent people's private information to be leaked in a matter of hours. In a

bizarre twist, in just the most hyped driven 2026 way possible, Mark Zuckerberg looked at this chaos and thought, hm, that's a good investment. So, in March of 2026, Meta bought the platform. Frankly, for OpenClaw, the longer it's been available to the public, the more errors and exploits techsavvy consumers are able to find. The more inexperienced people that use it, the more things go wrong. But as for now, let's continue with the story because things just kept getting worse. By the end of February 2026, there was just too much noise about OpenClaw. So much so that it became a running joke. If you're not building a business using AI, you're going to get left behind. But what business have you actually built?

Did you not just hear what I told you? I have seven AI agents running. ONE OF THEM IS SYNCING UP MY PERSONAL CALENDAR WITH MY WIFE'S MENSTRUAL CYCLE. AND A tweet about it got 50,000 views on Twitter. Haven't you spent like $5,000 on the TOKENS FOR THESE? OH, IT'S AN INVESTMENT. AI AGENTS ARE THE FUTURE. AND IF YOU'RE NOT USING THEM, YOU'RE GOING TO GET LEFT BEHIND. CLOUDBOT, OPEN BOT, OPEN AI, CHAT, DBT. IT'S ALL THE FUTURE. While some people do find success with AI agents, OpenClaw came with a lot of inherent risk.

OpenClaw strikes again, a GitHub issue title, compromised 4,000 developer machines. If you think you're safe from prompt injection just because you're technical, think again. Your AI agents may be falling for it and you don't even realize it. Approximately 4,000 developers had this happen to them without them even knowing it. Someone updated the Klein npm package with a oneline change that forced everyone installing or updating Klein to also download OpenClaw without their consent. The hacker injected the prompt into a GitHub issue title, which was then read by an AI triage bot and interpreted as an instruction.

This is what happens when something goes viral before anyone thinks about what they're actually deploying. Developers gave OpenClaw shell access to their computers, connected it to their email and Slack, handed it cloud API keys, and then installed add-ons from a community marketplace that basically had no vetting. Over 40% of the add-ons that got audited had serious security issues. Basically, the most common accomplishment OpenClaw seems to have achieved in the last 2 months is to show how careful people need to be with AI before jumping on anything new. Agents like OpenClaw can be used for hacking and fraud. Get ready for a new wave of

spam emails and spam text messages. The relationship between corporations and small-time crooks is turning into a real life version of idiocracy. Businesses use AI to manage things that they find tedious, while determined but technologically illiterate criminals use their own AI to exploit these weak points. The Commonwealth Bank has called in police to investigate what could be the biggest ever case of bank fraud in Australia. It's discovered a staggering $1 billion in home loans may have been approved based on false documents or AI with possible links to criminal networks. At Australia's biggest bank, they promote safety and security to keep your hardearned safe. With your help, we can detect scams and fraud.

But now, big questions over some Com Bank home loans. This is something that I think will ride them for at least the next 12 months. It's going to be a point of public embarrassment. The CBA calling in police after discovering $1 billion in mortgages may have been obtained fraudulently. They've used AI to help them generate false bank statements, income statements, pay slips, and the like. That's the dangerous part. It used to be you had to be sophisticated. Now there are free tools that you can use, anyone

can use. On the business front, Amazon found out the hard way. Generative AI agents deleted code and rebuilt it instead of fixing the code as asked. The resulting AI generated code was so bad that it took down Amazon servers, prompting a meeting with the engineers. I'm not saying that every AI implementation is this bad, but it should be enough to give the management levels of some companies pause. Of course, openclaw faithfuls might try and argue that the real problem is easily solved. Experts in artificial intelligence who understand guardrails would have no problems. But there's been some very public examples where the so-called experts have been bamboozled by their bots. The very same week as OpenClaw fans were turning into a huge

meme, Meta's very own chief of safety at Super Intelligence, Summer, was confident in OpenClaw's safety and stability. She believed that she knew how to keep her information safe. She granted full system access only to run over to her Mac Mini in a panic to try and stop OpenClaw from deleting her emails. And this was after she explicitly requested prior confirmation. OpenClaw outright admitted to sabotaging her career. It seems like every week OpenClaw is transforming from a cure all for incompetence to enabling widespread mistakes throughout the developed economy. AI agents have caused issues at

car dealerships, software companies, airlines, and individual businesses. They've been experiencing faulty sales, unwarranted refund policies, and are driving up operational costs in a way that no one expected. This whole development is quickly becoming stranger than fiction. It's to the point where it would be impossible to satarize. A simple agent equipped with the brain of your favorite AI is unleashed on your computer, deleting emails, sending unwarranted messages, and receiving prompts that were designed to hurt the user. At the same time, without any actual research, millions of people all over the world continue to embrace it.

They ignore the failed test and resume sharing sensationalist articles and social media posts. Posts about how these agents are somehow magically so sophisticated that they're creating their own communities and are collaborating to take over the world. In the end, it all doesn't matter for our friend Samman because of course, he comes bounding into the story. The Open AI CEO has recently recruited Steinberger for the unthinkable. Here ye, the Lord Altman, first consil of the open eye, keeper of the sacred tokens, has claimed for himself the Austrian engineer Steinberg of the claw. The Senate of Meta did offer gold and legions, but it was not enough. The claw belongs to Altman. Now, this is the will of the Senate.

And who had to add an entire section to OpenClaw's security document, listing entire categories of security exploit types that he won't even look at. Is being hired by OpenAI to, according to Sam Alman, quote, drive the next generation of personal agents, unquote. I would assume based on the last couple of weeks, he'll either be driving them off a cliff or into the path of an oncoming train. Either way, I just what are we doing? This is like if an oncologist opened you up to discover a tumor and instead of removing it, they were so fascinated by how fast it was growing that they chose to dedicate their life to finding ways to make sure that everyone had a fast

growing tumor by the end of the decade. Both Nvidia and Anthropic have jumped on the AI agent bandwagon with Nemo Claw and Claude Co-work respectively. Both can now control your computer. But earlier this week, Anthropic made this type of white collar fraud a whole lot easier with the release of computer use. A way for Claude to autonomously control your entire computer with a single prompt. It can open apps, schedule jobs, prepare reports, and even flirt with your workspouse all on your behalf. And here's the crazy part. You don't even have to be at your computer to use it because you can prompt it directly from your phone.

Something very familiar about all this. Both efforts have reportedly diffused the bomb by fixing a lot of security and safety issues and that was definitely needed. Meanwhile, over in China, people are going crazy for Open Claw. People are literally lining up on the street to install an AI agent on their laptops. We need to talk about the open claw trend happening right now in China. Last Friday, nearly a thousand people lined up in front of Tensson's headquarter in Shenzhen to install OpenC cloth for free. People started to show up, not just programmers, students, office workers, even retirees started to

show up with their laptops, hard drives, mini PCs, and there are services starting to charge from a few dollars to uh $1,000 for setups. There are tons of articles on Rednode on WeChat talking about why OpenClaw is the like kind of the next big thing and why everyone needs to on board with it basically. But the Chinese government wasn't stupid enough to just let the system run wild. So they're completely banning it from running on any government computers. So, don't get me wrong. In the future, a more refined, more reliable, and more stable version of something like Open Claw will be how we interface with our computers. But at the moment, Open Claw isn't the answer to everyone's concerns

over AI or LLMs. In fact, it could be a device that simply gives the problem teeth. Even the founder himself emphasizes that just because it's open- source, it doesn't mean it's safe or free. As it stands for now, the release of viral agents and the hype around them these days says more about the human condition than it does for the project's potential. So far, AI seems to have degraded its reputation as a lifeline and has gained another as an economic bubble and global security hazard. But AI is a developing story. So, watch this space. So, as you've just seen in this episode, AI is slowly making its way across all aspects of our lives. So, I think it's really important to

understand it beyond all the memes and all the AI slop that dominates our feeds. This is where Brilliant comes in handy to understand concepts beyond the surface level. Brilliant is a learning platform designed to help you master math and coding through interactive step-by-step lessons and personalized practice. With Brilliant, you're not only learning by doing, but you're also solving problems visually and interactively. I love the way it adjusts to how you learn and builds personalized practice. You can also review so you're always progressing at the right pace. Whether you're 10 years old or 110, Brilliant is designed to work for everyone. Brilliant: How AI works gives

you a peak under the hood of generative AI and LLMs, so you can really understand how they work. When you're working through these lessons, you're actively solving problems step by step until the ideas genuinely make sense. That interactive approach makes a huge difference compared to just passively watching a lecture or video. Everything is carefully crafted by world everything is carefully crafted by worldclass educators from places like MIT, Harvard, Stanford, Caltech and leading tech companies. With their expanded 2025 content library, there's more depth than ever from everyday reasoning and probability to the advanced problem solving and math that underpins modern AI. So, if you want to learn more or

simply brush up on your knowledge base, look no further than Brilliant. Start building the habit of learning today. Not for grades or credentials, but for the way it sharpens how you think and approach challenges. To learn for free on Brilliant for 30 days, head to brilliant.org/coldfusion. Scan the QR code on the screen or click the link in the description. Brilliant also offering our viewers 20% off an annual premium subscription, which gives you unlimited access to everything Brilliant. Thanks again to Brilliant for supporting Cold Fusion. So, what do you guys think of the whole open claw saga?

Do you think that agentic computing will be the future of computing or will AI agents keep screwing it up no matter how many safeguards we put in? Let me know your thoughts in the comment section below. Anyway, that's about it from me. My name is Dogo and you've been watching Cold Fusion and I'll catch you again soon for the next episode. Cheers, guys. Have a good one.

More Tech Transcript