Anthropic's Mythos AI Raises Alarm Over Cybersecurity and National Security Risks

Anthropic's new AI model, Mythos, has sparked alarm due to its ability to detect and autonomously exploit vulnerabilities in computer systems. The model, part of a secretive project involving major tech companies, raises concerns about misuse by nation-states and criminal gangs. The US Treasury and Federal Reserve have summoned Wall Street leaders to discuss risks to the banking system. Anthropic's ethical stance and contract with the Pentagon have led to legal disputes, highlighting the lack of a comprehensive AI regulatory framework.

English Transcript:

If you look at the AI sector landscape more broadly, what you've seen is OpenAI's early dominance being challenged increasingly by Anthropic. Anthropic is drawing investor interest at a valuation of $800 billion or higher, making it close to, if not, the most valuable startup in the world. Turbocharging the hype is Anthropic's latest model, called Mythos. Only a small number of companies outside of Anthropic have access to it, and on a limited basis.

This limited release has been dubbed Project Glass Wing, and the participants include a who's who of Silicon Valley. We know that Apple, Google, Palo Alto Networks, CrowdStrike, the Linux Foundation, and Amazon are part of the Glass Wing project. It's a highly unusual arrangement. These are fierce competitors who are all trying to get the best model out there, and Anthropic is not only showing them IP that hasn't been released yet, they're allowing their competitors to potentially criticize some of the work they've done. The unorthodox rollout has generated a lot of publicity, and already there are reports of unauthorized users accessing the model.

This is terribly basic cybersecurity 101 practice that was missed here. We have been hearing about the theoretical threat that AI poses to cybersecurity, and what Mythos has proven is that the threat is a reality. There are big worries that if this falls into the wrong hands, it could essentially lead to some of the greatest cyber risks humanity has ever faced. With massive potential for havoc and the stakes so high, is tech's era of "move fast and break things" finally over? Well, Bloomberg has learned that Treasury Secretary Scott Bessent and Fed Chair Jerome Powell summoned Wall Street leaders to an urgent meeting on concerns that the latest AI model from Anthropic

will usher in an era of greater cyber risk. Although the discussions were of course private, the gist of the meeting was, "Hey banks, better start testing this thing now before it's too late." The US banking sector is obviously systemically important to the US economy and the global economy; trillions of dollars flow through these banks' pipes every single day. The threat Mythos poses is twofold. First, it has the ability to detect vulnerabilities in basically every web browser and every computer system it has so far been tested on. That could mean finding issues in a lot of the web services we use, financial payment

services, and operating systems like those on iPhones and Android. So you're talking about bugs, potential holes in infrastructure that these banks might have been sitting on for years or decades and maybe haven't patched, that might be vulnerable to outside hackers. But Mythos doesn't just find bugs. It knows how to take advantage of them. You would have to direct it in some way, but largely it was able to autonomously go and find bugs and exploits, come up with a plan on how to actually act on them, and potentially do some harm to the product it found the flaw in. It lowers the bar in terms of the skill set you need and the time it takes. You could do this en masse, and

it also lowers the cost. We've seen historically that nation-states might spend years and huge amounts of money creating an army of military hackers who are extremely experienced, great coders. Having access to something like this reduces the need for that entirely. And it's not just nation-state hackers. It's career financial criminals, these criminal gangs that operate all around the world. The risks don't stop in the US. We've also had the central banks of Canada, and now the UK as well, calling companies together to talk about Mythos and the risks it poses.

The engagement that I've had from CEOs in the last week in the UK has been significant. Fear of dangerous AI and its consequences has been with us for decades and is ubiquitous in popular culture. "I'll be back." But science fiction aside, the ethics of how to deploy AI is rapidly becoming the loudest debate in the industry, and it's central to Anthropic's founding. A group of early OpenAI employees left to start Anthropic with the goal of building more advanced AI systems, but in a safer and more responsible way. And that has really been their brand throughout their history. The company's

MO has led to a showdown with the Pentagon, which contracted with Anthropic in July of 2025. Anthropic is really pushing back on two key points: one, to have clearer guardrails preventing AI from being used to aid in mass surveillance; the other concerning AI being used in autonomous weapons. For the Pentagon, the argument is that no private company should be meddling in the Defense Department's affairs; no private company should be telling the Defense Department and the armed forces how to use technology on the battlefield. Anthropic is now in court, suing the Pentagon over the Pentagon's decision to declare it a supply chain risk, a very rare designation usually reserved for foreign adversaries. So, while the Department of

Defense seeks to blacklist Anthropic, the Treasury and White House are pushing to expand access. Suddenly, it feels like parts of the Trump administration are waking up to the reality that they probably need to work with Anthropic. There's really no regulatory framework around AI. At the end of the day, a private company made a choice, seemingly on its own, to withhold its model from wider release; nobody told it to do that. That is probably the ultimate indictment of the current state of the regulatory framework, at least in the US. But longer term, if you're trying to establish AI supremacy in the world, it may not be the best idea to cut off the company that now increasingly seems to be at the vanguard of US AI development.

The flip side to all the caution around Mythos is the idea that hamstringing its rollout actually amounts to a brilliant marketing campaign. There's no way they made an AI too powerful to release. We're about to be victims of doom marketing that helps them raise capital and gets people pretty hyped up. With Mythos, I think you're seeing a company that genuinely is concerned about the capabilities of its models. At the same time, is it in some way self-serving to show the world, "We've built something incredibly powerful, but we just don't want you to have it"? If it's bait, then from a Wall Street perspective, it's been taken. Take a look at the S&P 500 and Nasdaq: a

Bloomberg index tracking AI-related companies continues to outperform both. And that's before AI darlings like Anthropic and OpenAI have even gone public. There does appear to be a race to go public right now in the AI sector. The most recent reporting we have says that Anthropic could go public as soon as October. OpenAI is also widely expected to be vying for a public offering as soon as this year, and Elon Musk's xAI might beat them both to the punch after merging with SpaceX, which is set to go public as soon as June. But the uproar on Wall Street adds pressure to prove profitability.

OpenAI initially went after consumers. OpenAI's ChatGPT now has more than 900 million weekly users; that rivals the reach of a lot of large social networks. Anthropic took a different route: their core business was enterprise customers. This brings us back to that meeting with the banks. Banks already spend vast amounts on their technology, and that's one of the reasons companies like Anthropic and OpenAI want to partner with them. These companies are already investing really heavily both in

their own internal AI tech and in the contracts they're signing with outside vendors like Anthropic. Spending on cybersecurity just keeps growing; expenditures are expected to reach $300 billion by 2030. Where Anthropic and OpenAI have really excelled in recent months is in building tools that automate the process of writing and debugging code. Now Anthropic, OpenAI, and other privately held firms are trying to prove that their technology is not just useful for software developers, but broadly useful for scientists, for financial firms, for creatives. That's still a work in progress. The same technology that might one day lead to a scientific breakthrough will sometimes struggle with basic math.

Cybersecurity has emerged as another key use case here, but it's kind of an offshoot of coding. So with Mythos, they're tapping into that hype. In their own words, the same capabilities that make AI models dangerous in the wrong hands make them invaluable for finding and fixing flaws in important software. The Mythos story is about trying to fight fire with fire. While it creates opportunities for havoc, it also promises a stronger rebuild, all while opening up a lucrative revenue stream in the meantime. Up until now, the main conversation about AI has been about what will happen to our jobs, because so much of this technology has been used to streamline back-office processes and find

efficiencies. But now the worries are not just about job safety; they're about system safety and security. And that takes this to another level.