Mythos AI Model Raises Cybersecurity Concerns in Washington

Anthropic's new Mythos AI model demonstrates advanced cybersecurity capabilities, including successfully exploiting vulnerabilities and building attack chains, which has alarmed Washington officials and prompted discussions about AI regulation and national security risks.

Full English Transcript of: The new AI model that’s alarming Washington | The Economist

Mythos is a new AI model trained by Anthropic. The reason it's causing a fuss is that Anthropic say it is an extraordinarily competent cybersecurity engineer: it can hack things really well. To give you an example, Opus 4.6, the previous top-tier model from Anthropic, the one the public can use, is quite good at finding weaknesses in technology. It found a vulnerability in a version of Firefox that has since been fixed. But if you ask it to then use that vulnerability to hack a computer, it falls down. Anthropic tried hundreds of times; only twice did it produce a working exploit at even the first step.

Mythos managed 181 successful exploits of that vulnerability, and 27 of those went further and actually built a working attack chain that affected the Windows registry: a real, deep-level attack. That's a step change. It's not just a couple of percentage points higher on a benchmark. And it raises the risk that anyone using Mythos becomes a top-tier hacker, even if they have no technical capability themselves.

Okay, Zanny, you've been in New York and Washington. How much is this Mythos moment causing alarm there?

A lot. It's interesting: I got here about a week ago, just after Mythos was announced by Anthropic, and I've been talking to government officials, tech leaders and business leaders, in bigger gatherings and one-on-one. I've watched over the past week as the alarm has grown. There is real alarm in the hitherto very hands-off Trump administration. The Treasury secretary, Scott Bessent, and the head of the Fed, Jay Powell, summoned the banks for an emergency meeting last week. And I've really sensed that even in other parts of the administration people are now going, "Oh my god, this is a real wake-up moment," as they realize the potential risks involved in a model like this. It's been alarming and interesting to watch: pretty much everyone I've spoken to, as the days go on, has mentioned Mythos within about the first five minutes of our conversation.

This has been perceived as dangerous by Anthropic itself. What's been Dario's response to that?

So they have released it behind closed doors. This is actually not the first time Dario has done this. In 2019 OpenAI made GPT-2, a large language model, and we were all very excited because it could produce good text, and they decided not to release it for six months after they trained it, because they were afraid of what it might do on the internet. In hindsight that decision was probably wrong: the risks, upticks in spam and fake reviews, weren't perhaps worth withholding it from the public. This time, I think there's really something there. They have kept it available only to themselves and 11 handpicked partners: massive companies like Apple and Microsoft, and the Linux Foundation, which oversees the open-source operating system. The hope is that these companies will be able to use Mythos to fix their products before any capability like Mythos makes it to the public at large.

Now, Zanny, you've met Dario recently, and he doesn't mind the PR; this has got him a lot of publicity. Is this for real in your view? And is it just about cybersecurity, or might it go broader than that?

I've met Dario a number of times over the years, and of all these AI bosses he has long been the most publicly focused on safety. He has always said this is going to be very dangerous, that we need government regulation, that we need to be very careful; as Alex said, he held back an earlier model. He's worried about bioweapons. He's been absolutely consistent. And so there are a lot of people, particularly his competitors, who are saying: "Oh god, crying wolf again. This is just marketing; the other models are quite close behind, and they're just making a big splash about this because it's good for them." But my sense is that this time really is different. I have talked to senior people in companies that actually now have Mythos, and they are all corroborating the idea that it is extremely powerful. So I do think Dario Amodei is definitely one who's focused on safety. Of course this is good for him, and I'm sure there's no little bit of self-interest about it, but I think it's actually serious and for real.

I think that leads us to the next question, which is: okay, there's a danger here, bad things can happen, and there are the beginnings of a response. I want to look now at how systematic these sorts of protections can be. Zanny, I want to start with you. I remember David Sacks, who, when he was in the administration, was a booster for a kind of laissez-faire approach: "let them cook, let them carry on," he used to say. Do you think the administration is ready to intervene?

Well, I think the short answer is yes, because they have really been freaked out by the power of Mythos. But you're right, it's a very big shift for an administration that came in basically pooh-poohing the Biden administration, which was very focused on trying to create a regulatory framework. In comes the Trump administration: absolutely not. They're accelerationists; they want to go as fast as possible, unfettered competition. And so the question is now how you do it. And it's not just what they want to do, it's how little time there is. Mythos exists. My sense is that we are going to have informal actions very fast, which will involve continuing this approach in which the most powerful models are first released only to a small set of trusted companies. That kind of limited release, I think, will be rolled out, because it also suits the companies. The government is going to get involved, and I think it will basically say: we need to see these models, we need to know what they can do, and we're going to have a say on how far things are commercialized. Then the question is what happens thereafter. Probably, and this is what people are talking about, it will evolve into some kind of industry-led certification approach, with the big model-builders getting together with the government and saying: this kind of model is all right for release. The trade-offs in this are huge. Go in with the heavy hand of government, and America falls behind; you don't get the benefits of this extraordinary technology. That's bad. Go too slow and you have an AI disaster, an AI accident. And as you know, Ed, I have always thought that the race dynamic between America and China, and between these companies, meant that we wouldn't get any change in the Trump administration's approach until there had been an AI accident. I think Mythos may actually be the wake-up call before an AI accident, but it was pretty close.

Zanny, that leaves me feeling pretty alarmed. In other words, we've got a small space in which to begin talking, take this issue seriously and get governments involved. But it is a very short time.

You should be alarmed. You absolutely should be alarmed; this is an extraordinarily alarming moment. However, with my perennial optimism, I'll offer you a couple of reasons to perhaps be a bit hopeful. Firstly, I do think this has been a wake-up call for an administration which had been so extremely hands-off. The question is whether they are competent enough to work out how to deal with it, with Anthropic and others. And remember, just a few weeks ago they were furiously having a row with Anthropic, deeming the company a supply-chain risk that couldn't work for the Pentagon. So it's going to demand some cooperation. But the other reason is that there is a meeting coming up: the summit between President Xi and President Trump in May. I'm completely speculating here, but I will wager that this subject will be discussed, because even though America is focused on being ahead in AI, I think there is a recognition that there are some things it is in no country's interest to have. It is in no country's interest to have the capability of taking down critical infrastructure in the hands of some crackpot somewhere. So I think we will get the beginnings of some conversation about how you can have coordination or standards, because that is essential for any approach to be lasting. It will be done in an environment of massive mistrust, and I'm not putting a huge amount of weight on it going anywhere. But I do think that when you have a moment like this, people start thinking differently.
