The Pentagon, AI Warfare, and Why It Matters

The Pentagon's Project Maven, launched in 2017, aimed to integrate AI into drone surveillance and targeting, sparking debate over the ethics and risks of automated warfare. The program sought to speed up kill chains and reduce human error, but critics warn of lowered barriers to conflict and potential for catastrophic mistakes. As AI advances, the military grapples with balancing efficiency against moral and strategic safeguards.

Full English Transcript:

He's never suggested he wanted to kill more people. He doesn't use rhetoric like that. Other people do, and other people on the team said to me anonymously, "Great, let's get AI. Now we can kill people all the time." So there are definitely people who considered that AI would help speed up and scale killing of the people perceived as America's enemies. Katrina Manson, welcome to the show. Thanks. Thanks for having me. Let's just assume that everyone listening and watching has never heard of Project Maven, knows nothing about it. What is it?

It was an effort that began in 2017 for the Pentagon to develop an AI tool, a way of using AI on the battlefield so that computer vision, a specific kind of AI, could look at drone video footage. This was the feed the US was taking in during the counterterrorism wars, the GWOT, the global war on terror, and the idea was to apply AI to look at it better. That was ostensibly what it was. But it was also part of a much bigger effort to really bring AI to the battlefield and speed up the way the Pentagon was thinking about the future of war. The US had been in, and was still in, Afghanistan, Iraq, Somalia, Yemen. There was a lot still going on. And people were beginning to think, what about China?

What if the US has to square up to China? And there was a group of people under the first Trump administration who said the US was behind, despite having the biggest defense budget in the world, bar none. The US started to feel that it was using tools that were too unsophisticated and needed to catch up with what the commercial sector was doing: bringing in AI, trying out driverless cars. Could the US move to automated warfare in a way that folded in AI? So Project Maven was birthed by the deputy defense secretary at the time, Bob Work. It had serious buy-in from the top leadership, but the people who really tried to push this forward always felt they were going to be up against it. They felt that they

risked putting the intelligence side of the shop out of a job if they could bring AI into operations; that would essentially be cutting out, or going around, or somehow undermining the intelligence folks, or at least they thought the intelligence folks would feel that way. He wasn't the director; the director was a three-star Air Force general named Jack Shanahan. But the colonel who led it as chief, the day-to-day operator, the doer who also had a lot of the vision, was named Drew Cukor. He was a Marine colonel. What's his deal? What's his role in this history?

He was the chief of Project Maven and really got it going for the first five or six years. He was also one of the visionaries behind even pursuing it, and he worked tirelessly to try to bring AI warfare to life. He wanted to get AI out onto the battlefield in as safe a way as possible, but to test it in scenarios as close to real life as possible. And he came from the background of being a Marine for years before he started doing Project Maven. He'd been sent into Afghanistan in October 2001 and had lived through seeing Marines not have sufficient information to keep them safe. And he told me that he had carte blanche to fire anyone who got in his way, which immediately got me thinking, why would anyone be in

your way? Surely everyone wants to do the same thing. But the answer was no, they didn't. And he was very clear from the beginning that AI could put people out of work. It would test people's resolve, their mettle, their way of thinking about how war was done. And he came at that from a position of, I think it's fair to say, always believing that the operator had been unfairly, insufficiently supported. The people on the front lines needed to get more information, and for him AI was a way of getting information to those on the front lines. In the early deployments of Maven, if you were an analyst or a drone pilot or a targeter sitting at a screen in the Pentagon or Nevada or some forward

operating base, how did Maven change your job in the really early days? So, if we take before 2020, the first two or three years, it was actually just a mess. It was such a mess that the people involved in Maven would say these algorithms don't work. The first users they worked with were in Somalia at the very end of 2017, into 2018. They used it, and the system was so annoying they stopped using it. They then sent someone to try to encourage them. So much of what this Project Maven team, which was quite a small team of mostly Marines, but not just Marines, was trying to do was just to get someone to even try it. So it wasn't in all those places that you just listed. They did

really well with Special Operations Command, the more forward-leaning tech part of the military, where you can have more personal relationships with the commanders and the operators. Some of them already had relationships from former deployments, so they drew on those. And the task was simply to identify something. If you think about drone video footage, there are multiple frames in a second of footage, and the algorithm went to work on each frame. So if the algorithm failed to identify the same object frame to frame, it would flash. The operators were having real difficulty even looking at it. And if the sensitivity was set very low,

everything was being identified. So in some of those early experiments, there could be, you know, dozens, hundreds of boxes all flashing at the same time, and people would just turn it off. So they started to improve on that with the very negative feedback they got. One of the breakthroughs I was told about came in probably the 2018-2019 time frame. There were some Marines who'd been booby-trapped in a raid against a compound in Afghanistan. They were getting fired at from multiple places and the wall had exploded. And the AI helped perceive through the smoke. I mean, the drone feed was picking it up, but the AI spotted the individual Marines through the smoke

much more quickly than the human eye would have done. And of course, one of the problems the US has faced is what's called friendly fire, not identifying their own people. So being able to pick out the Marines with AI very quickly in a specific scenario like that made people start to believe in the potential of AI. But certainly at the time, there was still clearly a human in the loop, right? The humans were making all the judgments. These systems weren't launching drone strikes of their own volition, right? There were humans at various checkpoints involved.

Yes. And that language of human in the loop is really interesting, because you have a lot of military commanders, then and since, saying we will always have a human in the loop. It's not actually, technically, the policy of the Defense Department, or Department of War as they call themselves now. The first directive on autonomy came in, I think, in 2012, and then it was updated in 2023, and the 2023 update says "appropriate levels of human judgment over the use of force." So it employs something a little closer to supervision rather than each decision. Who decides what's appropriate? Such a good question. And some of the reporting that's come out on the big fight between Anthropic and the Pentagon has

focused in on the word appropriate. And so in the end, we're just quibbling over the word appropriate. But from the Anthropic perspective, what is appropriate could determine whether a human is involved or not. At the time, in 2018, 2019, 2020, there's no sense at all that an algorithm is making these decisions. But of course, people are beginning to wonder: if I start relying on algorithms, if I start trusting the outputs without being able to check myself, at what point am I no longer asking the analytical questions that are required for target engagement? And again, they weren't at that stage at that point. But there was certainly, I think, always concern, discomfort from some of the people being asked to use it. And so, it needed

repetition and practice, workflows, all that kind of thing. Initially, as you said, they were using the software to sort through drone footage, but was the idea, the plan, always to develop the tech, scale it up, and deploy it across the entire military and defense department? Was that a clear vision from the jump? It really depends who you ask. The memo itself that started Project Maven just talked about drone footage in the fight against ISIS and potentially extending to other defense intelligence purposes. When I did the research, a really important question for me was to establish two things. One was: tell me how you thought about targeting from the get-go in terms of this

project. Because, and we haven't got there yet, but eventually Google protesters become very concerned about the work, not having discovered that their company was working on this. So that's partly an issue of transparency, but they were concerned that they could be involved in the business of war, and Google said at the time it's only for non-offensive purposes. So I really wanted to check: was that true? Was it always intended for non-offensive purposes? Was Google correctly describing the project? And Drew Cukor told me he always had targeting in mind from the outset. That language is not in the memo. Others told me Drew Cukor would wince if you said this was a targeting project. But when I actually managed to speak to the

man himself, and when I went back and read his papers, his thesis, he believed in this idea of white dots: that you could look at a map, and, essentially, he wrote this before we even had Google Maps, but imagine just looking at Google Maps, clicking with your cursor, and being able to pick up the precise coordinate from your cursor and send a weapon to it. So he has, as a Marine intelligence officer, his own papers describing this very idea. Then, when he also suggests the idea for Project Maven and leads it, he was very clear with me that he always had targeting in mind, and he knew that he would be going up against eons of intelligence practice, where there are specific programs used to take

a coordinate, there are specific ways of checking elevation, there are specific things to do for georectification. It's obviously a very complex system, plus there are the processes of no-strike lists. That system was the one he wanted not to blow apart, but he knew he was going to be bulldozing through a part of it. Colonel Cukor is a very interesting character in the book, and in this history, really. And, you know, you quote him in the book saying that the problem with war is the humans: they're materially corrupt, inefficient, and they get tired. You know, I'm familiar with this type of military officer. Very often when they rail against the bureaucracy and that sort of

thing, they're really protesting all those pesky rules of engagement that make it harder to kill people. And, you know, I served with people like this, and I'm not saying they're villains or bad people at all. I just think sometimes well-intentioned people inside the war machine have a very hard time appreciating the importance of guardrails. And in their defense, it's not their job to do that, right? Their job is to prosecute wars. But to me that quote suggests what they're really looking for is easier ways. Obviously, they want to save lives, right? Particularly their own troops. But they're looking for ways to make it easier and more efficient to kill. And that's a very dangerous game.

I think he is a very interesting person. I think he's very aware of that read. The way he presents himself, certainly, is that he comes from a very moral place about the consideration of war. So yes, he does say those things, that humans are the problem with war, and he can sound cold in that sense, but he never suggested the rules of engagement should be diminished. And the main reason he puts forward in his conversations with commercial entities for why they should come on board with the Pentagon was always: we could save civilians this way, we could make sure we don't hurt our own. So I think, for him, he had been sent to Afghanistan in 2001, the month after 9/11, and was one of

those first targeting officers, intelligence officers, who was having to suggest targets, and of course very quickly US military personnel were being hit by improvised explosive devices. Yep. And he was having to put together the packages: where should we go, who should we hit, who was the enemy? He was frustrated that there was so little information to protect US personnel, and has also talked about other moments where he was frustrated that the US couldn't intervene in support of civilians because they didn't have the information. So the way he's always framed it, he's done the first part of what you said. It's almost brutal, or brisk, certainly. But he's

never suggested he wanted to kill more people. He doesn't use rhetoric like that. Other people do, and other people on the team said to me anonymously, "Great, let's get AI. Now we can kill people all the time." So there are definitely people who considered that AI would help speed up and scale killing of the people perceived as America's enemies. He himself has a slightly different filter on it.

What does the chain of decision-making look like? How much do we really know about that? So, CENTCOM has told me that they're using a variety of AI tools. I've separately reported that that includes Maven Smart System, which is the system that Palantir helped develop for the algorithms to feed into. It's almost like the digital display you'd have in a headquarters, or maybe on a handheld device, so that you can look at the battlefield digitally. More than 150 different data feeds feed

into it, and you can crunch through that using AI. So they've got the computer vision, but they've also now got large language models, specifically Claude, the Anthropic model that is cleared to work on classified cloud, and the US fights its wars on classified networks. Last summer I went to visit NGA, the National Geospatial-Intelligence Agency, the combat support agency that supports the Defense Department but is also a member of the intelligence community. They told me that with the help of AI, Maven Smart System can now get through a thousand targets a day. A thousand. In the first 24 hours of the US operations in Iran, they went through a

thousand targets, with the help of LLMs, really using them to speed up the processes, the kind of admin processes involved in building a targeting package, getting permission for it, still from a human, still from a commander, still with legal review, but sped up. One official told me they could now get to 5,000 targets in a day if they wanted to. That's a lot. Yeah. So, take something like drones. Obviously, drones are such a big part of modern warfare. We're using them, everybody seems to be using them. Are humans still piloting our drones, or are these mostly autonomously controlled

now, even if there still is, somewhere on the back end, a human in the loop greenlighting strikes? Ukraine has a lot of drones and Russia has a lot of drones, but the US is not producing that many. The US is now desperately trying to take those lessons on board, to produce drones and compete them against each other. They are almost entirely not autonomous. Autonomy is the hope, and under the Biden administration it was the hope especially for something like Hellscape, which is Indo-Pacific Command's idea of how they could defend Taiwan from an invasion by China if China decided to do that. The admiral there, Admiral Paparo, talks about using autonomous weapons to buy him a month. So just make it impossible for

China to take Taiwan, and then send in the larger US platforms. So under the Biden administration, I think in 2022 or 2023, they launched something called Replicator, which is to bring in cheap, "attritable" (their word, meaning basically you don't need to use it again) drones, and those are meant to be autonomous. They've been trying to develop the software, competing different companies against each other to do that, and through the course of my reporting I discovered that the idea was to take some of those algorithms that Maven had produced and train them on data from the Indo-Pacific, really at the boat level. So, boat drone cameras, aerial drone cameras, infrared, anything that might be looking at a Chinese vessel:

capture those pictures, train the algorithms, sit them on the drone, instead of having it on a digital platform at headquarters level, and have that AI on the drone automatically detect the target and then be able to have the drone go and take the target out. It was very tough going, those experiments, even before the Trump administration got in. They were making progress. They also wanted to do something very ambitious, which was to link up drones in the sky, drones on the water, and drones under the water into one big autonomous swarming mesh. It sort of boggles the mind. The environment part of it wasn't working. So they had the best data stores, I'm

told. So the algorithms were potentially the best, but they couldn't integrate the algorithms onto the platform. And so much of AI isn't the specific piece of tech itself. It's: can you make all these platforms talk to each other? Can you make an operator believe in this platform? Can you workflow it? Can you start operating as one continuous ecosystem? And the answer is not without a huge amount of practice and trial and error, and maybe just no. But it's just a matter of time, right? Autonomy might be a hope, but it's also inevitable, right? It's just a question of the tech getting there, and it's moving in one direction, right? That's where this is going. Maybe not tomorrow, maybe not next week, but I

mean, look at the progress in the last 12, 18 months alone, right? That's where this is going. I'm always wary of the word inevitable. As a history student, I was taught nothing is. You're a good reporter, and I am not a reporter in any sense of the term, so fair enough. But in support of your point, almost in support of your point, the Trump administration came in, tore up Replicator a little bit, changed the name, just a repackaging, to DAWG, the Defense Autonomous Warfare Group. So autonomy is in the name, and so is warfare. So all of those concerns, where the Pentagon was too nervous to say we want to put AI, autonomy, and death together because

everyone was outraged about it back in 2018: the language now is so much more permissive. The Pentagon is simply saying it. The fight now with Anthropic is over not just autonomy but fully autonomous weapon systems, and they're trying again. So they have this new project that I've reported on recently. It was launched in January. It's a $100 million prize challenge. Same idea, to a certain extent, of competing the companies against each other. SpaceX and xAI are among the contenders. Palantir, I've reported, is a contender. OpenAI is named on two other contending teams. And a

couple of others, I think, that I've reported. And they're all trying to make voice-controlled autonomous drone-swarming tech. So you could have an operator on a beach, let's say, saying "move left," and the drones would move left, and you have to hope they could identify the target. That's wild. So, there is a quote in the book that I really wanted to mention, and it's from Jane Pinelis. I hope I'm saying her name correctly. She was in charge of testing Maven in those early days. And she said, and now I'm quoting, if the US military wanted to use AI-enabled systems, it had to become more accepting of risk.

Based on the people you talked to, what is the level of acceptable risk she has in mind there? I don't have a number for it, but I think it's about this: they know that AI is a black-box technology that can go wrong and that it needs vetting. But to a certain extent, if you are going to put it in a system where you can't see under the hood, you're going to be relying on something that has inbuilt risk. We know about hallucinations, bias. She spoke extensively about algorithmic drift, the tendency for an algorithm to get worse over time. So she wanted, of course, to hold standards high, but she wanted to understand how AI will fail. The risk element is often put to me this way: if

you use AI in an urban environment, there's a huge chance that you could be hitting civilians. If you're using AI in a war at sea, in a China scenario, that box of operations is one you're going to have already cleared. There won't be civilians walking around, because it's the sea. The commercial boats will long ago have thought, I'm not going to go into that area, or it's banned. And so all you really have are targets at sea. The risk then for the US becomes: are they going to shoot their own targets by accident, and are they definitely shooting at Chinese military vessels, which are legal targets under the law of war? But it's the idea that if you go wrong with AI at sea, you're just hitting water. And so it might not be as accurate, even though

the claim for AI is often accuracy, but if it does go wrong, the risk of harming civilians is much lower. Are we watching that in real time? On the first day of this conflict in Iran, American weapons bombed a school in Iran, killing lots of people, lots of children. And based on the reporting of the Washington Post, at least, and maybe others by now, Claude was involved in identifying hundreds of targets before that conflict started. And presumably many of those targets were the ones that we hit on that first day. Do you know anything about that? Do we know if that was in fact an AI

identified target, one that a human in the loop failed to realize was based on, I believe, decade-old intel? A bunch of caveats first, which is that the US says it's investigating, and they haven't said they did it themselves. The reports that are out, in outlets not my own, have suggested the US did it. There's no confirmed suggestion yet that AI was involved. This is what I would say: the US builds its targeting lists based on stored data. Whether something is a valid target or not, it's kept in a list. What the AI can do is identify a specific object, a specific threat, or something moving. Often the AI is pulling on an existing targeting list. If that school

turns out to be on a military intelligence database when it should have been on the restricted target list, no AI can fix that. So a key question is: was that school on a targeting list by mistake? Was that target list kept updated when the school stopped being, say, an IRGC facility and suddenly had a bunch of kids in it? Did they update the targets? Could they have been using AI to check against open-source information? If the school was listed on Google Maps, what on earth is the point of AI if you're not checking that? As the US military becomes better at checking open-source information, and that lesson was really learned in the US support to Ukraine, they were drawing on

social media feeds. They were pulling Twitter posts so that Maven Smart System could analyze them: is there a yellow flag tied to this bench? Does that mean this town supports Ukraine? Does this mean this town actually has a Russian presence? Has something just exploded over there? If you can pull from social media and use that to inform your understanding of the battlefield, can you pull from Google Maps? Now, my understanding is any system, even if it's open source, needs to be an authorized system on US networks. So where is the gap, if there is one, in being able to pull open-source information and cross-check? It should be extremely easy for AI to cross-check

whether there's a girls' school there. It should happen before there's a blink. But the question is, we just don't know yet. And they may choose to put out a public report. Journalists may have to sue for it. You know, that information will come out. But we know from previous eras, and the 1999 Chinese embassy attack is a really good example: the US hit the Chinese embassy in Belgrade, and it was two or three hundred yards off from the target they were meant to hit, and they didn't have it labeled right. Now, with AI, all of that should be much easier. There is an argument to be made: sort your systems out. But if someone doesn't care sufficiently about protecting civilian lives, or if someone

isn't forcing AI into the bits of the system that will protect people, as opposed to speeding up the death cycle, the kill chain, all of that becomes a really big problem. And if AI has been involved in any way in this hit, well, any which way, it's not just a tragedy. It's a very consequential mistake. To go back to Anthropic: what do you make of the very public fight between Anthropic and the government? Right. So, you know, Anthropic, from what I understand, set a couple of red lines: no mass domestic surveillance and no fully autonomous weapons without human oversight. Those were their red lines, and apparently they could not come to an agreement

with the Defense Department. What do you make of that, and of the consequences? I think by the time you have a frontier AI company that is the first to put its model on classified cloud, you have a company that's leaning in, in a way that is not reminiscent of Google back in 2017, 2018. Anthropic was on the very systems where there are lethal operations, and clearly comfortable with that. If you read Dario's two big essays, and Dario is the CEO of Anthropic, yeah, he has these two big essays that he wrote making his case for why his company should be involved in national security. You're grappling with that thing that everyone in AI who's worried about existential risk, or whatever it is, grapples with: whether AI comes to kill us all or takes over. He's

grappled with that too. And he has found peace with the idea that you can do national security work and still be, quote unquote, the good guy. He talks about a real fear of robot swarms, and his position, which I think emerged with greater clarity only partway through this fight, is not even that he's against fully autonomous weapon systems; it's that he's against fully autonomous weapon systems now. And it raises questions about what was actually under discussion. Was there a system that he was being asked to put AI onto that he didn't want to, or was he just worried that they weren't doing the testing and evaluation right? Because Anthropic did submit, I reported, to this $100 million prize

challenge to create voice-controlled drone-swarming tech. So that's leaning really far forward for a company that's concerned about autonomy. They were prepared to take part in the creation of lethal autonomy, or parts of it. There's clearly a political dimension, because the president himself called them left-wing nut jobs. And there is clearly, I think you have to take the Pentagon at its word, genuine worry that a company could dictate policy to them, or they're certainly genuinely annoyed at that prospect. And the castigation of the company as a supply-chain risk, then taking it to court, then having Microsoft file the amicus brief, shows that once again the Pentagon's ability

to get the tools that it thinks it needs is somehow at risk, even when it had a company that was leaning really far into it. And can they get xAI and OpenAI up onto classified cloud, and into Maven Smart System, in a way that works as well as Claude, in those six months of transition time, while the US is using Claude in Iran? It's a really unexpected turn of events, I think, to have that relationship collapse so spectacularly just at the point that the US decides to test it the most it's ever been tested. It really is genuinely surprising. It's not my original analogy, but I did hear someone say that allowing a handful of private companies to control AI is kind of like leaving Amazon, Google, OpenAI,

whoever, in charge of the Manhattan Project, and then also allowing them to control and profit from the bombs. Of course, for that to land, you have to accept the premise that AI is as revolutionary and transformative a technology, and as powerful, as nuclear weapons. But if you do accept that premise, and I'm certainly open to it, it is startling. So, what is your moral position on it, given your own national security background? Well, I served in the military. I wouldn't say I have a national security background beyond just having been one soldier. But what is my position on what, exactly? On the morality of AI in these national security uses? When you're talking about the moral position of the companies?

I'm extremely uncomfortable with it. I understand the utility. I understand all the potential applications, and I can see the case for all the lives it might save and all the good it might do. But the tail risks really alarm me, and my personal view is that war and killing people should be very hard and very costly. Anything that makes it easier and faster and cheaper, when you can just pull a lever or push a button, makes killing people easier, and that makes me very uncomfortable. And then, even beyond all that, I'm not entirely sure that AI is a technology that we are going to be able to control. A lot of

these conversations presume that we'll be able to control what these systems do and don't do, and I'm not sure that's the case, which scares me even more. So, I don't know how clear a position that is, but that's really all I can say right now: I'm very uncomfortable with it. And I don't really trust anybody with that much power to make those sorts of decisions. I don't know. What do you think about that? I think, you know, Jack Shanahan, who is no longer the director of Project Maven, has spoken up during this crisis of the Anthropic-Pentagon fault line, not in favor of AI, even though he was the

director of Project Maven, to say no LLM should be anywhere near an autonomous weapon at this stage. And you know, the other red line that Anthropic raised was mass domestic surveillance. Now, I don't quite know what they think the Pentagon had in mind, because the Pentagon's position is, you know, we have rules around that and we follow them. But consider the volume of data points available on any given individual, not with traditional intelligence, just with commercially available information, from your phone, or the route your vehicle takes, or your shopping habits, all of these things. And that discussion where you had people like Elon Musk signing on to letters saying we shouldn't have technologists involved in AI because of

the risk, that we shouldn't be creating new weapons of war, and it's his company that signed on to make these voice-controlled autonomous drones. The changing comfort levels about what technologists are prepared to accept is a sort of massive tribute to the Pentagon's ability to change people's minds. I just happen to think that we are not even close to really stepping back and wrestling with how profoundly all of this tech is going to change our society and our institutions. Did you get the

sense that there were serious discussions going on about how these tools might migrate from the battlefield to American cities? How they might end up in the hands of police departments across the country, using it for surveillance and God knows what else? Well, obviously that idea is exactly what has animated so many of the protesters of Project Maven and campaigners against the development of these AI tools. And I do think it's interesting that the twin things Anthropic has raised are not just fully autonomous weapons against, presumably, an enemy, but also domestic surveillance. And it is because the overlay of data and the knitting up of systems presents such a potentially powerful tool. And that will come down

to policy choice and law, because the technology is now possible. Still hard, but the data points are out there; you just need to suck them up. Do you think policymakers in DC are taking this seriously? Are they paying enough attention? Do they even care? It was put to me, I mean, Congress has spent a long time looking at AI, but there's no regulation. And one of the things that the Trump administration has really focused on is setting AI free. So where the Europeans are regulating on data specifically, never mind also AI, the US is taking a different approach, and that's to do with, the champions would say, the

innovative US spirit, that entrepreneurial ability to go fast and make things, and maybe break things as well. So I think in this debate over the fault line between Anthropic and the Pentagon, several of the expert voices have said, where's Congress in this? And it stops short of regulation at the moment. I don't know, Katrina. I think at some point war ceases to be a human activity. You know, it will still impose enormous human costs, but the actual war-fighting will just be a technological affair for the most part, and that's a very different world. You can only change the character of war so much before you change the nature of it

entirely. And I think that's where we are. I appreciate you writing this. It's so important, and it's so well reported. I feel like I understand this world better than I did before I opened it. So thank you for writing it, and thank you for coming on the show. Thank you. Thanks for the discussion. Once again, the book is called Project Maven. If people want to check out your reporting for Bloomberg or any of your other work, where can they go?

Bloomberg. Yeah, just my name on Bloomberg will do it. But Bloomberg's plenty. Thanks for watching. Every week, we bring honest and nuanced conversations about what's happening in culture, tech, and the world of ideas to your video and audio feeds. Episodes of The Gray Area drop every Monday on YouTube, Apple Podcasts, Spotify, or your favorite listening app. Comment below and let me know what you thought of this conversation. I promise I won't be offended. You can also send us an email at the grey [email protected] or leave us a voicemail at 1-800-214-5749. And if you enjoy what we do, please help support Vox by joining our community on Patreon at patreon.com/vox.

Thanks again.
