Hello and welcome back to Equity, TechCrunch's flagship podcast about the business of startups. I'm Rebecca Bellan, and this is the episode where we bring on industry experts to help us explore a trend in the tech world and dive deep. At the HumanX conference in San Francisco, I had the chance to sit down with one of the top names in autonomy. Today, we're bringing you a conversation I had with Chris Urmson, CEO of self-driving vehicle tech firm Aurora. For those who aren't following this, self-driving has been almost there for a long time, but the commercialization story may finally be changing with long-haul trucking. So I had Chris walk us through what has actually shifted after a decade of technical bottlenecks, why trucking
could have a better pathway than robotaxis to real revenue, and what founders and investors still tend to get wrong about this market. Chris and I actually just got off a panel and talked about a lot of the stuff we're going to talk about today, so we're all warmed up for you. Chris, thank you so much for joining me. Thanks for having me. And, you know, thanks for putting up with me for another half hour. So, appreciate it. Please, it's a pleasure. Okay, so give us a little bit of a rundown of where you come from, because you've had
such a storied career in the physical AI space, and where Aurora is today. Well, I've had the privilege of working in self-driving for 20-some years at this point. I started when I was at Carnegie Mellon University, working on the DARPA Grand Challenges, which were these robot races across the desert. I then got a chance to work with Caterpillar on these giant dump trucks for mines. That was a lot of fun. And then I got the privilege of helping found what's now Waymo, the Google self-driving car program, and led that for seven and a half years. I ultimately stepped away from that, took a little break to figure out what I
wanted to do next, and I hadn't gotten rid of the bug, and so I ended up having a chance to start Aurora. I founded that with Sterling Anderson and Drew Bagnell, who have been great co-founders. We've been at it for nine and a half years, and as a company we've been really focused on our mission, which is to deliver the benefits of self-driving technology safely, quickly, and broadly, in the field of trucking. So today, starting in April of last year, we have trucks on the road in Texas, and now in New Mexico and Arizona, hauling goods for customers in a driverless way.
And then this year we're going to go from having a handful of trucks to having hundreds of trucks. So it's a big year for us and an exciting year for the industry. We talked about this a little bit on stage, but autonomy has been here for over a decade. You're now running commercial driverless trucks. Are we at the start of scale, or is this more of a polished pilot phase? I think we are very much in the beginning-to-scale phase. It's exciting. We've done over 250,000 miles of driverless operations, so we're getting
good experience. We have customers who are learning with us what this means. These are companies like FedEx and Werner and Schneider and Hirschbach, some of the biggest names in trucking. And what they're seeing, they like. So for us it's this big moment of starting to scale, going from a handful of trucks to hundreds of trucks, then ultimately to thousands of trucks next year, and on to tens of thousands of trucks. We've kind of cracked the problem, and we're now off to, okay, how do we really build a business with this, and importantly, how do we serve customers and make sure we're building their businesses. How does your roadmap change when you think about, you know, macroeconomic trends? Like,
we're in a recession, not-a-recession, people are spending a bit less. I imagine that creates fewer opportunities for shipping, but maybe, I'm not sure. Does that change your roadmap? I've had the privilege of working in a space that is really transformational. The impact it will have on the safety of America's roads is profound. The impact it's going to have on fuel economy and sustainability is great. And the benefit to our customers, in how much they can use the trucks they have and how they can build their businesses, is again huge. And so the benefits that we expect, and that we're already seeing with these trucks, really outscale the macroeconomic situation.
There obviously are bottlenecks. Like, we've established that we all thought autonomy was going to happen a lot faster than it did. What are the main bottlenecks? Is it tech? Is it regulation? Is it just, you know, public acceptance? So I think historically it has been primarily technology. This is a really hard problem. Whether you're driving in a city or whether you're hauling a trailer down the road at 70 mph, these are very hard problems, and it's critical that you get it right and that you make it safe, right? You just can't cut corners when it comes to safety with these kinds of applications. And so I think what you see is the responsible folks are making sure they get to that point where it's
safe. And that has really been the bottleneck up until recently. At this point, for Aurora, we are very much supply constrained. With the first generation of hardware that we launched with, we knew we could only make, you know, 20 or 25 of those trucks. And so we just couldn't build our fleet. We couldn't grow customers beyond that. What's very exciting is in Q2 of this year, we're going to be launching our second-generation hardware with our new International LT trucks, and that will be able to scale up to about 1,500 trucks. And then next year we'll launch the hardware that we've been building and developing with NVIDIA. And that'll
take us to scales of tens of thousands of trucks a year. And so we're kind of unlocking the supply side. The pilots and the work we've been doing with customers are really creating the demand. We've got a lot of demand this year, we're getting demand for next year, and it's really exciting to see how it's resonating and that the value is there for customers. And then on the regulatory side, we're seeing progress. So it feels really good. I want to talk about the regulation stuff, but also, as we're talking about you scaling, I'm curious. All we hear about is chips. Chips, there's so much demand for chips. Data centers are using all these chips. I imagine that
self-driving trucks, you know, you're doing a lot of onboard inference. Are you fighting with data centers and hyperscalers and AI companies to get access to chips? Yeah, we're not directly fighting with them for access to chips. The truck is big and has a big engine, but it still has a lot of power limitations that just aren't relevant for data centers. So the kind of chips we use are intended for automotive and physical AI applications. That third-generation hardware that we'll launch with NVIDIA will be on the Thor SoC, and that's something that's been specially designed for this, and it's not
super relevant for data centers. So, no, we're not seeing direct competition for that right now. Okay, going back to regulation. You're running in Texas; you can't run in California yet, right? They've got, I believe, a ban on deployment or testing of fully autonomous trucks, anything over, like, 10,000 pounds. How much of your roadmap is dictated by regulation versus the technology? So today, obviously, our roadmap is constrained by regulation. We don't break the law; we operate in places we're allowed to, and that's the vast majority of states in the US. And we can build a heck of a business if the
law just stays the way it is today. We estimate in the Sun Belt there's 50 billion vehicle miles traveled, and that's mileage that we can go and support, and support our customers, and grow our business in. What's exciting, though, is that it's not just customers who are seeing the benefit and value; we're seeing real interest in this at a regulatory level and at a policymaking level. In California, we expect regulations to be released for trucks on the road, and we expect that to happen in the next month or so. So that's really promising. And then at a federal level, we see real interest in there being a framework that would create a consistent set of rules across the United States. I think
that's an important step. It's one that we look forward to, but it's not one that today is limiting our business. Yeah, you can operate in all these other states, but California always seems like the holy grail. I mean, it's got the shipping from, you know, the ocean, right? Like, you need that coast, I guess. I don't... Well, I think another way to think about it is that California is like the fourth-largest economy in the world, right? Right. And so, for sure. And they're sunny. It's a sunny state, right? Are you operating in rain conditions? We do operate in the rain today.
Yeah. Okay. Fully autonomously? Yes. Okay. Yep. How long ago did you start doing that? We started in April of last year, operating in the daytime in good weather, just between Dallas and Houston. Over the course of the last year, we unlocked what we call operational design domain increases, right? So, new places and conditions we could operate in. We started operating at night, I think in the middle of last year. We started operating to El Paso towards the end of last year. And I think in January we unlocked operating in the rain. So, yeah. What are your customers most excited about? Like, is it that you can kind of run all night and just do more runs, essentially? It's a combination of things. So for
all of these companies, safety is top of mind. It is, you know, one, two, and three, I think, when they talk to us. And so the opportunity to have a driver that is always vigilant, that never gets distracted, that is able to look 360 degrees, the safety implications of that I think are real. And that's not just good for customers; of course, that's good for us as a society. Beyond that, the ability to utilize the truck more. So again, if you're a company that runs trucks, you spend $150,000 to $200,000 for a truck, and you can only operate it half the time, because we very rationally limit people to driving a truck 11 hours a day. This allows us to take that truck and use it twice as much. And so if you think, again, about any business
where you have a major capital expenditure, the more you can utilize it, the better it is for your business. Yeah. They're also excited about the sustainability benefit. So we've done studies that... Do we care about sustainability today? I certainly do. I mean, I certainly do, but it seems like it's gone out of vogue. But I think you can, you know, think of it as I do, as both a societal good, that we want the world to be a cleaner place, we want it there for our kids and grandkids, but you can also look at the bottom line, right? A truck that consumes less fuel is lower cost to operate and emits less greenhouse gas and other pollutants, right? And so I think it's a
win on both sides, and we expect a fuel economy benefit of between 14 and 34%, and we're actually seeing that in practice in our fleet. So it's again another big deal, particularly in the current environment, given the impact that has on, you know, diesel fuel costs. It's a big win. We're talking about safety, we're talking about regulation. I remember last year, or a couple years ago, there was a whole hullabaloo about safety cones. Do you remember this? You guys, I think, filed a complaint against the warning triangle rule. So, for those who don't know, so much of road safety is obviously designed around humans. So when a truck pulls over on a highway, the driver is meant to get out and put out safety cones at
certain distances, so that cars coming down the road can see it. Now, if you have a driverless truck, obviously a human can't get out and do that, right? So that causes problems. I imagine it's like a weird little regulatory thing that would have held you back from being able to do fully driverless deployments. What's the update on that? Because I covered that when it happened, and let me tell you, I was surprised that this story got so many clicks. So many people were interested in this. It's interesting, right? And I think what's neat about it is it takes something that a lot of people don't have direct contact with, right?
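To make the utilization and fuel numbers from a moment ago concrete, here is a rough back-of-the-envelope sketch. All figures are the round numbers from the conversation or labeled assumptions (average speed, working days, truck lifetime are invented for illustration); this is not Aurora's actual operating data.

```python
# Back-of-the-envelope truck economics, using round numbers from the
# conversation. Average speed, days, and lifetime are illustrative guesses.

TRUCK_COST = 175_000        # midpoint of the $150k-$200k purchase price
HOURS_HUMAN = 11            # hours-of-service limit for a human driver
HOURS_DRIVERLESS = 22       # "use that truck twice as much"
AVG_SPEED_MPH = 50          # assumed average highway speed

def annual_miles(hours_per_day: float, days: int = 350) -> float:
    """Miles a truck covers per year at the assumed average speed."""
    return hours_per_day * AVG_SPEED_MPH * days

def capex_per_mile(hours_per_day: float, life_years: int = 5) -> float:
    """Purchase cost spread over the miles driven during the truck's life."""
    return TRUCK_COST / (annual_miles(hours_per_day) * life_years)

print(f"capex per mile, human-driven: ${capex_per_mile(HOURS_HUMAN):.3f}")
print(f"capex per mile, driverless:   ${capex_per_mile(HOURS_DRIVERLESS):.3f}")
# Doubling utilization halves the capital cost per mile; a 14-34% fuel
# saving then stacks on top of that on the operating side.
```

The point of the sketch is just that capital cost per mile scales inversely with hours of utilization, which is why the 11-hour limit matters so much to the economics.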
Like AI and machine learning and verifiable AI, and translates it into something very real, right? A person having to walk down the freeway and put a cone down. It's very loud. It is. It's grooving here at HumanX. You do a great job paying attention to your own thoughts while that's happening in the background. It's pretty fun. You know, it's an extra level of difficulty on the podcast. But you talk about this idea of dropping these triangles as a safety thing. Well, it turns out there's no data that shows
that. It was a rule that was put in place in the '70s, and then, I think in the '90s, NHTSA actually studied whether we should require that for passenger vehicles, and they said, oh, actually, there's no evidence this is actually helpful. And so our approach has been to ask: how do other vehicles on the road tell drivers to look out? Yeah. You know, what does a police car do? What does an emergency vehicle do? What does a tow truck do? A construction vehicle? They all turn on flashing lights. And we're like, hey, we should turn on flashing lights and warn people.
And it felt very common sense. And it turns out at this point that the administration, the Department of Transportation, has come back and said they'll give us a waiver to study this for a few months. And then, you know, that would potentially lead to an exemption that would last for five years. We've been excited to have a good conversation with the federal regulators around this. They're smart people, and they're listening to common sense, and things are moving forward. Zooming back out a little bit. So you were part of Waymo. Waymo here in San Francisco is everywhere. But they had, I don't know, a similar but opposite trajectory to you guys, right? Both started thinking, okay, we're going to do highway autonomous vehicles,
autonomous trucks, and robotaxis. Waymo ditched the trucking, stuck with the robotaxis. You guys have now, you know, ditched the robotaxis. It'll come back, I'm sure. But now you're focused fully on trucking. What made you make that decision? Yeah, like you said, we started wanting, and we continue to believe, we're building a driver that can drive all kinds of different things. But you have to pick somewhere, and for us, we picked trucking. We did that for two big reasons. So, one is we didn't start with trucking because we didn't think you could solve the problem. When you drive a truck, you have to look a long way down the road, right? And we're big believers that you use a combination of sensors to do that
robustly: laser, radar, and camera. And there was no laser sensor that could see far enough. When you say laser, do you mean lidar? I mean lidar. Okay. You guys have your own lidar, right? We do. And in fact, this is a big part of why we can do trucking. When I started the company, I'd been at what's now Waymo, and we'd come to the conclusion that you couldn't see far enough with lidar to drive safely at freeway speeds. When we started Aurora, we thought, okay, how do we solve this? We spent a bunch of time trying to find a technology to do it. We found this great company in Bozeman,
Montana, of all places, acquired them, brought them in, and that turned into what's now FirstLight, which is this special kind of lidar that, because of the way we do the measurement, can see way further than a conventional lidar can. And so that technological unlock was one that said, okay, we can now go after trucking. And then, as a business, we just see trucking as a better opportunity to start with. It's a trillion-dollar market in the US, whereas ride-hailing is a $50 billion market. So, you know, it's gigantic to go and work with. There's a real need on the safety front: 500,000 collisions with tractor-trailers, with trucks, every year, and 5,000 fatalities.
And the unit economics are stronger: we value a truck being driven at three times the value of an Uber being driven. And so when you're bringing new technology to market, having something where you can be profitable sooner, and where you have a huge market to grow into that's just ready for it, just felt like common sense to us. It reminds me of, and you partly reminded me of this before we started recording, you were talking about Serve Robotics. They do the little sidewalk robots. We've had their CEO, Ali, on Equity before, and there are some Serve robots here.
Yeah, they've got some running around handing out little electrolyte packets, which is exactly what I wanted. There were some in there. I thought they were luggage tags; I didn't look closely. Okay. They're delicious. Excellent. One thing that Ali said was, yeah, the business is not delivering food. Like, that's not where I want to end. But it's a business that makes money today, and it allows me to get the data I need for real-world use cases of autonomy. I'm curious, what is your end goal? Is it trucking? Is it autonomy at
large? Like, what's your why? Yeah. So, what I'd say is, if all we do is what we're doing with trucking, the company will be incredibly successful, and we'll be very proud of what we've done. That said, what we're building is really the vanguard of physical AI, right? This ability to understand the physical world and to be able to have something interact with it safely. And so that capability is something we think we can apply to a lot of things, whether it's other elements of logistics, so not just big trucks but box trucks, or whether it's in adjacencies like, you know, personal mobility and robotaxis, or whether it's in mining or agriculture or
aerial drones or humanoid robotics. In a couple of years, you'll start to see us explore where we can take this muscle that we've built and apply it in other places. And so it'll be fun to see. I'm not sure exactly which of those places it goes, but what I do know is we're going to be very uniquely positioned, not just with the capability we've built internally, but with the scale that will come from the trucking business and the cost-downs that come with that scale. And so I think it's going to be a lot of fun for the next few years. Yeah. Well, what have you learned in your, however many decades... how long's it been?
Oh, good gosh. It's been 24 years, something like that, right? In your 24 years of bringing AI from the lab to the physical world, what have you learned that other startup founders can take away? Yeah. And I'd say that about yourself, right? Because if you do another, you know, business unit, you're essentially a startup founder again. I will not be doing another company. Like, one public company, that's good for me. I'm not that ambitious, I guess. But I do intend to be building this company for the next, you know, century, right? I think I
think one of them is that there are just no shortcuts, right? We've believed this from day one: if we are building something that people's lives depend on, right, if you're driving a 70,000-pound thing down the freeway, you need to know that it works, and you build that trust slowly over time. And so we have to be committed to that. We need to be thinking safety first. We need to be building the processes and tools that allow us to do that, and then grow from there. And it's baked right into our mission, which is to deliver the benefits of self-driving technology safely, quickly, and broadly. Do it safely, move as quickly as we can, and then think about scale. I think that has really served us
well. And it's an important part of how I believe we're going to be here for a long time doing exciting things. Yeah, it's interesting how safety is so front and center for physical AI, less so, in a lot of ways, or it feels like it, for non-physical AI like LLMs, right? You said on stage, which I thought was interesting, that there are more safety implications for self-driving trucks than there are for LLMs. And I said, well, I don't know if I agree with that, because, as we've seen, there have been a lot of poor mental health outcomes that have happened as a result of LLMs, and before that, social media. But why do you think... is it just
because you're operating in a realm that already has rules of the road? Pun intended. I think it's a little different than that. I think it's more obvious, right? It's clear that there are risks when you are driving a car down the road or driving a truck down the road. The harm that can be done through an LLM is not as front and center. It's not as cause-and-effect, and it's not as direct, right? And I certainly agree that we're observing harm, and I'll have to think about how I express this a little differently. But really, when I think about an LLM,
there at least is a person intervening between the interaction that happens on the screen and whatever physical interaction happens in the world, right? I talk about the kind of gaffe that Gemini had back in the day, right? How do you keep the cheese on your pizza? You put glue on it. Right? We all look at that and say it's silly, and I don't expect many people would take that at face value and then glue cheese on their pizza, right? The equivalent in the truck is, if for a moment it decides, geez, I should make a right turn in the middle of the freeway.
There's no human who can intervene. And so, yeah, I'll try to be a little more careful about how I talk about that distinction, but I do see one being there that's material. Yeah. Well, I guess, when we talk about safety, and the AI being safer than a human, or a better driver than a human, who gets to decide that, and how? Is that something that you kind of self-certify, or are you working closely with, you know, regulators? Yeah, at the end of the day, society gets to decide it, right? We have elected officials. They create a regulatory environment. They create a policy environment. And that is the
decision about how and where these things should be used on a day-to-day basis. We make the decision about whether we think the product we put on the road is safe. And we do that using what we call a safety case. You can think of this as an explanation for why we think it's safe, and it has five core pillars. The first is that the vehicle has to be proficient. That means that it drives safely, right? That it behaves in the way you'd expect. The second is it has to be fail-safe. So it has to understand if something breaks, and then figure out how to mitigate the risk associated with that and be safe. The third is that it has to be resilient. So we have to think about how it might be
misused, or think about how a cyberattack might impact it, and make sure that we're thoughtful about that and responsive to that. The fourth is that we need to be continuously improving. So we need to learn from our experience, whether it's what we see from others out in the world, or from our system behaving out in the world, or from our company operating, and be constantly making things better. And then the last is that we need to be trustworthy. Just on face value, if I tell you the other four things but I'm not trustworthy, they don't mean a whole lot. But it's also about how we engage with regulators and policymakers to make sure they're informed and can make informed decisions.
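The structure being described here, a handful of pillars each broken down into individually checkable claims, can be sketched as a simple data model. The pillar names come from the conversation; the evidence items, the approval flag, and the closing rule are illustrative assumptions, not Aurora's actual safety-case tooling.

```python
# Sketch of a safety case: pillars broken into checkable evidence items.
# Pillar names come from the conversation; everything else is illustrative.

PILLARS = ["proficient", "fail-safe", "resilient",
           "continuously improving", "trustworthy"]

class EvidenceItem:
    def __init__(self, pillar: str, claim: str):
        assert pillar in PILLARS, f"unknown pillar: {pillar}"
        self.pillar = pillar
        self.claim = claim
        self.approved = False   # flipped only once the evidence is reviewed

def safety_case_closed(items: list) -> bool:
    """The case is closed only when every evidence item is approved."""
    return len(items) > 0 and all(item.approved for item in items)

# A real case might carry ~450 such items; two stand in for them here.
items = [
    EvidenceItem("proficient", "maintains safe following distance at 70 mph"),
    EvidenceItem("fail-safe", "pulls over safely on primary compute failure"),
]
print(safety_case_closed(items))   # False: nothing has been reviewed yet
for item in items:
    item.approved = True
print(safety_case_closed(items))   # True: all evidence is approved
```

The useful property of this shape is that "is it safe?" decomposes into many small yes/no questions, each of which can be argued and reviewed on its own.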
Yeah. And so we take those five pillars, and we blow that out into about 450 or so bits of evidence, right? These are things where we said: this is what it means, and this is why. And then when we get all that checked off, we say, okay, we feel good that this is not creating unreasonable risk on the road, that, frankly, I would feel comfortable with my family on the road around it, and we feel comfortable that we're not putting other people at unreasonable risk. How much is that feeling of safety based on your AI approach, right? So there's a debate, I think, in the industry right now
between end-to-end systems and more structured, verified approaches, which I think is what Aurora is doing. So why does that matter for safety? And for context, I believe a company like Waabi, one of your competitors, is doing more of an end-to-end approach. So I'm curious what you think about how that applies to safety. I think it is one part of it, right? Because, like I said, there are a lot of things around safety, and getting the software right is obviously an important part of it, but it's one part of it. For us, we've taken this approach of verifiable AI, and what that means is that we appropriately decompose the problem so that we can understand how
the system is behaving, so we can have conviction that it's actually working well and is safe. And so one of the major ways we decompose is: we take an understanding of the world, so what's moving around us and in the world around us, and then, how do we react to that? And by decomposing that, we can actually look at the things that matter and make sure that we're understanding the world correctly, right? Because if you just have an amorphous blob, an end-to-end system, I don't know why I'm making the decision I am.
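One way to picture the decomposition being described: perception emits an explicit, inspectable world model, and planning consumes that model, so each stage can be tested on its own. This is a toy sketch of the general pattern only, not Aurora's architecture; the object types and the braking rule are invented for illustration.

```python
# Toy sketch of a decomposed pipeline: perception emits an explicit world
# model, planning consumes it, and each stage is testable separately.
# Not Aurora's actual architecture; types and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    kind: str            # e.g. "car", "pedestrian"
    distance_m: float    # distance ahead of the truck

def perceive(raw_detections: list) -> list:
    """Stage 1: turn raw detections into an explicit, inspectable world model."""
    return [TrackedObject(kind, dist) for kind, dist in raw_detections]

def plan(world: list, braking_distance_m: float = 150.0) -> str:
    """Stage 2: decide an action from the world model alone."""
    if any(obj.distance_m < braking_distance_m for obj in world):
        return "brake"
    return "cruise"

world = perceive([("car", 90.0), ("car", 400.0)])
# Because the intermediate world model is explicit, a test can check it
# directly instead of guessing why an end-to-end blob chose an action.
assert world[0].distance_m == 90.0
print(plan(world))   # "brake": something is inside the braking envelope
```

The design point is that when the output is wrong, you can tell whether perception or planning was at fault, which is exactly the distinction an undifferentiated end-to-end model makes hard to recover.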
Is it because I made a bad decision but I understood what was going on? Or is it that I didn't understand the world? Or is it some weird interaction between the two, or something else altogether? And so we think breaking them apart allows you to actually better understand the system, and makes the testing, the verification and validation, actually possible. And some would say, okay, well, you don't need to break it apart; you can just have what they call heads, right? These outputs from the system that tell you those things. Yeah. The challenge with that is what we're seeing in these reasoning models: the system is complicated enough that it
can lie, or do two things at once, right? So if you take one of these reasoning models, it'll tell you an explanation for why it did what it did. Right. What it's really doing is telling you an explanation for what it did that it thinks you will like. Yes. Right. And that may not actually be... Yeah. And I don't even want to necessarily ascribe a personality to it, whether it's devious or not, but it's learned that's the right way to explain this thing, and then it may be coming to that answer in a very different way. And so when you think about verifying our system, we want to say, okay, let's not have that be a risk, right,
that it's telling us that it sees the red car there and so it's reacting in this way, but in practice it's doing something else, right? And the complexity of these networks is such that it could very well be. And so for us, this decomposition allows us to understand it, and then we're able to explicitly express constraints and guardrails. So we don't just have to tell it, please don't. We can actually put it in a box and say it can't, right? And for a safety-critical system, that seems really important to me. I think what you're starting to see with the LLMs is that, as we are both moving to domains where the implications of bad actions are more serious, and, to your
point, we're starting to understand more of the consequences that may happen even in what felt like innocuous domains, we're seeing architectures that look more and more like what we've been doing for a while, because it's kind of the way to limit the machine. Zooming out really quick, I would love to hear, I mean, you're probably so heads-down on Aurora right now, but what are some other companies in the autonomy space that are exciting to you? Yeah, there's a couple that I get a chance to check in on every once in a while. So, one of them is Serve Robotics, which you mentioned earlier. I think it's just cool to see. There's this wave of everybody thought, oh, we'll drive
on sidewalks, and that'll be way easier. And it turns out it's technically pretty hard. You know, interacting with people is not easy, and then building a real business there has been hard as well. And so to see a company that's gotten through these things, and the business seems to be starting to work, it's kind of fun and cool. And then the other one that I think is pretty cool is this company called Bedrock Robotics. They are doing automation for excavation, kind of at the construction site. Yeah. And, you know, a couple of the people that founded it I've known for a long time and worked with in the past, and they're good people, and it's just kind of cool. There's a part of
me that's still like the eight-year-old boy who's like, big truck, cool; excavator, cool, right? And so seeing something where they are both working with the physical world and doing something that actually is useful, and seeing the technology come to work, and seeing it with good people doing it, that I love. You know, you're talking about construction sites, and it's reminding me of a company that I saw. I think we wrote about them at TechCrunch maybe last year or the year before, I can't remember. But their whole thing was: we want to help construction sites be more organized. So we're going to put cameras on all the hard hats, and with that data we'll help you be more organized and
whatever. Is there still a priority on getting real-world data, or have the advancements in simulation kind of made it so that it's a little bit obsolete to get out in the field and do that? So, I can speak to our approach: you need to do both, right? We have some incredible simulation capabilities, but you have to ground the simulation, meaning you have to know: is my simulation accurately reflecting the real world? Yeah. Otherwise, you're playing a video game. Yeah. And you may be really good at the video game, but, you know, anyone who's played Mario Kart knows that's quite different than driving a real car.
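The grounding idea, checking that a simulator's predictions match real-world measurements before trusting it for a given test, can be sketched like so. The metric, the numbers, and the tolerance are all invented for illustration; this is the general pattern, not Aurora's actual validation process.

```python
# Sketch of "grounding" a simulator: compare a metric measured in the real
# world against the same metric from simulation, and only trust the sim for
# that test when the two agree within a tolerance. All numbers are invented.

def grounded(real_value: float, sim_value: float,
             tolerance: float = 0.05) -> bool:
    """True when sim agrees with reality to within a relative tolerance."""
    return abs(sim_value - real_value) / abs(real_value) <= tolerance

# Example metric: stopping distance from 65 mph, in meters.
real_stop = 120.0        # measured on a test track
sim_stop_good = 123.0    # within 5%: sim trusted for this test
sim_stop_bad = 150.0     # 25% off: go back to the track, fix the sim

print(grounded(real_stop, sim_stop_good))  # True
print(grounded(real_stop, sim_stop_bad))   # False
```

In this framing, real-world and test-track data serve two jobs at once: they are test results in their own right, and they are the reference that decides which parts of the simulator can be trusted.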
Yeah. And of course, these simulations are much closer than Mario Kart, but if you haven't grounded it, you don't know how close they are. And so for us, there's a set of things that we know our simulation is very good at, and we understand the limitations. And then there are other things where we go to a test track or we gather data from the real world, and we use that both to improve the simulation capability, but also because today we don't trust that particular part of the simulation for that particular test. And so it's an "and," not an "or," in my mind. Got it. Okay. Well, we're just about out of time. Thank you so much for joining. Is there anywhere that our listeners can find us or find you online?
Yeah, please. We're at aurora.tech. And even better yet, if you actually want to see our trucks in action, you can go to Aurora's YouTube channel, where we livestream our driverless trucks every day, from, I think, 8 to 5 Central time. So, nice. Check it out. I'll warn you, it is super boring, but we have some good background music. Surely it's not boring. Well, honestly, that's what we want, right? If you're catching an exciting moment in a big truck, like, we just don't want that, right? We want it to be boring and smooth and easy.
Yeah. Awesome. Okay. Well, you can find me on Twitter, on LinkedIn, on Bluesky, and on Substack, and you can find Equity on Twitter and Bluesky at EquityPod. Talk to you next time.