The AI companies are actually struggling to keep up with demand. They have been throttling access to some of their tools to manage it. So, for example, Anthropic recently changed its terms of service to dissuade a lot of its users from using capacity at peak times. OpenAI, which is another big AI company, shut down Sora, its video generation tool, because it wanted to allocate its scarce computing resources towards more lucrative ventures. So across the entire technology stack, AI companies are looking at different ways in which they can meet demand with the amount of processing power that they have.
So you talk about processing power, but what is there actually a shortage of? AI models need really powerful processors, called GPUs, graphics processing units, to run. These are placed in really large data centers where you have thousands of these GPUs lashed together using optical networks and all kinds of other devices. Essentially, the tech world is running out of processing power to enable the kind of demand that we are seeing. And the reason this is important is because of inference, which is when an AI model responds to a user query. Every time you use an AI tool, it's using that processing power. So the more people are using AI, the more computing resources you need.
And since AI demand is growing exponentially, processing power is struggling to keep up. So what are the AI firms doing about that? I think, most importantly, they are just pouring vast sums of money into building out these AI data centers. To give you some numbers, this year five of the largest cloud providers in the US, companies like Amazon, Meta and Microsoft, are spending close to $700 billion on building out AI data centers. And it's not just these big companies. Even model makers like Anthropic and OpenAI, which are really well funded, are splurging on different deals to get access to these resources. So the answer for the tech companies is actually: build, build, build a lot more.
So throwing money at the problem seems to be a sensible solution. Will that work? I think it will work eventually. The problem is that building out this capacity takes time, especially now in the US. A lot of these data center constructions are running into local opposition for various reasons: citizens are concerned about their use of electricity, land usage, water usage, and so on and so forth. That opposition is delaying their builds. I think the more pressing problem is that the tech industry is just running out of kit to equip these data centers, which seems really odd, but it is true.
So if you think about really old-school pieces of equipment like transformers and switches, which I think most of us have literally never even thought about, the tech industry is running out of them, and lead times for some of them are stretching to between three and five years. So that's one problem. The second problem is the processors themselves, the GPUs that I talked about: companies just can't make them fast enough, or make enough of them, to supply these data centers. So if you go across the entire so-called tech stack, which is the different layers of technology that enable AI, there are shortages showing up everywhere. And that is what is making this particular supply crunch more concerning.
So if there's a shortage in the supply chain, what are those suppliers doing about it? The suppliers are definitely starting to build a lot more. But I think we need to keep in mind that software and hardware move on very different timelines. At this point we are all used to great improvements in AI every few months; the best model gets replaced by another model every few months. In hardware, it actually takes anywhere from two to four years to build out extra capacity, because you actually need to go and build the physical plant that makes these things. So there is a fundamental disconnect between the pace of software and the pace of hardware. So that is one problem.
The other problem is that, particularly in the semiconductor industry, there are some really critical chokepoints where key components are controlled by a few companies. The first one is of course Nvidia, which I'm pretty sure everybody has heard of at this point, and which is the world's most valuable company. It accounts for over two thirds of the world's AI processing power. So, a hugely influential company. Now, chips from Nvidia are essentially sold out. It's really hard to get your hands on these chips, and it's so severe that a lot of companies are actually resorting to using chips that are really, really old.
I mean, two or three years old, which in tech terms is really old; they're actually using those chips, which is unheard of. The other chokepoint in the semiconductor supply chain is actually making these chips. They are made in manufacturing plants called fabs, and today there is only one company, TSMC, a Taiwanese chip manufacturer, that basically makes most of the AI chips used by the tech industry. TSMC has been expanding capacity; its own capital expenditure is increasing by $60 billion this year, which is a big number. But again, it's still not as much as companies would like. So presumably all of that is frustrating the AI firms, who are seeing surging demand and now can't meet it.
Absolutely. I mean, they've been pretty vocal about it. Sam Altman, the founder of OpenAI, has said as much; there's a quote from him where he says TSMC should just build more capacity, which kind of captures his frustration. I think a more interesting aspect is Elon Musk, the boss of Tesla and SpaceX. His solution is to simply go out and build his own fab, and, not surprisingly, he's called it Terafab. His ambition is, by 2030, to build a fab that will have more capacity than all the current fabrication plants put together. And to give you some numbers, some analysts ran a back-of-the-envelope calculation: that would require anywhere between $5 trillion and $13 trillion.
I mean trillion, in capital expenditure. So clearly we are talking about huge sums of money, and it's probably not going to happen. But that just tells you the scale of the problem and the frustration that a lot of the software makers are feeling right now. I'm interested: as you say, we've got used to AI improving so much every few weeks, every few months, and of course there are concerns about AI nicking our jobs. How will this supply crunch actually affect the development of AI? I think there is a danger that the longer the supply crunch continues, the more pressure it will put on the firms to raise prices.
Because we have been used to a world where the price of inference keeps dropping, basically every six months. But behind that is the fact that a lot of companies are actually burning cash to enable that. Now, if the supply crunch continues and costs continue to rise, that's going to increase the pressure on these firms to raise prices, potentially. That could then slow down adoption as well. And so there have been those who have called the supply crunch a quote-unquote natural brake on this reckless AI spending. That's one view. The other view is that this could slow down AI development in a very real way if it continues.
Thank you very much for talking to me. Thank you for having me.