Literally Everyone Agrees We’re In A Bubble
OK, I’m using “literally” slightly hyperbolically … but only slightly! It’s not just outside analysts and skeptics, it’s:
Sam Altman: “When bubbles happen, smart people get overexcited about a kernel of truth … Are we in a phase where investors as a whole are overexcited about AI? My opinion is yes.”
Eric Schmidt: “This frenzy gives us pause … The belief in an A.G.I. or superintelligence tipping point flies in the face of the history of technology.”
Jeff Bezos: “This is a kind of industrial bubble … investors have a hard time in the middle of this excitement, distinguishing between the good ideas and the bad ideas. And that’s also probably happening today.”
Mark Zuckerberg: “Most other infrastructure buildouts in history, the infrastructure gets built out, people take on too much debt, and then you hit some blip … a lot of the companies wind up going out of business, and then the assets get distressed and then it’s a great opportunity to go buy more … definitely a possibility that something like that would happen here.”
[his important caveat is explored further below; mine is that I currently work at Meta Superintelligence but in no way speak for them]
Entertainingly, AI skeptics and critics (here, Cory Doctorow and Ed Zitron) are thundering that … we’re in an AI bubble. No one disagrees! This is absolutely, 100%, conventional wisdom.
The question is: is this a good bubble, or a bad bubble?
The Bear Case: Starve and Crash
Doomsayers prophesy the bubble is “going to burst and take the whole economy with it.” That’s a little histrionic. Right now AI investment is propping up an otherwise faltering economy. If and when that ends, well, the dot-com crash of 2001, to which endless comparisons have been made, led to a brief recession that mostly affected the tech industry, but hardly economic collapse (though if you were in tech, it kinda felt that way at the time!).
The more accurate and subtle concerns are threefold:
previous bubbles — railroads, telecoms, Internet — led to the build-out of infrastructure that eventually became extremely valuable and important; Nvidia GPUs go obsolete a whole lot faster than railroad tracks or fiber optic cables.
bubbles like these suck capital away from other sectors, leaving them starved of investment and stunting their growth, so when they do pop, the rest of the economy has lost steam as well, and there’s an especially painful period of gear-switching.
this AI bubble may become a more general credit bubble.
To which I might add a fourth one, unique to AI:
the perception that AI will soon replace entire fields of effort, such as animation, illustration, copyediting, junior software engineering, and, well, Hollywood, leads to a self-fulfilling prophecy effect where investors flee those sectors while the fleeing is good.
So, yeah, bending over backwards to be fair to the doomsayers, the at-least-plausible worst case is pretty bad! Not “the economy collapses” bad, but conceivably “a nasty and prolonged recession” bad.
The Bull Case: If Nobody Builds It, Everyone Dies
…As is so often the case with AI, when you compare the bull case to the bear case, you don’t just get an inequality, you get a category error so massive that it’s difficult to articulate.
Let me pause to note the existence of the mega-bear case, which is of course the complete extinction of humanity, per Yudkowsky’s and Soares’s new book, the title of which certainly leaves no room for misinterpretation of their views: If Anyone Builds It, Everyone Dies. It’s important to recognize that those who believe this believe such a fate is not just inevitable but imminent — likely within the next decade or two.
…Its flip side is the mega-bull case: “transformative AI.” Believers here note that “everyone dies” is in fact already the status quo, but it doesn’t have to be. This vision is of transformative AI which doesn’t stop at curing cancer but goes on to extend our lifespans indefinitely (whether through some immortality drug or simply by making new discoveries every year which extend human lives by more than a year), outsource all manual labor to tireless robots, and bring on the utopia of fully automated luxury communism for the many, and something even better than that for the fortunate few.
The AI Horseshoe Spectrum
…You’d think there’d be quite a range between “end of the species” and “immortal luxury for all,” but no, those are the two main camps. (If you’re thinking, hey, those both sound insufficiently weird: you’re right.) Kind of hard to talk about those outcomes and also, in the same breath, include sober quantified analysis of investment growth by sector and how long a recession might last, you know?
But let’s try. Another way to frame AI beliefs is as a spectrum:
1. AI is a once-in-a-species tech that will utterly transform the world by 2030¹.
2. AI is an important and valuable new technology, on the order of the internet, but not a transformative one.
3. AI is useless and counterproductive, the technological equivalent of asbestos.
…There’s a sort of horseshoe theory here where people on either extreme end of this spectrum, despite having what are theoretically opposed views, sound a lot more like each other than they probably want to admit: evangelistic, hectoring, suspiciously disingenuous when it comes to ignoring countervailing evidence, convinced their opposite counterparts are scamming grifters, and ultimately thoroughly unconvincing.
The problem is that the reality is probably neither an endpoint nor the midpoint on that spectrum, and where exactly it falls will dictate just how ill-advised this bubble is. The hyperscalers are making a conscious bet that AI is somewhere between 1 and 2 on that spectrum. The doomsayers are protesting that it’s somewhere between 2 and 3. Nobody really knows, yet, and anybody who says that they do is selling something.
But for better or worse, we’ll find out relatively soon, thanks to the hyperscalers…
My own view meshes pretty well with Epoch AI’s recent report on AI in 2030; call it a 1.5ish on the scale above. (Maybe more like 1.66, but who’s counting.) If so, then several hundred billion dollars is a completely reasonable investment! The world already spends at least $1 trillion/year on software, so if AI merely doubles the efficiency of software development — which seems at least plausible; we’re not there yet, but we’re getting there, and vibecoding has changed a lot (for the better) just over the last several months — then AI is a $500B/year technology on that alone.
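To spell out that back-of-envelope arithmetic, here’s a minimal sketch; it reads “doubles the efficiency” as “the same software output at half the cost,” which is my simplification, not a figure from the Epoch report:

$$
\underbrace{\$1\,\text{T/yr}}_{\text{global software spend}} \times \underbrace{\left(1 - \tfrac{1}{2}\right)}_{\text{cost saved at }2\times\text{ efficiency}} = \$500\,\text{B/yr}
$$

The same logic scales down gracefully: even a mere 25% efficiency gain would be worth roughly $200B/year on this estimate, since producing the same output would then cost $1T/1.25 = $800B.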
(Note however this still isn’t anywhere near position 1. The contours of the jagged frontier still seem fundamentally unchanged, and we won’t get “transformative AI” without surmounting it.)
So How Bad Will The Pop Be?
Yeah I mean the problem with that question is that the answer depends entirely on where we are on that spectrum above.
If we do live in a 1.5ish world, then the focus on the depreciation of GPUs is myopic, and claims that we’re wasting money on useless infrastructure seem kinda ridiculous. Demand for inference will continue to grow; data centers full of old chips will still be useful, especially as investment in new GPUs drops post-bubble. Previous-generation chips aren’t useless, just relatively inefficient. The analogy to fiber optic cables holds.
Perhaps more importantly, the other infrastructure being built out is infrastructure for the production of new GPUs. That definitely retains value in a 1.5 world.
In that case, this is, in fact … relative to the long-term importance of AI … a boring, normal tech bubble. Sorry. Maybe we’ll get lucky and it will deflate, not pop; but that seems unjustifiably optimistic. AI is sucking capital from other sectors, and that will make the pop pretty painful. Again, sorry. The long- and even medium-run gain will be more than worth it, but that won’t make it sting any less.
But if we live in a 2, or, AGI help us, a 2.5 world? Then this will go down as one of history’s great blunders. I’m pretty sure we don’t — but, for better or worse, we’re all likely to know that answer within the next few years.
¹ Look, I’m being generous/conservative here; people actually argue for the end of 2027. I was loosely involved in AI 2027 (long story, maybe I’ll write about it sometime), and let’s just say that involvement did not increase my belief in its conclusions.