1. The Inexorable Rise
It has been almost exactly thirty months since the release of ChatGPT. Since then:
chatgpt.com has become the fifth most visited website in the world, behind only Google, YouTube, Facebook, and Instagram, surpassing Wikipedia and Reddit. openai.com is also in the top 50, above The New York Times.
OpenAI’s ‘weekly active users’ count rose from 300 million in December to 400 million in February to 500 million in March to “something like 10% of the world,” i.e. 800 million, in April. On March 31 there was reportedly a single hour in which 1 million new users signed up.
Similarly, according to Ramp, which gathers data on corporate spending, 32.4% of U.S. businesses were paying subscribers to OpenAI’s platform in April, up from 28% in March and 19% in January. This tracks with OpenAI’s report of 2 million business clients in April, up from 1 million six months earlier.
Anthropic, the choice of AI hipsters everywhere, has similarly grown its business footprint from 4.6% of US businesses in January to 8% in April, on top of an estimated ~20 million monthly individual users. (Very respectable, but also so far behind OpenAI you can see why they don’t announce user numbers.)
Google recently reported their Gemini AI app has more than 400 million monthly active users. (Mind you Google has so many users across so many interlinked properties that what this number really means is questionable.)
Meta’s open-source Llama models have now been downloaded more than 1.2 billion times. Billion, with a B. Notable because you still have to be somewhat technical—at least a serious power user—to download and use LLMs yourself.
As of one year ago, the number of AI users in China exceeded 230 million. That number has surely grown vastly larger since.
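To get a feel for the pace implied by those business-adoption figures, here is a back-of-envelope sketch using only the numbers quoted above (1 million business clients doubling to 2 million over six months); the steady-compounding assumption is mine, not OpenAI's:

```python
# OpenAI reported 1M business clients, then 2M six months later.
# If that doubling were spread evenly, the implied compound
# monthly growth rate would be:
monthly_factor = 2 ** (1 / 6)   # sixth root of 2
print(f"~{monthly_factor - 1:.0%} per month")  # → ~12% per month
```

That is, roughly 12% compounded growth per month, sustained for half a year.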
Ethan Mollick, who tracks actual studies on AI use, summarizes: “A large percentage of people are using AI at work.”
70% of software engineers, per an excellent if informal survey by Wired
65% of marketers
64% of journalists
30% of lawyers
40% of all American workers.
2. The Reality
If and when someone says of AI: “nobody wants this,” “it’s being shoved down our throats,” “it’s a bubble,” “it’s just hype,” “I can’t wait until this fad goes away” — all sentiments I encounter nearly daily on Bluesky! — I beseech you: just show them those numbers above. Then show them again. Eventually, surely, reality will hit.
“OK well this is just because of hype and media saturation!” Nope. You are right to be skeptical of announcement numbers, but independent data from Ramp, Similarweb, Semrush, etc., all confirm the story. We know what the growth (or stagnation) of hype-driven tech looks like. It is the adoption ‘curve’ (or flatline) of cryptocurrencies or the AR/VR metaverse. It looks like a brief pulse of interest followed by a precipitous drop.
The adoption curve of AI is so not like that. Instead it is the hockey stick to end all hockey sticks. Again, ChatGPT launched thirty months ago! …And we still haven’t seen anything like a trough of disillusionment.
AI is the fastest growing technology in the history of humanity, and its growth is being driven by enormous, widespread, sustained demand by individuals and organizations alike. You may or may not want this to be true, but it is the stark reality in which we live.
3. The Skeptics
Even skeptics are coming around. Ed Zitron, for instance, perhaps the most vituperative of all OpenAI critics, has gone from “There might simply not be a viable business in providing these services” in October to (italics his) “Yes there is … some degree of utility … LLMs and their associated businesses are a $50 billion industry” in April. He’s still very critical, to be clear! But ‘might not be a viable business’ to ‘…OK, fine, $50 billion’ is quite a notable evolution in six months.
Skepticism is great. I strongly encourage skepticism. Transformative technologies are inherently political; profound changes inevitably bring both high costs and big benefits; it is good and healthy that different perspectives mean we see and balance these differently, and sometimes draw starkly different conclusions. Criticize! Even castigate! For any given topic the tension between pro- and anti- is often far more important than either.
But it should be informed skepticism. If you say “This is a useless fad nobody wants, being forced on us by the evil tech industry,” you are not a serious person. Worse yet, you are responding to the most transformative technology of this generation not by engaging with it; not by trying to shape its development; not by experimenting with how it might accelerate the kind of progress you want; not by trying to articulate a convincing vision of the future and a plausible theory of change … but by sticking your fingers in your ears and shouting angrily.
4. The Moralists
I’m dwelling on this because I myself am (mostly) a leftist progressive, and it is depressing to see people with whom I (usually) agree become knee-jerk anti-AI zealots. (You can tell the zealots from their reflexive attempts to angrily shout down anyone with the temerity to disagree with them.)
I don’t think most Bluesky progressives, the most anti-AI segment of the political spectrum AFAICT, really truly believe that AI is useless and nobody wants it. Instead what they really truly believe is that it is evil, because:
it comes from the evil tech industry
it evilly consumes scarce, environmentally catastrophic energy and water
it is built atop evil uncompensated strip-mining of the copyrighted work of writers and artists
They want to believe that because it is evil, therefore it is useless and no one wants it. (This is a general failure mode of the left; on some deep level many seem to believe it is impossible for effective outputs to emerge from immoral processes, so anything built in a way they consider impure must therefore be worthless.) That said I can respect those on the left who manage to simultaneously hold these two beliefs:
AI is important, transformative, wildly popular and enormously successful
AI is and will remain the evil work of evildoers
It certainly seems like a very bleak worldview. But at least it’s one which engages with reality!
5. The Counterarguments
I don’t at all agree, to be clear, just as I don’t agree with any of the abovementioned Three Claims Of Evil. Rather:
“The tech industry” is not inherently evil. Every time anyone says ‘tech bro’ they are blinding themselves with an ad hominem, which is sad to see. There are tens of millions of people in the tech industry and their views are wildly diverse. Yes, even at the executive level. You too might be shocked by how nuanced your differences with ‘tech bros’ can be!
But far more important than businesses, use cases, or revenues is that the rise of AI means humanity is exploring an alien frontier. Not on another world, but of information itself! A frontier that will likely have profound implications for both our world and how we conceive of ourselves. Even (or especially…) if you think the companies which have discovered that alien frontier are Weyland-Yutani, this discovery is something you should take very seriously, pay close attention to, and incorporate as a key element of your vision of the future, rather than reject.

We are entering a world of clean energy abundance. Global data center energy usage is projected to grow from 55 GW to 122 GW by the end of 2030, largely driven by demand for AI computation. Sounds bad! Right?
Well, actually, no. That is but a small drop in a large sustainable bucket. Compare it to the 585 GW of clean energy which came online just in 2024, and the 5,500 GW(!) projected by 2030. That’s right: total data center energy usage today—not AI usage; all data centers—is a mere 1% of the projected growth in clean energy from 2024-2030, and projected to expand to all of 2.2%. Clean energy abundance is upon us, and we are at no risk of AI moving that needle.
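The 1% and 2.2% figures follow directly from the numbers quoted above; a quick sketch to make the arithmetic explicit (all figures in GW, taken from the paragraph, not independently sourced):

```python
# Sanity-checking the energy figures quoted above.
data_center_today = 55    # current global data center power demand, GW
data_center_2030 = 122    # projected demand by end of 2030, GW
clean_by_2030 = 5500      # projected clean energy capacity by 2030, GW

print(f"today:   {data_center_today / clean_by_2030:.1%}")  # → 1.0%
print(f"in 2030: {data_center_2030 / clean_by_2030:.1%}")   # → 2.2%
```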
As for water, come on. Google’s data centers use as much water as 41 golf courses. There are roughly 16,000 golf courses in the US alone! Yes, there are places where water is precious and golf courses are wasteful; but as a society we seem to have come to terms with the existence of golf.

Regarding “strip-mining copyrighted work,” I’ve written about this at length, which I encourage the apoplectic to read. But briefly: copyright is about making copies; it’s right there in the name. Of course people, and companies, should pay for copies of what they read, and for clearly derivative works. But we don’t—and shouldn’t—restrict who or what is allowed to read published works, or whether they can be influenced … in a truly microscopic way, given the average book is ~100,000 words and modern LLMs are trained on more than 10,000,000,000,000 words … by what they have read.
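To make “truly microscopic” concrete, here is the scale comparison from the copyright paragraph above, using only the two word counts it cites:

```python
# One average book (~100,000 words) versus a ~10-trillion-word
# training corpus, as quoted in the text above.
words_per_book = 100_000
corpus_words = 10_000_000_000_000

fraction = words_per_book / corpus_words
print(f"one book is {fraction:.6%} of the corpus")  # → 0.000001%
```

Any single book contributes about one hundred-millionth of the training data.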
Do you think AI companies should share a portion of their profits or revenues with creators? I think so too! Do you think they should be legally obligated to do so? I’m afraid it seems very unlikely that existing laws require it. So you should call for new laws, as all transformative new technologies ultimately require … and such calls should engage with the reality of the world around you, in which, see above, overwhelming numbers of people seem to be reaping huge benefits from AI. Maybe they’re not all wrong?
6. Join The Enthusiasts!
Yes, there are a lot of bad uses of AI, as has been true of every new technology ever. Yes, there’s a lot of eye-rollable hype. But this is the fastest-expanding new technology in history. It is an extraordinary alien frontier that we are just beginning to explore. It is likely to have profound impacts on not just our work but our geopolitics, our communities, and our understanding of ourselves.
Is that really something you want to turn your back on? The Internet too has had… mixed results, but if you were to travel back to the 1990s, would you advise friends and relatives to reject it as evil and treat it as irrelevant? Do you think that would have been good for them, their relationships, their communities, their careers, their understanding of the world, and/or their hopes for a better future?
Or do you think the more that the progressive, compassionate, theoretically-open-minded people of the world embrace and try to shape the future of a new technology which is clearly here to stay, the better it is for them, the world, and the technology?
You can run only 100% open-source, open-training-data models on your own home-solar-powered GPUs, if that’s what it takes to satisfy your need for purity. That is a real option. But I would recommend you just start playing around with the free tier of one of the frontier models: ChatGPT, Claude, or Gemini. Those models are much dumber and less informed than what you get if you start paying for them! …But unlike a year ago, they’re now good enough for you to at least see the possibilities.
And once you start seeing them, maybe then you can even start shaping them.