If you are a software person, or even software-adjacent, you have probably heard of “worse is better.” Did you know it was born of AI? No lie!
In 1989 it was clear that the Lisp business was not going well, partly because the AI companies were floundering and partly because those AI companies were starting to blame Lisp and its implementations for the failures of AI. One day in Spring 1989, I was sitting out on the Lucid porch with some of the hackers, and someone asked me why I thought people believed C and Unix were better than Lisp. I jokingly answered, "because, well, worse is better." We laughed over it for a while as I tried to make up an argument for why something clearly lousy could be good.
(The semilegendary Jamie Zawinski turns up too, because of course.¹)
But the ‘clearly lousy’ thing was good, or, at least, good enough to conquer the world. The same author, Richard Gabriel, explains how in “The Rise of Worse Is Better”:
Worse-is-better philosophy means that implementation simplicity has highest priority. Therefore, if the 50% functionality Unix and C support is satisfactory, they will start to appear everywhere. And they have, haven’t they? Unix and C are the ultimate computer viruses.
The lesson to be learned … is that it is often undesirable to go for the right thing first. It is better to get half of the right thing available so that it spreads like a virus. Once people are hooked on it, take the time to improve it to 90% of the right thing.
This seems like an excellent time to segue into the use of modern AI, and in particular, LLMs that write code for you, and especially, the rise of “vibecoding.”²
By any measure AI for coding has been an absolute smash success. A recent Wired survey of 730 software engineers found that nearly 75% use LLMs at least weekly. 75%! That is an astonishing rise of a technology that basically didn’t exist 25 months ago.
It’s easy to see why. Even critics agree “LLMs have definitely been useful for writing minor features or for getting the people inexperienced with programming/with a specific library/with a specific codebase get started easier and learn faster.” Even if you reject the approach of ‘Fix using Copilot,’ you learn from it. Everyone loves being able to ask Claude about some esoteric Kubernetes config thing, or some subtle React rendering bug, and get a knowledgeable answer, with sample code, in seconds.
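(A hypothetical example of the kind of subtle React rendering bug I mean, sketched by me rather than drawn from any of the surveys above: a stale closure inside an effect. Paste this at Claude and it will diagnose it, correctly, in seconds.)

```jsx
import { useEffect, useState } from "react";

// A classic stale-closure bug: the effect runs once (empty dependency
// array), so the interval callback captures the initial `count` forever
// and the counter sticks at 1.
function Counter() {
  const [count, setCount] = useState(0);

  useEffect(() => {
    const id = setInterval(() => setCount(count + 1), 1000); // `count` is always 0 here
    return () => clearInterval(id);
  }, []);

  return <p>Seconds: {count}</p>;
}

// The fix an LLM will typically suggest: the functional updater, which
// always sees the latest state.
//   setInterval(() => setCount((c) => c + 1), 1000);
```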
But at the same time, criticism of LLM-written code is fervent and voluminous. Google’s 2024 DevOps Research and Assessment report says bluntly: “AI is hurting delivery performance” (i.e., code stability). GitClear reports AI has caused code duplication and churn to skyrocket. A year ago “Devin, the first AI software engineer” was introduced; now we ask “WTF happened to them?”, while anecdotally, people tried them and promptly churned. Critics accuse “AI slopcode” of being bloated, duplicative, hard to understand, hard to maintain, hard to test, a security nightmare.
Are these criticisms valid? The surprising answer is: it doesn’t really matter.
It’s absolutely true that while LLM coding is simple to use and widely available, it often falters or even fails at key aspects of software engineering. If you zoom out and consider concision, readability, maintainability, security, and testability, AI doesn’t (yet) write software the right way.
Instead it does, like, I don’t know, maybe half the right thing—
…wait. Hang on. That sounds familiar:
The lesson to be learned … is that it is often undesirable to go for the right thing first. It is better to get half of the right thing available so that it spreads like a virus. Once people are hooked on it, take the time to improve it to 90% of the right thing.
AI is Worse Is Better; and for better or worse, Worse Is Better wins.
This is not least because Worse Is Better will only get better and better. Eventually AI will simply be Better. But regardless of whether that takes months, years, or even decades, we’re already well past the point at which its ubiquity becomes inevitable.
Another interesting Wired data point is that early-stage and experienced engineers tend to be AI optimists; only the mid-career group trends pessimistic. That tracks. Juniors simply think AI is better; by mid-career, you can see how it’s worse; but us grizzled vets, we understand that Worse Is Better.
Today software, tomorrow the software-eaten world
Software is the most immediate widespread practical use of modern AI, but make no mistake, this applies, or will soon apply, almost everywhere. Law, science, medicine, research, home repair, travel planning, budgeting, you name it. Yes, LLMs are unreliable and devoid of new insights. But they are also often right, always available, quick to reply, surprisingly knowledgeable, and infinitely patient.
Those two cons are nontrivial; but those five pros are pretty big deals! They mean that in almost any field, half the right thing is suddenly on tap, via the simplest of interfaces, for only twenty bucks a month. Is half the right thing worse? Sometimes! Maybe even often! But we know that it is also Worse Is Better.
Given the choice between medical advice from a doctor and from ChatGPT, you should of course choose the former. But you should also not lose sight of the fact that many, many people don’t have that choice, because medical professionals are inaccessible, or too expensive, or too harried. Is getting medical advice from an LLM a good idea? No. (Not yet.) But the reality is that many, many people will judge it—often correctly—the least bad idea. Repeat for all other fields of human knowledge.
So hearken not unto the lamentations of the anti-AI scolds. They are just like the lamentations of the Lispers in the eighties, who were Not Even Wrong. According to their design philosophy, a.k.a. their value system, AI is indeed bad. But:
Arthur: “I think we have different value systems.”
Ford: “Well mine’s better.”
Arthur: “That’s according to your … oh, never mind.”
— Douglas Adams, Mostly Harmless
¹ The essay also features a Satoshi Nakamoto-esque pseudonym “Nickieben Bourbaki” and references to further essays entitled “Worse Is Better Is Worse,” “Is Worse Really Better?”, “Is Worse (Still) Better?”, and “Worse (Still) Is Better!” It’s all very Gene Wolfe.
² Popularized and I think also coined by the great Andrej Karpathy, an increasingly accomplished neologist; he was also the first to use the term “hallucination” to describe overly inventive LLM outputs.