Of Agents and Agency
AI can't just do things (yet).
LLMs are tools that often masquerade as solutions, an unfortunate state of affairs which has led to remarkable amounts of confusion, frustration, rage, and opprobrium. Almost all of the criticisms aimed at their outputs (as opposed to their training) stem from this deceptive category error.
This is doubly true of LLM “agents,” ably defined by Simon Willison as LLMs “running tools in a loop to achieve a goal.” You might naïvely expect that goal, the output of an agent, to be a solution to the problem it was set! You would be wrong. Such outputs are also and only tools: first drafts, summaries, prototypes, sketches. Important, and often immensely time- and labor-saving, steps towards a solution! But not solutions themselves; not until you have verified … and almost invariably tweaked, rejigged, restructured, rewritten, and/or scrapped entire iterations of … whatever the LLM generated.
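To make Willison's definition concrete, here is a minimal sketch of "running tools in a loop" in Python. Everything in it (the call_model stub, the toy search_docs tool, the stopping condition) is a hypothetical stand-in, not any particular vendor's API:

```python
# A toy agent: an LLM "running tools in a loop to achieve a goal."
# call_model() is a hypothetical stand-in for a real LLM API call,
# stubbed here so the sketch runs end to end.

def search_docs(query: str) -> str:
    """A toy tool the model can choose to invoke."""
    return f"(top search results for {query!r})"

TOOLS = {"search_docs": search_docs}

def call_model(messages: list[dict]) -> dict:
    """Hypothetical model call. A real agent would query an LLM here;
    this stub asks for one tool call, then declares the goal met."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "search_docs", "args": {"query": messages[0]["content"]}}
    return {"answer": "A draft answer based on the tool results."}

def run_agent(goal: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):                  # the loop
        reply = call_model(messages)
        if "answer" in reply:                   # model believes the goal is met
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])         # run the chosen tool...
        messages.append({"role": "tool", "content": result})  # ...and go around again
    return "(gave up: step budget exhausted)"

print(run_agent("summarize our Q3 churn numbers"))
```

Note what run_agent hands back: a draft, not a verified solution. The verification step is not in the loop; it is yours.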
Verification is much faster than generation. Editing is faster than writing. It is far easier to critique than to create. This is why LLMs can save so much time and effort. But you cannot evaluate a tool you do not understand; to verify, edit, or critique well, you still need deep expertise in the subject.
But if you don’t have that expertise, LLM outputs almost always seem like complete solutions. And it’s human nature — from the student generating an essay on Thoreau, to the lawyer filing a submission laced with hallucinations, to the dev vibecoding a Gordian thicket of React — to want them to be. LLMs can explain Thoreau to that student in detail and nuance, or help structure that lawyer’s argument and first draft and boilerplate language, or empower that dev to craft an elegant front end in half the time! …But that’s so much more work than simply treating their outputs like solutions.
This is all so obvious to people who are good with LLMs that it goes without saying … and so unobvious to people who are not that it gets them into all kinds of trouble. Such people often make the category error of thinking of LLMs as computers. You don’t need to evaluate the output of a calculator or computer; garbage-in-garbage-out is true, but so is its converse: give a calculator or computer quality input, and its deterministic output will be a reliable solution to the problem you gave it.
Put another way: LLMs, and especially LLM agents, can generate the illusion of agency.
Agency
Setting aside medical conditions, the unhappiest people tend to be those with the least agency. (You probably only need to glance at your social media platform of choice for rapid confirmation of this thesis.) Conversely, happy people have agency. Hence the Silicon Valley chorus of “You can just do things.” Which is true and important!
But when making … or, one might say, generating … things, agency requires expertise. You don’t get it by default. It has to be earned. This is not an entirely popular stance, and maybe one day it will be less true; but given the tools available today, very much including LLMs, it remains the case.
As such, the dark side of “You can just do things” is “I can do anything.” LLMs can really accentuate this dark side by masquerading as solutions. Note, however, that these false solutions are at least arguably symptoms of the larger problem of rejecting expertise, a problem which seems pretty widespread:
it has become cliché to observe that many people don’t just ignore experts, but proactively reject them, when it comes to news / facts / information
it is sadly also cliché to observe that this is increasingly true not just of people but of governments
as an example, educators report that (some!) students — not “this generation,” but suddenly, over just the last few years — are uninterested in learning how to read, write, or even punctuate.
People want to procure the many rewards of expertise without earning them, and if you treat LLMs as solutions rather than tools, they seem to offer those rewards.
But the flip side is that for people who do have expertise, LLMs are (at least in a large and growing number of fields) a remarkable force multiplier for their agency … and, importantly, this is no less true for people who seek expertise.
Agents
None of this should be particularly surprising. Of course agents are tools; it’s right in the name, just as CIA agents are, at least notionally, tools of the US government. Very sophisticated tools, but tools nonetheless.
Right now, useful LLM agents are relatively thin on the ground:
deep research agents
software development agents
self-driving cars (Waymos make extensive use of transformers, the same technical architecture that powers LLMs)
But two interesting things are happening simultaneously. The first is the expansion of LLM agents into other fields. One can imagine agents that would generate useful spreadsheets from slews of raw business data, for instance; of course transformer-driven robots are also agents … and those are just two examples among many.
The even more interesting thing is the rise of “planning agents” which delegate tasks to other sub-agents. Again, such planning agents are just “LLMs running tools in a loop to achieve a goal” — but here those tools are LLM agents themselves.
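As a sketch of that idea, with the caveat that every name here is hypothetical: a planning agent is the same loop one level up, except its "tools" are sub-agents.

```python
# A toy planning agent: the same "tools in a loop" pattern, except that
# its tools are themselves agents. plan() is a hypothetical stand-in for
# a planner LLM that decomposes the goal into sub-tasks.

def run_sub_agent(task: str) -> str:
    """Stand-in for a full single-agent loop like the earlier sketch."""
    return f"(sub-agent draft for {task!r})"

SUB_AGENTS = {"research": run_sub_agent, "draft": run_sub_agent}

def plan(goal: str) -> list[tuple[str, str]]:
    """Hypothetical planner call: a real one would ask an LLM to
    decompose the goal; this stub hard-codes a two-step plan."""
    return [("research", f"gather sources on {goal}"),
            ("draft", f"write a summary of {goal}")]

def run_planner(goal: str) -> list[str]:
    outputs = []
    for agent_name, task in plan(goal):               # the outer loop
        outputs.append(SUB_AGENTS[agent_name](task))  # delegate to a sub-agent
    return outputs                                    # still drafts, not solutions

print(run_planner("the reliability of LLM agents"))
```

The delegation changes what the tools are, not what the outputs are.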
The obvious question is: might we get to a point at which those planning agents are good and smart enough that their outputs become, in fact, solutions not tools? Might we one day be able to orchestrate suites of LLM agents so that collectively their outputs are as reliable as those of calculators?
…Maybe? …In certain limited fields and circumstances? …If and when the underlying models improve a lot? But I’m not holding my breath. You can rely on LLMs being accelerants for expertise, not replacements, for at least the foreseeable future. Which, perhaps contrary to some narratives you may have heard, makes all your hard-earned expertise more valuable, not less.