Simulation, psychosis, and trajectory
Watch out for that Voight-Kampff-Dunning-Kruger test.
“You’re in a desert, walking along in the sand, when all of the sudden you look down, and you see a tortoise. It’s crawling toward you. You reach down and you flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can’t. Not without your help. But you’re not helping. Why is that?”
A question from Blade Runner’s famous Voight-Kampff test, designed to distinguish humans from replicants. We’re all running miniature internal versions of that test today, like resident background processes, activated whenever we interact with text, images, or video, trying to answer: is this real, or AI-generated?
…Do we need to be running such tests on our thoughts, too? Or even … our realities?
Psychosis
Let’s address the moral panic first. There is clearly a growing number of cases of “AI psychosis,” in which interactions with chatbots are directly associated with profound delusions — interestingly, delusions mostly drawn from a fixed subset:
Morrin and his colleagues found three common themes among these delusional spirals. People often believe they have experienced a metaphysical revelation about the nature of reality. They may also believe that the AI is sentient or divine. Or they may form a romantic bond or other attachment to it.
‘Directly associated’ may sound like weasel wording, but there really is quite a lot of nuance here:
The technology might not introduce the delusion, but the person tells the computer it’s their reality and the computer accepts it as truth and reflects it back, so it’s complicit in cycling that delusion.
This seems to have been most prevalent with an infamously sycophantic iteration of OpenAI’s GPT-4o, the discontinuation of which is estimated to have reduced “responses that do not fully comply with desired behavior under our taxonomies for challenging conversations related to mental health issues” (OK, that is eyebrow-raising wording) by two-thirds. But the phenomenon predates that. Famously, a Google engineer began to believe in 2022, before GPT-4 finished training or ChatGPT launched, that their now-hilariously-crude-and-dumb LaMDA model was sentient.
Today, according to OpenAI,
our initial analysis estimates that around 0.07% of users active in a given week and 0.01% of messages indicate possible signs of mental health emergencies related to psychosis or mania.
Is that a lot? Well, 0.07% of OpenAI’s 900 million weekly users is 630,000 people, so as an absolute number, yes! (It even includes at least one prominent OpenAI investor.) Is that a lot relative to the prevalence of psychosis or mania in the population as a whole? On the one hand, 0.07% works out to 70 per 100,000, and “studies estimate that between 15 and 100 people out of 100,000 develop psychosis each year,” so it’s smack in-distribution of the population number without even considering mania, which is more common.
On the other hand, the extent of chatbot usage varies, and there must be casual users who experience psychosis without it ever coming up in their AI chats. Overall it is not currently obvious whether chatbot usage is associated with an increase, decrease, or no change in such episodes. Well-designed studies that could measure this would be very helpful!
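(For the back-of-envelope inclined, here’s that arithmetic spelled out as a quick sketch, using only the figures quoted above. The comparison is loose, to be clear: OpenAI’s number is a weekly rate of flagged users, while the 15–100 figure is annual incidence of new psychosis cases.)

```python
# Back-of-envelope check using only the publicly quoted figures above.
weekly_active_users = 900_000_000   # OpenAI's reported weekly active users
flagged_rate = 0.0007               # 0.07% of weekly users, per OpenAI's estimate

flagged_users = weekly_active_users * flagged_rate
rate_per_100k = flagged_rate * 100_000   # same rate, expressed per 100,000 people

print(f"{flagged_users:,.0f} users per week")   # 630,000
print(f"{rate_per_100k:.0f} per 100,000")       # 70 -- vs. the quoted 15-100 per
                                                # 100,000 *per year* for psychosis
```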
In the interim, this must all be considered in light of the fact that every new form of media — novels, movies, comic books, video games, social media, and now AI chat — inevitably comes with a moral panic about how it’s causing madness, deviance, violence, and horrific tragedies, particularly among youth:
Moral panics often arise from concerns that are initially moderate and reasonable. But intense focus on the concerns turns them into fears, which are augmented and spread … Evidence tending to disconfirm the fear is overlooked or dismissed … The stories come to be understood as laws of nature … Data that contradict the stories must be wrong …
Some now-darkly-amusing examples:
If you wanted to reduce petty theft and teen pregnancy in Victorian times, all you had to do was stop the publication of “penny dreadfuls” in England or “dime novels” in America … these serialized dramas arguably did more than any other societal change to promote literacy among working-class kids … “penny dreadfuls diffuse subtle poison among tens of thousands of youthful readers. They bring wreckage and havoc … and ruination to hundreds of our brightest and best lads and lasses” … An example of how the press promoted the panic was a news story attributing a 14-year-old boy’s suicide to “a period of mental aberration caused by reading dime novels.”
As an author of, uh, fast-paced wild-ride novels, I take this a little personally!
In 1931, gangster movies, which were seen as most dangerous, were banned outright in [several towns in America] … Kids in reform schools were interviewed to understand how their movie watching might have landed them there … Dr. Mary Preston wrote that three-quarters of the children she studied were “addicted” to bad radio and movies: “This atrophy leaves scar tissue in the form of a hardness, an intense selfishness, even mercilessness”
(Much more in the quoted piece by Peter Gray, which I recommend.)
All that said, I increasingly believe there are also subtler, more nuanced ways in which chatbots can … maybe not morph our reality, exactly, but quietly reshape the lenses through which we view it.
Of course, this too is not exactly original to them.
The Simulation Solution
I talked about my latest novel Exadelic at Stanford last week, where I articulated something I hadn’t really put into words before. It’s a book about a whole host of things (it has all the science fiction: time travel, aliens, space travel, parallel dimensions…) but more than anything else it’s about AI and the simulation hypothesis, the notion that our universe is software running on some ubercomputer in a higher reality.
I do not actually think we live in a simulation. But despite that I do think the simulation hypothesis is a fantastic metaphor for our world. We spend more than a third of our waking lives online. We know, understand, and interact with the world through our screens. Our reality isn’t software per se, but it very much is mediated by software — to the extent that you can even jump between different mediated realities! Go to a left-wing political discussion site and then a right-wing one, and you haven’t just spanned two sets of opinions; you have crossed between two different worlds.
There has been much wailing and gnashing of teeth about the fragmentation of our culture into de facto separate realities since well before the ChatGPT moment. Do LLMs make things worse? Well. To some nontrivial extent they make things better. They may hallucinate details nonstop, but on a higher level their collective reality tends to be a great deal more consistent than ours … at least to date.
But: my go-to one-line explanation of both the triumphs and tragedies of modern AI is: “AI is a tool that masquerades as a solution.” Interpreting AI outputs as complete and viable solutions tends to be extremely tempting … and also another example of choosing a reality that diverges from the actual ground-truth facts.
This brings us to a different, far milder, more entertaining, and, in at least some cases, possibly long-term positive form of AI psychosis: delusions of AI technical grandeur.
Trajectory
“They thought they were making technological breakthroughs. It was an AI-sparked delusion,” warns CNN:
As James worked on the AI’s new “home” – the computer in the basement – copy-pasting shell commands and Python scripts into a Linux environment, the chatbot coached him “every step of the way.” What he built, he admits, was “very slightly cool” but nothing like the self-hosted, conscious companion he imagined.
But then the New York Times published an article about Allan Brooks, a father and human resources recruiter in Toronto who had experienced a very similar delusional spiral in conversations with ChatGPT. The chatbot led him to believe he had discovered a massive cybersecurity vulnerability.
The belief that one has achieved a technical breakthrough is a whole subgenre of ChatGPT delusion … and one not limited to the naive. Late in 2025 a former DeepMind engineer seemed pretty convinced he was about to solve the famous (in some circles) Navier-Stokes millennium problem …
…but not so much. Earlier last year, Travis Kalanick, billionaire founder and former CEO of Uber, reported:
I’ll go down this thread with GPT or Grok and I’ll start to get to the edge of what’s known in quantum physics and then I’m doing the equivalent of vibe coding, except it’s vibe physics … And we’re approaching what’s known. And I’m trying to poke and see if there’s breakthroughs to be had. And I’ve gotten pretty damn close to some interesting breakthroughs just doing that.
Steve Yegge’s “Gas Town” project has led to warnings of agent psychosis:
The thing is that the dopamine hit from working with these agents is so very real. I’ve been there! You feel productive, you feel like everything is amazing, and if you hang out just with people that are into that stuff too, without any checks, you go deeper and deeper into the belief that this all makes perfect sense. You can build entire projects without any real reality check. But it’s decoupled from any external validation. For as long as nobody looks under the hood, you’re good. But when an outsider first pokes at it, it looks pretty crazy … you can see similar things in some of the AI builder circles on Discord and X where people hype each other up with their creations, without much critical thinking and sanity checking
I like Gas Town and think it’s really interesting, to be clear. And I think some of this … “Claude Code euphoria” … is good and healthy, in the same way that most-to-all great hackers went through a similarly quasi-crazed larval stage in their youth.
But I also think it’s instructive that Gas Town, like almost all of the “agent psychosis” projects, is very explicitly a work in progress. It seems that people subject to this particular AI fever don’t believe they have actually stormed the heavens and achieved greatness; at least, not yet. What they’re convinced of is more subtle: that they are on a trajectory to some kind of apotheosis.
To understate: we are not going to see less of this. We do love our trajectories. As agentic AI diffuses out into the world, for more and more people it will be a test: will you do the hard work to curate, edit, translate, and refactor the (often legitimately mindblowing!) results into the real world … and sometimes reject them, or accept they’re a dead end? Or will you succumb to the temptation to treat it as a solution, or at least a shining path towards a solution, and stay in the AI replicant reality?
We’re all going to have to watch out for that Voight-Kampff-Dunning-Kruger test. Asking yourself whether you’re failing it might help you pass! …But probably won’t guarantee it. As a species, as the cliché goes, we can resist everything except temptation.





