The interview is dead, long live the interview
Perhaps my most wildly implausible sci-fi concept yet: what if everyone didn't hate the hiring process?
My faith in the youth has been restored. A Columbia student, one Chungin ‘Roy’ Lee, built ‘Interview Coder,’ an AI assistant “invisible to all screen-recording softwares” which answers Big Tech-style ‘leetcode’ interview questions for you, in real time, undetectably. Lee used it to get internship offers from Meta, Amazon, and TikTok, among others, before revealing the extremely funny truth.
This is fantastic. (And perhaps the most punk rock thing an Ivy League CS undergrad has ever done.) Needless to say the companies and the university are apoplectic. Good. Serves them right. The software interview process is thoroughly broken and has been for a long time. One of my all-time most popular TechCrunch pieces, “Why The New Guy Can’t Code,” was about how such interviews are a horrifically poor proxy for actual software engineering; I wrote it fifteen years ago. Perhaps AI will finally be the last much-deserved nail in leetcode’s coffin.
Which raises the burning question: what’s next?
It’s not like anyone really thought leetcode was a good proxy for actual engineering. People and companies kept using it because, despite their distaste, it still seemed the least bad one that offered any efficacy and/or repeatability. Hiring seems like an awful process to those being interviewed; but after being hired you eventually get to the other side of the table … and realize it’s even worse than you thought. The current system is considered a hated but necessary evil by almost everyone trapped in it. The notion of a hiring process that either side, never mind both, actually likes and appreciates … is so divergent from our lived experience it’s almost unimaginable.
Might AI be both destruction and salvation?

(I mean. Of hiring as we know it. For now.)
First principles: have some principles
What are you really doing when hiring someone?
Perhaps you have a specific set of tasks that need doing, and need someone who will perform them in a timely and responsible way. Straightforward, but rare. Perhaps there exists a more general field of tasks that may need doing, so you need someone who can learn how to perform tasks across that field well and quickly. (This is most software jobs.) But when hiring someone more junior, what you’re really hiring for is the willingness to do a lot of grunt work; enough intelligence to direct it in a vaguely productive direction; and the potential to learn how to do more skilled / nuanced work.
For an ever-growing number of fields, modern AI ticks off all three boxes in that last sentence, which is more than a little awkward, especially if you’re a junior engineer looking for a job. We might soon need to add a fourth and fifth box: “the ability to do work that AI can’t do,” and “the ability to manage/orchestrate AI to get tasks done.” But this isn’t a piece about AI replacement. This is a piece about AI allocation.
In some perfect kumbayah world, hiring would describe the process by which potential employers and potential employees happily work together to mutually identify those positions where every individual will flourish and do their best work. To put it mildly, we do not currently live in that world. Instead the hiring process is a toxic brew of rejection, impatience, mismeasurement, anxiety, greed, regret, and guesswork — a migraine-inducing Sorting Hat whose results often seem scarcely better than sortition.
This is not a law of nature! There exist fields where both employer and employee can proceed with considerable confidence, because both have a comprehensible body of work. Sports analogy: when the Golden State Warriors traded for Jimmy Butler, they knew exactly what they were getting—his skills, his stats, his capabilities—and paid accordingly. Yes, of course they had to talk to him too. Yes, there were plenty of intangibles to judge: his age, his attitude, “Playoff Jimmy.” But they already knew, to quite a precise degree, how—and how well—he would contribute.
Semi-similarly, fifteen years ago, I wrote:
Let me offer a humble proposal: don’t interview anyone who hasn’t accomplished anything. Ever. Certificates and degrees are not accomplishments; I mean real-world projects with real-world users. There is no excuse for software developers who don’t have a site, app, or service they can point to and say, “I did this, all by myself!” in a world of [free hosting tiers].
It may seem like AI introduces a new problem—how do you know their accomplishments were theirs?—but in fact this is a very old problem; the world is full of people who took credit for work done by their team, or open-source contributors, or outsourcers. What AI may actually introduce is, instead, an entirely new solution.
I for one welcome our new LLM interrogators
From the hiring side of the table, the reason we do leetcode interviews, and HackerRank problems, and all the deep technical screening, is that even if someone does have a list of accomplishments, in order to go through them in detail, you’d need … like … a whole army of smart interns, able and willing to go through not just GitHub repos but individual GitHub commits, and resulting sites/services, and technical writeups, and blog posts and Substacks and READMEs, in great detail, and identify which contributions were actually theirs, and how good they were. Sure, NBA teams can afford entire divisions of analysts and scouts, but come on, that doesn’t scale, having a whole army of smart interns on tap for such a task is completely unrealis—
—hang on. Wait a second.
“A whole army of smart interns on tap,” you say? Isn’t that exactly what modern AI is?
Suppose that when someone—anyone—applied for a job, an AI-powered system analyzed their LinkedIn, their GitHub, their blog posts, their professional X/Bluesky/Substack if any, and identified and assessed their work. (“What if an LLM actually wrote the code they checked in?” Come on, it’s 2025, LLMs will write most of the code they check in for you, too. So assess whether they orchestrated and synthesized those LLM outputs into something high-quality or … not.) Generate an AI-powered assessment of their accomplishments, just as the Warriors’ scouts and quants analyzed Jimmy Butler and Kevin Durant, because we now live in an era where that kind of analysis scales.
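To make the scoring idea concrete, here’s a toy sketch (entirely hypothetical names and numbers, with the hard part — the actual LLM analysis of commits and posts — stubbed out as pre-computed estimates) of how per-artifact judgments might roll up into an overall assessment:

```python
from dataclasses import dataclass


@dataclass
class Artifact:
    """One piece of a candidate's public body of work."""
    source: str          # e.g. "github_commit", "blog_post" (illustrative labels)
    author_share: float  # estimated fraction genuinely theirs / their-orchestrated, 0..1
    quality: float       # estimated quality of the work itself, 0..1


@dataclass
class Assessment:
    """Aggregates many artifact-level judgments into one score."""
    artifacts: list[Artifact]

    def score(self) -> float:
        # Weight each artifact's quality by how much of it was actually
        # the candidate's own (or own-orchestrated) contribution.
        if not self.artifacts:
            return 0.0
        weighted = sum(a.quality * a.author_share for a in self.artifacts)
        return weighted / len(self.artifacts)


# A solo-authored high-quality commit counts fully; a ghost-written
# blog post counts only for the share the candidate contributed.
candidate = Assessment([
    Artifact("github_commit", author_share=1.0, quality=0.8),
    Artifact("blog_post", author_share=0.5, quality=0.6),
])
print(candidate.score())  # → 0.55
```

The real system would of course be LLM agents reading the actual repos and writeups, not hand-entered floats; the point is only that once artifact-level judgments exist, rolling them up is trivial and scales.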
I know, I know, you’re thinking “So let a biased AI system dictate applicants’ fate?” Nope. Stay with me. It’s a little more nuanced than that.
Point One: after the assessment is generated, the applicant can access it, and can write up their own commentary / response / explanation. Regardless of whether it’s convincing or not, it will be a million times more illuminating than a rote cover letter.
Point Two: a similar-but-different system does the exact same thing for the employer. Analyzes their accomplishments and body of work; their finances and funding, their officers’ backgrounds, their Glassdoor reviews, their media footprint, their blog posts, testimonials-or-criticisms from recent employees, outside assessments of their business model and technical quality from economists and engineers … and makes that available to all would-be employees.
Yes, a lot of employers will not like this one bit. Which is sort of the point. They, too, will be able to explain / clarify / make excuses for this assessment. But both sides will get a thorough, detailed, and not-unbiased-but-consistently-biased AI assessment of the other … and can use that information to optimize their mutual outcome.
I previously co-founded a startup that did something that kind of rhymed with this; it issued AI-generated reports on software projects’ status, progress, and risks. We moved on because, depressingly, it was clear what the market really wanted was for us to tell companies which engineers to lay off, and that wasn’t something we wanted to devote years of our lives to.
But I think it’s only a matter of time before someone flips that script and realizes that the fundamental problem with hiring is information scarcity, and that, as with all such problems, it can now be automated away. Many people today are worried AI means no place for them in the future. Maybe. But maybe the reality is entirely the opposite; maybe AI can help them find the place where they can do their best work.