PURE SIGNAL February 15, 2026

The frontier AI community is grappling with a paradox. The tools that were supposed to liberate us are creating new forms of exhaustion and debt.

The Burnout Paradox: When AI Makes Work Harder

Steve Yegge has coined a term that's resonating across Silicon Valley: the AI Vampire. It names the phenomenon where AI productivity gains are captured entirely by employers while workers shoulder the cognitive overload.

Here's the cruel math. You adopt AI and work at ten times productivity for eight hours a day. Your company captures one hundred percent of that value. You get nothing extra, certainly not nine times your salary. Meanwhile, everyone hates you for making them look bad. And you're exhausted.
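The Vampire arithmetic can be made concrete. A toy calculation, using only the hypothetical figures from the scenario above:

```python
# Toy illustration of the "AI Vampire" math described above.
# Every number here is the hypothetical from the scenario, not real data.

baseline_output = 1.0   # units of value per day before AI
multiplier = 10         # tenfold productivity with AI tooling
worker_share = 0.0      # scenario: the employer captures 100% of the gain

total_output = baseline_output * multiplier
gain = total_output - baseline_output    # 9 extra units of value created
worker_gain = gain * worker_share        # the worker's cut of the upside
employer_gain = gain - worker_gain       # everything else goes to the employer

print(f"extra value: {gain}x, worker's cut: {worker_gain}x, employer's cut: {employer_gain}x")
```

Nine units of new value, zero flowing back to the person doing the harder work: that is the whole complaint in three lines of arithmetic.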

Simon Willison connects this to what he calls "AI turning us all into Jeff Bezos." The easy work gets automated away. What's left? All the difficult decisions, summaries, and complex problem-solving. The cognitive load is intense.

Yegge reports needing more sleep due to the demands of agentic engineering. Four hours of agent work per day is the realistic sustainable pace; even experienced practitioners are comfortable at that intensity only in short bursts.

This isn't just about productivity tools. It's about a fundamental shift in what human work looks like when machines handle the routine tasks.

Deep Blue: The Existential Crisis in Code

The Oxide and Friends podcast recently coined another term that's spreading: Deep Blue. It captures the psychological ennui, shading into existential dread, that software developers feel as AI encroaches on their field.

Willison experienced his first bout of Deep Blue back in early 2023. He uploaded San Francisco police incident reports to ChatGPT Code Interpreter. Hundreds of thousands of rows. The AI did every piece of data cleanup and analysis on his roadmap for the next few years, with just a couple of prompts.
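The kind of cleanup the model automated is routine in shape, which is exactly why it automated so well. A minimal stdlib sketch of that grunt work, using a hypothetical two-column sample (the real SFPD schema differs):

```python
import csv
import io
from collections import Counter

# Hypothetical sample standing in for a police incident export;
# the actual SFPD columns and values are different.
raw = """category,date
Larceny Theft,2023/01/05
larceny theft,2023-01-06
Assault,2023-01-06
"""

def clean(rows):
    """Normalize casing and date separators: the classic data-janitor work."""
    for row in rows:
        row["category"] = row["category"].strip().title()
        row["date"] = row["date"].replace("/", "-")
        yield row

rows = list(clean(csv.DictReader(io.StringIO(raw))))
counts = Counter(r["category"] for r in rows)
print(counts.most_common(1))  # the most frequent incident category
```

Multiply this by a few hundred thousand rows and a few dozen inconsistencies and you have years of a data journalist's roadmap, which a coding agent now clears in minutes.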

Two competing thoughts hit him simultaneously. As someone wanting journalists to do more with data, this felt like a breakthrough. But personally? What was he even for anymore?

The latest generation of coding agents, Claude Opus 4.5 and GPT-5.3, can churn away for hours producing working, documented, fully tested software. The old dismissal that "the code they write isn't any good" doesn't cut it anymore.

Becoming a professional software engineer is hard. It takes years of dedicated work to get good enough for people to pay you. The profession is well compensated, free of gatekeepers, and it rewards the nerds. The idea that a chatbot could strip all of this away is deeply upsetting.

But here's the thing: chess players went through this when Deep Blue defeated Garry Kasparov in 1997, and Go players went through it when AlphaGo beat Lee Sedol in 2016. They came out stronger.

Cognitive Debt: The Hidden Cost of Going Fast

Margaret-Anne Storey has identified what might be the core issue: cognitive debt. Unlike technical debt, which lives in the code, cognitive debt lives in developers' brains.

Even if AI agents produce clean, understandable code, humans lose the plot. They don't understand what the program is supposed to do, how their intentions were implemented, or how to change it.

Storey coached a student team that hit a wall by week seven. They couldn't make simple changes without breaking something unexpected. The code wasn't the problem—their shared understanding had fragmented entirely. They'd accumulated cognitive debt faster than technical debt.

Willison has experienced this firsthand with his vibe-coding experiments. Prompting entire features into existence without reviewing implementations works surprisingly well initially. But eventually, he loses his mental model of what his own projects can do. Each additional feature becomes harder to reason about.

The Return to Curation

Interestingly, a counter-movement is emerging. Andrej Karpathy, the mind behind vibe coding, is advocating for RSS feeds as an antidote to AI slop.

The internet is flooding with low-quality, AI-generated content designed to attract clicks rather than inform. Social media algorithms reward sensational content over thoughtful work. The solution? Direct subscription to trusted sources through RSS, cutting out algorithmic intermediaries.
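Part of RSS's appeal is how little machinery it needs: a feed is just XML you fetch and parse yourself, with no ranking algorithm in between. A minimal stdlib sketch (the feed content below is a placeholder, not a real feed):

```python
import xml.etree.ElementTree as ET

# Placeholder feed standing in for a real fetch (e.g. via urllib.request).
rss = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>Post one</title><link>https://example.com/1</link></item>
    <item><title>Post two</title><link>https://example.com/2</link></item>
  </channel>
</rss>"""

root = ET.fromstring(rss)
# Every <item> in the channel, in the order the author published them.
items = [(item.findtext("title"), item.findtext("link"))
         for item in root.iter("item")]
for title, link in items:
    print(f"{title} -> {link}")
```

No engagement score, no recommendation model: you see what your chosen sources published, in the order they published it.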

Karpathy isn't anti-AI. His concern is about incentive structures. Any platform with the same engagement-driven model will eventually converge to the same attention-seeking black hole.

Meanwhile, browser vendors are collaborating on Interop 2026, an effort to bring web standards to cross-browser parity. It's a reminder that some progress still happens through careful, coordinated human effort.

The frontier isn't just about building more powerful AI. It's about learning to work with these tools without losing ourselves in the process.