(or, the Moore's Law of AI assistance)
"At first they couldn't believe a compiler could produce code good enough to replace hand coding. Then, when it did, they didn't believe anyone would actually use it."
— sentiment attributed to John Backus describing FORTRAN's reception in 1957
The pattern keeps repeating.
Independent work
I've been noticing something over the past few years working with AI tools: the amount of time they can work productively alone keeps expanding.
Not "alone" as in isolated, but rather: how long before it needs a human in the loop. Before it have to stop and ask me, "does this make sense?" or "how do I approach this?" or "can you review this?"
Last year, Claude could help me write a paragraph before I'd need to step back and reassess.
Around March this year, maybe a whole piece ... but I'd still be constantly course-correcting, re-prompting, fact-checking against my own knowledge.
Now ... I'm building entire projects, writing substantial pieces, researching topics I barely understand ... and the sessions are longer. Not just faster. Actually sustained, coherent work over hours instead of minutes.
The time horizon of effective independent work seems to double roughly every 10 months.
This is just observation, not science. I haven't measured this rigorously. But the pattern feels real enough to name.
The measurement isn't model size. It's what I'll call the Mean Time Between Interventions (MTBI): how many uninterrupted minutes of work can you sustain before needing human feedback?
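To make that concrete, here's a minimal sketch of how you might compute the metric from your own session notes. The `Session` structure, the `mtbi_minutes` helper, and the numbers are all invented for illustration; no actual tool logs interventions this way.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Session:
    """One working session with an AI tool (hypothetical log format)."""
    start: datetime
    end: datetime
    interventions: int = 0  # times I had to stop and redirect, review, or re-prompt

def mtbi_minutes(session: Session) -> float:
    """Mean Time Between Interventions, in minutes.

    Convention: n interventions split a session into n + 1 stretches of
    uninterrupted work, so a zero-intervention session scores its full duration.
    """
    duration = (session.end - session.start).total_seconds() / 60
    return duration / (session.interventions + 1)

# A three-hour session where I stepped in twice -> 60-minute MTBI.
s = Session(datetime(2025, 6, 1, 9, 0), datetime(2025, 6, 1, 12, 0), interventions=2)
print(round(mtbi_minutes(s)))  # 60
```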
The FORTRAN parallel
When Backus and his team at IBM released the first FORTRAN compiler in 1957, the programming community went through two stages of disbelief:
Stage one: "This is impossible. No compiler can match hand-coded assembly."
Before FORTRAN, programming meant writing in machine code or assembly: what Backus called "hand-to-hand combat with the machine." The prevailing wisdom was that only human experts, intimately familiar with the hardware's quirks, could write efficient code. Compilers would produce slow, bloated garbage.
Stage two: "Okay, it works... but no serious programmer would actually use it."
Even after FORTRAN proved it could generate code nearly as fast as hand-crafted assembly (reducing 1,000 assembly instructions to about 50 FORTRAN statements), many programmers resisted. They saw themselves as a "priesthood" (Backus's term) with specialized knowledge. Why give that up?
We're in a similar moment now with AI coding assistants. The arguments are nearly identical:
- "LLMs can't possibly understand the real problem"
- "It will just generate inefficient, buggy code"
- "Real programmers will always prefer to write code themselves"
(Some of this is true. Some of it was true about early compilers too. The question is: which parts stop being true, and how fast?)
The modern-day equivalent of the 'hand-coded assembly' argument is the 'you still need a senior engineer to guide the responses and verify the output' argument. True, yes, but missing the point.
What's actually changing
The above may seem a bit wishy-washy, and I wanted to lead with my subjective experience. But there are concrete ways the tools have been improving:
Context windows expanding: You can have an actual extended conversation rather than constantly re-explaining yourself.
(we've gone from ~4K (GPT-3.5) to ~128K-200K (GPT-4 Turbo/Claude 3) to ~1M (Gemini 2.5 Pro; Claude Sonnet 4 in beta) in roughly two years)
Memory persistence: Tools remember previous sessions. Less "who are you again?" tax.
Multi-source orchestration: The tools can now hold code + docs + conversation + search results in play simultaneously without dropping threads.
Error recovery improving: When the model gets confused or hallucinates, it's easier to course-correct without restarting the entire session.
All of this adds up to: longer stretches of "flow state" where the AI is genuinely useful rather than just occasionally helpful.
Why this matters (maybe)
The original Moore's Law wasn't actually a law: it was an observation about exponential trends in transistor density that held for decades. Then it didn't.
This might be similar. An observed exponential that works until it doesn't.
The ceiling here isn't hardware-bounded: it's trust-bounded, cognition-bounded, epistemic-bounded. At some point, "effective solitude" might plateau not because the tools can't do more, but because humans need collaboration rhythms that don't scale infinitely.
Or maybe the pattern shifts the same way FORTRAN did: the tools become so reliable that the resistance fades, adoption accelerates, and the nature of the work itself changes. What felt like "giving up control" becomes "focusing on what matters."
(I don't have a neat answer here. We'll see.)
The naming problem
"Moore's Law of AI Assistance" is descriptive but clunky.
"The Solitude Curve" or "The Autonomy Gradient" feel better: they capture the phenomenon without overcommitting to mechanism.
Whatever the name, the measurement isn't model size or parameter count. It's the MTBI: how many uninterrupted minutes of meaningful work can you sustain with AI assistance before needing human feedback?
That number is going up exponentially (for now!).
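If you take the 10-month doubling at face value, the projection is simple compound growth. The 5-minute baseline below is an assumption for illustration, not a measurement:

```python
def projected_mtbi(baseline_min: float, months: float, doubling_months: float = 10) -> float:
    """Project MTBI under the observed doubling law: T(t) = T0 * 2^(t / d)."""
    return baseline_min * 2 ** (months / doubling_months)

# Starting from an assumed 5-minute MTBI, purely illustrative:
for m in (0, 10, 20, 30):
    print(f"{m:2d} months -> {projected_mtbi(5, m):.0f} min")
# 0 -> 5, 10 -> 10, 20 -> 20, 30 -> 40
```

The baseline and the doubling period are, of course, the two assumptions doing all the work here.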
On laziness and leverage
"Much of my work has come from being lazy. I didn't like writing programs, so I started work on a system to make them easier to write."
— John Backus
That's the common thread ... tools that let you work longer before hitting friction. Before needing to ask for help ... or before the system breaks down and forces you back to hand-coding reality 😄
The exponential here isn't about making computers faster. It's about making humans more capable of sustained independent thought, by removing the bookkeeping, the context-switching, the "wait, how do I do this again?" overhead.
Whether we're in 1957 looking at the first compiler, or 2025 looking at AI assistants, the dynamic is similar: automation that extends how long you can work productively without external dependencies.
We'll re-evaluate in 10 months and see if the curve holds.
Note: This is not about AGI. I don't think AGI is happening, and I don't believe it will. All of this is utility independent of AGI (more on that later).