Prelude: The Illusion of Competence
There is something wrong with how we're using AI. Not the technology itself, but what it's doing to the people wielding it.
I noticed it first in myself. Then in junior developers I mentor. Then everywhere I looked.
The pattern is always the same. Someone uses AI to generate code they don't understand. The code works. They ship it. Then something breaks, and they have no idea why. They can't debug it because they never understood it in the first place.
We're witnessing a fundamental decoupling of "doing" from "understanding." For decades, the act of producing work (writing code, designing systems, debugging failures) was the primary engine of learning. The friction of the process was where the cognition happened. Today, that friction has been lubricated out of existence.
The prevailing narrative suggests AI is a "co-pilot" that frees engineers from drudgery to focus on "higher-order thinking."
I think this is a comforting lie. The evidence suggests we are not freeing minds. We are atrophying them. We are building the foundation for a brutal new divide, not between the haves and the have-nots, but between those who can think independently and those who cannot function without the machine.
The Orthodoxy: Why "AI as Co-pilot" is Misleading
The standard defence of AI coding assistants relies on a simplified view of how learning works. The argument goes like this.
Working memory is finite. Learning is hindered by unnecessary mental effort. AI removes that unnecessary effort. Therefore, AI frees up mental bandwidth for real understanding.
It is a seductive proposition. If a calculator frees you from long division to understand calculus, surely Copilot frees you from syntax to understand architecture?
Industry leaders push this narrative aggressively. They envision a world where AI acts as a "collaborative partner," democratising access to high-level output. By removing the "grunt work" of writing code, we are elevating developers to the role of architects.
This view assumes that the "grunt work" is separate from the learning process. It assumes that coding is merely the transcription of thought, rather than the mechanism of thought itself.
This assumption is demonstrably false. Anyone who has actually learned to code knows that understanding comes through the struggle, not despite it.
When Friction is the Feature
The cracks in the efficiency narrative show up not just in anecdote, but in what actually happens when you ship AI-generated code to production.
I've seen this play out repeatedly. A junior developer uses Claude to generate a database query. The query works in development. In production, under load, it brings the database to its knees. The developer has no idea why because they never understood the query plan, the indexing strategy, or why nested subqueries are catastrophic at scale.
They didn't learn. They just produced output.
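To make the shape of that failure concrete, here is a hypothetical sketch with invented table names. Both queries return the same rows; the difference lives in the query plan, which is exactly the part the developer never looked at. How badly the first one behaves depends on the planner, but it is the classic shape of a query that only hurts at scale:

# Hypothetical schema: users and orders, with an index on orders.user_id.

# AI-generated: a correlated subquery that re-scans orders for every user row.
# Harmless against a few hundred dev rows, brutal against millions in production.
SLOW_QUERY = """
SELECT u.id, u.email
FROM users u
WHERE (SELECT COUNT(*) FROM orders o WHERE o.user_id = u.id) > 10
"""

# What someone who has read a query plan would write: aggregate orders once,
# then join on the indexed key.
FAST_QUERY = """
SELECT u.id, u.email
FROM users u
JOIN (
    SELECT user_id
    FROM orders
    GROUP BY user_id
    HAVING COUNT(*) > 10
) heavy ON heavy.user_id = u.id
"""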
The Evidence of Atrophy
Research from MIT found that while AI tools significantly increased the speed and quality of written outputs, they fundamentally altered the user's engagement with the task. The productivity gains came at the cost of the "human struggle" necessary for deep comprehension.
The participants were not "collaborating." They were supervising. And supervision requires a different, often shallower, set of cognitive muscles than creation.
This is what researchers call cognitive offloading. Normally, offloading is beneficial. Writing down a phone number frees mental space. But when the offloading encompasses the entirety of the cognitive process (ideation, structure, implementation, debugging), we enter the realm of cognitive atrophy.
Consider the difference in how understanding develops.
Traditional Learning Loop:
  Input → Internal Processing (Struggle) → Synthesis → Output
                        ↑
     Schema Construction (Long-term Memory)

AI-Mediated Loop:
  Input → Prompt Engineering → AI Processing → Output
                                                  ↓
                                   Surface Review (Does it run?)
In the second model, the "Internal Processing" phase, where long-term memory and critical analysis are forged, is bypassed entirely. The developer produces the result but builds no mental model. They have "done" the work, but they have "understood" nothing.
The Death of Understanding Through Shortcuts
The fundamental error in the "AI as Co-pilot" narrative is a misunderstanding of how expertise develops.
Expertise is not just "thinking hard." It is the specific type of mental effort required to create permanent connections in long-term memory. It is the friction. It is the confusion before clarity. It is the frustration of not understanding why your code doesn't work, which forces your brain to search through possibilities, test hypotheses, and strengthen those neural pathways.
When AI provides the answer, the structure, or the working code, it eliminates the frustration. But frustration was the learning.
Metacognitive Laziness
This creates a phenomenon I call metacognitive laziness.
The brain is a miser. It seeks to conserve energy. If an external agent can perform a high-energy task (like figuring out why your async code is racing) with lower energy expenditure (paste error into Claude), the brain will default to the path of least resistance.
This is not a moral failing. It is a biological imperative. But the consequences are severe.
By bypassing the struggle of debugging, developers fail to develop:
- Metacognition. Knowing what they know and what they don't. I've met developers who cannot accurately assess their own skill level because they've never been forced to confront their limitations.
- Pattern recognition. The ability to spot problems before they happen. This only comes from having seen those problems before, in the wild, when they bit you.
- Systems thinking. Understanding how pieces fit together. AI gives you the piece, but not the understanding of where it belongs or why.
We are not creating a generation of architects. We are creating a generation of people who can paste together working code without understanding why it works or how it will fail.
The Production Reality
Here's what I've actually seen happen.
A developer uses AI to build a caching layer. The code is clean. The tests pass. In production, the cache invalidation logic has a subtle bug that causes stale data to persist for hours. The developer cannot fix it because they don't understand the invalidation strategy. They just asked AI for "a caching solution" and got one.
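A minimal sketch of what that kind of bug looks like, with an invented db object and an in-memory dict standing in for Redis: the read path and the write path disagree about the cache key, so nothing ever evicts the stale entry and it survives until the TTL finally expires:

import time

cache = {}                    # stand-in for Redis/Memcached
TTL_SECONDS = 4 * 60 * 60     # stale entries linger for hours if never invalidated

def get_profile(user_id, db):
    key = f"profile:{user_id}"            # read path caches under "profile:<id>"
    entry = cache.get(key)
    if entry and time.time() - entry["at"] < TTL_SECONDS:
        return entry["value"]
    value = db.load_profile(user_id)
    cache[key] = {"value": value, "at": time.time()}
    return value

def update_profile(user_id, fields, db):
    db.save_profile(user_id, fields)
    # BUG: invalidates "user:<id>", but reads are cached under "profile:<id>".
    # The stale profile keeps being served until the TTL runs out, hours later.
    cache.pop(f"user:{user_id}", None)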
Another developer uses AI to write a retry mechanism for API calls. It works. Until the downstream service goes into a degraded state, and the retry logic creates a thundering herd that brings down both systems. The developer had no mental model of backoff strategies, circuit breakers, or distributed systems failure modes.
The code worked. The understanding was absent. Production told the truth.
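For contrast, here is roughly what the missing mental model looks like in code. This is a sketch, not a drop-in library, and the function and exception names are invented: the point is that capping attempts, backing off exponentially, and adding jitter are deliberate decisions about how a dependency is allowed to fail, not boilerplate to be generated:

import random
import time

class TransientError(Exception):
    """Stand-in for whatever your client raises on a retryable failure."""

def call_with_backoff(request, max_attempts=5, base_delay=0.5, max_delay=30.0):
    """Retry a flaky call with capped exponential backoff and full jitter.

    A naive loop that retries immediately turns a degraded dependency into
    a thundering herd; spacing and randomising retries spreads the load.
    """
    for attempt in range(max_attempts):
        try:
            return request()
        except TransientError:
            if attempt == max_attempts - 1:
                raise                              # give up; let the caller decide
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(random.uniform(0, delay))   # full jitter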
The Debugging Deficit
The most telling sign is in debugging. Developers who learned through AI assistance often cannot debug effectively because debugging requires a mental model of what should be happening. If you never built that model, you cannot reason about where it broke.
I've watched developers stare at error messages with no idea where to start. Not because the error is cryptic, but because they have no map of the system in their heads. They generated the map. They never walked it.
# The AI-assisted developer's debugging process
def debug_issue():
    # 1. Copy error message
    # 2. Paste into AI
    # 3. Apply suggested fix
    # 4. If still broken, repeat
    # 5. If still broken after 5 attempts, ask for help
    #
    # What's missing: Understanding WHY it broke
    # What's missing: Building intuition for future bugs
    # What's missing: Learning
    pass
This is not debugging. This is guess-and-check with AI as the guesser. No understanding is built. No expertise accumulates.
The New Class Divide
This leads to the most uncomfortable conclusion of this analysis. We are creating a stark divide, defined not by access to technology, but by the relationship with it.
The Wielders vs. The Replaced
The Wielders are individuals who possess high intrinsic capability before they engage with AI. They have domain expertise. They can code without Copilot, debug without Claude, and think without a chatbot. Because they possess fundamental understanding, they can use AI to accelerate execution. They use AI to handle the "doing" only after they have mastered the "understanding."
The Replaced are those who used AI to bypass the acquisition of fundamental skills. They cannot distinguish a hallucination from correct code because they lack the internal knowledge base to verify it. They are tethered to the machine not as a master, but as a dependent.
                     THE COGNITIVE DIVIDE

  The Wielder (Augmented)            The Replaced (Dependent)
┌─────────────────────────┐         ┌─────────────────────────┐
│     CORE COGNITION      │         │     CORE COGNITION      │
│ [Robust Mental Models]  │         │ [Fragmented Knowledge]  │
│                         │         │                         │
│    [Critical Filter]    │         │  [Passive Acceptance]   │
│            │            │         │            │            │
│            ▼            │         │            ▼            │
│      AI AMPLIFIER       │         │      AI SUBSTITUTE      │
│   (Expands Capacity)    │         │   (Replaces Process)    │
└─────────────────────────┘         └─────────────────────────┘
             │                                   │
Result: 10x Output + Understanding  Result: 1x Output, Zero Understanding
The Professional Consequence
In the professional world, this divide will be ruthless. The economy rewards what is scarce and hard to replace.
If your value is "producing syntactically correct code," you are worth nothing. The AI does that for free. If your value is "implementing features from tickets," it is dropping rapidly. If your value is "debugging production systems under pressure," you're increasingly rare and valuable.
Value will accrue solely to those who can direct, evaluate, debug, and design: skills that can only be developed through the very struggle that AI invites us to skip.
The Way Forward: Artificial Friction
I am not arguing for smashing the servers. AI is here, and it is a powerful tool. But we must stop lying to ourselves about its impact on skill development.
The orthodoxy says AI makes work easier. The truth is that some work should not be easy. The kind of work that rewires brains and builds expertise is inherently difficult. It requires struggle.
The path forward requires intentional friction.
- Learn first, accelerate later. Do not use AI for tasks you cannot do yourself. Once you can do them, use AI to do them faster.
- Debug manually first. When something breaks, resist the urge to paste the error into AI. Sit with the confusion. Build the mental model. Then, if you're stuck, use AI as a hint, not a solution.
- Understand before you ship. If you cannot explain what your code does without looking at it, you should not ship it. Full stop.
- Treat AI output as untrusted. Every line of AI-generated code is a claim that needs verification. If you cannot verify it, you should not use it. One concrete way to do this is sketched after this list.
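Here is that last point made concrete, with an invented slugify function standing in for whatever the assistant produced: write the test yourself, from your own mental model of the behaviour, before the generated code is allowed into the codebase. If you cannot write the test, you do not understand the requirement well enough to accept the implementation:

import re

def slugify(text: str) -> str:
    # Pretend this body came from an assistant; it is the code under review.
    text = text.lower().strip()
    text = re.sub(r"[^a-z0-9]+", "-", text)
    return text.strip("-")

def test_slugify():
    # Written by the human, first, from their own understanding of the requirement.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaced   out  ") == "spaced-out"
    # Idempotence: slugifying a slug must not change it.
    assert slugify(slugify("Already-A-Slug")) == "already-a-slug"

test_slugify()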
The choice facing every developer is stark. We can choose the comfort of cognitive offloading, producing output without understanding. Or we can choose the difficult path of cognitive discipline, wielding AI only when our own minds are strong enough to hold the reins.
One path leads to dependency. The other leads to mastery.
The tools are neutral. How we use them determines what we become.