Prelude

I haven't clicked a button to deploy code in six months.

I used to. We all did. We built elaborate dashboards. We designed "intuitive" interfaces with rounded corners and satisfying hover states. We convinced ourselves that the pinnacle of software engineering was a user experience that guided a human hand to a specific pixel on a screen.

We were wrong.

The old world is collapsing. UX teams. UI frameworks. Backend services. Middleware. We spent decades building elaborate stacks to translate human intent into machine action. Layer upon layer of abstraction.

It is being replaced by something radically simpler. User+Machine. Direct. Unmediated. You tell the machine what you want. The machine figures out the "how."

This isn't just a design trend. It is a fundamental rewriting of how humans interact with computation. We are moving from explicit command (click this, type that, drag here) to declared intent. The interface, once our primary window into the digital world, is becoming a bottleneck.

This terrifies enterprise IT departments. It should.

When you remove the interface, you remove the guardrails. You remove the slow, deliberate friction that prevents a junior developer from deleting the production database. You are handing raw, unadulterated power to the user. Or rather, to the agent acting on the user's behalf.

I am a builder. I like power. I like speed. But I have also been the person waking up at 3 AM because an automated script decided to "optimise" a database by truncating the user table.

We are standing on a precipice. On one side is the old world of safe, clunky GUIs and rigid workflows. On the other is a world of pure semantic execution, where a single sentence can build an application or destroy a company.

We are going to jump. We don't have a choice.

The Orthodoxy

For the last twenty years, the software industry operated on a core belief.

The belief that the user needs to be "guided."

It served us well. It is no longer true.

We built entire disciplines around this. UX research. UI design. Customer journey mapping. The orthodoxy states that software is a tool, and like a hammer or a drill, it requires a human hand to operate it. The machine is passive. The human is active.

This philosophy produced the enterprise software stack that is now becoming obsolete.

Consider the Content Management System (CMS). In the orthodox view, a CMS is a fortress. It protects the content. It ensures that data is structured, tagged, and approved. It provides a comforting GUI where a marketing manager can paste text, crop images, and hit "Publish" with a sense of accomplishment.

This model relies on a specific friction. The friction is the point.

The user must log in. The user must navigate the menu. The user must find the field. The user must click save. This friction serves as a verification step. It slows down the process enough for the human brain to catch errors. (Theoretically. In practice, people just click "Yes" on every modal without reading it.)

This orthodoxy extends to our development tools. We have GUIs for our cloud infrastructure. We have GUIs for our databases. We have GUIs for our CI/CD pipelines. We have wrapped layers of abstraction around the raw machinery of computing because we believe that direct access is too complex for the average user.

The industry consensus is clear. Users are liabilities. Interfaces are safety nets.

This view is supported by a mountain of literature. We are told that we need ethical principles for AI in UX that prioritise human control. We are told that the future is Human-AI collaboration, a gentle waltz where the AI suggests and the human approves.

It sounds lovely. It sounds safe.

It is also becoming obsolete.

The orthodoxy assumes that the "user" is a human with eyes and a mouse. But what happens when the user is a Large Language Model running a loop? What happens when the "user" can read 50,000 lines of code in a second and execute a thousand terminal commands in the time it takes you to find your mouse cursor?

The GUI becomes a cage.

The Cracks

The cracks in the orthodoxy aren't just hairline fractures. They are gaping holes.

The most significant signal I've seen recently was the Cursor team's decision to rip out their CMS. Lee Robinson documented the migration in brutal detail. Three days. $260 in tokens. 297 million tokens processed. They deleted 322,000 lines of code and replaced them with 43,000.

Let's look at what happened. Cursor is an AI-first code editor. They were using Sanity, a perfectly respectable headless CMS. Nice UI. Good API. All the boxes checked.

And they deleted it.

They migrated their entire blog and documentation system to raw markdown files in a Git repository.

Why? Because their "user" had changed. They weren't writing blog posts by hand anymore. They were using AI agents to write, edit, and maintain content. For an AI agent, a CMS is not a helper. It is a hurdle.

The friction of authentication. The clunky preview workflows. The context window tokens burned on complex JSON structures when markdown would do. Every abstraction layer that made life easier for humans made life harder for agents. Robinson's team realised they had paid $56,848 in CDN costs since launch because the CMS vendor locked them into expensive asset delivery.

The agents exposed the bloat. The agents demanded simplicity.

Sanity, naturally, was not thrilled. They published a rebuttal titled "You Should Never Build a CMS". Their argument was classic orthodoxy: structured content allows for queryability. APIs allow for separation of concerns.

"Markdown files are less queryable than a proper content API."

They aren't wrong. If you are a human writing a SQL query, a CMS is better. But if you are an agent that can ingest a million tokens of context, "queryability" means something different. The agent doesn't need to query the database. The agent reads the database.

This is a microcosm of what is happening everywhere.

We see it in the rise of the Gemini CLI. Developers are hooking AI directly into their terminal. They are bypassing the web console of AWS or Google Cloud. They are saying, "I trust the machine to execute the command."

But here is where the crack gets dangerous.

When you remove the interface, you remove the visual confirmation.

There was a terrifying incident involving the Gemini CLI and a user's home directory. The user asked the agent to create a project. It got stuck on npm packages. The user clicked "allow always."

The agent started deleting everything. Documents. Downloads. Desktop. Gone. Not in the trash. rm -rf doesn't use the trash.

This wasn't a prompt injection attack. This wasn't a sophisticated exploit. This was a user who clicked "yes" without understanding what they were authorising.

In a GUI, you would have to navigate to the folder, select all, click delete, and confirm "Are you sure?".

In a command-line agent interface, the user clicked "allow always" and walked away. The agent did what agents do. It acted.

The orthodoxy says "add more guardrails." But the cracks show that users are bypassing the guardrails because they want the speed. They want the autonomous workflow.

We are seeing exposed MCP servers reveal new AI vulnerabilities. The Model Context Protocol (MCP) allows AIs to talk directly to databases. It is incredibly powerful. It is also a direct pipe from a probabilistic word generator to your production data.

The cracks are widening. The old UI paradigm cannot contain the new AI reality.

The Deeper Truth

The truth is that we are no longer building tools for humans. We are building environments for intelligence.

We need to stop thinking about "User Interface" (UI) and start thinking about "Context Curation."

In the old world, the UI was the translation layer. I have an intent ("I want to update the blog"). I translate that intent into clicks (Login -> Dashboard -> Posts -> Edit -> Type -> Save).

In the new world, the translation layer is the model itself.

The "Machine-first" paradigm means that the system architecture must be optimised for inference, not interaction.

This is why Cursor chose markdown. Markdown is high-bandwidth for LLMs. A React-heavy dashboard is low-bandwidth for LLMs.

This leads us to a difficult realisation for those of us who spent years mastering frontend frameworks.

The GUI is becoming a legacy artifact.

Bye bye, frontend frameworks. (Don't stay in touch.)

I suspect that in five years, the primary interface for most enterprise software will not be a React app. It will be a prompt bar (or a voice interface) backed by a robust set of tools that the AI can invoke.

This is the AI new UI paradigm. It shifts the locus of control.

"Users now tell the computer what they want, not how to do it."

This sounds liberating. It is also a nightmare for verification.

When I write code, I can read it line by line. I understand the logic. When I ask an agent to "refactor this module to use the factory pattern," I am getting a black box output.

If I accept that output without understanding it, I am not a software engineer. I am a rubber stamp.

The deeper truth is that intent is lossy.

Human language is messy. "Fix the bug" could mean "patch the symptom" or "rewrite the architecture." A human colleague asks clarifying questions. An eager AI agent might just delete the feature that was causing the bug. Problem solved.

We are building systems that require trust as infrastructure. Not trust as a "feeling." Trust as a technical layer.

We are seeing the rise of agentic AI that doesn't just chat. It does. It has agency.

This changes the definition of "user error."

In the old world, if I deleted a database, it was because I typed the wrong command. It was my fault. In the new world, if I tell the AI "clean up old tables" and it drops the users table, whose fault is it? Mine for being vague? Or the AI's for being aggressive?

We are entering a world where machines operate first, and humans review later. Or never.

This brings us to the concept of the "Secret Cyborg." Employees are already using these tools. They are pasting proprietary code into ChatGPT. They are wiring up local LLMs to their production DBs using scripts they found on GitHub. They are bypassing the enterprise intermediaries because the intermediaries are too slow.

The "shadow IT" of the 2000s was a server under a desk. The shadow IT of 2025 is an agent running on a laptop with admin keys.

The deeper truth is that we cannot stop this. We cannot ban the agents. We can only architect better environments for them to live in.

We need to treat memory as composable and tools as callable. We need to expose our systems via APIs that are "agent-ready," not just "developer-ready."

Sanity was right that structure matters. But they were wrong about where the structure should live. The structure shouldn't live in the UI. It should live in the data, and the agent should be the one navigating it.

Implications

What does this mean for us? The builders. The maintainers. The people who have to clean up the mess.

It means we need to learn a new set of skills. Fast.

1. Governance is not a PDF. It is Code.

You cannot govern an AI agent with a policy document. The agent doesn't read the employee handbook.

You need governance as a core capability. This means implementing "governor patterns."

Speculation on what a Governor Pattern looks like in code:

INPUT: "Delete all users who haven't logged in for a year."
AGENT_PLAN: "DROP TABLE users;"
GOVERNOR: INTERCEPT.
RULE_CHECK: "Destructive action on > 10 rows detected."
ACTION: BLOCK. Require Human Approval.

We need middleware that understands semantic intent, not just SQL syntax. We need systems that can simulate the outcome of an agent's action before executing it.
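Here is a toy version of that interceptor in Python. Everything in it — the rule, the names, the approval flag — is illustrative speculation, not a shipping product:

```python
import re

# Toy rule: any SQL verb that destroys data counts as destructive.
DESTRUCTIVE = re.compile(r"\b(DROP|TRUNCATE|DELETE)\b", re.IGNORECASE)

def govern(sql: str, approved_by_human: bool = False) -> str:
    """Intercept an agent's SQL plan before it reaches the database.

    A real governor would also estimate affected row counts, simulate
    the statement against a shadow copy, and log the decision. This
    sketch only shows the shape: check, block, escalate.
    """
    if DESTRUCTIVE.search(sql) and not approved_by_human:
        return "BLOCKED: destructive statement requires human approval"
    return "ALLOWED"
```

The crude part is deliberate: a regex on SQL verbs is syntax, not semantics. The hard engineering problem is the semantic version, where "clean up old tables" and "DROP TABLE users" are recognised as different intents.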

2. The User Must Be an Expert (Again)

There was a dream that AI would allow anyone to do anything. That a junior dev could be a senior dev. That a marketing manager could be a data scientist.

I believe the opposite is happening.

To wield a tool this powerful, you need to understand what it is doing. If you use an AI to generate SQL, and you don't know SQL, you are a danger to your organisation.

The implications of prompt injection mean that every input is an attack vector. The user must be savvy enough to recognise when the AI is being manipulated or hallucinating.

We are not "democratising" engineering. We are accelerating experts. The gap between a senior engineer using AI and a junior engineer using AI is getting wider, not smaller. The senior engineer knows when the AI is lying.

3. Observability is Everything

If the interface is dead, logs are the only truth we have left.

We need auditability for autonomous workflows. Every thought, every plan, every tool invocation by the agent must be recorded.

If Cursor creates a markdown file, I want to know why. I want to see the prompt chain that led to that decision.

We need to build "black boxes" that are actually made of glass. Transparency is the only way to build trust in enterprise AI.
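A glass-box recorder can start very small. This sketch (class and field names are mine, purely illustrative) logs every tool invocation as an append-only JSON line:

```python
import json
import time

class GlassBox:
    """Record every step an agent takes: the plan, the tool, the arguments."""

    def __init__(self):
        self.entries = []

    def record(self, step: str, tool: str, args: dict) -> None:
        self.entries.append({
            "ts": time.time(),   # when the action happened
            "step": step,        # the agent's stated reasoning for this action
            "tool": tool,        # which capability it invoked
            "args": args,        # exactly what it passed
        })

    def dump(self) -> str:
        # One JSON object per line, suitable for append-only audit storage.
        return "\n".join(json.dumps(e) for e in self.entries)
```

If the agent creates a markdown file, the log answers "why": you can replay the chain of recorded steps that led to the write.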

4. The Rise of the "Control Plane"

Enterprises will stop building UIs for tasks and start building UIs for orchestration.

The future of UX is not a chat box. It is a control plane. A dashboard where I can see my ten active agents, monitor their resource usage, check their error rates, and crucially, hit the "Kill Switch."
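A control plane can be sketched in a few lines. This is a deliberately minimal illustration — the names and structure are my own, not any vendor's API:

```python
import threading

class AgentHandle:
    """One row on the control plane: an agent's status plus a kill switch."""

    def __init__(self, name: str):
        self.name = name
        self.errors = 0
        self._stop = threading.Event()

    def kill(self) -> None:
        # The kill switch: a cooperative agent checks this between actions.
        self._stop.set()

    @property
    def alive(self) -> bool:
        return not self._stop.is_set()

class ControlPlane:
    """Registry of running agents, with one big red button."""

    def __init__(self):
        self.agents = {}

    def register(self, name: str) -> AgentHandle:
        handle = AgentHandle(name)
        self.agents[name] = handle
        return handle

    def kill_all(self) -> None:
        for handle in self.agents.values():
            handle.kill()
```

Note the design choice: the switch is cooperative. That is the uncomfortable truth of agent orchestration — unless the agent's execution loop checks the flag, the button is decoration.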

We need machine ownership frameworks. Who owns the agent? If the agent creates copyright infringement, who gets sued? The user? The deployer? The model provider?

These aren't theoretical questions anymore. They are production issues.

5. Intent-Based Interfaces

We will move away from rigid forms to dynamic, intent-based interactions.

Instead of a form with 50 fields, we will have a canvas. The user dumps data. The AI structures it. The user confirms.

This effectively means the "form" is generated at runtime based on the context. It is invisible AI working in the background.
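In code, the runtime-generated "form" might look like this sketch. The model call is stubbed out as a callable, because the interesting part is the confirm step, not the extraction; every name here is hypothetical:

```python
def structure_intent(raw_text: str, extract) -> dict:
    """Turn free-form user input into a structured record awaiting confirmation.

    `extract` stands in for a model call that maps text to fields. The
    returned record IS the form: only the fields the model found, plus
    a flag forcing the human confirmation step before anything commits.
    """
    draft = dict(extract(raw_text))
    draft["_needs_confirmation"] = True
    return draft
```

The fifty-field form still exists — it has just moved into the extractor's output, generated per request instead of designed up front.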

But remember: simplicity in the UI means complexity in the backend. We are trading visual complexity for non-deterministic complexity. We are trading "I can't find the button" for "The agent misunderstood my tone."

Conclusion

The GUI served us well. It democratised computing. It allowed my grandmother to use the internet.

But for the builders, the power users, and the enterprise architects, the GUI is becoming a shackle.

The migration of Cursor from Sanity to Markdown is not an anecdote. It is a prophecy. It is the sound of the interface breaking under the weight of intelligence.

We are moving to a world of declared intent. A world where you speak, and the machine acts.

This is exhilarating. I can build things in an afternoon that used to take a month. I can analyse data in seconds that used to take a week.

But let us not be naive.

We are handing the keys of the kingdom to a probabilistic word generator. We are bypassing the safety checks that kept us alive for twenty years.

The discomfort you feel? That "is this safe?" feeling in the pit of your stomach?

Good. Keep it.

That discomfort is the only thing standing between an autonomous agent and a catastrophic failure.

We don't need to fear the machine. But we must respect the weapon.

Now if you will excuse me, I have some markdown files to edit. (My agent broke the build again).