Prelude
I have spent the last eighteen months building glue.
Not the useful kind of glue that holds a cabinet together. I'm talking about the digital equivalent of duct tape, used to bind one proprietary AI API to another proprietary data source, wrapped in a framework that changes its syntax every three weeks.
It has been exhausting. It has been wasteful.
We have been building agents in a fragmented reality. If you wanted your agent to talk to Google Drive, you wrote a custom integration. If you wanted it to switch to Notion, you wrote another. If you wanted to swap the underlying model from GPT-4 to Claude because the pricing made more sense, you practically had to rewrite the orchestration layer.
We were not building agents. We were building translators.
That changed on December 9th.
The announcement was dry. It involved foundations and governance boards. It lacked the cinematic flair of a demo video showing a robot folding laundry. But for those of us who actually ship code, it was the most exciting news of the year.
The Agentic AI Foundation has arrived. And with it, the protocol wars are effectively over.
The Orthodoxy
To understand why this matters, we have to look at the mess we are currently wading through.
The prevailing orthodoxy in GenAI development has been one of "walled garden innovation." Every major model provider, every framework builder, and every cloud giant looked at the problem of Agentic AI and decided they needed to own the entire stack.
The logic seemed sound on paper. If you own the model, the orchestration, and the tool definitions, you can optimise performance. You can ensure security. You can charge for every step of the chain.
So we ended up with a landscape that looked like the Tower of Babel.
OpenAI had their specific way of defining tools and functions. Anthropic had a slightly different JSON schema. LangChain had its abstractions. LlamaIndex had theirs. Microsoft’s Semantic Kernel did it another way.
If you were a developer trying to build a production system, you had to make a bet. You picked a stack, and by doing so, you locked yourself into a specific way of describing the world to your AI.
If that stack deprecated a feature? You rewrote your code. If a new model came out that wasn't supported by your chosen framework? You waited.
The orthodoxy stated that this fragmentation was a necessary evil of a nascent industry. We were told that standardisation stifles innovation. We were told that it was "too early" to agree on how an agent should ask a database for a row of data.
This was a lie. It wasn't about innovation. It was about vendor lock-in disguised as technical necessity.
We were building distinct silos of intelligence that could not speak to one another. We were creating a world where an agent built on the OpenAI stack would look at a tool definition written for an Anthropic agent and see nothing but gibberish.
The result was friction. Massive, expensive, soul-crushing friction.
The Cracks
The cracks in this walled-garden approach have been visible for months to anyone paying attention.
The first crack was complexity. As systems moved from "chatbots" to "agents," the number of integrations exploded. Maintaining custom connectors for fifty different SaaS platforms across three different model providers is not engineering. It is janitorial work.
The second crack was enterprise adoption. Big companies—the ones with the actual money—do not like betting their infrastructure on a startup's proprietary JSON schema. They want standards. They want to know that the code they write today will run in 2030. They looked at the chaotic agent landscape and they hesitated.
The third crack was the "too many protocols" dilemma. We were seeing a proliferation of half-baked standards. Everyone was releasing their own "Agent Protocol."
"The chaos of competing standards... serves as a reminder of the pitfalls." — Ai Agent Communication Protocols
We were heading toward the classic XKCD situation: we have 14 competing standards, so let's create one more to unify them all. Now we have 15.
The industry hit a wall. To make agents actually useful—to make them capable of doing real work across different systems—we needed a common language. We needed a USB port for intelligence.
The Deeper Truth
The launch of the Agentic AI Foundation (AAIF) is the industry admitting that the "land grab" phase of infrastructure is over.
This is not just another GitHub repository with a manifesto. This is a structural shift in how the industry is organised.
The AAIF is being hosted by the Linux Foundation.
This detail is critical. The Linux Foundation is the adult in the room. They steward Kubernetes. They steward Node.js. They steward the Linux kernel itself. They know how to take a chaotic, rapid-growth technology and turn it into boring, reliable infrastructure.
But the real signal here is the membership list.
"The Agentic AI Foundation (AAIF) was launched by Block, Anthropic, and OpenAI, with significant support from Platinum members including Amazon Web Services, Bloomberg, Cloudflare, Google, and Microsoft." — Block Anthropic And Openai Launch The Agentic Ai Foundation
Look at that list. Google and Microsoft. AWS and Cloudflare. OpenAI and Anthropic.
These companies usually agree on nothing. They fight over talent, they fight over market share, they fight over compute. Yet here they are, aligning on a single set of open standards for agentic AI.
Why?
Because they realised that if the plumbing doesn't work, nobody buys the water.
They have collectively decided that the interface between an AI and the world should be a commodity. The value is in the model (the brain) and the application (the product), not in the connector cable between them.
This brings us to the donations. The AAIF isn't starting from scratch. It is launching with three substantial pieces of technology that define the new stack.
1. The Model Context Protocol (MCP)
This is the big one. Anthropic has donated the Model Context Protocol (MCP) to the foundation.
I have been skeptical of "universal protocols" in the past. Usually, they are abstract academic exercises that describe how software should work in a perfect world.
MCP is different. It is practical. It works right now.
Think of MCP as a "Language Server Protocol" (LSP) for AI. Before LSP, if you wanted to build a code editor (like VS Code or Vim) that supported Python, you had to write your own Python parser and autocomplete engine. LSP standardised the communication. Now, the Python team writes a "Language Server," and any editor that speaks LSP can instantly understand Python.
MCP does the exact same thing for AI tools.
"MCP... serves as a universal standard for connecting AI models to external tools, data, and applications." — Donating The Model Context Protocol And Establishing Of The Agentic Ai Founda...
In the MCP world, I don't write a "Google Drive integration for Claude." I write an "MCP Server for Google Drive."
Once that server exists, any agent—whether it's powered by GPT-4o, Claude 3.5 Sonnet, or Llama 3 running locally—can connect to it and read my files.
The model asks: "What tools do you have?" The MCP server replies: "I can list files, read files, and move files." The model says: "Read file X."
The protocol handles the handshake. The protocol handles the security context. I don't have to rewrite the glue code ever again.
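To make that concrete, here is roughly what that exchange looks like on the wire. MCP speaks JSON-RPC 2.0, and the dicts below sketch the three messages from the paragraph above. I have paraphrased the shapes from the spec, so treat the exact field names as illustrative rather than canonical.

```python
# Sketch of the MCP exchange above, as Python dicts in the shape of the
# JSON-RPC 2.0 messages the protocol uses. Field names are paraphrased
# from the spec and may differ slightly in the current revision.

# Client -> server: "What tools do you have?"
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Server -> client: the tools it exposes, each described by a JSON Schema.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "read_file",
                "description": "Read the contents of a file in the connected drive",
                "inputSchema": {
                    "type": "object",
                    "properties": {"path": {"type": "string"}},
                    "required": ["path"],
                },
            }
        ]
    },
}

# Client -> server: "Read file X."
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "read_file", "arguments": {"path": "reports/q3.md"}},
}
```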
Anthropic claims this is already seeing massive adoption.
"MCP, in just one year, has become a rapidly adopted open-source protocol... boasting over 97 million monthly SDK downloads and 10,000 active servers." — Donating The Model Context Protocol And Establishing Of The Agentic Ai Founda...
(97 million downloads seems remarkably high for a protocol I only started hearing buzz about recently, but even if that number includes dependencies, the trajectory is undeniable.)
2. AGENTS.md
OpenAI's contribution is deceptively simple, yet it solves a problem that has plagued every engineer trying to use AI for coding.
Context management.
When you point an AI agent at a codebase, how does it know what the code does? How does it know the conventions? How does it know that we use snake_case for variables but CamelCase for classes?
Until now, we stuffed this into the system prompt. We pasted giant blocks of text into the chat window.
AGENTS.md standardises this. It is effectively a README.md designed specifically for robots.
"OpenAI contributed
AGENTS.md, a universal standard for providing AI coding agents with project-specific context and instructions." — Agentic Ai Foundation
It allows a repository to self-document for an AI audience. It creates a standard way for a codebase to say, "Here is how you should interpret me."
This seems trivial until you try to automate code maintenance at scale. Without a standard, every agent guesses. With a standard, the agent knows.
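To see why, here is a sketch of how a coding agent might actually consume the file: read it from the repository root and put it at the top of the context before any task. The prompt assembly and the example conventions in the comments are my own illustration, not part of the AGENTS.md spec.

```python
# Hypothetical sketch of an agent consuming AGENTS.md before a coding task.
from pathlib import Path

def build_context(repo_root: str, task: str) -> str:
    """Assemble the context an agent would send alongside a coding task."""
    instructions_path = Path(repo_root) / "AGENTS.md"
    instructions = instructions_path.read_text() if instructions_path.exists() else ""
    # AGENTS.md might say, for example:
    #   ## Conventions
    #   - snake_case for variables, CamelCase for classes
    #   - run the test suite before proposing a change
    return f"{instructions}\n\n# Task\n{task}"

print(build_context(".", "Rename the billing module without breaking imports."))
```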
3. Goose
Block (formerly Square) donated goose, an open-source agent framework.
The inclusion of goose is interesting because it signals that the foundation isn't just about low-level plumbing (MCP) or documentation standards (AGENTS.md). It is also about the runtime.
goose is designed to be local-first. This matters. The future of agentic AI isn't just massive models running in a data centre. It is agents running on your laptop, managing your local files, interacting with your local shell.
By putting goose under the AAIF umbrella, Block is ensuring that there is a reference implementation for how these agents should behave.
The Architecture of Trust
I want to pause here and look at what this looks like structurally.
We are moving from a mesh of point-to-point integrations to a hub-and-spoke model, mediated by open standards.
The Old Way (The Pain):
My Application -> Custom OpenAI Integration -> Custom Postgres Tool
My Application -> Custom Anthropic Integration -> Custom Slack Tool
My Application -> Custom Llama Integration -> Custom Jira Tool
Every arrow represents code I have to write, test, and maintain.
The AAIF Way (The Promise):
My Application -> MCP Client -> MCP Protocol -> MCP Server (Postgres/Slack/Jira)
I write the client once. The community writes the servers. The model is just a swappable component in the middle.
If I want to switch from OpenAI to Anthropic? I change one line of configuration. The tools (the MCP servers) don't care. They just speak MCP.
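As configuration, the swap looks something like this. The keys and server commands are hypothetical, invented for illustration; the point is that the model is a single field, declared independently of the MCP servers.

```python
# Hypothetical agent configuration: the model is one swappable field,
# while the MCP servers are declared independently and never change.
config = {
    "model": "gpt-4o",  # swap this single line for "claude-3-5-sonnet"
    "mcp_servers": [
        {"name": "postgres", "command": "postgres-mcp-server"},
        {"name": "slack", "command": "slack-mcp-server"},
        {"name": "jira", "command": "jira-mcp-server"},
    ],
}
```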
This is how we get to production.
Production engineering is about removing variables. It is about reducing the surface area of things that can go wrong. By standardising the communication layer, we remove an entire category of bugs related to API mismatches and schema hallucination.
This also solves the security nightmare.
Right now, giving an agent access to your tools often involves pasting API keys into environment variables and hoping the prompt injection attacks aren't too sophisticated.
MCP allows for a more granular security model. The "Host" (the application) can control exactly what the "Client" (the agent) is allowed to access via the "Server." It provides a standard interface for user consent.
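Here is a rough sketch of what that host-side gate can look like. The allowlist and the consent prompt are my own illustration of the idea; MCP standardises where these control points sit, not this exact code.

```python
# Hypothetical host-side gate: every tool call from the agent passes through
# a policy check and an explicit user-consent prompt before it reaches the
# MCP server.
ALLOWED_TOOLS = {"list_files", "read_file"}  # e.g. no "move_file" for this agent

def approve_tool_call(tool_name: str, arguments: dict) -> bool:
    """Deny anything outside the allowlist; ask the user about everything else."""
    if tool_name not in ALLOWED_TOOLS:
        return False
    answer = input(f"Agent wants to call {tool_name}({arguments}). Allow? [y/N] ")
    return answer.strip().lower() == "y"
```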
"The emphasis is on building minimum viable protocols that can evolve over time while addressing core needs of interoperability, security, and scalability." — Linux Foundation Agentic AI
"Minimum viable protocols." I love that phrase. It implies pragmatism. It implies shipping.
Implications for the Builder
So, what does this mean for those of us with IDEs open right now?
1. Stop Building Proprietary Connectors
If you are currently writing a complex, custom integration framework for your internal tools that is tightly coupled to a specific LLM provider's SDK... stop.
You are building technical debt.
Investigate MCP. Look at how you can expose your internal APIs as MCP servers. This future-proofs your work. Today you might be using GPT-4 via Azure. Tomorrow you might be using a fine-tuned Llama 4 on-prem. If your tools speak MCP, the migration is trivial.
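To give a sense of how small that first step can be, here is a minimal sketch using the FastMCP helper from the official MCP Python SDK. The SDK's surface is still evolving, so check the current docs before copying this; the tool itself is a made-up stand-in for your real internal API.

```python
# Minimal MCP server exposing one internal lookup as a tool.
# Assumes the official MCP Python SDK (pip install mcp); the API may evolve.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-orders")

@mcp.tool()
def get_order_status(order_id: str) -> str:
    """Look up the status of an order in the internal system."""
    # Replace this stub with a call to your real internal API.
    return f"Order {order_id}: shipped"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default; any MCP client can connect
```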
2. The Rise of the "Agent Ready" Ecosystem
We are going to see a shift in how SaaS products market themselves.
Previously, they touted their "API." Soon, they will tout their "MCP Server."
"Does it integrate with Claude?" will be replaced by "Is it MCP compliant?"
This opens up a massive market for developers. There is now a clear, standard way to build plugins for the entire AI ecosystem. You write the MCP server once, and it works with every agent framework that adopts the standard.
3. The End of "Prompt Engineering" for Context
With AGENTS.md, we can stop treating context injection as a dark art.
We can formalise it. We can treat "Agent Instructions" as part of the codebase, version controlled and reviewed just like any other file.
This creates a new discipline: Context Architecture. It's not about tricking the model; it's about structuring the documentation so the model can consume it deterministically.
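In practice that can be as mundane as a CI check. The script below is a hypothetical example rather than any standard tooling: it fails the build if AGENTS.md is missing or has outgrown a rough context budget.

```python
# Hypothetical CI guard for agent instructions: treat AGENTS.md like any
# other reviewed artifact and fail the build when it drifts.
import sys
from pathlib import Path

MAX_CHARS = 8_000  # arbitrary budget; tune to your models' context windows

path = Path("AGENTS.md")
if not path.exists():
    sys.exit("AGENTS.md is missing: agents will have to guess your conventions.")
if len(path.read_text()) > MAX_CHARS:
    sys.exit("AGENTS.md has outgrown its budget; trim or split it.")
print("AGENTS.md looks sane.")
```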
The Skeptic's View (And Why It's Wrong)
Now, I can hear the cynical voice in the back of my head. I've been in this industry long enough to see consortiums fail.
"Is this just big tech colluding to control the roadmap?" "Will this stifle innovation by forcing everyone into a lowest-common-denominator box?"
"Some perspectives suggest that the proliferation of protocols also reflects a 'healthy period of exploration,' and the key lies not necessarily in 'forcing premature convergence'..." — Exploring Ai Agent Communication Protocols For Scalable Systems
This is a valid concern. Premature standardisation can kill innovation. If we had standardised the web browser in 1994, we might never have got JavaScript.
However, we are not in 1994. We are in the phase where the lack of standards is hurting adoption more than the freedom is helping innovation.
The fragmentation is preventing real work from getting done.
Furthermore, the involvement of the Linux Foundation mitigates the "big tech collusion" risk. The governance model is open. It is designed to be vendor-neutral.
"Its mission is to ensure that agentic AI develops transparently and collaboratively, promoting innovation, sustainability, and neutrality." — Linux Foundation Announces The Formation Of The Agentic Ai Foundation
This isn't a closed room where Google and Microsoft decide your fate. It's an open forum. If you don't like the direction MCP is taking, you can fork it. You can contribute. You can vote.
Conclusion
We have spent the last year in a frenzy of experimentation. It was necessary. We needed to break things to understand what this technology was capable of.
But that phase is ending.
The launch of the Agentic AI Foundation marks the beginning of the "Industrial Era" of GenAI.
We are moving from hand-crafted artisanal agents to industrial-grade infrastructure. We are trading the excitement of the "new release" for the comfort of the "stable protocol."
For a theorist, this might be boring. Standardisation is never as sexy as revolution.
But for a builder? For someone who wants to ship code that works, scales, and survives the weekend?
This is the best news I've heard all year.
The plumbing is being laid. The standards are being agreed upon. The giants have ceased fire and agreed to speak the same language.
Now, if you will excuse me, I have some MCP servers to build.