In a recent post, my friend Chris Riccomini argues that the Model Context Protocol (MCP) is an unnecessary and temporary kludge, already on the decline and facing an uncertain future. His charges are, roughly speaking, that tools, resources, and prompts — the three “distinct” concepts of MCP — are arbitrary and confusing perturbations of tool calling. He argues that we already have OpenAPI, gRPC, and command-line interfaces (CLIs) for tool definitions, which LLMs have empirically mastered stitching together. Weighed against his perceived purpose of MCP — exposing static tools to an LLM in as few tokens as possible — the verdict is clear: the juice isn’t worth the squeeze of yet-another-tool-calling-protocol, especially as LLMs’ context constraints relax.
Chris’s observations are a fair critique of static tool servers—which are, to his credit, the plurality of what folks are rushing to market with. But MCP wasn’t built for static tool wrappers. It standardizes how agents have stateful, identity-aware conversations—and crucially, how to distribute that context across every AI client your organization uses. The protocol solves coordination problems that neither infinite context windows nor better API specs can address.
The first coordination problem: identity.
whoami and whatcaniuse
Production MCP servers filter capabilities based on who’s using them. When you authenticate, the server checks your permissions and exposes only what you’re allowed to touch. Two developers on the same team see different capabilities. Your workspace access determines what shows up. Your role shapes what’s available.
After you authenticate, the server notifies the client what just became accessible. A “delete database” tool appears only for admins. A “rollback” option shows up only after a deploy fails—and only if you have the right permissions. The toolset changes based on who you are and what state the system is in.
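Here’s a minimal sketch of that filtering with the TypeScript MCP SDK. The server name, the `lookupRole` helper, and the tools are all hypothetical, and I’m assuming a transport that populates the handler’s session info — treat it as a shape, not a reference implementation:

```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { ListToolsRequestSchema, type Tool } from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "ops-server", version: "1.0.0" },
  { capabilities: { tools: { listChanged: true } } }
);

// Placeholder: a real server would resolve the OAuth session the
// transport established into a role or scope set.
function lookupRole(sessionId: string | undefined): "admin" | "dev" {
  return sessionId === "admin-session" ? "admin" : "dev";
}

server.setRequestHandler(ListToolsRequestSchema, async (_req, extra) => {
  const tools: Tool[] = [
    {
      name: "query_logs",
      description: "Search service logs",
      inputSchema: { type: "object", properties: {} },
    },
  ];
  if (lookupRole(extra.sessionId) === "admin") {
    // Destructive tooling is only advertised to admins; everyone else
    // never sees it in tools/list at all.
    tools.push({
      name: "delete_database",
      description: "Drop a database",
      inputSchema: { type: "object", properties: {} },
    });
  }
  return { tools };
});
```

Same server, same code path, two different answers to `tools/list` depending on who asked.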
You can’t solve this with static API definitions. Stuff every possible capability into an OpenAPI spec and you still don’t know which ones are accessible to this user right now. The server needs to communicate what’s available based on identity and state at runtime. OpenAPI describes static endpoints. It doesn’t handle dynamic capability filtering based on who’s asking.
Infinite context doesn’t change this. Stuff every OpenAPI spec into a 10-million-token context window and your agent still doesn’t know which endpoints became available after you authenticated, when the rollback operation should appear after a deploy fails, or what changed in the available toolset since the conversation started. These are runtime coordination problems. Static documentation can’t fix them.
runtime, not readme
Solving identity means the server needs to notify clients when capabilities change. It means negotiating upfront what both sides support—can the server send notifications? Can it request user approval mid-operation? Can it stream progress for long-running work?
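For concreteness, here’s roughly what the server’s half of that negotiation declares during the initialize handshake (field names per the current spec; the client declares its own side, like sampling and elicitation, in its initialize request):

```typescript
// A server's InitializeResult.capabilities, advertising what it can do:
const serverCapabilities = {
  tools: { listChanged: true },    // will notify when the toolset shifts
  resources: { subscribe: true, listChanged: true },
  prompts: { listChanged: true },
  logging: {},                     // can emit structured log notifications
};
```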
During execution, servers use notifications to signal when the toolset shifts. They trigger elicitation to request approval before destructive actions. They stream progress updates for long-running work. This is how MCP brokers the bidirectional communication that identity-aware orchestration requires.
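Continuing the sketch above, here’s what those three signals can look like inside a tool handler. Again, a sketch under assumptions: `dropDatabase` is a stub, and I’m issuing elicitation through the generic `request()` call to sidestep helper-naming differences between SDK versions.

```typescript
import { CallToolRequestSchema, ElicitResultSchema } from "@modelcontextprotocol/sdk/types.js";

// Stub for the actual destructive work.
async function dropDatabase(): Promise<void> {}

server.setRequestHandler(CallToolRequestSchema, async (req, extra) => {
  if (req.params.name !== "delete_database") {
    throw new Error(`Unknown tool: ${req.params.name}`);
  }

  // Elicitation: ask the user to confirm before doing anything destructive.
  const answer = await server.request(
    {
      method: "elicitation/create",
      params: {
        message: "Really drop the production database?",
        requestedSchema: {
          type: "object",
          properties: { confirm: { type: "boolean" } },
          required: ["confirm"],
        },
      },
    },
    ElicitResultSchema
  );
  if (answer.action !== "accept" || answer.content?.confirm !== true) {
    return { content: [{ type: "text", text: "Cancelled." }], isError: true };
  }

  // Progress: stream updates if the client asked for them by sending a token.
  const progressToken = req.params._meta?.progressToken;
  if (progressToken !== undefined) {
    await extra.sendNotification({
      method: "notifications/progress",
      params: { progressToken, progress: 1, total: 2 },
    });
  }

  await dropDatabase();

  // The toolset just shifted, so tell connected clients to re-fetch it.
  await server.sendToolListChanged();
  return { content: [{ type: "text", text: "Done." }] };
});
```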
These patterns aren’t unique to MCP. If you own the entire stack like OpenAI does with Codex, you don’t need a protocol at all—just build custom orchestration. Codex handles approvals and state management just fine through custom client-side logic.
But the moment you want these capabilities working across multiple clients, you need standardization. One MCP server works in Claude, ChatGPT, Cursor, and Claude Code. Your sales team gets the same identity-aware context in Claude that your ops team sees in ChatGPT. Write once, works everywhere, respects identity wherever it runs.
xkcd 927 but actually
Chris suggests we could “just write an agent-optimized OpenAPI.” Sure. Now get every client to agree on capability negotiation, runtime change notifications, approval patterns, and how to gate different capabilities based on identity. OpenAPI doesn’t define these.
You’d be inventing a protocol anyway—just a fragmented one you’d maintain forever, separately for each client your organization uses.
For a static tool wrapper, none of this machinery earns its keep. But for multi-tenant systems where permissions matter, for capabilities that need to change based on who’s using them and what state the system is in, you need protocol-level coordination that respects identity.
MCP standardizes these patterns. When an MCP server exposes different tools based on your OAuth scopes, shows only your accessible workspaces, or respects your organization’s access controls, every MCP client handles it the same way. You can build that ad-hoc for every client with custom integration.
Or you can use MCP.
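Concretely, here’s the client half, again sketched with the TypeScript SDK (the URL is a placeholder). Whether this logic lives in Claude, Cursor, or your own agent, the reaction to an identity-driven toolset change is identical:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
import { ToolListChangedNotificationSchema } from "@modelcontextprotocol/sdk/types.js";

async function main() {
  const client = new Client({ name: "my-agent", version: "1.0.0" });

  // Every conforming client reacts the same way when the server signals a
  // change (after login, after a deploy fails): re-fetch tools/list.
  client.setNotificationHandler(ToolListChangedNotificationSchema, async () => {
    const { tools } = await client.listTools();
    console.log("toolset changed:", tools.map((t) => t.name));
  });

  // Placeholder endpoint for your server's Streamable HTTP transport.
  await client.connect(
    new StreamableHTTPClientTransport(new URL("https://example.com/mcp"))
  );
}

main().catch(console.error);
```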
MCP has real problems. The transport layer is messy. There are proposals to strip out state. And the spec’s permissiveness created a bootstrap trap—clients only implement basic features, so servers can’t use the advanced ones. I’ll dig into what’s broken next time.
Thanks to Chris for reading and giving feedback to an early draft of this.
"You’d be inventing a protocol anyway—just a fragmented one you’d maintain forever, separately for each client your organization us" is the mxn problem definition 😂
I think this article works great as a response and as a vision on how MCP can be developed as a standard on AI driven ecosystems. Really liked It!