In late November 2024, Anthropic quietly released a specification that many in the AI industry had been anticipating but few expected to land with such immediate force: the Model Context Protocol (MCP). At first glance, it looks like just another open standard—a way for AI models to connect to external tools and data sources. But for the dozens of startups that have been building proprietary “agent middleware” to solve exactly this problem, the announcement carries a much darker message. Some of them may not survive the next twelve months.
The quiet announcement of an open protocol is often the loudest signal that a market is about to be consolidated.
To understand the scope of the threat, consider the state of AI agent infrastructure before MCP. Over the past two years, a crowded ecosystem, from early frontrunners like LangChain and the open-source AutoGPT project to newer players such as Composio and Fixie, alongside toolkits like Vercel’s AI SDK, has focused on building “glue” between large language models and the real world. Each solution required custom connectors, proprietary APIs, and significant engineering overhead to integrate databases, file systems, and third‑party services. A typical enterprise AI team could spend four to six weeks just wiring up a single CRM integration. This fragmentation created a lucrative market for middleware providers, who promised to abstract away the complexity.
Anthropic’s MCP aims to replace that proprietary glue with a single open protocol built on a client–server architecture. Much as USB standardized the connection between computers and peripherals starting in the late 1990s, MCP defines a universal interface between LLMs and external resources: any tool or data source that implements the MCP server specification can be accessed by any MCP‑compatible client, including Anthropic’s own Claude Desktop app, which shipped integrated MCP support on the day of the announcement.
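Concretely, MCP messages are framed as JSON‑RPC 2.0: a client discovers a server’s tools and invokes them through standard methods such as `tools/list` and `tools/call`. The following sketch illustrates the message shapes in plain Python, with no SDK and no real transport (a production server speaks JSON‑RPC over stdio or HTTP); the `lookup_customer` tool, its fields, and the canned record are all hypothetical.

```python
import json

# Hypothetical in-process "server" with one registered tool.
# Real MCP servers run behind a transport; this only shows the
# request/response shapes defined by the protocol.
TOOLS = {
    "lookup_customer": {
        "name": "lookup_customer",
        "description": "Fetch a customer record by ID (hypothetical example tool).",
        "inputSchema": {
            "type": "object",
            "properties": {"customer_id": {"type": "string"}},
            "required": ["customer_id"],
        },
    }
}

def handle(request: dict) -> dict:
    """Dispatch one MCP-style JSON-RPC request and build the response envelope."""
    method = request["method"]
    if method == "tools/list":
        result = {"tools": list(TOOLS.values())}
    elif method == "tools/call":
        args = request["params"]["arguments"]
        # A real server would validate args against the tool's inputSchema here.
        record = {"customer_id": args["customer_id"], "name": "Ada Example"}
        result = {"content": [{"type": "text", "text": json.dumps(record)}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": f"Unknown method: {method}"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

# Client side: discover the tools, then invoke one.
listing = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
call = handle({
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "lookup_customer", "arguments": {"customer_id": "c-42"}},
})
print(listing["result"]["tools"][0]["name"])
print(call["result"]["content"][0]["text"])
```

The point of the sketch is the symmetry: once every tool answers the same two methods, the client needs no per-vendor integration code at all.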
The effect on existing infrastructure startups is twofold. First, the protocol drastically reduces the value of proprietary connectors. If any model can talk to any tool through a common standard, the competitive moat built on exclusive integrations evaporates. For example, a company that spent two years building 50+ connectors to HR systems, CRMs, and databases suddenly finds its core differentiator replicable in a weekend by any developer who follows the MCP spec.
Second, the client‑side market may shrink as well. Several startups have raised significant capital—some Series A rounds above $30 million—to become the “operating system for agents.” Their pitch centers on orchestration, routing, and memory management. But if Anthropic (and soon, likely Google and OpenAI) ships its own orchestrator as a native component of the model, those startups may need to pivot to much narrower verticals or face irrelevance.
When the platform provider absorbs the middleware layer, the middleman’s path to profitability becomes a dead end.
The parallel to the early web is instructive. In the late 1990s, dozens of companies sold “web server software” before Apache commoditized the market. Later, “search middleware” businesses were decimated when Google made its own crawling pipeline a core product. Today, the agent stack is experiencing the same pattern: the largest AI labs see integration standards as a strategic necessity, not a third‑party opportunity. Anthropic’s MCP is designed to be adopted by the whole industry—it currently counts Google, OpenAI, and Meta as observers in its GitHub discussions—which further reduces the odds that any single middleware startup can sustain a unique position.
Of course, not everyone agrees with this dire assessment. Some argue that standards create new markets rather than destroy existing ones. For instance, the USB standard didn’t kill peripherals; it made them more abundant. Under this line of reasoning, MCP could unlock a wave of specialized tool‑making: if every AI agent can speak MCP, then companies that build the best MCP servers for niche verticals (e.g., legal document retrieval or medical device data) might thrive. But this view overlooks a key difference: USB standardized the physical layer while leaving room for vendor‑specific features; MCP, by design, standardizes the entire tool invocation and data access contract, leaving little room for proprietary value‑add.
A more realistic scenario for existing infrastructure teams is to pivot before the window closes. Based on industry conversations, several venture firms are already advising their portfolio companies to invest in either “application‑specific agents” (e.g., a dedicated AI assistant for tax compliance) or “on‑premises MCP‑compliant adapters” that package legacy enterprise systems behind the new protocol. Both paths require a faster iteration cycle than most middleware teams are used to—and both accept a significantly lower total addressable market than the dream of “agent infrastructure for everyone.”
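The second pivot path, packaging a legacy system behind the protocol, amounts to writing a thin adapter: one tool descriptor plus one handler per legacy operation. The sketch below is entirely hypothetical (the HR client, tool name, fields, and URL are invented for illustration) and shows only the shape of that work, not a real MCP SDK.

```python
from dataclasses import dataclass

@dataclass
class LegacyHRClient:
    """Stand-in for a proprietary, on-premises HR system client (hypothetical)."""
    base_url: str

    def get_employee(self, employee_id: str) -> dict:
        # A real adapter would make an authenticated call to the legacy API;
        # here we return a canned record to keep the sketch self-contained.
        return {"id": employee_id, "department": "Finance"}

class MCPAdapter:
    """Exposes legacy operations as MCP-style tools: descriptor + handler each."""

    def __init__(self, legacy: LegacyHRClient):
        self.legacy = legacy
        self.tools = {
            "get_employee": {
                "description": "Look up an employee record in the legacy HR system.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"employee_id": {"type": "string"}},
                    "required": ["employee_id"],
                },
                "handler": lambda args: self.legacy.get_employee(args["employee_id"]),
            },
        }

    def call_tool(self, name: str, arguments: dict) -> dict:
        """Route a tool invocation to the matching legacy operation."""
        return self.tools[name]["handler"](arguments)

adapter = MCPAdapter(LegacyHRClient(base_url="https://hr.internal.example"))
print(adapter.call_tool("get_employee", {"employee_id": "e-7"}))
```

Note how little of the adapter is differentiated: the descriptors are dictated by the protocol, and the handlers are thin shims, which is exactly why this path accepts a smaller addressable market than the old middleware pitch.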
When the protocol becomes the product, the old product must become a service, a niche, or a memory.
Beyond Anthropic, the broader AI ecosystem is watching closely. If MCP gains traction—and early adoption by tools like Zed, Replit, and Codeium suggests momentum—the next year will see a Darwinian shakeout among agent infrastructure providers. Startups that cannot demonstrate a clear, defensible value layer above the protocol will struggle to raise further rounds. Their employees and customers will likely migrate into the very platform they once sought to disintermediate.
The message for founders and engineering leaders considering a foray into agent infrastructure is clear: the days of selling generic “model‑tool glue” are numbered. The winners will be those who dominate a narrow, high‑value domain—not those who generalize horizontally. And for the teams that have already built on proprietary, closed middleware, there is no better time than now to begin implementing MCP compliance and finding a new reason to exist.