The Model Context Protocol (MCP): The New Standard Powering Long-Running AI Agents

Published on 5/8/2025 by Gavin Bintz

The Model Context Protocol (MCP) – introduced by Anthropic in late 2024 – is quickly emerging as a future-defining standard for AI integration. Think of MCP as a “USB-C for AI applications”. Just as USB unified how devices connect to peripherals, MCP provides a universal interface for AI models (like GPT-4 or Claude) to connect with external tools, data sources, and services. For technical founders and developers building AI-driven systems, MCP is becoming foundational to the development and orchestration of long-running AI agents.

Why does this matter? Today’s AI agents – from customer support bots to autonomous coding assistants – need to perform real actions: query databases, send emails, process payments, and more. Traditionally, connecting an AI to each external service was a custom, ad-hoc project. Every new integration meant writing new code, dealing with different APIs, and struggling to maintain context. As AI agents become more long-running (persisting over extended sessions or continuously operating tasks), the integration challenge multiplies. MCP addresses this by standardizing how AI agents interface with the world, enabling richer, more persistent agent capabilities with far less hassle.

In this post, we’ll dive deep into what MCP is and how it works, then explore why it’s so crucial for long-running AI agents. We’ll look at real-world implementations from Vercel, Cloudflare, and Stripe – three companies at the forefront of the agent ecosystem – and see how each is leveraging MCP. We’ll also compare MCP’s momentum with other approaches (like Google's A2A) to understand why MCP is poised to become the standard protocol for next-generation AI agents.

What is MCP and Why Was It Created?

Modern AI models are incredibly powerful, but historically they’ve been isolated silos – they can only access whatever information is in their prompt or training data. If a chatbot or agent needed to pull live data from, say, a CRM or an e-commerce database, developers had to build a custom integration via APIs or plugins. Each AI application × each tool integration = a unique project. This M×N integration problem wasted effort and led to brittle, inconsistent agent capabilities.

MCP (Model Context Protocol) is an open standard (open-sourced by Anthropic) designed to solve this by acting as a common language between AI agents and external systems. Instead of writing one-off code for each integration, developers can implement MCP and immediately connect to a growing library of tools and data sources. In other words, MCP turns the M×N problem into an M+N solution: tool providers implement the MCP server once for their service, and AI applications implement an MCP client – then any AI agent can talk to any tool that speaks MCP.

At its core, MCP defines a simple client–server architecture:

  • MCP Servers (Tool providers) expose a set of capabilities in a standard way. These capabilities are categorized as Tools, Resources, or Prompts.
  • MCP Clients (inside AI applications or agents) connect to those servers and relay information to the AI model.

Tools are essentially actions or functions the AI can invoke (model-controlled operations, like “send_email” or “create_calendar_event”). Resources are data sources the AI can query (application-controlled, read-only context like “database records” or “files”). Prompts are predefined prompt templates or guidelines the AI can use (user-controlled, to help the model interact with the tool effectively). By standardizing these, MCP lets an AI agent discover what it can do on a new system and how to do it, all via a unified protocol.
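To make these three categories concrete, here’s a minimal sketch of a server exposing one of each, using Anthropic’s open-source TypeScript SDK (@modelcontextprotocol/sdk). The CRM-flavored names and data are hypothetical:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "demo-crm", version: "1.0.0" });

// Tool: a model-controlled action the AI can invoke.
server.tool(
  "send_email",
  { to: z.string(), subject: z.string(), body: z.string() },
  async ({ to }) => {
    // ...call your email provider here (hypothetical)...
    return { content: [{ type: "text", text: `Email queued for ${to}` }] };
  }
);

// Resource: application-controlled, read-only context the AI can load.
server.resource("customers", "crm://customers", async (uri) => ({
  contents: [{ uri: uri.href, text: JSON.stringify([{ id: 1, name: "Ada" }]) }],
}));

// Prompt: a reusable, user-selectable template.
server.prompt("summarize-account", { accountId: z.string() }, ({ accountId }) => ({
  messages: [
    { role: "user", content: { type: "text", text: `Summarize account ${accountId}.` } },
  ],
}));

// Serve over stdio for local use (a remote server would use SSE instead).
await server.connect(new StdioServerTransport());
```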

Here’s a simple illustration of the problem MCP solves, turning a tangled web of bespoke integrations into a hub-and-spoke model:

Before MCP, each integration between an AI model and a service (Slack, Google Drive, GitHub, etc.) required a unique API interface. After MCP, the AI model talks to a single unified protocol (MCP), which in turn connects to all services. This drastically simplifies integration complexity.

[Figure: MCP hub-and-spoke integration diagram]

In practical terms, using MCP means an AI agent can maintain richer context across systems. Rather than being “trapped” behind one app’s API, the agent can connect to many sources through MCP and fetch whatever context it needs in real-time. Anthropic’s Claude team put it this way: “MCP provides a universal, open standard for connecting AI systems with data sources, replacing fragmented integrations with a single protocol.” The result is better, more relevant responses and actions from the AI, because it’s no longer flying blind or limited to its initial prompt.

How Does MCP Work? (Under the Hood)

MCP’s design is straightforward. When an AI application (the Host) starts up and wants to use MCP, it will initiate a handshake with an MCP server (or multiple servers). Here’s a high-level flow of how an MCP client and server interact:

  1. Handshake & Capability Discovery: The MCP client connects to the server and they exchange info about protocol version and capabilities (this ensures both sides “speak the same MCP dialect”). The client asks, “What tools, resources, and prompts do you offer?” and the server responds with a list of available actions (functions it can perform), data endpoints, and any preset prompts.
  2. Context Provisioning: The host (AI app) can then present these capabilities to the AI model. For example, it might convert the server’s tool list into the format of the model’s function-calling API or provide descriptions so the model knows what each tool does. The model now has an expanded “skill set” it can tap into during conversation.
  3. Agent Decision and Tool Invocation: When the AI model decides it needs to use one of those tools (e.g., the user asks “What are the open issues in our GitHub repo?” and the model thinks “I should call the GitHub tool to fetch issues”), the host’s MCP client sends a request to the MCP server to invoke that specific tool.
  4. Execution & Response: The MCP server receives the tool invocation (e.g., fetch_github_issues(repo=X)), executes the necessary logic – such as calling GitHub’s API – and returns the result back over MCP. The MCP client passes the result to the AI model, which can then incorporate that data into its answer to the user.

[Figure: MCP client–server interaction flow]

All of this happens in a secure, standardized JSON-based protocol (MCP is built on JSON-RPC 2.0 under the hood, with some extensions). The beauty for developers is that the AI model doesn’t need custom code for each tool. It just knows it has a suite of actions available via MCP, and it can call them as needed. The protocol handles the rest: discovering what’s possible and executing the actions.
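For a feel of that wire format, here’s roughly what the discovery and invocation messages from steps 1 and 3–4 look like as JSON-RPC 2.0 payloads (rendered as TypeScript literals; the GitHub tool and repo name are hypothetical, while the method names follow the MCP spec):

```ts
// Step 1 – the client discovers capabilities:
const listToolsRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
};

// The server answers with tool names plus JSON Schema for their inputs:
const listToolsResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [{
      name: "fetch_github_issues",
      description: "List open issues in a repository",
      inputSchema: {
        type: "object",
        properties: { repo: { type: "string" } },
        required: ["repo"],
      },
    }],
  },
};

// Steps 3–4 – the model decides to act, and the client invokes the tool:
const callToolRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: { name: "fetch_github_issues", arguments: { repo: "acme/webapp" } },
};
```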

From a technical standpoint, MCP supports multiple transport layers. For local integrations (running on the same machine as the AI), MCP can use standard I/O streams. For remote integrations (tools running in the cloud), MCP typically uses Server-Sent Events (SSE) streams for communication. The Vercel AI SDK team, for example, has implemented both stdio and SSE transport in their MCP client support. This means whether a tool is running locally alongside an AI (like a local database) or as a cloud service, the AI agent can talk to it via MCP seamlessly.
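Here’s a hedged sketch of what choosing a transport looks like with the TypeScript MCP SDK; the server command and URL are placeholders:

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

const client = new Client({ name: "my-agent", version: "1.0.0" });

// Local integration: spawn the tool server as a child process, talk over stdio.
const localTransport = new StdioClientTransport({
  command: "node",
  args: ["./local-db-mcp-server.js"], // hypothetical local server
});

// Remote integration: connect to a hosted MCP endpoint over SSE.
const remoteTransport = new SSEClientTransport(
  new URL("https://tools.example.com/sse")
);

// The agent code is identical either way; only the transport differs.
await client.connect(localTransport); // or: await client.connect(remoteTransport)
const { tools } = await client.listTools();
```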

Security and permissions are also a key part of MCP’s design – after all, if your AI agent can now send emails or make purchases, you want controls in place. MCP itself includes hooks for authentication (OAuth flows, API keys) and permissioning. For instance, Cloudflare’s implementation allows an agent to prompt the user to grant access to an MCP server (say, an agent asks you to log in and authorize your Gmail MCP server before it can send an email on your behalf). We’ll touch more on Cloudflare’s auth solutions later, but it’s important to know MCP was built with secure, two-way connections in mind – the agent and the tool can mutually verify and trust each other.

In short, MCP turns an AI agent into a kind of universal remote control: the agent can dynamically plug into new “devices” (tools/services) without needing new programming for each. This standardization is what makes MCP so powerful, especially as we envision agents that run continuously and adapt over time.

Why MCP Is Foundational for Long-Running AI Agents

Long-running AI agents – those that persist over hours, days, or indefinitely – are an exciting frontier. Instead of a single Q&A exchange, imagine an AI agent that handles your email inbox autonomously, or a coding agent that continuously monitors a project repo and makes improvements, or a commerce agent that manages an entire shopping workflow. These agents need to maintain state, memory, and access across many interactions and tools. MCP provides the backbone to enable this in a robust, scalable way.

Here are a few reasons MCP is becoming the backbone of orchestration for these agents:

  • Seamless Orchestration of Multiple Tools: A long-running agent will often need to chain multiple tools and data sources. For example, a sales agent might check a CRM (one tool), send an email (second tool), and update a spreadsheet (third tool) as part of one continuous task. With MCP, the agent can connect to multiple MCP servers in parallel and use all their functionalities together. The protocol is designed to handle multi-step, multi-tool workflows gracefully – the agent can even automatically discover new capabilities if a tool upgrades or adds features. This dynamic orchestration is crucial for agents that aren’t just answering one question, but carrying out multi-step tasks over time (see the sketch after this list).

  • Persistent Context and State: Long-running agents benefit from remembering context (conversation history, user preferences, intermediate results). MCP doesn’t by itself store the state (that’s up to the agent implementation), but it enables persistence by keeping connections open and consistent. For example, Cloudflare’s remote MCP servers can retain context between calls, so an agent session can have an ongoing, stateful experience. One user’s agent session might maintain an open MCP connection to a database tool, caching certain results or keeping a transaction open. Instead of each query being stateless, MCP allows continuity. This is amplified by platforms like Cloudflare’s Durable Objects (which we’ll discuss soon) that give each agent its own stateful process – the key is that MCP plugs into those long-lived processes so the agent is always “in the know” about what happened earlier.

  • Standardized, Reusable Integrations: For developers, MCP means you can build an integration once and use it for many agents or use-cases. This is foundational for any kind of complex agent orchestration. If every new agent required reinventing how it talks to email, or how it talks to a database, you’d never get beyond demos. MCP creates a shared ecosystem of connectors. Already, early adopters have built hundreds of MCP servers for everything from GitHub and Slack to Google Drive and Stripe’s API. A developer working on a long-running agent can leverage this ecosystem out-of-the-box – drastically reducing development time and bugs. It’s akin to how web developers today don’t write network socket code for every site; they rely on HTTP. Similarly, future AI developers might not write custom tool APIs; they’ll just rely on MCP.

  • Cross-Platform and Open: MCP being open-source and not tied to a single vendor is critical for its foundational role. It’s not an Anthropic-only club – OpenAI’s tools, Replit’s IDE, Cursor editor, GitHub Copilot X, and many others are embracing MCP. This broad support means long-running agents can roam across platforms. An agent running on Cloudflare Workers might call an MCP tool hosted on someone’s Vercel server, then use another tool hosted on a developer’s local machine – all without compatibility issues. For long-lived agents, which might need to operate in diverse environments over their lifespan, this portability and interoperability is huge.

  • Optimized for Agent Workloads: Because MCP was designed with AI usage in mind, it aligns well with how agents actually behave. For instance, Cloudflare’s implementation uses a clever WebSocket + SSE hybrid to keep agent sessions alive efficiently. If an agent is idle (not calling tools for a while), the backend can hibernate that session to save resources and resume when needed – without breaking the MCP connection. This means an agent can run for hours waiting for the next trigger, and when it comes, everything spins back up. Such features make it practical (and cost-effective) to run agents continuously. Cloudflare even made their Durable Objects free to use as of April 2025 because they see it as “a key component for building agents”. With stateful storage and compute available at low cost, developers are empowered to keep agents running long-term, orchestrating tasks via MCP.
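As promised above, here’s what multi-tool orchestration can look like in practice. This sketch uses Vercel’s AI SDK (covered in the next section) to merge tools from two hypothetical MCP servers into one toolbox; the endpoints, model choice, and prompt are all illustrative:

```ts
import { experimental_createMCPClient, generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// Two hypothetical remote MCP servers: a CRM and an email service.
const crm = await experimental_createMCPClient({
  transport: { type: "sse", url: "https://crm.example.com/sse" },
});
const email = await experimental_createMCPClient({
  transport: { type: "sse", url: "https://email.example.com/sse" },
});

// Merge both servers' tools into a single toolbox for the model.
const tools = { ...(await crm.tools()), ...(await email.tools()) };

const { text } = await generateText({
  model: openai("gpt-4o"),
  tools,
  maxSteps: 5, // let the model chain several tool calls in one task
  prompt: "Find customers whose contracts expire this month and email each a renewal offer.",
});
```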

In summary, MCP is to AI agents what APIs were to traditional software – a foundational layer of connectivity. For long-running agents that need to do things in the real world (not just chat), MCP provides the standard interface to all those things. It’s no surprise, then, that forward-thinking companies are racing to support MCP. Let’s look at how Vercel, Cloudflare, and Stripe are each integrating MCP and pushing the envelope for agent development.

Vercel’s AI SDK: MCP Support for Seamless Tooling

Vercel, known for its developer-first platform and Next.js framework, has jumped onboard with MCP to make AI integrations easier for web developers. In a recent update to their AI SDK (v4.2), Vercel announced full support for MCP clients. This means developers using Vercel’s AI SDK can now connect to any MCP-compatible tool server directly from their applications, no extra glue code needed.

What does this look like in practice? Vercel provides a lightweight MCP client within the AI SDK. With a few lines of code, you can point this client at one or more MCP server endpoints (local or remote), and then automatically fetch the list of tools those servers provide. For example, if you want your Next.js AI app to have GitHub and Slack capabilities, you might spin up the official GitHub MCP server (which exposes repo and issue management functions) and Slack MCP server (for messaging functions). The AI SDK can connect to both and retrieve their available “tools.” Then, when you call Vercel’s generateText or useChat functions for your AI model, you simply pass in these MCP tools. The model can now invoke, say, create_issue or send_slack_message as part of its output – and the SDK will route those calls via MCP to the right server.

Crucially, Vercel’s implementation supports both local and remote MCP servers. For local tools (maybe a custom script or a local database), the SDK can use standard I/O streams to talk to a local MCP server process. For remote tools, Vercel uses Server-Sent Events (SSE) to maintain a live connection. This means your web app can interact with cloud-based MCP services without worrying about managing long polling or websockets manually – the SDK handles it. The developer experience is quite streamlined: they even provided recipes and docs to walk you through connecting an MCP server and using its tools in your Next.js app.
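A hedged sketch of both transport options using the AI SDK’s experimental MCP client; the server command, URL, and file names are placeholders:

```ts
import { experimental_createMCPClient } from "ai";
import { Experimental_StdioMCPTransport } from "ai/mcp-stdio";

// Local tool server: spawned as a child process, spoken to over stdio.
const localClient = await experimental_createMCPClient({
  transport: new Experimental_StdioMCPTransport({
    command: "node",
    args: ["./filesystem-mcp-server.js"], // hypothetical local server
  }),
});

// Remote tool server: reached over a long-lived SSE connection the SDK manages.
const remoteClient = await experimental_createMCPClient({
  transport: { type: "sse", url: "https://slack-mcp.example.com/sse" },
});

// Both clients expose the same interface; pass their tools to
// generateText or streamText as usual.
const tools = {
  ...(await localClient.tools()),
  ...(await remoteClient.tools()),
};
```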

Real-world example: Vercel highlighted popular MCP servers like GitHub, Slack, and Filesystem connectors. A founder could quickly prototype an agent that, say, watches for issues on GitHub and posts a summary to Slack. Without MCP, you’d need to individually integrate GitHub’s REST API and Slack’s API, handle auth for each, and parse responses. With MCP and Vercel’s SDK, your AI agent simply knows “there’s a list_issues tool and a post_message tool available.” The heavy lifting of authentication, calling the external API, and formatting the results is encapsulated in the MCP server implementations (often maintained by the community or the tool provider).

Vercel is also making it easy to deploy your own MCP servers on their platform. They released a template for running an MCP server with Vercel Functions (Serverless Functions) – effectively letting developers host custom MCP integrations on Vercel’s infrastructure. This complements the client-side support: if you have a proprietary database or service, you can implement a small MCP server for it (using the open SDKs Anthropic provided in various languages), deploy it to Vercel, and instantly your AI app (or any MCP-compatible agent) can hook in. Vercel even supports longer function timeouts (“Fluid compute”) for MCP servers and recommends using a Redis cache to maintain state between calls, acknowledging that some tool calls might be long-running.

For technical teams, Vercel’s adoption of MCP is a signal that MCP is ready for mainstream web development. It’s not just a Claude Desktop experiment; it’s now integrated in the tooling of one of the most popular web dev platforms. This lowers the barrier for frontend and full-stack developers to create AI agents that go beyond chat – because all the integrations are at their fingertips. As Vercel’s announcement put it, MCP opens access to a “growing ecosystem of tools and integrations” and hundreds of pre-built connectors that can add powerful functionality to your app. It empowers small teams to build AI features that previously only big tech companies with large integration teams could build.

MCP server support: Vercel has also announced support for deploying MCP servers, via the @vercel/mcp-adapter package, which supports both the SSE and streamable HTTP transports. This simplifies building MCP servers on Vercel and lets developers leverage Fluid compute for cost-effective, performant AI workloads. By hosting MCP servers on Vercel’s infrastructure, developers can integrate with various tools and services and further expand the capabilities of AI agents. Learn more about Vercel’s MCP server support.
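Here’s a minimal sketch of what such a server can look like as a Next.js route handler, assuming the @vercel/mcp-adapter API as documented at the time of writing; the tool and its data are hypothetical:

```ts
// e.g. app/api/[transport]/route.ts in a Next.js app (path per Vercel's template)
import { createMcpHandler } from "@vercel/mcp-adapter";
import { z } from "zod";

const handler = createMcpHandler((server) => {
  // A hypothetical tool backed by your own database or service.
  server.tool(
    "lookup_order",
    "Look up the status of an order by ID",
    { orderId: z.string() },
    async ({ orderId }) => ({
      content: [{ type: "text", text: `Order ${orderId}: shipped` }],
    })
  );
});

export { handler as GET, handler as POST };
```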


Cloudflare’s Durable Agents: MCP in a Serverless, Stateful World

If Vercel brings MCP to the frontend developer community, Cloudflare is bringing MCP to the cloud at massive scale. In April 2025, Cloudflare announced a suite of features centered on AI agents, positioning their global network as “the best place to build and scale AI agents”. A cornerstone of this announcement was support for MCP – in fact, Cloudflare touted offering the industry’s first remote MCP server hosting service. Let’s unpack what that means and why it’s important.

Originally, MCP servers (the tool implementations) were often run locally or in dev environments. Anthropic’s Claude Desktop, for instance, can spin up a local MCP server to let Claude read your files. That’s great for a single user, but not for a production agent accessible by many users. Cloudflare stepped in to make MCP servers a cloud-native concept – you can deploy an MCP server to Cloudflare Workers (their serverless platform), and have it accessible over the internet for any agent (with proper auth) to use. This effectively decouples the agent from the tools’ location. An AI agent running in one environment can call tools hosted anywhere. Cloudflare provided the hosting, security, and global low-latency access for those tools.

Not only that, Cloudflare combined MCP support with their unique strength: Durable Objects. In Cloudflare’s platform, a Durable Object (DO) is a singleton stateful instance that can be addressed by an ID – think of it like a tiny server that preserves state (variables, database, etc.) across invocations. When they built their Agents SDK, Cloudflare used Durable Objects to give each agent (or each user session) its own stateful back-end process. This is a game-changer for long-running agents: it means your agent can literally keep variables in memory or in an embedded SQLite database and run continuously. There are no hard 50ms or 30s limits like typical serverless; a Durable Object can run as long as needed and maintain persistent storage. Cloudflare even says agents can “run for seconds, minutes or hours: as long as the tasks need” in this model.

Now tie that with MCP. Cloudflare’s Agents SDK now has built-in support to act as an MCP client and to easily spin up MCP servers within an agent. An agent running on Cloudflare can connect out to external MCP servers (e.g., a Notion or Stripe MCP server) securely – they provided integrations with Auth0, Stytch, etc. to handle OAuth flows and permission scopes when an agent connects to a third-party MCP service. This means a Cloudflare agent can say, “Hey user, I need access to your Google Calendar (MCP). Please sign in and approve.” Once approved, the agent’s MCP client is authorized and it can start using the Google Calendar tools on the user’s behalf, with all the auth tokens managed. This is crucial for security, ensuring agents don’t become uncontrolled super-bots; Cloudflare’s framework enforces user consent and scope of access for each MCP integration.

On the flip side, Cloudflare makes it trivial to deploy your own MCP servers on their network. With a new McpAgent class, a developer can write what looks like a normal Cloudflare Worker that defines some tools and resources, and Cloudflare handles exposing it as a remote MCP endpoint (with an SSE stream, etc.). They even handle automatically hibernating idle MCP servers (shutting down the Worker/Durable Object when not in use, but keeping the connection alive via a parked WebSocket) to save costs – meaning you can have potentially thousands of agents/tools “alive” and only pay when they’re actively used.
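A sketch of the shape this takes with Cloudflare’s Agents SDK; the class name, tool, and CI/CD call are hypothetical, and the Worker still needs the usual wrangler Durable Object bindings:

```ts
import { McpAgent } from "agents/mcp";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

// Each connected session is backed by its own Durable Object, so state persists.
export class DeployTools extends McpAgent {
  server = new McpServer({ name: "deploy-tools", version: "1.0.0" });

  async init() {
    this.server.tool(
      "deploy_service",
      { service: z.string(), version: z.string() },
      async ({ service, version }) => {
        // ...trigger your CI/CD pipeline here (hypothetical)...
        return { content: [{ type: "text", text: `Deploying ${service}@${version}` }] };
      }
    );
  }
}

// Expose the agent as a remote MCP endpoint (SSE stream, hibernation handled for you).
export default DeployTools.mount("/sse");
```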

In practice: Suppose you want to build a long-running DevOps agent that monitors your application and can deploy code, adjust configs, and send alerts. On Cloudflare’s platform, you could:

  • Use the Agents SDK to create a stateful agent that keeps track of the system state (using Durable Object storage for logs, etc.).
  • Implement or plug in various MCP servers: one for your CI/CD system (to deploy code), one for your cloud provider (to scale servers), one for PagerDuty (to send alerts), etc. Some of these might already exist as MCP servers; if not, you can write them and host on Cloudflare.
  • Your agent can schedule itself (Cloudflare agents can have cron-like scheduling and background triggers) and react to events (like a monitoring alert) persistently. When it needs to act, it will connect to the relevant MCP server(s) and invoke the needed tools. Because each agent runs near users on Cloudflare’s global network, the latency is low; because each has its own Durable Object, the state is kept safe and isolated (see the sketch below for the scheduling and state pieces).
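Here’s a hedged sketch of those scheduling and state pieces using the Agents SDK; the cron cadence, state shape, and bindings are illustrative:

```ts
import { Agent } from "agents";

type Env = unknown; // your Worker bindings go here (hypothetical)
type State = { lastIncident: string | null };

export class DevOpsAgent extends Agent<Env, State> {
  initialState: State = { lastIncident: null };

  async onStart() {
    // Re-check system health every 15 minutes; schedules survive hibernation.
    await this.schedule("*/15 * * * *", "checkHealth");
  }

  async checkHealth() {
    // ...connect to your monitoring / CI / PagerDuty MCP servers and invoke tools...
    // State lives in the agent's Durable Object, so it persists across wakes.
    this.setState({ lastIncident: new Date().toISOString() });
  }
}
```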

Cloudflare’s CEO was pretty direct, saying “Cloudflare is the best place to build and scale AI agents. Period.” and backing it up with the pieces: global low-latency network, serverless compute, stateful storage, and now MCP to tie in external APIs. By open-sourcing MCP and collaborating with others, Cloudflare isn’t locking developers in – instead they benefit from the broader community of MCP integrations. In turn, their robust platform solves the other hard parts (scaling, state, uptime).

One more notable thing Cloudflare did: they opened up a free tier for Durable Objects specifically citing the need to let developers experiment with building agents. This removed a potential cost barrier, inviting even indie devs to try creating long-lived agents with MCP. They clearly view MCP-enabled agents as a key future workload (just like web hosting or Workers functions).

All told, Cloudflare’s adoption of MCP and their enhancements around it (auth flows, hibernation, workflows scheduling) strongly reinforce the notion that MCP is becoming the standard for agent tool interaction. When a cloud platform of Cloudflare’s scale bakes in support and even cites adoption by OpenAI and others, it signals that MCP’s approach – an open agent-tool protocol – is likely here to stay.

Stripe’s Order Intents: Bringing Payments into Agent Workflows

No discussion of real-world AI agent use-cases is complete without commerce – can AI agents not only retrieve information, but actually transact on our behalf? Stripe, the fintech developer darling, is embracing that vision and has started integrating with MCP and agent platforms to enable payments and commerce in AI workflows.

[Figure: Stripe’s Order Intents API announcement at Sessions 2025]

At Stripe Sessions 2025 (Stripe’s annual product conference), Stripe unveiled an “Order Intents API” – a new API designed specifically for autonomous agents to complete purchases end-to-end. The idea is to let you “create a commerce agent in seconds”. This is huge: it means an AI agent could potentially search for a product, add it to a cart, decide on options (size, shipping), and execute the purchase, all through Stripe, without a human clicking through a checkout page. Stripe’s Order Intents is like a counterpart to their well-known Payment Intents API, but expanded to handle the entire order lifecycle (product selection, fulfillment, etc.) in a programmatic way.

To ensure trust and safety, Stripe didn’t do this alone – they launched Order Intents in partnership with Visa on a concept called “Agent Cards”. Essentially, Visa is providing a framework for issuing virtual payment cards that an AI agent can use with controlled permissions (like spending limits, merchant restrictions, etc.). Stripe’s Agentic API (as they call it) plus Visa’s Agent Cards mean you can empower an agent to spend money within defined bounds. This addresses a major concern (rogue AI racking up charges) by sandboxing the payment method. It’s a fascinating development that shows the ecosystem adapting: even credit card networks are creating products for AI agents!

Beyond Order Intents, Stripe is also making sure that MCP-based agents can easily integrate Stripe’s services. They published documentation on “Adding Stripe to your agentic workflows”, which includes an official Stripe MCP server and toolkit. The Stripe MCP server provides AI agents with a set of standardized Stripe Tools – like creating a customer, initiating a payment, checking an invoice, or querying their knowledge base. In fact, Stripe exposed all their developer docs in an easy plain-text format to be AI-friendly, and they even host an llms.txt file to guide AI agents on how to fetch those docs. This shows Stripe anticipates agents (maybe via MCP clients) reading documentation or executing API calls as part of their tasks.
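For example, here’s roughly how Stripe’s agent toolkit plugs Stripe tools into a Vercel AI SDK call, with the enabled actions explicitly allow-listed (the model, prompt, and action selection are illustrative):

```ts
import { StripeAgentToolkit } from "@stripe/agent-toolkit/ai-sdk";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// Only explicitly allow-listed Stripe actions are exposed to the model.
const stripeToolkit = new StripeAgentToolkit({
  secretKey: process.env.STRIPE_SECRET_KEY!,
  configuration: {
    actions: {
      customers: { create: true },
      paymentLinks: { create: true },
    },
  },
});

const { text } = await generateText({
  model: openai("gpt-4o"),
  tools: stripeToolkit.getTools(),
  maxSteps: 3,
  prompt: "Create a customer for ada@example.com and send her a payment link for the Pro plan.",
});
```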

Stripe’s agent toolkit goes a step further by integrating with Cloudflare’s Agents SDK. They offer wrapper functions that monetize tool calls on MCP servers. For example, they demonstrated using a special kind of MCP agent class (PaidMcpAgent) that automatically handles charging a user (via Stripe) when a certain tool is invoked. Imagine an AI service where certain actions cost money (like generating a complex image or pulling a premium dataset) – Stripe’s toolkit can prompt the user, handle payment via Stripe, and only execute the tool once payment is confirmed. This is a clever fusion of MCP (for the tool call) with Stripe’s payments infrastructure, giving developers a ready-made solution to bill for agent actions. In other words, not only is Stripe enabling agents to buy things on behalf of users, it’s also enabling developers to charge users for agent-driven services. This opens up viable business models around long-running agents.
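Stripe’s published examples show this via a paidTool helper on the paid MCP agent class; the sketch below follows that shape, but treat the exact signature, price ID, and URLs as illustrative rather than definitive:

```ts
import {
  experimental_PaidMcpAgent as PaidMcpAgent,
  type PaymentState,
} from "@stripe/agent-toolkit/cloudflare";
import { z } from "zod";

type Env = { STRIPE_SECRET_KEY: string }; // hypothetical Worker binding
type State = PaymentState & {};

export class PremiumTools extends PaidMcpAgent<Env, State, {}> {
  async init() {
    // paidTool wraps a normal MCP tool: the user is sent a Stripe checkout
    // link, and the handler only runs once payment is confirmed.
    this.paidTool(
      "generate_report",
      { topic: z.string() },
      ({ topic }) => ({
        content: [{ type: "text", text: `Premium report on ${topic}...` }],
      }),
      {
        checkout: {
          success_url: "https://example.com/thanks",
          line_items: [{ price: "price_123" }], // hypothetical Price ID
          mode: "payment",
        },
        paymentReason: "Generating a premium report is a paid action.",
      }
    );
  }
}
```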

To put it concretely, consider an AI travel agent. It could use Order Intents to book flights or hotels for you (charging your card via an Agent Card). If the travel agent itself charges a fee or uses some premium API for, say, seat selection, it could use Stripe’s toolkit to handle that payment in the workflow. All of these steps use MCP in the background to standardize the interactions – the travel agent calls a “BookFlight” tool (MCP to an airline integration) and a “ChargeCard” tool (MCP to Stripe), and so on, orchestrating a full transaction.

Stripe’s enthusiastic participation in the MCP ecosystem underscores that MCP is not just about tech integrations like GitHub or Slack – it’s about real-world business and money. By making their services MCP-friendly, Stripe signals that they expect AI agents to become real economic actors (with appropriate safeguards). And for developers, it means one of the trickiest parts of agent workflows – handling money – now has a clear pathway. You don’t have to cobble together a solution; you can use Stripe’s agent APIs, which are designed for this use case. Given Stripe’s reputation for developer experience, this lowers the friction for anyone building an agent that might need to charge a customer or make a purchase.

In summary, Stripe is helping ensure that AI agents can safely and effectively interact with financial systems, all through standardized protocols. It’s another vote of confidence for MCP’s approach, and it’s likely to spur new kinds of long-running agents (like autonomous shopping assistants, finance managers, etc.) that were previously out of reach.

Comparing Google's A2A Protocol with Anthropic's MCP

MCP’s momentum has outpaced A2A in just a few months: major AI platforms from OpenAI and Google DeepMind to Microsoft have plugged into Anthropic’s Model Context Protocol, while Google’s Agent‑to‑Agent (A2A) spec is only now emerging, largely in enterprise channels. Even though A2A was designed to complement MCP by enabling peer‑to‑peer agent collaboration, developers are defaulting to MCP’s simpler “tool access” use case, driving broader ecosystem support. To make matters worse, the term “A2A” already means “account‑to‑account” in payments—think Stripe’s A2A bank transfers—adding confusion that further slows uptake of Google’s new protocol.

MCP’s Rapid Rise

Anthropic open‑sourced MCP in November 2024, positioning it as a “USB‑C port for AI apps” to standardize model‑to‑tool integrations (Wikipedia). By late March 2025, OpenAI announced MCP support across its Agents SDK and ChatGPT desktop apps, with Sam Altman calling it a “rapidly emerging open standard” (Wikipedia). Days later, Google DeepMind CEO Demis Hassabis confirmed that Gemini models and SDKs would embrace MCP, praising it as “good protocol… rapidly becoming an open standard for the AI agentic era” (TechCrunch). Microsoft teamed up with Anthropic to build an official C# MCP SDK, reflecting enterprise demand for cross‑language client libraries (Microsoft for Developers). Community momentum is visible in dozens of third‑party MCP servers—covering Slack, GitHub, PostgreSQL, Google Drive, and even Stripe integrations—available in the modelcontextprotocol GitHub org by mid‑2025 (Wikipedia).

A2A’s Slower Start

Google unveiled the Agent2Agent (A2A) protocol only last month, aiming to let independent agents “discover each other, establish secure communication, and coordinate tasks” across enterprise systems (Google Developers Blog). Several big partners—Atlassian, PayPal, Salesforce, SAP, ServiceNow, Accenture, and others—joined the initial A2A working group, signaling enterprise interest (Google Developers Blog). Microsoft added A2A support to Copilot Studio and Azure AI Foundry, citing over 10,000 orgs using its Agent Service—but this announcement came weeks after MCP had already seen multiple cross‑industry adoptions (Computerworld, TechCrunch). Google’s own docs frame A2A as “complementary” to MCP—A2A handles inter‑agent choreography, while MCP provides internal tool/context access—but real‑world use tends to start with MCP’s simpler integration model (Google GitHub).

Naming Collision: A2A ≠ “Agent‑to‑Agent” in Payments

Stripe—and other fintech platforms—use “A2A” to mean “account‑to‑account” transfers (e.g., ACH debits) for faster, lower‑fee payouts (Swipesum). This preexisting financial lingo leads to ambiguity when enterprise developers search for “A2A” resources—are they looking for payment APIs or agent interoperability specs? Until Google & Stripe clearly differentiate their use‑cases, this naming overlap risks diluting A2A’s brand and slowing documentation discovery.

Conclusion: A Future-Defining Protocol for the AI Age

In the span of just a year or so, the Model Context Protocol has evolved from an open-spec proposal by Anthropic into the connective tissue of the AI agent ecosystem. The momentum behind MCP is striking: we see cloud platforms (Cloudflare) making it a core part of their offering, developer tools (Vercel) baking it in for instant access to integrations, and even fintech leaders (Stripe) building new APIs around it. Other AI platforms – from OpenAI’s Agents to independent projects like Agent One – are also converging on MCP as the standard way to empower agents with actions.

For technical founders and developers, MCP represents an opportunity akin to the early days of the web or mobile revolutions: an emerging standard that can dramatically accelerate development and unlock new product possibilities. By leveraging MCP, you can focus on your agent’s unique logic or intelligence, rather than spending weeks integrating yet another API for the umpteenth time. Your agent can be born “world-aware” and “action-capable” from day one, tapping into a rich toolbox of external capabilities. And because MCP is an open standard, improvements and new integrations benefit everyone in the network. It’s a virtuous cycle – more MCP servers built → more tools available for agents → more developers adopting MCP for their agents → incentive to build even more MCP servers.

MCP also lowers the barrier to entry for creating long-running, autonomous services. As we saw, infrastructure like Cloudflare Agents plus MCP can handle the heavy lifting of persistence, scaling, and security. A small startup could build a sophisticated AI agent that, for example, manages entire e-commerce operations, by stringing together MCP integrations: inventory management, customer emails, payment processing via Stripe’s Order Intents, etc. This agent could run 24/7, only escalating to humans when necessary. Five years ago, that kind of integration project would be massive; today, with MCP and modern agent platforms, it’s becoming realistic to prototype in a hackathon.

Importantly, MCP’s design aligns with developer happiness and best practices. Its “USB-C for AI” approach means less custom glue code and more plug-and-play components. The fact that companies are providing SDKs, templates, and guides (like Vercel’s recipes, Cloudflare’s templates, Stripe’s toolkit) makes it even easier to jump in. If you’re comfortable with APIs and webhooks, you’ll find MCP’s JSON-RPC style quite familiar. And if you’re building a tool, offering an MCP server for it can immediately broaden its reach to any AI client supporting the protocol – a compelling distribution channel in the age of AI-first software.

Will MCP “win” as the de facto standard for AI-tool interaction? All signs point to yes. Alternatives (proprietary plugin formats or siloed marketplaces) are giving way to this more universal model, much like open protocols tend to win out in the long run. Even OpenAI, which initially pushed its own plugin system, is now working with (or at least not against) MCP – as evidenced by mentions of OpenAI’s support and the OpenAI Agents SDK extension for MCP. The ecosystem is rallying around a common approach because it benefits everyone: developers, service providers, and end users who get more capable AI applications.

In conclusion, the Model Context Protocol is future-defining because it has set the foundation for AI agents to truly become part of the software fabric. An AI agent that can orchestrate complex tasks across many systems – reliably and securely – is no longer science fiction. It’s being built today on top of MCP. For any team looking to build the next generation of AI-driven applications, keeping an eye on MCP and its growing adoption is a must. Adopting MCP now isn’t just a bet on a promising technology; it’s aligning with the direction of the industry. As we’ve seen through Vercel, Cloudflare, Stripe, and others, MCP is making AI agents more powerful, more practical, and more ubiquitous. In the coming years, when we interact with AI agents that handle everything from DevOps to personal finances, we’ll have protocols like MCP to thank for making it all possible.

Sources: The insights and examples above are drawn from official announcements and documentation by Anthropic, Vercel, Cloudflare, Stripe, and others. Key references include Anthropic’s MCP introduction, Vercel’s AI SDK release notes, Cloudflare’s press release and developer blog on Agents/MCP, and Stripe’s Sessions 2025 highlights, among others. These sources collectively paint the picture of MCP’s rapid rise and its central role in the future of long-running AI agents.
