Why Business Leaders Should Care About MCP and A2A
Created on 2025-06-23 10:17
Published on 2025-06-23 14:50
AI agents have rapidly evolved beyond simple chatbots—they are becoming sophisticated collaborators capable of executing complex workflows, integrating seamlessly with enterprise systems, and coordinating intelligently with other agents.
At the core of this transformation are two key emerging protocols: the Model Context Protocol (MCP) and the Agent-to-Agent (A2A) protocol. These standards are shaping the next generation of agent-based applications by enabling interoperability and orchestration at scale.
Although AI agents are still in the early stages, they already offer significant and actionable return on investment—value that will only grow as adoption increases. In a previous article, I explored how Microsoft can help you build generative AI-based agents and multi-agent applications. In this follow-up, I’ll outline Microsoft’s strategy and planned investments to support two key emerging protocols—at a time when “Agentic AI” is becoming a major industry trend. I’ll also examine how Microsoft's approach stacks up against that of other leading players in the market.
Why It Matters
If you can't see the numbers behind a strategy, ambition, or North Star, then what's the point of investing? Before diving deeper, I’d like to share a few key figures that, in my view, clearly justify why AI agents truly matter.
$1.8 trillion: Estimated global productivity gains from AI agents by 2030 (PwC).
3x faster: Enterprises using multi-agent systems report up to 3x faster process automation cycles.
70%: Of AI leaders say interoperability between agents and tools is a top priority for 2025 (IDC).
90%: Of enterprise AI applications will use agent-based architectures by 2027 (Gartner).
What Are MCP and A2A?
Model Context Protocol (MCP)
A major challenge with current AI applications is the lack of standardization, which complicates and fragments AI integration efforts. The Model Context Protocol (MCP) is an open standard designed to simplify and streamline these integrations by promoting interoperability and reducing fragmentation. It enables seamless connections between LLM applications and your tools and data sources.
MCP allows AI agents to discover, invoke, and interact with tools and data sources in a consistent, secure, and scalable way. It uses a client-server architecture, where a host application (the MCP client) connects to one or more MCP servers that expose various tools and capabilities.
By providing a structured way for agents to access and utilize external capabilities, MCP empowers them to perform meaningful, real-world tasks. Adoption of MCP is growing rapidly across the industry, signaling its importance as a foundational protocol for intelligent agent ecosystems.
In my opinion, MCP delivers two main benefits:
Data providers can expose their data via MCP servers, making it accessible to any MCP-compatible AI client. This allows organizations to leverage the data regardless of which AI application they use.
Developers can create AI applications (MCP clients) that seamlessly tap into a broad ecosystem of data sources and tools, enriching the context and functionality of their apps without building a custom integration for each source.
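The first benefit, a data provider exposing data through an MCP server, can be sketched in a few lines. This is a deliberately minimal dispatcher that answers the two MCP methods shown earlier; the sales data and the `get_quarterly_sales` tool are invented for illustration, and a real server would use an MCP SDK rather than hand-rolled dispatch.

```python
import json

# Hypothetical in-memory "data source" a provider wants to expose.
SALES = {"2024-Q4": 1_250_000, "2025-Q1": 1_410_000}

def handle_request(raw: str) -> str:
    """Minimal MCP-style server: dispatch JSON-RPC requests to a single tool."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        # Advertise the tool so any MCP-compatible client can discover it.
        result = {"tools": [{
            "name": "get_quarterly_sales",
            "description": "Return sales figures for a given quarter",
            "inputSchema": {"type": "object",
                            "properties": {"quarter": {"type": "string"}}},
        }]}
    elif req["method"] == "tools/call":
        quarter = req["params"]["arguments"]["quarter"]
        result = {"content": [{"type": "text",
                               "text": str(SALES.get(quarter, "unknown"))}]}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
```

Once wrapped in a real MCP server, this one tool becomes reachable from any MCP client, regardless of which AI application or vendor the client belongs to.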
Agent-to-Agent (A2A)
A2A is a protocol that allows AI agents—created by different teams, using various technologies, and owned by separate organizations—to communicate and collaborate smoothly. Its primary purpose is to facilitate seamless interaction and coordination among these independent agents.
A2A supports:
Orchestration of third-party agents
Delegation of tasks between agents
Multi-agent workflows with shared context and goals
These capabilities are essential for building scalable, modular AI systems where specialized agents handle different parts of a business process.
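As an illustration of the delegation capability, here is a sketch of the kind of task-submission request one agent might send to another over A2A. The field names (`tasks/send`, `sessionId`, message `parts`) follow the published A2A draft, but treat the whole payload as illustrative rather than a definitive implementation; the workflow and task text are invented.

```python
import json
import uuid

def a2a_delegate(task_text: str, session_id: str) -> str:
    """Sketch of an A2A-style "tasks/send" request an orchestrating agent
    might POST to a specialist agent's endpoint. Field names are assumptions
    based on the A2A draft spec."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "tasks/send",
        "params": {
            "id": str(uuid.uuid4()),      # unique task id
            "sessionId": session_id,      # shared context across the workflow
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": task_text}],
            },
        },
    })

# An orchestrator delegating a sub-task to a hypothetical analytics agent:
req = a2a_delegate("Summarize Q1 churn drivers", session_id="workflow-7")
```

The `sessionId` is what lets multiple agents keep a shared context and goal across a multi-step workflow, which is exactly the third capability listed above.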
Microsoft’s Offerings
Microsoft is making significant investments in enhancing both MCP and A2A capabilities across its AI stack.
For MCP, support is already available in the Azure OpenAI Service via the Responses API. The Foundry Agent Service will add MCP support by end of July 2025, and both the Agent Framework and Copilot Studio are also expected to incorporate MCP capabilities this summer.
Regarding A2A, at the latest Microsoft Build event, Microsoft announced plans to add support this summer across multiple components. As an example, Asha Sharma announced that Azure Foundry Agent Service will enable A2A scenarios through third-party integrations, the Agent Framework will support bidirectional A2A interactions, and Copilot Studio will introduce orchestration capabilities to drive complex A2A workflows.
“Open standards are essential—but they’re only part of the equation. Microsoft is committed to shaping the future of agentic AI by combining open interoperability with the enterprise-grade capabilities that organizations need to deploy agents responsibly and at scale. We welcome the announcement of A2A as a neutral nonprofit project and look forward to collaborating as we help lead this next chapter of open standards for developing agents,” said Yina Arenas, VP of Product, Azure AI Foundry.
Here are my personal highlights from Microsoft's recent announcements:
Azure AI Foundry
Azure AI Foundry features a rapidly growing catalog of over 10,000 models and agents, now enhanced with support for MCP tool definitions. To ensure reliable performance at scale, Microsoft has introduced reserved capacity for leading models such as Grok 3, Mistral, Flux Schnell, and DeepSeek R1, alongside Azure OpenAI models. This allows organizations like yours to maintain consistent performance, even during peak demand. Models hosted and offered by Microsoft come with:
SLA and enterprise-grade compliance
Reserved capacity through provisioned throughput units (PTUs)
Cost efficiency, as PTU quotas can be shared across supported models
Azure AI Foundry also supports open interoperability standards—such as A2A communication and the MCP—to enable agents to collaborate seamlessly across Azure, AWS, Google Cloud, and on-premises environments. Behind the scenes, Microsoft has unified the Semantic Kernel and AutoGen frameworks to power this streamlined agent orchestration.
One of my favorite announcements was the introduction of the Azure MCP Server, which enables AI agents and other clients to interact with Azure resources using natural language commands. It offers several key features:
MCP support: By implementing MCP, the Azure MCP Server is compatible with MCP clients like GitHub Copilot agent mode, the Azure OpenAI Agents SDK, and Semantic Kernel.
Microsoft Entra ID support: It integrates with Entra ID via the Azure Identity library to ensure adherence to Azure authentication best practices.
Service and tool support: The server supports various Azure services and tools, including the Azure CLI and Azure Developer CLI (azd).
Microsoft Fabric
Microsoft Fabric supports MCP connectors through GraphQL, enabling seamless two-way access to enterprise data.
Additionally, with the introduction of data agents in Microsoft Fabric, we can now create conversational AI experiences that answer questions based on data stored in Lakehouses, Warehouses, Power BI semantic models, and KQL databases managed within Microsoft Fabric. Check out this presentation by Amir H. Jafari of Microsoft, demonstrating how to create Data Agents in Fabric for Multi-Agent AI Solutions.
Microsoft Copilot Studio
Microsoft Copilot Studio now integrates MCP to enable seamless tool discovery and real-time data access.
But does this mean Copilot connectors are no longer relevant? Is MCP replacing them? Not at all. In fact, MCP servers are made available to Copilot Studio through the existing connector infrastructure. This allows organizations to continue leveraging enterprise-grade security and governance controls—such as Virtual Network integration, Data Loss Prevention (DLP) policies, and multiple authentication methods—while enabling real-time data access for AI-powered agents.
In my view, MCP and connectors are better together. MCP enhances what connectors can do by enabling more dynamic, context-aware integrations, while connectors provide the trusted, secure foundation that enterprises depend on.
Why Industry-Led Scenarios Matter: From Technology to Tangible Value
As I often say, the real value of AI doesn’t come from the models—it comes from how users apply them. That’s why I always emphasize the importance of improving existing business processes and focusing on real-world use cases.
When we talk about MCP and A2A, we’re not just discussing protocols—we’re unlocking the potential for AI agents to transform how industries operate. Below are three practical scenarios for each of five key industries, showing how these technologies can drive measurable impact today and in the near future.
Banking
Fraud Detection: An agent detects anomalies in transactions and delegates investigation to a compliance agent via A2A.
Loan Processing: MCP enables agents to pull credit scores, income data, and risk models from internal systems.
Customer Service: Agents orchestrate across CRM, KYC, and chatbot systems to resolve queries in real time.
Retail
Inventory Optimization: Agents use MCP to access supply chain APIs and forecast demand collaboratively.
Personalized Marketing: A2A allows marketing agents to sync with recommendation engines and loyalty systems.
Returns Management: Agents coordinate across logistics, finance, and customer support to automate returns.
Media
Content Curation: Agents use MCP to pull metadata and user preferences to recommend content.
Ad Targeting: A2A enables coordination between audience segmentation and bidding agents.
Rights Management: Agents validate licensing terms across contracts and distribution platforms.
Telecommunications
Network Optimization: Agents analyze traffic and reroute dynamically using MCP-exposed telemetry tools.
Customer Onboarding: A2A enables seamless handoff between provisioning, billing, and support agents.
Churn Prediction: Agents collaborate to analyze usage patterns and trigger retention workflows.
Gaming
Live Ops Management: Agents use MCP to monitor player behavior and deploy real-time game updates.
Matchmaking: A2A allows coordination between skill-ranking agents and latency-optimization agents.
Player Support: Agents integrate with forums, ticketing, and in-game chat to resolve issues contextually.