Industry analyst firm Gartner points out that while the MCP protocol simplifies how AI apps, agents, and data sources connect, it also introduces security and governance risks.
The rapid evolution of AI presents both immense opportunities and new challenges for organizations striving to become truly data-driven. Analyst firm Gartner has long promoted a holistic approach to data management, stressing the need for accessible, trusted, and contextually rich data to power intelligent apps. From this perspective, Gartner examines the recently released Model Context Protocol (MCP), an open standard developed by Anthropic. MCP aims to standardize communication between generative AI (GenAI) apps, AI agents, and the diverse data sources they rely on.
Gartner's recent Innovation Insight report on MCP highlights a critical juncture in the development of AI. While demand for GenAI apps keeps escalating, the current reality is that each GenAI platform employs its own proprietary methods for connecting to external data sources. This inconsistency creates silos, complicates integration efforts, and hinders the smooth flow of context that LLMs need to generate accurate responses.
MCP addresses this lack of consistency with a unified approach. Its purpose is to enable Large Language Models (LLMs) to effortlessly discover and interact with a standardized ecosystem of data sources, tools, and capabilities. Think of it as establishing a common language for GenAI apps to communicate with external sources, leveraging LLM function calling.
According to Gartner’s 2025 Software Engineering Survey, building GenAI apps is a top priority for software engineering teams, but current approaches for connecting GenAI to enterprise data sources remain inconsistent, with each platform providing proprietary ways to register tools and resources. Gartner also expects that by 2028, 33% of enterprise software will include agentic RAG, up from less than 1% today.
Based on these findings, Gartner makes the following strategic assumptions concerning MCP adoption:
By 2026, 75% of API gateway vendors and 50% of iPaaS vendors will have MCP features.
The MCP standard introduces new security, stability, and governance risks similar to earlier API technologies.
Anthropic’s continued involvement is essential for MCP’s ongoing evolution to ensure stability and address new requirements.
The Gartner report breaks down MCP into 3 main components:
Client
The MCP client communicates with the MCP server. Optional features include roots (filesystem locations the client exposes to define where the server may operate) and sampling (server-initiated requests for LLM completions).
Server
The MCP server exposes data sources, functions, and services. For remote access, it mediates between MCP clients and backend services.
Communication
The protocol itself defines how clients and servers exchange information. MCP exchanges JSON-RPC 2.0 messages over its 2 primary transports: local (stdio) and remote (streamable HTTP, via HTTP POST). GenAI apps retrieve tool definitions from MCP servers, which the LLM can then use to respond to user prompts.
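To make that concrete, here's a minimal sketch of what the stdio exchange looks like, using only the Python standard library. The server command, client name, and protocol version string are illustrative placeholders rather than references to any specific product:

```python
import json
import subprocess

# Launch a local MCP server as a child process (stdio transport).
# "my_mcp_server.py" is a placeholder for whichever server you're testing.
server = subprocess.Popen(
    ["python", "my_mcp_server.py"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

def send(message: dict) -> None:
    # The stdio transport frames each JSON-RPC 2.0 message as a single line.
    server.stdin.write(json.dumps(message) + "\n")
    server.stdin.flush()

# 1. Handshake: the client announces its protocol version and capabilities.
send({"jsonrpc": "2.0", "id": 1, "method": "initialize",
      "params": {"protocolVersion": "2025-03-26", "capabilities": {},
                 "clientInfo": {"name": "demo-client", "version": "0.1"}}})
print(server.stdout.readline())  # server's initialize result
send({"jsonrpc": "2.0", "method": "notifications/initialized"})

# 2. Discovery: ask the server which tools it exposes.
send({"jsonrpc": "2.0", "id": 2, "method": "tools/list"})
print(server.stdout.readline())  # tool definitions the GenAI app hands to the LLM
```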
Check out the most awesome MCP servers for 2025
The potential benefits of MCP are perfectly aligned with the vision of empowering organizations with contextualized data:
Enhanced contextual awareness
Imagine AI agents that could tap into a treasure trove of relevant data, from real-time console logs for debugging to MCP servers auto-generated from existing documentation. This enhanced context is foundational to more autonomous agentic RAG systems that decide and act on behalf of users.
Simplified interface design
For developers, MCP streamlines integration efforts. By defining data sources and services as standardized MCP servers, LLM-based apps can automatically discover and use them whenever needed – eliminating the need for one-off integrations and ensuring consistent context management across various tools. Gartner highlights compelling use cases, from Retrieval-Augmented Generation (RAG) assistants to natural language control of CAD platforms and AI-powered e-commerce interactions.
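To illustrate what that automatic discovery looks like in practice, here's a minimal client-side sketch based on the official MCP Python SDK's documented usage pattern (the SDK is still evolving, so exact names may shift); the server command, tool name, and arguments are hypothetical:

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical local server; in practice this could be any registered MCP server.
params = StdioServerParameters(command="python", args=["my_mcp_server.py"])

async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover every tool the server exposes -- no one-off integration code.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")
            # Invoke one of the discovered tools (name and arguments are illustrative).
            result = await session.call_tool("get_customer_profile", {"customer_id": "42"})
            print(result.content)

asyncio.run(main())
```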
Emerging ecosystems and marketplaces
The rapidly expanding ecosystem around MCP is particularly exciting. Platforms like Composio, OpenTools, and Zapier are paving the way for easier discovery, sharing, and building of MCP servers. This layer of standardization promises to unlock a marketplace of high-quality, secure services accessible to LLM agents, as evidenced by the growing collection of community-built servers in the modelcontextprotocol/servers GitHub repository.
While the promise of MCP is significant, the Gartner report also cites the inherent risks that organizations must proactively address:
Implementation consistency
The optional nature of several MCP client and server features could lead to inconsistencies in functionality across different implementations. Developers need to be mindful of these variations to ensure interoperability.
Stability
As a relatively new standard, MCP is still evolving. The recent update incorporating OAuth 2.1 and other significant changes underscores the need for organizations to be prepared for ongoing iterations and potential breaking changes in SDKs.
Application security
Introducing custom MCP clients and servers inherently expands the attack surface. Robust security measures, including strict rate limiting, input/output sanitization, TLS requirements, and OAuth authorization, are paramount.
Software supply chain security
The reliance on third-party MCP servers and tools introduces new supply chain risks. These components should be treated as potentially hostile assets, with inputs diligently validated and outputs sanitized.
Competing standards
The AI landscape is dynamic, and MCP is not the only contender in the realm of AI agent communication. Emerging standards like AGNTCY’s ACP and AGP, Google’s A2A, and others could potentially fragment the market or necessitate a multi-protocol approach in the future.
Reliance on LLM function calling
The effectiveness of MCP is intrinsically linked to the underlying LLM's ability to accurately interpret and utilize the provided tool descriptions. Limitations in the LLM's context management or instruction-following capabilities could still lead to errors, even with MCP standardization.
Based on Gartner's insights and our own experience in navigating complex data integrations at enterprise scale, we at K2view suggest a careful approach to MCP adoption. For software engineering leaders venturing into GenAI, we support the following MCP Gartner recommendations:
Encourage exploration and demand vigilance
Empower your teams to experiment with the MCP client features within their development tools. While understanding the potential benefits and inherent risks firsthand is essential, this exploration must be coupled with a strong emphasis on mitigating security, stability, and governance risks.
Prototype internal MCP services
Begin by implementing prototype MCP services that expose your internal tools and data sources. This step allows you to streamline AI agent development within a controlled environment and assess the real-world benefits for your organization.
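A first internal prototype can be as small as a single file. The sketch below assumes the FastMCP helper from the official MCP Python SDK (names may change as the standard and its SDKs evolve), and the tool itself is a stub standing in for whatever internal data source you choose to expose:

```python
from mcp.server.fastmcp import FastMCP

# The server name is what MCP clients will see during discovery.
mcp = FastMCP("internal-orders")

@mcp.tool()
def get_order_status(order_id: str) -> str:
    """Return the status of an internal order (stubbed for the prototype)."""
    # A real prototype would query an internal system of record here.
    return f"Order {order_id}: shipped"

if __name__ == "__main__":
    # The local stdio transport keeps the prototype inside a controlled environment.
    mcp.run(transport="stdio")
```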
Restrict external exposure
At this stage, we strongly advise against implementing externally facing remote MCP servers for your internal services. Inbound usage of external MCP servers should be tightly controlled and limited to evaluation purposes, not production deployments.
Budget for evolution
Recognize that MCP is a moving target. Allocate sufficient time and resources for your teams to track the standard's evolution and time their implementations accordingly. Be prepared to pivot away from MCP if its initial advantages diminish.
Enforce stringent security
Implement mandatory security requirements for all MCP servers, including the exclusive use of HTTPS endpoints and robust OAuth implementation. Harden configurations for both client and server description files to minimize potential vulnerabilities.
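As a hypothetical illustration of that kind of policy (the field names below are ours, not part of the MCP specification), a client-side gate could refuse to register any remote server entry that isn't served over HTTPS or doesn't declare OAuth authorization:

```python
from urllib.parse import urlparse

def validate_remote_mcp_entry(entry: dict) -> None:
    """Hypothetical pre-registration check for a remote MCP server description."""
    url = urlparse(entry.get("url", ""))
    if url.scheme != "https":
        raise ValueError(f"Rejected non-HTTPS MCP endpoint: {entry.get('url')}")
    if "oauth" not in entry.get("authorization", {}):
        raise ValueError("Remote MCP server entry must declare OAuth authorization")

# Illustrative entry for an internal server description file.
validate_remote_mcp_entry({
    "url": "https://mcp.example.internal/mcp",
    "authorization": {"oauth": {"issuer": "https://auth.example.internal"}},
})
```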
The emergence of the Model Context Protocol represents a significant step towards a more interconnected and standardized GenAI ecosystem. By providing a common language for AI applications to interact with the world, MCP holds the potential to unlock new levels of automation, intelligence, and efficiency.
At K2view, we believe that the key to realizing this potential lies in a balanced approach, one that embraces innovation while staying clear-eyed about the inherent risks. By thoughtfully exploring, securing, and implementing MCP, organizations can pave the way to a future where GenAI and enterprise data converge to drive meaningful insights and satisfying outcomes. The journey is just beginning, and we're excited to see how the MCP standard evolves and shapes the future of intelligent applications.
Discover K2view GenAI Data Fusion, the universal MCP server for the enterprise.