This comprehensive MCP directory for 2025 rates 15 awesome MCP servers on how they expose tools and data to AI apps, inject enterprise data, enable RAG, and more.
With the rise of LLM-powered apps, it’s become clear that feeding LLMs structured, contextual information at runtime is critical for accuracy and personalization – and the Model Context Protocol (MCP) has quickly emerged as the standard for making that possible. An MCP directory helps enterprises compare the features of different MCP servers at a glance.
Within the model context protocol, an MCP server acts as the hub between generative AI (GenAI) apps (MCP clients) and enterprise data sources. Its primary function is to receive data requests from clients, securely retrieve the relevant data and information from various backend systems (databases, APIs, documents, files, etc.), enforce data privacy and security policies (like masking or filtering), and then deliver the processed data back to the requesting client in a structured format and at conversational latency.
The MCP server orchestrates the complex data retrieval process, leveraging the metadata of the underlying sources along with an LLM to determine which sources should be queried and how. It typically combines data from multiple sources and ensures that only authorized data is returned to the AI application.
This crucial function enables a GenAI app to ground its responses in live, enterprise-specific data, enhancing accuracy and personalization, while maintaining data governance.
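To make this concrete, here’s a minimal sketch of an MCP server built with the official Python SDK’s FastMCP API (`pip install mcp`). The `get_customer_profile` tool and the data it returns are hypothetical stand-ins for a real enterprise source:

```python
# Minimal MCP server sketch using the official Python SDK (pip install mcp).
# The tool and its data are hypothetical stand-ins for a real enterprise source.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("enterprise-context")

@mcp.tool()
def get_customer_profile(customer_id: str) -> dict:
    """Return a customer profile with sensitive fields already masked."""
    # A real server would query backend systems and enforce privacy policies here.
    return {"id": customer_id, "tier": "gold", "email": "j***@example.com"}

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio to any MCP client
```

Any MCP client that speaks stdio can now discover and call this tool at runtime, which is exactly the grounding loop described above.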
I've spent the past few months exploring and testing dozens of MCP servers – open-source and commercial; production-grade and experimental.
In this MCP directory, I’ve pulled together a list of the 15 most awesome MCP servers across a range of use cases, from enterprise data and knowledge to dev tools, public APIs, and more.
Whether you're looking to enable Retrieval-Augmented Generation (RAG) over internal docs, fetch CRM and billing data for your RAG chatbot, or feed structured multi-source enterprise data to an LLM through Table-Augmented Generation (TAG), this directory includes a variety of MCP servers that are robust, well-documented, and already being used in the field.
Below, you’ll find a comparison table covering features, open-source status, hosting options, and best use cases for each of these awesome MCP servers.
| # | Name | Features | Open-source | Hosting | Best use |
|---|------|----------|-------------|---------|----------|
| 1 | K2view | Real-time, entity-based data access; secure, silo-spanning virtualization | No | On-prem, Cloud | Enterprise data |
| 2 | Vectara | Semantic search, RAG-ready, embeddings out-of-the-box | Yes | Cloud | Knowledge, notes |
| 3 | Zapier | 6,000+ app automations, live integration context | No | Cloud | Dev tools, integrations |
| 4 | Notion | Workspace data (pages, tasks), context for team AI agents | Yes | Self-hosted, Cloud | Knowledge, notes |
| 5 | Supabase | Serverless, Postgres-based context, edge function support | Yes | Self-hosted, Cloud | Dev tools, infra |
| 6 | Pinecone | Fast vector-based retrieval, optimized for similarity search | Yes | Cloud | Knowledge |
| 7 | OpenAPI (HF) | Community server, OpenAPI-based context injection | Yes | Self-hosted | Public APIs |
| 8 | Slack | Thread & channel context for bots and assistants | No | Cloud | Enterprise data |
| 9 | Salesforce | CRM context for LLMs (leads, tasks, history) | No | Cloud | Enterprise data |
| 10 | LangChain MCP | Agent framework with MCP server adapters | Yes | Self-hosted | Dev tools, infra |
| 11 | LlamaIndex | Index builder + context retriever with custom data loaders | Yes | Self-hosted | Knowledge |
| 12 | Databricks (Mosaic) | AI/ML-ready, Delta Lake integration, enterprise-scale | No | Cloud | Enterprise data |
| 13 | Weather MCP | Reference MCP implementation for time-series APIs | Yes | Self-hosted | Public APIs |
| 14 | OKX MCP Server | Crypto price feeds & market data delivery to LLMs | Yes | Self-hosted | Public APIs |
| 15 | Google Calendar MCP | Context from calendars, schedules, availability | Yes | Self-hosted | Dev tools |
K2view provides a high-performance MCP server designed for real-time delivery of multi-source enterprise data to LLMs. Using entity-based data virtualization tools, it enables granular, secure, and low-latency access to operational data across silos.
Main features:
Real-time data delivery from multiple systems
Granular data privacy and security
Built-in data virtualization and transformation
On-prem and cloud-ready deployments
Resources:
Installation intro
Setup guide
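K2view’s tool catalog is part of its commercial product, so here’s just the generic client side: an MCP `ClientSession` from the official Python SDK calling a hypothetical `get_entity_data` tool. The server command, tool name, and arguments are all assumptions for illustration:

```python
# Generic MCP client sketch (official Python SDK). The server command and the
# get_entity_data tool are hypothetical placeholders for a K2view deployment.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    params = StdioServerParameters(command="k2view-mcp-server")  # placeholder
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "get_entity_data", {"entity": "customer", "id": "42"}
            )
            print(result.content)

asyncio.run(main())
```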
Vectara offers a commercial MCP server designed for semantic search and retrieval-augmented generation (RAG). It enables real-time, relevance-ranked context delivery to LLMs using custom and domain-specific embeddings.
Main features:
RAG framework with semantic search
Automated generation of embeddings
Supports multi-language queries
API-first and open-source reference MCP server
Resources:
Vectara MCP server (GitHub)
MCP server overview
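To illustrate the RAG pattern this server implements, here’s a sketch of a semantic-search MCP tool; `vectara_search` is a hypothetical placeholder, not Vectara’s actual client or tool name:

```python
# Sketch of a RAG-style MCP tool. vectara_search is a hypothetical placeholder;
# Vectara's real MCP server and query API differ.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("rag-search")

def vectara_search(query: str, top_k: int) -> list[str]:
    """Placeholder for a call to a semantic-search backend."""
    return [f"passage {i} relevant to {query!r}" for i in range(top_k)]

@mcp.tool()
def search_docs(query: str, top_k: int = 5) -> list[str]:
    """Return the top-k passages most relevant to the query."""
    return vectara_search(query, top_k)

if __name__ == "__main__":
    mcp.run()
```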
Zapier’s MCP server enables LLMs to interact with thousands of apps, ranging from Google Sheets to simple CRMs. It exposes Zapier workflows, triggers, and automations to GenAI systems.
Main features:
Access to 6,000+ integrated apps
Actions triggered by MCP clients
No-code automation builder
Hosted cloud-based context delivery
Resources:
Zapier MCP server overview
Blog intro
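Because Zapier’s MCP server is hosted, a client connects over a network transport instead of stdio. Here’s a sketch using the official Python SDK’s SSE client; the endpoint URL is a placeholder, and Zapier’s actual transport details may differ:

```python
# Sketch: connect to a hosted MCP server over SSE (official Python SDK).
# The URL is a placeholder for a per-account Zapier MCP endpoint.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

async def main():
    url = "https://example.invalid/your-zapier-mcp-endpoint"  # placeholder
    async with sse_client(url) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])  # e.g., one tool per exposed action

asyncio.run(main())
```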
This MCP server exposes Notion data (pages, databases, tasks) as context to LLMs, allowing AI agents to reference workspace data in real-time. It’s a practical tool for knowledge assistants operating within productivity tools.
Main features:
Access pages, databases, and tasks via MCP
Contextual snapshot of teams’ workspace
Self-hosted server with OAuth integration
Ideal for multi-user knowledge management
Resources:
Notion MCP server
GitHub repository
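A self-hosted server like this typically wraps Notion’s public API. Here’s a minimal sketch using the `notion-client` package; it’s my own illustration, not the linked server’s actual code:

```python
# Sketch: expose Notion search as an MCP tool via the notion-client package.
# Illustration only; the linked Notion MCP server is implemented differently.
import os

from mcp.server.fastmcp import FastMCP
from notion_client import Client

mcp = FastMCP("notion-context")
notion = Client(auth=os.environ["NOTION_TOKEN"])

@mcp.tool()
def search_workspace(query: str) -> list[dict]:
    """Search Notion pages and databases matching the query."""
    results = notion.search(query=query)["results"]
    return [{"id": r["id"], "type": r["object"], "url": r.get("url")} for r in results]

if __name__ == "__main__":
    mcp.run()
```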
The Supabase MCP Server bridges edge functions and Postgres to stream contextual data to LLMs. It’s built for developers who want serverless, scalable context delivery, based on user or event data.
Main features:
Postgres-native MCP support
Edge Function triggers for live updates
Integration with RLS and auth
Open-source and self-hostable
Resources:
Supabase blog intro
GitHub repository
Docs
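To show the shape of Postgres-backed context delivery, here’s a sketch of an MCP tool built on the `supabase` Python client. The `orders` table and its columns are hypothetical, and Supabase’s official server exposes a different tool set:

```python
# Sketch: serve rows from a Supabase Postgres table as MCP context.
# The orders table is hypothetical; Supabase's own MCP server differs.
import os

from mcp.server.fastmcp import FastMCP
from supabase import create_client

mcp = FastMCP("supabase-context")
sb = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_ANON_KEY"])

@mcp.tool()
def recent_orders(user_id: str, limit: int = 10) -> list[dict]:
    """Return the user's most recent orders (RLS policies still apply)."""
    resp = (
        sb.table("orders")
        .select("id, total, created_at")
        .eq("user_id", user_id)
        .order("created_at", desc=True)
        .limit(limit)
        .execute()
    )
    return resp.data

if __name__ == "__main__":
    mcp.run()
```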
Built on Pinecone’s vector database, this MCP server supports fast, similarity-based context retrieval. It’s optimized for applications that require LLMs to recall semantically relevant facts or documents.
Main features:
Fast vector search, optimized for similarity
Scalable retrieval
Embedding-based document indexing
Production-grade latency and reliability
Resources:
GitHub repository
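The retrieval step is a standard Pinecone vector query. Here’s a sketch of wrapping one as an MCP tool; the index name and the toy `embed()` helper are placeholders you’d replace with your own index and embedding model:

```python
# Sketch: similarity search over a Pinecone index as an MCP tool.
# The index name and the toy embed() helper are placeholders.
import os

from mcp.server.fastmcp import FastMCP
from pinecone import Pinecone

mcp = FastMCP("pinecone-retrieval")
index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("docs")  # placeholder

def embed(text: str) -> list[float]:
    """Toy 8-dim embedding; replace with a real model matching your index."""
    return [b / 255 for b in text.encode()[:8].ljust(8, b"\0")]

@mcp.tool()
def similar_docs(query: str, top_k: int = 5) -> list[dict]:
    """Return IDs and scores of the documents most similar to the query."""
    res = index.query(vector=embed(query), top_k=top_k, include_metadata=True)
    return [{"id": m.id, "score": m.score} for m in res.matches]

if __name__ == "__main__":
    mcp.run()
```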
A community-built OpenAPI MCP server designed to enable transparent, standardized access to LLM context. It demonstrates interoperability between LLM tools and open data resources.
Main features:
Standardized interface for OpenAPI-based APIs
Lightweight demo implementation
Supports Hugging Face Spaces deployment
Ideal for community experimentation
Resources:
Install guide / blog
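The core idea is turning endpoints described by an OpenAPI spec into MCP tools. Here’s a hand-rolled sketch with `httpx` of what such a bridge automates; the base URL is a placeholder:

```python
# Sketch: a hand-rolled version of what an OpenAPI-to-MCP bridge automates.
# The base URL is a placeholder; a real bridge derives tools from the spec.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("openapi-bridge")
BASE_URL = "https://api.example.invalid"  # placeholder

@mcp.tool()
def call_api(path: str, params: dict | None = None) -> dict:
    """GET an endpoint of the wrapped REST API and return its JSON response."""
    resp = httpx.get(f"{BASE_URL}{path}", params=params, timeout=10.0)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    mcp.run()
```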
The Slack MCP Server captures real-time conversation threads, metadata, and workflows, making them accessible to LLMs. It’s used in enterprise bots and assistants for enhanced in-channel responses.
Main features:
Thread and channel context injection
Contextual memory for assistant responses
Integrated with Slackbot and slash commands
Enterprise-ready, no self-hosting required
Resources:
Slack MCP server guide
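Thread context maps naturally onto Slack’s Web API. Here’s a sketch using the `slack_sdk` package; it’s an illustration only, since the hosted server’s internals aren’t public:

```python
# Sketch: expose a Slack thread as MCP context via the slack_sdk Web API client.
# Illustration only; not the hosted Slack MCP server's actual implementation.
import os

from mcp.server.fastmcp import FastMCP
from slack_sdk import WebClient

mcp = FastMCP("slack-context")
slack = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

@mcp.tool()
def get_thread(channel_id: str, thread_ts: str) -> list[str]:
    """Return the messages in a thread, oldest first."""
    resp = slack.conversations_replies(channel=channel_id, ts=thread_ts)
    return [m.get("text", "") for m in resp["messages"]]

if __name__ == "__main__":
    mcp.run()
```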
Salesforce’s MCP integration enables CRM data (accounts, leads, conversations) to be injected into LLM workflows. It supports AI use cases in marketing, sales enablement, and service automation.
Main features:
CRM entity access (leads, opportunities, tasks)
Role-based context customization
Integration with Service Cloud AI
Secure, enterprise-grade deployment
Resources:
Marketing, cloud, connect, and install docs
Setup guide
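For a feel of what CRM-entity access looks like in code, here’s a sketch built on the community `simple-salesforce` package; this is my own stand-in, not Salesforce’s MCP product:

```python
# Sketch: serve CRM records as MCP context via the simple-salesforce package.
# This is a community-library stand-in, not Salesforce's own MCP integration.
import os

from mcp.server.fastmcp import FastMCP
from simple_salesforce import Salesforce

mcp = FastMCP("crm-context")
sf = Salesforce(
    username=os.environ["SF_USER"],
    password=os.environ["SF_PASSWORD"],
    security_token=os.environ["SF_TOKEN"],
)

@mcp.tool()
def open_leads(limit: int = 10) -> list[dict]:
    """Return names and statuses of unconverted leads."""
    soql = f"SELECT Name, Status FROM Lead WHERE IsConverted = false LIMIT {limit}"
    return [dict(r) for r in sf.query(soql)["records"]]

if __name__ == "__main__":
    mcp.run()
```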
LangChain supports building full-featured MCP integrations that let AI agents dynamically query knowledge bases and structured data. It ships with out-of-the-box integrations and MCP server adapters.
Main features:
Agent-ready framework with MCP adapters
Plug in external tools with ease
Extensible for autonomous workflows
Powered by composable chains and tools
Resources:
MCP agent setup guide
Beginner tutorial
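On the consuming side, the `langchain-mcp-adapters` package can turn any MCP server’s tools into LangChain tools for an agent. A sketch, assuming that package and a local stdio server (`server.py` is a placeholder, such as any of the sketches above):

```python
# Sketch: load an MCP server's tools into LangChain as agent-ready tools.
# server.py is a placeholder for any stdio MCP server, such as the sketches above.
import asyncio

from langchain_mcp_adapters.tools import load_mcp_tools
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await load_mcp_tools(session)  # LangChain-compatible tools
            print([t.name for t in tools])  # ready to hand to an agent

asyncio.run(main())
```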