Model Context Protocol (MCP) Explained (2026)
MCP (Model Context Protocol) is an open standard from Anthropic that connects AI models to external tools and data sources. Think of it as USB-C for AI — one standard protocol that lets any AI client talk to any data source or tool.
The Problem MCP Solves
Before MCP
Every AI integration was custom:
Claude ↔ Custom code ↔ GitHub
Claude ↔ Custom code ↔ Slack
Claude ↔ Custom code ↔ Database
ChatGPT ↔ Different custom code ↔ GitHub
ChatGPT ↔ Different custom code ↔ Slack
N AI clients × M data sources = N×M custom integrations.
With MCP
One protocol, universal compatibility:
Claude ↔ MCP ↔ GitHub MCP Server
Claude ↔ MCP ↔ Slack MCP Server
Claude ↔ MCP ↔ Database MCP Server
Any AI ↔ MCP ↔ Any MCP Server
N + M implementations instead of N×M.
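The savings compound quickly as either side grows. A trivial sketch with illustrative numbers:

```typescript
// Integration count with and without a shared protocol.
// Numbers are illustrative: 4 AI clients, 6 data sources.
const clients = 4;
const sources = 6;

const customIntegrations = clients * sources; // every client/source pair needs its own adapter
const mcpImplementations = clients + sources; // one MCP client each, one MCP server each

console.log(customIntegrations); // 24
console.log(mcpImplementations); // 10
```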
How MCP Works
Architecture
┌──────────────┐ ┌─────────────┐ ┌──────────────┐
│ MCP Client │────▶│ MCP Server │────▶│ Data Source │
│ (AI app) │◀────│ (adapter) │◀────│ (GitHub, │
│ │ │ │ │ DB, API) │
└──────────────┘ └─────────────┘ └──────────────┘
│
│ MCP Protocol (JSON-RPC)
│
▼
┌─────────────┐ ┌──────────────┐
│ MCP Server │────▶│ Another │
│ (adapter) │◀────│ Data Source │
└─────────────┘ └──────────────┘
MCP Client: The AI application (Claude Desktop, Cursor, your custom app). Sends requests to MCP servers.
MCP Server: A lightweight adapter that translates between MCP protocol and a specific data source/tool. Each server exposes capabilities through a standard interface.
Transport: Communication happens over stdio (local servers) or HTTP (remote servers; the current spec uses Streamable HTTP, which replaced the earlier HTTP+SSE transport).
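Whatever the transport, every exchange is a JSON-RPC 2.0 message. A sketch of roughly what a `tools/call` request looks like on the wire (the tool name and arguments here are hypothetical):

```typescript
// Illustrative JSON-RPC 2.0 envelope for an MCP tools/call request.
// The method and params shape follow the MCP spec; the payload is made up.
const request = {
  jsonrpc: '2.0' as const,
  id: 1,
  method: 'tools/call',
  params: {
    name: 'create_issue',
    arguments: { title: 'Bug: login fails', repo: 'acme/web' },
  },
};

// Over stdio, each message travels as a single line of serialized JSON.
const wire = JSON.stringify(request);
const parsed = JSON.parse(wire);
console.log(parsed.method); // "tools/call"
```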
What MCP Servers Expose
MCP servers can provide three types of capabilities:
1. Resources (Data)
Static or dynamic data the AI can read:
{
"resources": [
{ "uri": "file:///docs/readme.md", "name": "README", "mimeType": "text/markdown" },
{ "uri": "db://users/recent", "name": "Recent Users", "mimeType": "application/json" }
]
}
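On the server side, each URI maps to content returned from a `resources/read` request. A minimal sketch of that lookup, assuming a hypothetical in-memory store (a real server would hit the filesystem or database):

```typescript
// Hypothetical in-memory resource store keyed by URI.
const resources: Record<string, { mimeType: string; text: string }> = {
  'file:///docs/readme.md': { mimeType: 'text/markdown', text: '# My Project' },
};

// What a resources/read handler conceptually does: resolve the URI
// and return its contents in MCP's response shape.
function readResource(uri: string) {
  const entry = resources[uri];
  if (!entry) throw new Error(`Unknown resource: ${uri}`);
  return { contents: [{ uri, mimeType: entry.mimeType, text: entry.text }] };
}

console.log(readResource('file:///docs/readme.md').contents[0].text); // "# My Project"
```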
2. Tools (Actions)
Functions the AI can call:
{
"tools": [
{
"name": "create_issue",
"description": "Create a GitHub issue",
"inputSchema": {
"type": "object",
"properties": {
"title": { "type": "string" },
"body": { "type": "string" },
"repo": { "type": "string" }
}
}
}
]
}
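The `inputSchema` is standard JSON Schema, so clients and servers can validate arguments before a call. A hand-rolled sketch of the idea, checking only required keys and string types (real servers typically use a full JSON Schema library such as Ajv; the `required` list here is added for illustration):

```typescript
// Minimal, illustrative check of tool arguments against a
// JSON-Schema-like inputSchema. Not a complete JSON Schema validator.
const inputSchema = {
  type: 'object',
  properties: {
    title: { type: 'string' },
    body: { type: 'string' },
    repo: { type: 'string' },
  },
  required: ['title', 'repo'],
};

function validateArgs(schema: typeof inputSchema, args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const key of schema.required) {
    if (!(key in args)) errors.push(`missing required argument: ${key}`);
  }
  for (const [key, value] of Object.entries(args)) {
    const prop = (schema.properties as Record<string, { type: string }>)[key];
    if (prop?.type === 'string' && typeof value !== 'string') {
      errors.push(`argument ${key} must be a string`);
    }
  }
  return errors;
}

console.log(validateArgs(inputSchema, { title: 'Bug', repo: 'acme/web' })); // []
console.log(validateArgs(inputSchema, { title: 'Bug' })); // one error: repo is missing
```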
3. Prompts (Templates)
Pre-built prompt templates for common tasks:
{
"prompts": [
{
"name": "code_review",
"description": "Review a pull request",
"arguments": [{ "name": "pr_url", "required": true }]
}
]
}
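When a client invokes a prompt via `prompts/get`, the server fills the arguments into a message template and returns ready-to-use messages. A sketch of what such a handler might do (the template text is made up):

```typescript
// Hypothetical prompts/get handler: substitute arguments into a template
// and return MCP-style messages for the model to consume.
function getPrompt(name: string, args: { pr_url: string }) {
  if (name !== 'code_review') throw new Error(`Unknown prompt: ${name}`);
  return {
    messages: [
      {
        role: 'user' as const,
        content: {
          type: 'text' as const,
          text: `Please review the pull request at ${args.pr_url} for bugs and style issues.`,
        },
      },
    ],
  };
}

const result = getPrompt('code_review', { pr_url: 'https://github.com/acme/web/pull/42' });
console.log(result.messages[0].content.text);
```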
Building an MCP Server
TypeScript Example
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { ListToolsRequestSchema, CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';

const server = new Server(
  { name: 'my-tool-server', version: '1.0.0' },
  { capabilities: { tools: {} } },
);

// Advertise the tools this server offers
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: 'get_weather',
    description: 'Get current weather for a city',
    inputSchema: {
      type: 'object',
      properties: { city: { type: 'string' } },
      required: ['city'],
    },
  }],
}));

// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name !== 'get_weather') {
    throw new Error(`Unknown tool: ${request.params.name}`);
  }
  const city = request.params.arguments?.city as string;
  const weather = await fetchWeather(city); // your own weather lookup
  return { content: [{ type: 'text', text: JSON.stringify(weather) }] };
});

// Start the server over stdio
const transport = new StdioServerTransport();
await server.connect(transport);
Python Example
import asyncio

from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent

server = Server("my-tool-server")

@server.list_tools()
async def list_tools():
    return [Tool(
        name="get_weather",
        description="Get current weather for a city",
        inputSchema={
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    )]

@server.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "get_weather":
        weather = await fetch_weather(arguments["city"])  # your own weather lookup
        return [TextContent(type="text", text=str(weather))]
    raise ValueError(f"Unknown tool: {name}")

async def main():
    async with stdio_server() as (read_stream, write_stream):
        await server.run(read_stream, write_stream, server.create_initialization_options())

asyncio.run(main())
Available MCP Servers
Official (Anthropic)
| Server | Connects To |
|---|---|
| Filesystem | Local files and directories |
| GitHub | Repos, issues, PRs |
| Slack | Channels, messages |
| Google Drive | Documents, spreadsheets |
| PostgreSQL | Database queries |
| Brave Search | Web search |
| Memory | Persistent knowledge graph |
Community
Hundreds of community-built MCP servers:
- Notion, Linear, Jira
- Docker, Kubernetes
- Stripe, Shopify
- Gmail, Calendar
- Custom databases and APIs
Find servers: mcp.so, GitHub search "mcp server"
Using MCP
In Claude Desktop
Add servers to claude_desktop_config.json:
{
"mcpServers": {
"github": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": { "GITHUB_TOKEN": "ghp_..." }
},
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
}
}
}
With the config above, Claude can read your project files and create GitHub issues, all through natural conversation. Add more servers (e.g., Brave Search) to unlock more capabilities.
In Cursor
Cursor supports MCP servers for enhanced code context. Add servers to access databases, APIs, and external tools while coding.
In Custom Applications
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

const client = new Client({ name: 'my-app', version: '1.0.0' });

// Spawn the server as a subprocess and connect over stdio
const transport = new StdioClientTransport({
  command: 'node',
  args: ['./my-tool-server.js'], // path to your MCP server script
});
await client.connect(transport);

// List available tools
const tools = await client.listTools();

// Call a tool
const result = await client.callTool({
  name: 'get_weather',
  arguments: { city: 'Tokyo' },
});
Why MCP Matters
For Developers
- Build once, use everywhere. Your MCP server works with Claude, Cursor, and any MCP-compatible client.
- Standard protocol. No learning custom APIs for each AI platform.
- Composable. Combine multiple MCP servers for complex workflows.
For Organizations
- Controlled access. MCP servers define exactly what the AI can access and do. Security boundaries are clear.
- Internal tools. Expose internal databases, APIs, and systems to AI assistants through MCP.
- Vendor flexibility. Switch AI providers without rewriting integrations.
For the Ecosystem
- Network effects. Every new MCP server benefits every MCP client. Every new client benefits from every server.
- Open standard. Not controlled by one company (despite Anthropic creating it). Open specification and open-source reference implementations.
FAQ
Is MCP only for Claude?
No. MCP is an open protocol. Any AI application can implement an MCP client. Claude Desktop, Cursor, and many other tools support it. The protocol is provider-agnostic.
How is MCP different from function calling?
Function calling is a feature of specific LLMs. MCP is a protocol for connecting AI clients to external tools. MCP servers can be used with any AI model through an MCP client — the server doesn't care which model is calling it.
Is MCP secure?
MCP defines the protocol, not the security model. Security depends on: what your MCP server exposes, authentication mechanisms, and transport security (TLS for remote servers). You control what the AI can access.
Do I need MCP for simple integrations?
For one AI app talking to one tool: direct integration is simpler. MCP shines when: you have multiple tools, use multiple AI clients, or want standard, reusable integrations.
How mature is MCP?
Early but growing rapidly. The specification is stable. Hundreds of servers exist. Major AI tools support it. Adoption is accelerating.
Bottom Line
MCP is becoming the standard way to connect AI to external tools and data. For developers: build MCP servers instead of custom integrations. For organizations: expose internal systems through MCP for controlled AI access. For the ecosystem: MCP creates a composable layer between AI and the world's tools.
Get started: Install one MCP server in Claude Desktop (e.g., filesystem or GitHub). See how natural it feels when Claude can access your actual data and tools. Then build a custom MCP server for your specific needs — the SDK makes it straightforward.