If you've used AI coding assistants like Claude Code, Cursor, or Windsurf, you've probably noticed they can do more than just generate text — they can read files, search the web, query databases, and interact with APIs. But how do these AI agents connect to external tools? The answer is the Model Context Protocol (MCP) — an open standard that's quickly becoming the universal plugin system for AI.
What is MCP?
The Model Context Protocol is an open protocol (created by Anthropic) that standardizes how AI applications connect to external data sources and tools. Think of it as USB for AI — a universal interface that lets any AI agent plug into any tool, without custom integrations for each combination.
Why MCP Matters
Before MCP, every AI tool had to build custom integrations for every data source. If you wanted Claude to access your database, Slack, and GitHub, you'd need three separate integrations — each with its own protocol, auth, and error handling. MCP solves this with a single standard:
- For AI developers: Build one MCP client, connect to any MCP server. No custom integration per tool.
- For tool developers: Build one MCP server, and every AI agent can use it. Write once, work everywhere.
- For enterprises: Control exactly what data AI agents can access. MCP servers enforce permissions at the protocol level.
MCP Architecture
MCP follows a client-server architecture with three main components:
- Hosts: AI applications (Claude Desktop, Cursor, an IDE plugin) that want to reach external capabilities.
- Clients: connectors inside the host, each maintaining a one-to-one connection with a single server.
- Servers: lightweight programs that expose tools, resources, and prompts over the protocol.
What Can an MCP Server Expose?
MCP servers can provide three types of capabilities:
- Tools: Functions the AI can call — like querying a database, sending a Slack message, or creating a GitHub issue. The AI decides when to use them.
- Resources: Data the AI can read — like files, database records, or API responses. Similar to GET endpoints in REST.
- Prompts: Reusable prompt templates that the AI or user can invoke. Useful for standardized workflows.
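To make the three capability types concrete, here is a toy, SDK-free sketch of the kind of listing a server advertises for each. The specific tool, resource, and prompt names are hypothetical; a real server builds these structures through an MCP SDK rather than by hand:

```python
import json

# Toy sketch of what a server advertises for each capability type.
# The names ("query", the log file URI, "summarize-table") are illustrative.
capabilities = {
    "tools": [  # functions the AI can call
        {
            "name": "query",
            "description": "Run a read-only SQL query",
            "inputSchema": {
                "type": "object",
                "properties": {"sql": {"type": "string"}},
                "required": ["sql"],
            },
        }
    ],
    "resources": [  # data the AI can read, addressed by URI
        {
            "uri": "file:///logs/app.log",
            "name": "Application log",
            "mimeType": "text/plain",
        }
    ],
    "prompts": [  # reusable prompt templates the user or AI can invoke
        {
            "name": "summarize-table",
            "description": "Summarize the contents of a database table",
            "arguments": [{"name": "table_name", "required": True}],
        }
    ],
}

print(json.dumps(capabilities, indent=2))
```

Note how tools carry a JSON Schema for their arguments while resources are addressed by URI, mirroring the call-vs-read split between them.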
Building Your First MCP Server (Python)
Let's build an MCP server from scratch that gives AI agents access to a SQLite database. The AI will be able to list tables, describe schemas, and run read-only SQL queries.
```shell
# Install the MCP Python SDK
pip install mcp
```

Project structure:

```
my-db-server/
├── server.py      # MCP server implementation
└── database.db    # SQLite database
```
```python
# server.py - A complete MCP server for SQLite
import json
import os
import sqlite3

from mcp.server.fastmcp import FastMCP

# Create the MCP server (FastMCP handles the protocol plumbing)
mcp = FastMCP("sqlite-explorer")

# Read the database path from the environment so the client config can set it
DB_PATH = os.environ.get("DB_PATH", "database.db")

def get_db():
    conn = sqlite3.connect(DB_PATH)
    conn.row_factory = sqlite3.Row
    return conn

# ── Tool 1: List all tables ────────────────────
@mcp.tool()
def list_tables() -> str:
    """List all tables in the SQLite database."""
    conn = get_db()
    try:
        cursor = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        )
        tables = [row["name"] for row in cursor.fetchall()]
    finally:
        conn.close()
    return json.dumps({"tables": tables}, indent=2)

# ── Tool 2: Describe a table's schema ──────────
@mcp.tool()
def describe_table(table_name: str) -> str:
    """Get the schema (columns and types) of a specific table."""
    conn = get_db()
    try:
        # PRAGMA statements can't take parameters, so validate the name
        # against sqlite_master first to prevent SQL injection
        exists = conn.execute(
            "SELECT 1 FROM sqlite_master WHERE type='table' AND name = ?",
            (table_name,),
        ).fetchone()
        if not exists:
            return f"Error: no such table: {table_name}"
        cursor = conn.execute(f"PRAGMA table_info({table_name})")
        columns = [
            {"name": row["name"], "type": row["type"], "nullable": not row["notnull"]}
            for row in cursor.fetchall()
        ]
    finally:
        conn.close()
    return json.dumps({"table": table_name, "columns": columns}, indent=2)

# ── Tool 3: Run a read-only SQL query ──────────
@mcp.tool()
def query(sql: str) -> str:
    """Execute a read-only SQL query and return results.

    Only SELECT statements are allowed for safety."""
    # Basic safety check: only allow SELECT queries
    if not sql.strip().upper().startswith("SELECT"):
        return "Error: Only SELECT queries are allowed for safety."
    conn = get_db()
    try:
        cursor = conn.execute(sql)
        rows = [dict(row) for row in cursor.fetchall()]
        return json.dumps({"results": rows, "count": len(rows)}, indent=2)
    except sqlite3.Error as e:
        return f"SQL Error: {e}"
    finally:
        conn.close()

# ── Run the server ──────────────────────────────
if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```
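Before wiring the server into a client, it helps to seed database.db with a sample table and sanity-check the same query list_tables runs. This seed script is an assumption, not part of the server itself:

```python
import sqlite3

# Seed a sample database so the server's tools have something to return
conn = sqlite3.connect("database.db")
conn.row_factory = sqlite3.Row
conn.executescript("""
    CREATE TABLE IF NOT EXISTS users (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    );
    INSERT INTO users (name) VALUES ('Jane'), ('Omar');
""")
conn.commit()

# The same query list_tables() runs: verify the table shows up
tables = [row["name"] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
)]
print(tables)  # should include 'users'

conn.close()
```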
Connecting Your MCP Server to Claude
To use your MCP server with Claude Desktop or Claude Code, add it to your configuration:
For Claude Desktop, edit `~/Library/Application Support/Claude/claude_desktop_config.json` on macOS, or `%APPDATA%\Claude\claude_desktop_config.json` on Windows:

```json
{
  "mcpServers": {
    "sqlite-explorer": {
      "command": "python",
      "args": ["/path/to/my-db-server/server.py"],
      "env": {
        "DB_PATH": "/path/to/database.db"
      }
    }
  }
}
```
For Claude Code, add the same entry to `~/.claude/settings.json` or a project-level `.mcp.json`:

```json
{
  "mcpServers": {
    "sqlite-explorer": {
      "command": "python",
      "args": ["server.py"],
      "cwd": "/path/to/my-db-server"
    }
  }
}
```
Once configured, Claude can use your tools naturally. Ask "Show me all the tables in the database" or "Find all users who signed up this week", and it will call your MCP server functions automatically.
Building an MCP Server (TypeScript / Node.js)
The TypeScript SDK is equally powerful. Here's a GitHub MCP server that lets AI agents interact with repositories:
```shell
# Install the MCP TypeScript SDK
npm install @modelcontextprotocol/sdk
```
```typescript
// github-server.ts - MCP server for GitHub
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const GITHUB_TOKEN = process.env.GITHUB_TOKEN;

const server = new Server(
  { name: "github-explorer", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Advertise available tools (handlers are registered by request schema)
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "list_repos",
      description: "List GitHub repositories for a user or organization",
      inputSchema: {
        type: "object",
        properties: {
          owner: { type: "string", description: "GitHub username or org" },
          sort: { type: "string", enum: ["updated", "stars", "name"], default: "updated" }
        },
        required: ["owner"]
      }
    },
    {
      name: "get_issues",
      description: "Get open issues for a repository",
      inputSchema: {
        type: "object",
        properties: {
          owner: { type: "string" },
          repo: { type: "string" },
          state: { type: "string", enum: ["open", "closed", "all"], default: "open" }
        },
        required: ["owner", "repo"]
      }
    },
    {
      name: "create_issue",
      description: "Create a new issue in a repository",
      inputSchema: {
        type: "object",
        properties: {
          owner: { type: "string" },
          repo: { type: "string" },
          title: { type: "string" },
          body: { type: "string" },
          labels: { type: "array", items: { type: "string" } }
        },
        required: ["owner", "repo", "title"]
      }
    }
  ]
}));

// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name } = request.params;
  const args = (request.params.arguments ?? {}) as any;
  const headers = {
    Authorization: `Bearer ${GITHUB_TOKEN}`,
    Accept: "application/vnd.github.v3+json",
    "User-Agent": "mcp-github-server"
  };

  if (name === "list_repos") {
    const res = await fetch(
      `https://api.github.com/users/${args.owner}/repos?sort=${args.sort ?? "updated"}`,
      { headers }
    );
    const repos = (await res.json()) as any[];
    const summary = repos.map((r) => ({
      name: r.name,
      stars: r.stargazers_count,
      language: r.language,
      updated: r.updated_at
    }));
    return { content: [{ type: "text", text: JSON.stringify(summary, null, 2) }] };
  }

  if (name === "get_issues") {
    const res = await fetch(
      `https://api.github.com/repos/${args.owner}/${args.repo}/issues?state=${args.state ?? "open"}`,
      { headers }
    );
    const issues = (await res.json()) as any[];
    const summary = issues.map((i) => ({
      number: i.number,
      title: i.title,
      state: i.state,
      labels: i.labels.map((l: any) => l.name)
    }));
    return { content: [{ type: "text", text: JSON.stringify(summary, null, 2) }] };
  }

  if (name === "create_issue") {
    const res = await fetch(
      `https://api.github.com/repos/${args.owner}/${args.repo}/issues`,
      {
        method: "POST",
        headers: { ...headers, "Content-Type": "application/json" },
        body: JSON.stringify({ title: args.title, body: args.body, labels: args.labels })
      }
    );
    const issue = (await res.json()) as any;
    return { content: [{ type: "text", text: `Created issue #${issue.number}: ${issue.html_url}` }] };
  }

  return { content: [{ type: "text", text: `Unknown tool: ${name}` }], isError: true };
});

// Start the server over stdio
const transport = new StdioServerTransport();
await server.connect(transport);
```
MCP Communication Protocol
Under the hood, MCP uses JSON-RPC 2.0 messages over stdio (standard input/output) or HTTP with Server-Sent Events (SSE). Here's what the messages look like:
```json
// 1. Client discovers available tools
// Request:
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

// Response:
{
  "jsonrpc": "2.0", "id": 1,
  "result": {
    "tools": [{
      "name": "query",
      "description": "Execute a read-only SQL query",
      "inputSchema": {
        "type": "object",
        "properties": {
          "sql": {"type": "string", "description": "The SQL query to execute"}
        },
        "required": ["sql"]
      }
    }]
  }
}

// 2. Client calls a tool
// Request:
{
  "jsonrpc": "2.0", "id": 2,
  "method": "tools/call",
  "params": {
    "name": "query",
    "arguments": {"sql": "SELECT * FROM users WHERE created_at > date('now', '-1 day')"}
  }
}

// Response:
{
  "jsonrpc": "2.0", "id": 2,
  "result": {
    "content": [{
      "type": "text",
      "text": "{\"results\": [{\"id\": 1, \"name\": \"Jane\", ...}], \"count\": 3}"
    }]
  }
}
```
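Handling these messages requires nothing exotic. Here is a minimal dispatcher sketch for the two methods shown above, using only the json module; it ignores initialization and error handling, and the "query" tool it advertises is illustrative rather than wired to a real database:

```python
import json

# Minimal JSON-RPC 2.0 dispatcher sketch for tools/list and tools/call.
# A real MCP server also handles initialize, notifications, and errors.
def handle(raw: str) -> str:
    msg = json.loads(raw)
    if msg["method"] == "tools/list":
        result = {"tools": [{
            "name": "query",
            "description": "Execute a read-only SQL query",
            "inputSchema": {
                "type": "object",
                "properties": {"sql": {"type": "string"}},
                "required": ["sql"],
            },
        }]}
    elif msg["method"] == "tools/call":
        args = msg["params"]["arguments"]
        # A real server would execute the named tool; echo the SQL instead
        result = {"content": [{"type": "text", "text": f"would run: {args['sql']}"}]}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": msg["id"],
                           "error": {"code": -32601, "message": "Method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": msg["id"], "result": result})

print(handle('{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}'))
```

Over the stdio transport, each such message travels as a line on the subprocess's stdin or stdout; the SDKs hide exactly this read-dispatch-write loop.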
Transport Methods
MCP defines two standard transports:
- stdio: the client launches the server as a local subprocess and exchanges JSON-RPC messages over stdin/stdout. Simple and fast; the default for local servers like the ones above.
- Streamable HTTP: the server runs as a standalone, possibly remote, service reachable over HTTP (this supersedes the earlier HTTP + SSE transport). Use it when multiple clients share one server, and pair it with authentication.
Popular MCP Servers You Can Use Today
The MCP ecosystem is growing rapidly. Here are production-ready servers you can plug into any MCP-compatible AI agent:
| Server | What It Does | Language |
|---|---|---|
| GitHub | Read/write repos, issues, PRs, code search | TypeScript |
| PostgreSQL | Query databases, inspect schemas, run analysis | TypeScript |
| Slack | Send messages, read channels, search history | TypeScript |
| Filesystem | Read/write/search files with permission controls | TypeScript |
| Puppeteer | Browser automation, screenshots, web scraping | TypeScript |
| Sentry | Query error tracking, analyze stack traces | Python |
| Brave Search | Web search with AI-friendly results | TypeScript |
Security Best Practices
- Principle of least privilege: Your MCP server should only expose the minimum operations needed. A database server should be read-only unless writes are explicitly required.
- Input validation: Always validate and sanitize tool arguments. SQL injection through an MCP tool is still SQL injection.
- Authentication: For remote MCP servers (HTTP transport), use bearer tokens or OAuth to authenticate clients.
- Rate limiting: AI agents can call tools rapidly. Implement rate limiting to prevent runaway usage.
- Logging: Log every tool call with arguments and results for auditing. You need to know what the AI did with your data.
- Scoping: Use environment variables or config files to control what the server can access. Don't hardcode database URLs or API keys.
The Future of MCP
MCP is still young, but adoption is accelerating. Every major AI coding tool — Claude Code, Cursor, Windsurf, Cline — now supports MCP. The protocol is becoming what HTTP was for the web: the standard that makes everything interoperable.
- For developers: Learning to build MCP servers is one of the highest-leverage skills in AI right now. You're building the tools that AI agents use.
- For companies: MCP lets you give AI agents controlled access to internal systems — databases, APIs, documentation — without exposing raw credentials or building custom integrations.
- For the ecosystem: As more MCP servers are published, AI agents become more capable. A single MCP server for Jira means every AI tool can manage Jira tickets.
MCP is to AI agents what REST was to web services — a universal language that unlocks an ecosystem. Start building your MCP server today, and you'll be ahead of the curve when every application needs an AI-compatible interface.