AI & Machine Learning
MCP Servers: The New Standard for AI Tool Integration
Last updated: April 14, 2026
TL;DR
MCP (Model Context Protocol) is the open standard that lets AI models call your tools, read your data, and interact with external systems through a unified protocol. Instead of writing custom integrations for every AI provider, you build one MCP server and any MCP-compatible client — Claude Code, Claude Desktop, Cursor, Windsurf — can use it. I've built multiple MCP servers for my own workflow: one that queries my Supabase databases, another that manages deployments, and a Stripe integration that handles billing operations. This guide walks through the full architecture and includes a complete TypeScript MCP server you can adapt. If you're building AI-powered tools or products, MCP is the integration layer you should be building on in 2026.
What MCP Actually Is
MCP stands for Model Context Protocol. Anthropic released it as an open standard in late 2024, and by early 2025 it had become the dominant way AI models interact with external tools. Think of it as a USB-C for AI integrations — a single, standardized interface that replaces dozens of bespoke tool-calling implementations.
Before MCP, every AI integration was hand-rolled. If you wanted Claude to query your database, you wrote a custom tool definition, built the handler, parsed the response, and wired it all together in your application layer. Then if you wanted the same capability in a different AI client, you rebuilt it from scratch. Multiply that by every tool in your stack and you're drowning in glue code.
MCP inverts this. You build a server that exposes your capabilities through a standard protocol. Any MCP client connects to that server and discovers what it can do. The client handles the AI model interaction. The server handles the domain logic. They communicate through a well-defined JSON-RPC protocol over stdio or HTTP.
The protocol defines three core primitives:
- Tools — Functions the AI model can call. Like "query the database" or "create a Stripe invoice." The model decides when to call them based on the conversation.
- Resources — Data the AI model can read. Like files, database records, or API responses. These are pulled into the model's context on demand.
- Prompts — Reusable prompt templates that clients can surface to users. These let you package domain-specific instructions with your server.
When I first encountered MCP, my reaction was "finally." I'd been maintaining separate tool definitions across Claude API integrations, custom scripts, and IDE extensions. MCP collapsed all of that into a single server per capability. One Supabase MCP server. One Stripe MCP server. One deployment MCP server. Any client can use any of them.
Why It Matters — Before vs After
Let me show you the difference with a real example. I manage several client projects that use Supabase. Before MCP, here's what querying a database through Claude looked like:
Before MCP:
// Custom tool definition in my API route
const tools = [
{
name: "query_database",
description: "Run a SQL query against the project database",
input_schema: {
type: "object",
properties: {
query: { type: "string", description: "SQL query to execute" },
project: { type: "string", enum: ["europarts", "heavenly", "uvin"] },
},
required: ["query", "project"],
},
},
];
// Custom handler in the same API route
function handleToolCall(name: string, input: Record<string, unknown>) {
if (name === "query_database") {
const client = getSupabaseClient(input.project as string);
return client.rpc("execute_sql", { sql: input.query });
}
}
// Wire it into the Claude API call
const response = await anthropic.messages.create({
model: "claude-sonnet-4-20250514",
tools,
messages,
});
// Parse tool use blocks, call handlers, send results back...
// 50+ lines of orchestration code per integrationThat's one tool in one application. Now imagine maintaining this across five tools and three different clients (API, Claude Desktop, IDE extension). It's a maintenance disaster.
After MCP:
// supabase-mcp-server.ts — one file, works everywhere
server.tool(
  "query_database",
  "Run a SQL query against the project database",
  {
    query: z.string().describe("SQL query to execute"),
    project: z.enum(["europarts", "heavenly", "uvin"]),
  },
  async ({ query, project }) => {
    const client = getSupabaseClient(project);
    const result = await client.rpc("execute_sql", { sql: query });
    return { content: [{ type: "text", text: JSON.stringify(result.data) }] };
  }
);

That server works with Claude Code, Claude Desktop, Cursor, and any future MCP-compatible client. No glue code. No orchestration. No per-client maintenance burden.
The real power is composability. I run multiple MCP servers simultaneously. Right now, in my Claude Code setup, I have Supabase, Stripe, Cloudflare, and my custom deployment server all connected. Claude sees all their tools and uses them contextually. When I say "check the latest orders in Europarts and verify the Stripe payments match," Claude calls tools across two different MCP servers in a single conversation. That kind of multi-system orchestration used to require a custom agent framework. Now it's configuration.
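For reference, wiring multiple servers into Claude Code is just a config file. A project-level .mcp.json along these lines does it — the server names, package, and path below are illustrative placeholders, not my actual setup:

```json
{
  "mcpServers": {
    "supabase": {
      "command": "npx",
      "args": ["-y", "@supabase/mcp-server-supabase"],
      "env": { "SUPABASE_ACCESS_TOKEN": "..." }
    },
    "deploy": {
      "command": "node",
      "args": ["/path/to/deploy-mcp-server/dist/index.js"]
    }
  }
}
```

Each entry is an independent server process; the client starts all of them and merges their tool lists into one namespace the model can draw from.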
MCP Architecture
The architecture follows a client-server model with a clear separation of concerns:
┌─────────────────────────────────────────────┐
│           MCP Host Application              │
│        (Claude Code, Cursor, etc.)          │
│                                             │
│  ┌──────────────────────────────────────┐   │
│  │             MCP Client               │   │
│  │  - Discovers server capabilities     │   │
│  │  - Routes tool calls from the model  │   │
│  │  - Returns results to the model      │   │
│  └──────────┬───────────────────────────┘   │
└─────────────┼───────────────────────────────┘
              │ JSON-RPC over stdio or HTTP
              │
┌─────────────┼───────────────────────────────┐
│  ┌──────────▼───────────────────────────┐   │
│  │             MCP Server               │   │
│  │  - Exposes tools, resources, prompts │   │
│  │  - Handles execution                 │   │
│  │  - Returns structured results        │   │
│  └──────────┬───────────────────────────┘   │
│             │                               │
│  ┌──────────▼───────────────────────────┐   │
│  │          Your Domain Logic           │   │
│  │  - Database queries                  │   │
│  │  - API calls                         │   │
│  │  - File system operations            │   │
│  │  - Business logic                    │   │
│  └──────────────────────────────────────┘   │
└─────────────────────────────────────────────┘

The protocol lifecycle works like this:
- Initialization — The client starts the server process and sends an initialize request. The server responds with its capabilities (which primitives it supports).
- Discovery — The client calls tools/list, resources/list, or prompts/list to discover what the server offers.
- Execution — When the AI model decides to use a tool, the client sends a tools/call request with the tool name and arguments. The server executes the logic and returns a result.
- Shutdown — The client sends a shutdown signal when done. The server cleans up.
All communication uses JSON-RPC 2.0. Every message has a method, params, and an ID for request-response correlation. This is important because it means the protocol is language-agnostic. You can build MCP servers in TypeScript, Python, Go, Rust — anything that can read stdin and write stdout, or handle HTTP requests.
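To make the wire format concrete, here's a minimal sketch of the JSON-RPC envelope a tool call travels in. The method name and params shape follow the MCP spec; the id and arguments are illustrative:

```typescript
// Minimal sketch of the JSON-RPC 2.0 envelope MCP uses for tool calls.
// The "tools/call" method and { name, arguments } params shape come from
// the MCP spec; the id and argument values here are illustrative.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: unknown;
}

function makeToolsCallRequest(
  id: number,
  tool: string,
  args: Record<string, unknown>
): JsonRpcRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name: tool, arguments: args },
  };
}

// On the stdio transport, each message is serialized as a single JSON line.
const wire = JSON.stringify(
  makeToolsCallRequest(1, "create_task", { title: "Ship blog post" })
);
console.log(wire);
```

The server replies with a message carrying the same id and a result (or error) field — that shared id is how the client correlates responses with requests.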
Building Your First MCP Server in TypeScript
Let's build a complete MCP server. I'll use the official @modelcontextprotocol/sdk package, which is the TypeScript reference implementation. This server will manage a project task list — simple enough to understand, complex enough to demonstrate all the patterns.
mkdir task-mcp-server && cd task-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node tsx

Set up your tsconfig.json:
{
"compilerOptions": {
"target": "ES2022",
"module": "Node16",
"moduleResolution": "Node16",
"outDir": "./dist",
"rootDir": "./src",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true
},
"include": ["src/**/*"]
}

Now the server itself. This is a complete, working implementation:
// src/index.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
// In-memory task store (swap for a database in production)
interface Task {
id: string;
title: string;
status: "todo" | "in-progress" | "done";
priority: "low" | "medium" | "high";
createdAt: string;
updatedAt: string;
}
const tasks = new Map<string, Task>();
// Initialize the MCP server
const server = new McpServer({
name: "task-manager",
version: "1.0.0",
});
// --- Tools ---
server.tool(
"create_task",
"Create a new task with a title and priority",
{
title: z.string().describe("The task title"),
priority: z
.enum(["low", "medium", "high"])
.default("medium")
.describe("Task priority level"),
},
async ({ title, priority }) => {
const id = crypto.randomUUID();
const now = new Date().toISOString();
const task: Task = {
id,
title,
status: "todo",
priority,
createdAt: now,
updatedAt: now,
};
tasks.set(id, task);
return {
content: [
{
type: "text" as const,
text: `Task created: ${task.title} (${task.id})\nPriority: ${task.priority}\nStatus: ${task.status}`,
},
],
};
}
);
server.tool(
"list_tasks",
"List all tasks, optionally filtered by status",
{
status: z
.enum(["todo", "in-progress", "done", "all"])
.default("all")
.describe("Filter tasks by status"),
},
async ({ status }) => {
const filtered = Array.from(tasks.values()).filter(
(t) => status === "all" || t.status === status
);
if (filtered.length === 0) {
return {
content: [{ type: "text" as const, text: "No tasks found." }],
};
}
const summary = filtered
.map(
(t) =>
`[${t.priority.toUpperCase()}] ${t.title} — ${t.status} (${t.id})`
)
.join("\n");
return {
content: [{ type: "text" as const, text: summary }],
};
}
);
server.tool(
"update_task",
"Update a task's status or priority",
{
id: z.string().describe("The task ID"),
status: z
.enum(["todo", "in-progress", "done"])
.optional()
.describe("New status"),
priority: z
.enum(["low", "medium", "high"])
.optional()
.describe("New priority"),
},
async ({ id, status, priority }) => {
const task = tasks.get(id);
if (!task) {
return {
content: [
{ type: "text" as const, text: `Error: Task ${id} not found.` },
],
isError: true,
};
}
if (status) task.status = status;
if (priority) task.priority = priority;
task.updatedAt = new Date().toISOString();
return {
content: [
{
type: "text" as const,
text: `Task updated: ${task.title}\nStatus: ${task.status}\nPriority: ${task.priority}`,
},
],
};
}
);
server.tool(
"delete_task",
"Delete a task by ID",
{
id: z.string().describe("The task ID to delete"),
},
async ({ id }) => {
const task = tasks.get(id);
if (!task) {
return {
content: [
{ type: "text" as const, text: `Error: Task ${id} not found.` },
],
isError: true,
};
}
tasks.delete(id);
return {
content: [
{ type: "text" as const, text: `Task deleted: ${task.title}` },
],
};
}
);
// --- Resources ---
server.resource(
"task-summary",
"tasks://summary",
{
description: "A summary of all tasks by status and priority",
mimeType: "application/json",
},
async () => {
const all = Array.from(tasks.values());
const summary = {
total: all.length,
byStatus: {
todo: all.filter((t) => t.status === "todo").length,
"in-progress": all.filter((t) => t.status === "in-progress").length,
done: all.filter((t) => t.status === "done").length,
},
byPriority: {
high: all.filter((t) => t.priority === "high").length,
medium: all.filter((t) => t.priority === "medium").length,
low: all.filter((t) => t.priority === "low").length,
},
};
return {
contents: [
{
uri: "tasks://summary",
text: JSON.stringify(summary, null, 2),
mimeType: "application/json",
},
],
};
}
);
// --- Start the server ---
async function main() {
const transport = new StdioServerTransport();
await server.connect(transport);
console.error("Task Manager MCP Server running on stdio");
}
main().catch(console.error);

Add the run script to package.json:
{
"scripts": {
"build": "tsc",
"start": "node dist/index.js",
"dev": "tsx src/index.ts"
}
}

That's a fully functional MCP server. Four tools, one resource, proper error handling, Zod validation on every input. Let's break down the key patterns.
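To try it in a real client, register the built server in your MCP config — for example, Claude Desktop's claude_desktop_config.json (the absolute path below is a placeholder for wherever you built the project):

```json
{
  "mcpServers": {
    "task-manager": {
      "command": "node",
      "args": ["/absolute/path/to/task-mcp-server/dist/index.js"]
    }
  }
}
```

Claude Code accepts the same shape in a project-level .mcp.json. Restart the client and the four tools appear in its tool list automatically via discovery.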
Tool Definitions
Tools are the most important primitive in MCP. They define what the AI model can do. A well-designed tool definition has three qualities:
Clear names. The model reads the tool name to decide when to use it. create_task is unambiguous. do_thing is not. I use verb_noun naming consistently.
Descriptive parameters. Every parameter gets a .describe() call in Zod. The model uses these descriptions to decide what values to pass. Vague descriptions lead to wrong arguments.
Structured errors. When a tool fails, return isError: true with a descriptive message. The model uses this to decide whether to retry, ask the user for clarification, or try a different approach.
Here's a pattern I use for database-backed tools in production:
server.tool(
"search_products",
"Search the product catalogue by name, category, or OEM number",
{
query: z.string().describe("Search term — product name, category, or OEM part number"),
category: z.string().optional().describe("Filter by product category"),
limit: z.number().min(1).max(50).default(10).describe("Maximum results to return"),
},
async ({ query, category, limit }) => {
try {
const results = await db.product.findMany({
where: {
AND: [
{
OR: [
{ name: { contains: query, mode: "insensitive" } },
{ oemNumber: { contains: query, mode: "insensitive" } },
{ description: { contains: query, mode: "insensitive" } },
],
},
category ? { category: { name: category } } : {},
],
},
take: limit,
include: { category: true },
});
if (results.length === 0) {
return {
content: [
{
type: "text" as const,
text: `No products found for "${query}". Try a broader search term.`,
},
],
};
}
const formatted = results
.map((p) => `${p.name} | ${p.oemNumber} | ${p.category.name} | $${p.price}`)
.join("\n");
return {
content: [{ type: "text" as const, text: formatted }],
};
} catch (error) {
return {
content: [
{
type: "text" as const,
text: `Database error: ${error instanceof Error ? error.message : "Unknown error"}`,
},
],
isError: true,
};
}
}
);

The key lesson: treat tool descriptions like API documentation. The AI model is your consumer. If the description is ambiguous, the model will misuse the tool.
Resource Providers
Resources give the AI model read access to data without requiring a tool call. The difference matters: tools are actions the model chooses to take, while resources are context the client can pull in proactively.
Use resources for:
- Configuration data — project settings, environment details, feature flags
- Reference material — documentation, schemas, catalogues
- Aggregated views — dashboards, summaries, status reports
Resources use URI-based addressing. You define a URI template and a handler:
// Static resource — always available
server.resource(
"project-config",
"config://project",
{ description: "Current project configuration", mimeType: "application/json" },
async () => ({
contents: [
{
uri: "config://project",
text: JSON.stringify(projectConfig, null, 2),
mimeType: "application/json",
},
],
})
);
// Dynamic resource with a URI template
// (ResourceTemplate is exported from "@modelcontextprotocol/sdk/server/mcp.js")
server.resource(
  "task-detail",
  new ResourceTemplate("tasks://{id}", { list: undefined }),
  { description: "Detailed view of a specific task", mimeType: "application/json" },
  async (uri, { id }) => {
    const task = tasks.get(String(id)); // template variables arrive pre-parsed
    if (!task) {
      throw new Error(`Task ${id} not found`);
    }
    return {
      contents: [
        {
          uri: uri.href,
          text: JSON.stringify(task, null, 2),
          mimeType: "application/json",
        },
      ],
    };
  }
);

In my Supabase MCP server, I expose database schemas as resources. When Claude Code connects, it can read the schema to understand the data model before writing queries. This eliminates an entire class of errors where the model guesses at column names.
Transport Layers — stdio vs HTTP
MCP supports two transport mechanisms, and choosing the right one matters for your use case.
stdio Transport
The server runs as a subprocess of the client. Communication happens through stdin/stdout. This is what Claude Code and Claude Desktop use.
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
const transport = new StdioServerTransport();
await server.connect(transport);

Advantages:
- Zero network configuration. No ports, no CORS, no TLS.
- Process isolation. The client manages the server lifecycle.
- Simple deployment. Distribute as an npm package or binary.
When to use: Local development tools, IDE integrations, CLI workflows. This is the default for most MCP servers.
Streamable HTTP Transport
The server runs as an HTTP endpoint. Clients connect over the network. This is essential for shared servers, cloud deployments, and multi-user scenarios.
import express from "express";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
const app = express();
app.use(express.json());
app.post("/mcp", async (req, res) => {
const transport = new StreamableHTTPServerTransport({
sessionIdGenerator: () => crypto.randomUUID(),
});
await server.connect(transport);
await transport.handleRequest(req, res);
});
app.listen(3001, () => {
console.log("MCP server listening on http://localhost:3001/mcp");
});

Advantages:
- Remote access. Connect from anywhere.
- Shared state. Multiple clients can connect to the same server instance.
- Standard deployment. Deploy on Vercel, Railway, Fly.io, or any HTTP host.
When to use: Team tools, production services, remote servers, anything that needs authentication or multi-user access.
I use stdio for my personal workflow tools and HTTP for shared team servers. The protocol is the same — only the transport differs.
Testing MCP Servers
Testing MCP servers is straightforward because the protocol is well-structured. I use three layers:
Unit Tests for Tool Logic
Extract your domain logic into pure functions and test them independently:
// src/tasks.ts — pure domain logic
export function createTask(title: string, priority: Task["priority"]): Task {
return {
id: crypto.randomUUID(),
title,
status: "todo",
priority,
createdAt: new Date().toISOString(),
updatedAt: new Date().toISOString(),
};
}
// src/tasks.test.ts
import { describe, it, expect } from "vitest";
import { createTask } from "./tasks";
describe("createTask", () => {
it("creates a task with correct defaults", () => {
const task = createTask("Build MCP server", "high");
expect(task.title).toBe("Build MCP server");
expect(task.status).toBe("todo");
expect(task.priority).toBe("high");
expect(task.id).toBeDefined();
});
});Integration Tests with MCP Inspector
The MCP Inspector is a visual testing tool that connects to your server and lets you call tools interactively:
npx @modelcontextprotocol/inspector tsx src/index.ts

This opens a web UI where you can browse tools, call them with test inputs, and inspect responses. I use this during development to verify tool behavior before connecting to Claude.
End-to-End Tests with the Client SDK
For automated testing, use the MCP client SDK to simulate what Claude Code does:
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
const transport = new StdioClientTransport({
command: "tsx",
args: ["src/index.ts"],
});
const client = new Client({ name: "test-client", version: "1.0.0" });
await client.connect(transport);
// List tools
const { tools } = await client.listTools();
console.log("Available tools:", tools.map((t) => t.name));
// Call a tool
const result = await client.callTool({
  name: "create_task",
  arguments: { title: "Test task", priority: "high" },
});
console.log("Result:", result);
await client.close();This test starts the actual server, connects to it over stdio, and exercises the full protocol flow. It catches issues that unit tests miss — serialization bugs, protocol compliance problems, transport edge cases.
Real-World Use Cases
Here are MCP servers I've built and use daily:
Supabase Database Server
Exposes SQL query execution, table listing, schema inspection, and migration management. I use this constantly in Claude Code — instead of switching to Supabase Dashboard, I ask Claude to check data, run queries, or analyze schema issues directly in my terminal.
Stripe Billing Server
Handles customer lookup, invoice creation, payment verification, and subscription management. When a client reports a billing issue, I can investigate and resolve it without leaving my editor. Claude reads the customer's payment history, identifies the problem, and suggests a resolution.
Deployment Server
Wraps my Vercel and Cloudflare deployment workflows. I can ask Claude to check deployment status, trigger a new deployment, rollback to a previous version, or inspect build logs. This saves dozens of context switches per day.
File Processing Server
Handles PDF parsing, image optimization, and document conversion. When I need to extract data from a client's PDF invoice or resize a batch of images, the MCP server does it through Claude's tool calling.
The pattern is always the same: identify a workflow that requires context-switching, wrap it in an MCP server, and let Claude handle the orchestration. After a few weeks of building these servers, my workflow has fundamentally changed. I stay in one context — my editor — and Claude reaches into every system I need.
The MCP Ecosystem in 2026
The MCP ecosystem has exploded since Anthropic released the spec. Here's where things stand:
Official MCP servers. Anthropic and partners maintain servers for Supabase, Stripe, Cloudflare, GitHub, Slack, and dozens more. These are production-quality and actively maintained. Most of them ship as npm packages you can install and configure in minutes.
Client support. Claude Code, Claude Desktop, Cursor, Windsurf, Cline, and several other AI coding tools support MCP natively. The protocol has become the default way AI tools extend their capabilities.
Community servers. The open-source community has built MCP servers for almost everything — Notion, Linear, Jira, PostgreSQL, MongoDB, Docker, Kubernetes, AWS. If a tool has an API, there's likely an MCP server for it.
Server registries. Centralized registries have emerged where you can discover and install MCP servers. This is similar to how npm transformed JavaScript package distribution — MCP registries are doing the same for AI tool capabilities.
Enterprise adoption. Companies are building internal MCP servers that expose their proprietary systems to AI coding assistants. Instead of training engineers on complex internal CLIs, they wrap the functionality in an MCP server and let the AI handle the interface.
The trajectory is clear: MCP is becoming the standard integration layer between AI models and the tools they need to be useful. If you're building developer tools, SaaS products, or internal platforms, exposing an MCP interface is as important as having a REST API. Your users' AI assistants will be your new API consumers.
For my own projects and client work, MCP has become the first integration I build. Before a REST API, before a dashboard, I build the MCP server. It lets me use the tool immediately through Claude, and it gives clients a modern AI-native interface from day one.
Key Takeaways
- MCP is a protocol, not a framework. It defines how AI models communicate with tools. Build one server, connect any compatible client.
- Tools are actions, resources are context. Use tools for operations the model triggers. Use resources for data the client loads into context.
- Start with stdio. For local development and personal tools, stdio transport is simpler and requires zero configuration. Move to HTTP when you need remote access or multi-user support.
- Validate inputs with Zod. Every tool parameter should be typed and described. The model uses descriptions to decide what to pass.
- Handle errors explicitly. Return isError: true with descriptive messages. The model uses error responses to decide its next step.
- Test at three levels. Unit test domain logic, integration test with MCP Inspector, end-to-end test with the client SDK.
- Build your own servers. The ecosystem has great official servers, but the real power is wrapping your own workflows, databases, and APIs.
The AI integration landscape has been fragmented for years — every provider with their own tool-calling format, every client with its own extension mechanism. MCP unifies all of it. If you're building anything that interacts with AI models, this is the protocol to bet on.
About the Author
I'm Uvin Vindula — a Web3 and AI engineer based between Sri Lanka and the UK, building production systems at the intersection of AI, blockchain, and modern web development. I build and ship MCP servers, smart contracts, and full-stack applications. You can see my work at iamuvin.com or reach out about a project at contact@uvin.lk.
If you're looking to integrate AI into your product or build custom MCP servers for your team, let's talk about your project.
Uvin Vindula
Web3 and AI engineer based in Sri Lanka and the UK. Author of The Rise of Bitcoin. Director of Blockchain and Software Solutions at Terra Labz. Founder of uvin.lk — Sri Lanka's Bitcoin education platform with 10,000+ learners.