Building Your First MCP Server

A practical guide to creating custom Model Context Protocol servers that supercharge AI tools

Ever wished your AI assistant could peek into your actual data instead of making educated guesses? The Model Context Protocol makes this possible. Think of MCP as creating a direct pipeline between your information and AI tools. No more copy-pasting context or explaining what you’re working on.

Released by Anthropic in late 2024, MCP has already gained momentum with major players like OpenAI adopting it by March 2025. But here’s the thing: most developers are still treating it like some distant, complex specification. It’s not.

What Makes MCP Worth Your Time

MCP servers act as translators between your data sources and AI applications. Instead of Claude or Cursor operating in a vacuum, they can query your databases, read your files, or pull from your APIs in real-time.

The protocol handles three core capabilities:

Resources provide file-like data that AI tools can read. Think configuration files, documentation, or any static content your AI needs to understand your project.

Tools are functions your AI can execute. Want Claude to create a GitHub issue? Deploy code? Query your metrics dashboard? Tools make it happen.

Prompts are reusable task templates. Rather than typing the same complex instructions repeatedly, you can package them into callable prompts.
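These three capability types are what a server advertises to clients at startup. A minimal sketch of the declaration shape (empty objects signal "supported, no extra options"; the field names follow the MCP spec):

```typescript
// Capability declaration a server passes to the SDK at construction time.
// An empty object per capability means "supported, with default options."
const capabilities = {
  resources: {}, // file-like data the client can read
  tools: {},     // functions the client can invoke
  prompts: {},   // reusable prompt templates
};
```

A server only needs to declare the capabilities it actually implements; the skeleton below declares just `tools`.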

The Setup Process

Building an MCP server is surprisingly straightforward. You’ll need Node.js and basic TypeScript knowledge, nothing exotic.

First, initialize your project and grab the MCP SDK:

npm init -y
npm install @modelcontextprotocol/sdk

Your server needs three things: a name, version, and at least one capability. Here’s the skeleton:

import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';

const server = new Server(
  {
    name: 'my-custom-server',
    version: '1.0.0',
  },
  {
    capabilities: {
      tools: {},
    },
  }
);

Building Something Useful

Let’s create a server that connects AI tools to your project’s issue tracker. Instead of manually checking tickets or explaining context, your AI can pull this information directly.

import { CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === 'get_active_issues') {
    const response = await fetch(`https://api.github.com/repos/${process.env.REPO}/issues`);
    const issues = await response.json();

    return {
      content: [{
        type: 'text',
        text: JSON.stringify(issues.slice(0, 10), null, 2)
      }]
    };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

import { ListToolsRequestSchema } from '@modelcontextprotocol/sdk/types.js';

server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [{
      name: 'get_active_issues',
      description: 'Fetch current GitHub issues for the project',
      inputSchema: {
        type: 'object',
        properties: {}
      }
    }]
  };
});

The magic happens in the transport layer. MCP supports multiple communication methods: stdin/stdout for local processes, HTTP for web services, or custom transports. For local development, stdin/stdout works seamlessly:

const transport = new StdioServerTransport();
await server.connect(transport);

Connecting to Claude Desktop

Once your server runs, Claude Desktop can use it through a simple configuration file. Create claude_desktop_config.json in Claude's config directory (~/Library/Application Support/Claude/ on macOS, %APPDATA%\Claude\ on Windows):

{
  "mcpServers": {
    "issue-tracker": {
      "command": "node",
      "args": ["/path/to/your/server.js"],
      "env": {
        "REPO": "your-org/your-repo"
      }
    }
  }
}

Restart Claude Desktop, and your AI assistant now has direct access to your project’s issues. Ask it to “review my open GitHub issues” and watch it pull real data instead of asking you to provide context.

Connecting to Claude Code CLI

Claude Code CLI offers even more flexibility for MCP server integration. You can add servers directly from your terminal using simple commands.

Add a local server that runs via stdio:

claude mcp add github-tracker node /path/to/your/server.js

For servers requiring environment variables:

claude mcp add airtable --env AIRTABLE_API_KEY=your-key -- npx -y airtable-mcp-server

Claude Code also supports remote servers. Add an SSE (Server-Sent Events) server:

claude mcp add --transport sse linear https://mcp.linear.app/sse

Or connect to HTTP-based servers:

claude mcp add --transport http notion https://mcp.notion.com/mcp

Manage your servers with these commands:

# List all configured servers
claude mcp list

# Get details for a specific server
claude mcp get github-tracker  

# Remove a server
claude mcp remove github-tracker

The real advantage of Claude Code CLI is server scoping. The --scope flag on claude mcp add configures servers at three levels: local (the default, private to you in the current project), project (stored in a .mcp.json file you can commit and share with your team), and user (available to you across all your projects).

Once connected, use the /mcp command within Claude Code to authenticate with OAuth providers and check connection status.

Beyond the Basics

The real power emerges when you chain multiple data sources. Imagine an MCP server that combines your analytics dashboard, recent deployments, and error logs. Your AI can correlate patterns across systems without you manually gathering information.
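For instance, once a single server can reach both your deploy history and your error logs, one tool can line them up. A toy sketch of the correlation step (the data shapes here are invented for illustration, not part of any API):

```typescript
interface Deploy { sha: string; at: number }        // epoch seconds
interface ErrorEvent { message: string; at: number }

// Attribute each error to the most recent deploy that preceded it —
// a rough way to spot which release introduced an error spike.
function errorsByDeploy(deploys: Deploy[], errors: ErrorEvent[]): Map<string, number> {
  const sorted = [...deploys].sort((a, b) => a.at - b.at);
  const counts = new Map<string, number>();
  for (const err of errors) {
    let culprit: Deploy | undefined;
    for (const d of sorted) {
      if (d.at <= err.at) culprit = d;
      else break;
    }
    if (culprit) counts.set(culprit.sha, (counts.get(culprit.sha) ?? 0) + 1);
  }
  return counts;
}
```

A tool handler would run this over the two fetched datasets and return the map as formatted text, letting the AI reason about the result instead of the raw logs.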

Consider authentication carefully. MCP servers often need API keys or database credentials. Use environment variables and follow standard security practices. Your server runs locally, but it’s handling sensitive data.
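A small guard that fails fast when a credential is missing keeps misconfiguration from surfacing later as confusing downstream errors. A sketch (the helper name is mine, not part of the SDK):

```typescript
// Read a required credential from the environment, failing loudly if absent.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === '') {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```

Call it once at startup (e.g. `const token = requireEnv('GITHUB_TOKEN')`, assuming that variable name) so the server refuses to launch rather than emitting opaque 401s mid-session.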

Error handling matters too. Network requests fail, APIs change, and data sources become unavailable. Build resilience into your tools:

// Inside a tool handler — fetchExternalData and formatResponse are
// placeholders for your own helpers:
try {
  const data = await fetchExternalData();
  return formatResponse(data);
} catch (error) {
  return {
    content: [{
      type: 'text',
      text: `Unable to fetch data: ${error instanceof Error ? error.message : String(error)}`
    }],
    isError: true
  };
}

The Bigger Picture

MCP represents a shift from AI as a standalone tool to AI as an integrated part of your development environment. Instead of context switching between your AI assistant and actual work, everything flows together.

Early adopters are building servers for everything: connecting to Notion databases, monitoring server metrics, managing cloud resources, even controlling IoT devices. The protocol’s flexibility means if you can build an API for it, you can create an MCP server.

What excites me most? We’re moving past the era of “AI that knows nothing about my actual work” toward assistants that understand your specific context, constraints, and goals. This shift changes everything about how we work with AI tools.
