Applied Intelligence
Module 10: MCP and Tool Integration

Building Custom MCP Servers

When building makes sense

The earlier advice stands: exhaust existing options before building custom servers. Over 10,000 servers exist in the ecosystem. Most common integrations already have implementations.

That said, building your own becomes the right call in a few situations.

Proprietary systems. Internal tools, custom databases, legacy applications that no public server supports. An aerospace company built a custom MCP server to wrap Siemens Polarion ALM, a legacy system with terrible integration points that had resisted every other approach.

Sensitive data. When compliance requirements (HIPAA, SOC2, financial regulations) demand complete control over data flows. Block built all their MCP servers in-house rather than using public servers for exactly this reason.

Business logic. Workflows specific to your domain that require custom orchestration. Razorpay built a custom server to convert Figma designs into production code using their internal design system. No general-purpose server could encode those conventions.

Control requirements. When you need OAuth integration with your IdP, custom audit logging, or organization-specific policy enforcement.

The decision matrix:

Factor                      | Use Existing       | Build Custom
Standard SaaS integration   | Yes                | No
Public API access           | Yes                | No
Sensitive data involved     | Consider carefully | Yes
Proprietary internal system | No                 | Yes
Custom business logic       | No                 | Yes
Time to value critical      | Yes                | Depends

Building a custom MCP server takes two to three weeks for simple connectors, longer for complex orchestrations. Weigh that against grabbing an existing server today.

Choosing an SDK

MCP has official SDKs in multiple languages. Pick based on your stack and deployment environment.

TypeScript SDK

The TypeScript SDK is the most mature implementation: 5.65 million weekly npm downloads, over 21,000 dependent projects.

npm install @modelcontextprotocol/sdk

Basic server structure:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "my-custom-server",
  version: "1.0.0",
});

server.registerTool(
  "get_customer",
  {
    description: "Retrieve customer details by ID",
    inputSchema: {
      customerId: z.string().describe("The customer identifier"),
    },
  },
  async ({ customerId }) => {
    // `database` stands in for your application's data access layer
    const customer = await database.getCustomer(customerId);
    return {
      content: [{ type: "text", text: JSON.stringify(customer, null, 2) }]
    };
  }
);

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
}

main().catch((error) => {
  // console.error writes to stderr, so logging the failure here is safe
  console.error("Server error:", error);
  process.exit(1);
});

The SDK uses Zod for schema validation. Define input parameters as Zod schemas; the SDK generates JSON Schema automatically.

Python SDK

The Python SDK has equivalent functionality with Pythonic patterns:

pip install mcp

Basic server structure:

import json

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-custom-server")

@mcp.tool()
def get_customer(customer_id: str) -> dict:
    """Retrieve customer details by ID."""
    customer = database.get_customer(customer_id)
    return customer

@mcp.resource("customer://{customer_id}")
def customer_resource(customer_id: str) -> str:
    """Customer data as a resource."""
    customer = database.get_customer(customer_id)
    return json.dumps(customer, indent=2)

if __name__ == "__main__":
    mcp.run()

The Python SDK generates schemas from type hints. Function signatures become input schemas.
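
Defaults carry through as well: parameters without defaults are required, parameters with defaults are optional, and the docstring becomes the tool description. A minimal sketch (the tool name and the database.search_customers helper are illustrative, not part of the SDK):

@mcp.tool()
def search_customers(query: str, limit: int = 10, include_inactive: bool = False) -> list[dict]:
    """Search customers by name or email."""
    # query becomes a required string parameter; limit and
    # include_inactive become optional parameters with defaults
    return database.search_customers(query, limit=limit, include_inactive=include_inactive)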

Other SDKs

SDK                | Status     | Use Case
C# (Microsoft)     | Production | .NET enterprise environments
Kotlin (JetBrains) | Production | JVM backends, Android
Java (Spring AI)   | Production | Spring Boot applications
Rust (community)   | Maturing   | High-performance servers
TypeScript and Python SDKs track the specification most closely. For production servers, prefer these unless your organization standardizes on another runtime.

The stderr logging rule

This is the single most common mistake in MCP server development.

MCP servers using stdio transport must write only JSON-RPC messages to stdout. All logs, debug output, and status messages must go to stderr.

The protocol uses stdout as its communication channel. Any non-JSON-RPC output corrupts the message stream and causes parsing failures.

Wrong (breaks the protocol):

# BAD: Writes to stdout, corrupts JSON-RPC
print("Processing request...")
print(f"Customer ID: {customer_id}")
// BAD: console.log writes to stdout
console.log("Server starting...");
console.log(`Processing ${request.id}`);

Correct (stderr is safe):

import sys
import logging

# Configure logging to stderr
logging.basicConfig(
    level=logging.INFO,
    stream=sys.stderr,
    format='%(asctime)s - %(levelname)s - %(message)s'
)

logging.info("Processing request...")  # Goes to stderr

// console.error writes to stderr
console.error("Server starting...");
console.error(`Processing ${request.id}`);

For structured logging in production:

import json
import sys
from datetime import datetime, timezone

def log_structured(level: str, method: str, **kwargs):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "level": level,
        "method": method,
        **kwargs
    }
    print(json.dumps(entry), file=sys.stderr)

log_structured("INFO", "tools/call",
    tool_name="get_customer",
    duration_ms=45,
    status="success")

Output goes to stderr. Stdout stays clean for JSON-RPC.

FastMCP for rapid development

FastMCP is a Python framework that simplifies MCP server development. It powers roughly 70% of MCP servers across all languages and has been incorporated into the official Python SDK.

pip install fastmcp

FastMCP reduces boilerplate through decorators:

import json

from fastmcp import FastMCP

mcp = FastMCP("customer-server")

@mcp.tool
def get_customer(customer_id: str) -> dict:
    """Retrieve customer by ID."""
    return database.get_customer(customer_id)

@mcp.tool
def update_customer(customer_id: str, name: str | None = None, email: str | None = None) -> dict:
    """Update customer details."""
    updates = {k: v for k, v in {"name": name, "email": email}.items() if v}
    return database.update_customer(customer_id, updates)

@mcp.resource("customers://all")
def list_customers() -> str:
    """All customers as JSON."""
    return json.dumps(database.list_customers())

@mcp.prompt
def customer_analysis(customer_id: str) -> str:
    """Generate a customer analysis prompt."""
    customer = database.get_customer(customer_id)
    return f"""Analyze this customer's profile and suggest engagement strategies:

Customer: {customer['name']}
Account Age: {customer['account_age_days']} days
Total Orders: {customer['order_count']}
Lifetime Value: ${customer['lifetime_value']}"""

if __name__ == "__main__":
    mcp.run()  # Defaults to stdio transport

Resource templates

FastMCP supports parameterized resources using URI templates:

@mcp.resource("order://{order_id}")
def get_order(order_id: str) -> str:
    """Retrieve order details."""
    order = database.get_order(order_id)
    return json.dumps(order)

@mcp.resource("orders://customer/{customer_id}")
def customer_orders(customer_id: str) -> str:
    """All orders for a customer."""
    orders = database.get_orders_by_customer(customer_id)
    return json.dumps(orders)

URI template parameters become function arguments.
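
Templates can take multiple parameters as well; every placeholder in the URI must map to a function argument. A short sketch, assuming a hypothetical database.get_order_for_customer helper:

@mcp.resource("orders://{customer_id}/{order_id}")
def customer_order(customer_id: str, order_id: str) -> str:
    """A specific order belonging to a specific customer."""
    order = database.get_order_for_customer(customer_id, order_id)
    return json.dumps(order)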

Progress reporting

For long-running operations, FastMCP provides context for progress updates:

from fastmcp import Context

@mcp.tool
async def process_batch(file_ids: list[str], ctx: Context) -> dict:
    """Process a batch of files with progress reporting."""
    results = []
    for i, file_id in enumerate(file_ids):
        await ctx.report_progress(progress=i, total=len(file_ids))
        result = await process_file(file_id)
        results.append(result)
    return {"processed": len(results), "results": results}

Clients see progress updates as the operation runs.
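
The context also exposes log methods (ctx.debug, ctx.info, ctx.warning, ctx.error) that send log notifications to the client over the protocol rather than to stderr. A sketch combining both, reusing the hypothetical process_file helper from above:

@mcp.tool
async def process_batch_verbose(file_ids: list[str], ctx: Context) -> dict:
    """Process files, reporting progress and logging each step to the client."""
    results = []
    for i, file_id in enumerate(file_ids):
        await ctx.info(f"Processing {file_id} ({i + 1} of {len(file_ids)})")
        await ctx.report_progress(progress=i, total=len(file_ids))
        results.append(await process_file(file_id))
    return {"processed": len(results), "results": results}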

Lifecycle management

Initialize and clean up resources properly:

from contextlib import asynccontextmanager

@asynccontextmanager
async def app_lifespan(server):
    # Startup
    db = await Database.connect()
    try:
        yield {"db": db}
    finally:
        # Shutdown
        await db.disconnect()

mcp = FastMCP("customer-server", lifespan=app_lifespan)

The lifespan context manager runs at server start and handles cleanup on exit.
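
Tools reach the objects yielded by the lifespan through the request context. This sketch assumes the pattern used by the official Python SDK, where the yielded dict is exposed as ctx.request_context.lifespan_context; check your FastMCP version's documentation for the exact accessor:

@mcp.tool
async def count_customers(ctx: Context) -> int:
    """Count customers using the shared database connection from the lifespan."""
    db = ctx.request_context.lifespan_context["db"]  # the dict yielded above
    return await db.count("customers")  # hypothetical Database method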

Testing with MCP Inspector

MCP Inspector is the official tool for testing servers during development. It has a web UI and CLI for invoking tools, reading resources, and debugging protocol messages.

# Test your server
npx @modelcontextprotocol/inspector python customer_server.py

# Access UI at http://localhost:6274

Inspector launches your server and connects as a client, displaying available tools, resources, and prompts.

CLI mode for automation

For scripted testing:

# List available tools
npx @modelcontextprotocol/inspector --cli python server.py \
  --method tools/list

# Call a specific tool
npx @modelcontextprotocol/inspector --cli python server.py \
  --method tools/call \
  --tool-name get_customer \
  --tool-arg customer_id="cust_12345"

# List resources
npx @modelcontextprotocol/inspector --cli python server.py \
  --method resources/list

CLI mode integrates into CI pipelines.
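
One pattern is to assert that an expected tool appears in the listing and fail the build if it does not. A sketch, using the get_customer tool from the examples above:

# CI check: fail if the expected tool is missing from tools/list
npx @modelcontextprotocol/inspector --cli python server.py \
  --method tools/list | grep -q '"get_customer"' || exit 1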

Configuration file

For complex server configurations:

{
  "mcpServers": {
    "customer-server": {
      "command": "python",
      "args": ["customer_server.py"],
      "env": {
        "DATABASE_URL": "postgresql://localhost/customers",
        "DEBUG": "true"
      }
    }
  }
}

npx @modelcontextprotocol/inspector --config mcp.json --server customer-server

Environment variables and arguments pass through to your server.
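
Inside the server, read them as ordinary environment variables. A minimal sketch using the DATABASE_URL and DEBUG values from the config above:

import os

DATABASE_URL = os.environ.get("DATABASE_URL", "postgresql://localhost/customers")
DEBUG = os.environ.get("DEBUG", "false").lower() == "true"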

Protocol debugging

Inspector shows raw JSON-RPC messages. Protocol issues become visible immediately:

  • Request/response pairs with timing
  • Schema validation errors
  • Malformed messages (usually from stdout logging mistakes)
  • Tool execution failures with stack traces

When your server does not appear in Claude Code, test it with Inspector first. Most problems become obvious in the message log.

Deployment considerations

Local development

For development and testing, stdio transport works fine:

if __name__ == "__main__":
    mcp.run()  # stdio by default

Configure in Claude Code:

{
  "mcpServers": {
    "customer-server": {
      "command": "python",
      "args": ["/path/to/customer_server.py"]
    }
  }
}

Production deployment

For team or production use, consider Streamable HTTP transport:

if __name__ == "__main__":
    mcp.run(transport="streamable-http", port=8000)

Configure in Claude Code:

{
  "mcpServers": {
    "customer-server": {
      "type": "http",
      "url": "https://mcp.internal.company.com/customers"
    }
  }
}

Streamable HTTP enables horizontal scaling, load balancing, and centralized deployment.

Security checklist

Before deploying a custom server:

  • All logging goes to stderr, never stdout
  • Input validation on all tool parameters
  • Authentication required for sensitive operations
  • Rate limiting to prevent abuse
  • Error messages do not leak sensitive information
  • Dependencies scanned for vulnerabilities
  • Container image if deploying remotely

The security boundaries discussed in the previous section apply equally to custom servers. Defense in depth remains the principle.
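
As a concrete example of the first few checklist items, a sketch of a tool that validates its input and keeps failure detail out of the response, logging the full exception to stderr instead (the cust_ identifier format is an assumption borrowed from the earlier examples):

import logging
import re

@mcp.tool
def get_customer_safe(customer_id: str) -> dict:
    """Retrieve a customer with input validation and safe error reporting."""
    # Validate the identifier format before touching the database
    if not re.fullmatch(r"cust_[A-Za-z0-9]{1,32}", customer_id):
        raise ValueError("customer_id must look like cust_<alphanumeric>")
    try:
        return database.get_customer(customer_id)
    except Exception:
        # Full detail goes to stderr via logging; the client sees a generic message
        logging.exception("get_customer_safe failed")
        raise RuntimeError("Internal error while retrieving customer")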
