Applied Intelligence
Module 10: MCP and Tool Integration

Transport Mechanisms

Protocol and transport are separate concerns

MCP separates protocol from transport. The JSON-RPC 2.0 messages described on the previous page work identically regardless of how they travel between client and server. A tools/call request has the same structure whether sent through a local process pipe or across the internet.

Different deployment scenarios have different requirements. Local development needs low latency and simple setup. Remote services need authentication, scaling, and network traversal. No single transport optimizes for both.

The MCP specification defines two official transports: stdio for local servers and Streamable HTTP for remote servers. Each transport handles message framing, connection management, and error recovery differently. The choice determines deployment characteristics but not protocol capabilities.

stdio transport: local server communication

The stdio transport runs MCP servers as local subprocesses. Communication happens through standard input and output streams—the same mechanism shells use for piping commands together.

The flow is simple:

  1. The client launches the server as a child process
  2. The client writes JSON-RPC messages to the server's stdin
  3. The server writes JSON-RPC responses to its stdout
  4. Messages are newline-delimited—one complete JSON object per line
Client                              Server Process
   |                                      |
   |---- Launch subprocess -------------->|
   |                                      |
   |---- Write to stdin ----------------->|  {"jsonrpc":"2.0","method":"initialize"...}
   |<--- Read from stdout ----------------|  {"jsonrpc":"2.0","result":{...}}
   |                                      |
   |---- Close stdin, terminate --------->|

The server process is bound to the client's lifecycle. When the client terminates, the server process ends.
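The newline-delimited framing from step 4 can be sketched with a pair of helpers. This is a minimal illustration of the framing rule, not the official SDK's implementation; the function names are hypothetical:

```typescript
// Frame one JSON-RPC message for stdio: serialized JSON plus a newline.
// JSON.stringify never emits literal newlines, so the delimiter is unambiguous.
function frameMessage(msg: object): string {
  return JSON.stringify(msg) + "\n";
}

// Split a stdout buffer into complete messages, returning any trailing
// partial line so it can be prepended to the next chunk read from the pipe.
function parseFrames(buffer: string): { messages: object[]; rest: string } {
  const lines = buffer.split("\n");
  const rest = lines.pop() ?? ""; // incomplete final line ("" if buffer ended in \n)
  const messages = lines
    .filter((line) => line.trim() !== "")
    .map((line) => JSON.parse(line));
  return { messages, rest };
}
```

A client would write `frameMessage(...)` to the child process's stdin and feed each stdout chunk through `parseFrames`, carrying `rest` forward between reads.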

Stdout is reserved for protocol messages only. Servers must write only valid JSON-RPC messages to stdout. Anything else—debug output, logging, error messages—corrupts the communication channel. All non-protocol output goes to stderr, which clients may capture for debugging but do not parse as protocol messages.

// Correct: debug output to stderr
console.error("Debug: processing request");

// Wrong: this breaks the protocol
console.log("Debug info");  // Never do this

Claude Code stores server stderr output in log files at ~/Library/Logs/Claude/mcp-server-SERVERNAME.log on macOS.

Performance:

Metric                  Typical Value
Latency                 < 1ms
Throughput              10,000+ ops/sec
Memory per connection   ~10MB

Sub-millisecond latency comes from eliminating network overhead. Messages move through OS pipes, not TCP sockets.

Good for: local development, desktop applications (Claude Desktop, Cursor, Claude Code), command-line tools, and anywhere client and server share a machine.

Limitations: single client per server instance, no remote access, and security relies entirely on OS-level process isolation rather than explicit authentication.

stdio transport provides natural security through process isolation. The server runs with the same permissions as the user who launched it, nothing more. No network ports are opened, eliminating entire categories of remote attacks.

Streamable HTTP transport: remote server communication

For servers running on different machines, MCP uses Streamable HTTP. Introduced in protocol version 2025-03-26, this transport replaced the earlier HTTP+SSE approach.

Streamable HTTP exposes a single endpoint (e.g., https://mcp.example.com/) that handles both sending and receiving messages. The server runs independently and can serve multiple clients simultaneously.

Sending messages (client to server):

Clients POST JSON-RPC messages to the MCP endpoint:

POST /mcp HTTP/1.1
Host: mcp.example.com
Content-Type: application/json
Accept: application/json, text/event-stream
Mcp-Session-Id: abc123

{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{...}}

The Accept header includes both JSON and SSE because the server may respond either way. Simple requests get immediate JSON responses. Long-running operations may open an SSE stream.
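A client sketch of that dual-response handling, reduced to the dispatch decision (a hypothetical helper, assuming the server sets Content-Type correctly):

```typescript
// Decide how to consume a Streamable HTTP response based on its Content-Type:
// "json" -> parse the body once; "sse" -> read it as an event stream.
function responseMode(contentType: string | null): "json" | "sse" {
  if (contentType === null) throw new Error("missing Content-Type");
  const mime = contentType.split(";")[0].trim().toLowerCase();
  if (mime === "application/json") return "json";
  if (mime === "text/event-stream") return "sse";
  throw new Error(`unexpected Content-Type: ${mime}`);
}
```

After `fetch(endpoint, { method: "POST", ... })`, a client would call `responseMode(res.headers.get("content-type"))` and branch accordingly.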

Receiving messages (server to client):

Servers can initiate communication—sending notifications, requesting sampling—via Server-Sent Events. Clients open an SSE stream with a GET request:

GET /mcp HTTP/1.1
Host: mcp.example.com
Accept: text/event-stream
Mcp-Session-Id: abc123

The server pushes JSON-RPC messages as SSE events:

event: message
data: {"jsonrpc":"2.0","method":"notifications/resources/updated","params":{...}}

event: message
data: {"jsonrpc":"2.0","id":5,"method":"sampling/createMessage","params":{...}}
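Extracting those messages on the client side can be sketched as follows. This is a simplified parser that handles the single-line `data:` fields shown above, not the full SSE grammar (which also allows multi-line data, comments, and `id`/`retry` fields):

```typescript
// Extract JSON-RPC messages from an SSE text chunk. Each event is a block
// of "field: value" lines separated by a blank line; the JSON-RPC payload
// arrives in the "data" field.
function parseSseChunk(chunk: string): object[] {
  const messages: object[] = [];
  for (const block of chunk.split("\n\n")) {
    for (const line of block.split("\n")) {
      if (line.startsWith("data:")) {
        messages.push(JSON.parse(line.slice(5).trim()));
      }
    }
  }
  return messages;
}
```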

Session management:

Servers may assign session IDs via the Mcp-Session-Id header during initialization. Clients include this ID in all subsequent requests. Sessions terminate via HTTP DELETE or server-side timeout.

DELETE /mcp HTTP/1.1
Host: mcp.example.com
Mcp-Session-Id: abc123

Session IDs must be cryptographically secure—UUIDs or similar—to prevent session hijacking.
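A server-side sketch of session ID issuance using Node's built-in CSPRNG (the function name is illustrative):

```typescript
import { randomUUID } from "node:crypto";

// Issue a cryptographically random session ID. randomUUID() produces a
// version-4 UUID drawn from the OS CSPRNG, so IDs are unguessable.
// Never derive session IDs from timestamps or counters.
function newSessionId(): string {
  return randomUUID();
}
```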

Performance:

Metric                  Typical Value
Latency                 10-50ms
Throughput              100-1,000 ops/sec
Memory per connection   ~50MB
Concurrent clients      Many

Higher latency than stdio reflects network round-trips. That's the tradeoff for remote access and horizontal scaling.

Good for: cloud-hosted services, multi-client scenarios, enterprise deployments with centralized servers, web application integrations, and anything requiring network access.

Remote MCP servers require authentication. Without it, anyone who discovers the endpoint can invoke tools. OAuth 2.1 with PKCE is the recommended standard; API keys are acceptable for simpler deployments.

The deprecated HTTP+SSE transport

Earlier MCP versions used a different HTTP transport requiring two endpoints: /sse for the event stream and /sse/messages for sending requests. This caused problems: two endpoints to manage and secure, long-lived SSE connections consuming persistent resources, no mechanism to resume interrupted streams, and conflicts with HTTP/2 and HTTP/3.

Streamable HTTP consolidates everything into a single endpoint. Long-running operations still use SSE for streaming responses, but the architecture is simpler.

Backward compatibility:

When connecting to unknown servers, clients should:

  1. Try POSTing an initialize request to the server URL
  2. If successful: use Streamable HTTP
  3. If it fails with 4xx: fall back to GET expecting the old SSE endpoint

Most new deployments should use Streamable HTTP exclusively.
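The detection steps above reduce to a small decision function. This is a sketch of the rule as stated; a real client would also inspect response headers and handle network errors:

```typescript
// Given the HTTP status from POSTing an initialize request to the server
// URL, decide which transport generation the server speaks.
function detectTransport(postStatus: number): "streamable-http" | "legacy-sse" {
  if (postStatus >= 200 && postStatus < 300) return "streamable-http";
  if (postStatus >= 400 && postStatus < 500) return "legacy-sse"; // fall back to GET + SSE
  throw new Error(`unexpected status ${postStatus}`);
}
```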

WebSocket transport (proposed)

The MCP community is developing a WebSocket transport via SEP-1288. WebSockets provide persistent, bidirectional connections that may work better for real-time scenarios.

The proposal offers native bidirectionality (both parties send messages without request-response patterns), mature cloud platform support (Cloudflare, AWS Lambda), reduced per-message framing overhead compared to HTTP, and well-established reconnection patterns.

Key design decisions: sessions are mandatory (optional in Streamable HTTP), one connection per session (prevents connection proliferation), and session IDs live in JSON-RPC messages rather than headers (since WebSocket APIs don't expose headers post-connection).

As of early 2026, the proposal remains under review by the MCP steering committee under the Agentic AI Foundation. Use stdio or Streamable HTTP for production until WebSocket transport is officially standardized.

Choosing the right transport

The decision comes down to one question:

Is the server on the same machine as the client?

  • Yes → Use stdio
  • No → Use Streamable HTTP

More detailed considerations:

Requirement                   Transport         Rationale
Lowest latency                stdio             No network overhead
Local filesystem access       stdio             Process shares user permissions
Remote team access            Streamable HTTP   Network traversal required
Multiple concurrent clients   Streamable HTTP   stdio supports only one
Horizontal scaling            Streamable HTTP   Can run multiple server instances
Maximum security              stdio             No network exposure

For Claude Code specifically:

  • Local MCP servers (filesystem, Git, local databases) use stdio
  • Remote MCP servers (cloud APIs, team services) use Streamable HTTP

When building MCP servers, implement stdio first for development and testing. Add Streamable HTTP when remote access becomes necessary. The protocol messages are identical; only the transport layer changes.

Transport-level security

Each transport has different security characteristics.

stdio inherits OS process isolation: the server runs with user permissions, no network ports open, but malicious server binaries can access local files.

Streamable HTTP requires explicit authentication (OAuth 2.1 or API keys), Origin header validation to prevent DNS rebinding attacks, localhost binding when running locally (never 0.0.0.0), cryptographically unpredictable session IDs, and TLS for production.

A vulnerability (CVE-2025-66414) demonstrated DNS rebinding attacks against local HTTP servers. The official SDKs now include protection—enableDnsRebindingProtection in TypeScript, hostHeaderValidation() middleware in Python—but older servers may remain vulnerable.
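A minimal sketch of the Origin check such protection performs (illustrative logic, not the SDKs' actual code; treating a missing Origin as a non-browser client is a design assumption):

```typescript
// Reject requests whose Origin is not in an explicit allowlist. DNS
// rebinding attacks are launched from a web page, so the browser attaches
// the attacker's Origin; non-browser clients typically send none.
function originAllowed(origin: string | null, allowlist: string[]): boolean {
  if (origin === null) return true; // non-browser client, no Origin header
  return allowlist.includes(origin);
}
```

An HTTP server would run this check on every request before touching the JSON-RPC payload, returning 403 on failure.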

Configuration differences

Transport choice affects how you configure servers in Claude Code.

stdio configuration:

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/dev/projects"],
      "env": {
        "DEBUG": "true"
      }
    }
  }
}

The command and args fields specify how to launch the server process. Environment variables pass configuration without command-line arguments.

Streamable HTTP configuration:

{
  "mcpServers": {
    "remote-api": {
      "type": "http",
      "url": "https://mcp.example.com/api",
      "headers": {
        "Authorization": "Bearer ${API_TOKEN}"
      }
    }
  }
}

The url field specifies the remote endpoint. The headers field includes authentication credentials. Environment variable expansion (${API_TOKEN}) prevents hardcoding secrets.

Claude Code detects transport type automatically: if command is present, it uses stdio; if url is present, it uses HTTP.
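That detection rule can be expressed directly. This is a sketch of the rule as stated above, not Claude Code's actual source; the type and function names are hypothetical:

```typescript
interface ServerConfig {
  command?: string;
  args?: string[];
  url?: string;
  headers?: Record<string, string>;
}

// Infer the transport from which fields a server entry defines:
// a command means a local stdio subprocess; a url means Streamable HTTP.
function transportFor(cfg: ServerConfig): "stdio" | "http" {
  if (cfg.command !== undefined) return "stdio";
  if (cfg.url !== undefined) return "http";
  throw new Error("server config needs either command or url");
}
```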

Same messages, different pipes

Despite transport differences, the messages themselves are identical. A tools/call request looks the same whether it travels through stdin or HTTPS:

{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "tools/call",
  "params": {
    "name": "execute_query",
    "arguments": {
      "query": "SELECT * FROM users"
    }
  }
}

Transports handle framing:

  • stdio uses newline delimiters
  • Streamable HTTP uses HTTP message bodies and SSE data fields
  • WebSocket (proposed) uses WebSocket frames

But the JSON content is always the same. Server implementations work across transports without modifying their core logic. Transport is a deployment decision, not a development decision.
