Applied Intelligence
Module 10: MCP and Tool Integration

Exercise: MCP Server Setup and Integration

This exercise walks through the full MCP server lifecycle: installing servers, configuring authentication, using servers for real tasks, and building a simple custom server.

Overview

The Chinook database is a sample dataset representing a digital music store. It contains 11 tables covering artists, albums, tracks, customers, invoices, and employees—realistic data with meaningful relationships. The database is small enough to explore in a single session but complex enough to demonstrate MCP's value.

The scenario: you need to analyze sales data and answer business questions. Instead of writing SQL manually or exporting to a spreadsheet, you connect the database via MCP and let Claude Code query it directly.

You will:

  1. Install and configure the SQLite MCP server
  2. Use the server to explore the database schema
  3. Answer business questions through natural language queries
  4. Configure a second MCP server (filesystem) to export results
  5. Build a minimal custom MCP server

Setup

Download the Chinook database

Clone the repository and locate the SQLite database:

git clone https://github.com/lerocha/chinook-database.git
cd chinook-database

The SQLite database is at ChinookDatabase/DataSources/Chinook_Sqlite.sqlite. Verify it exists:

ls -la ChinookDatabase/DataSources/Chinook_Sqlite.sqlite

Verify SQLite access

Confirm SQLite works on your system:

sqlite3 ChinookDatabase/DataSources/Chinook_Sqlite.sqlite ".tables"

Expected output lists 11 tables:

Album          Customer       Genre          InvoiceLine    Playlist
Artist         Employee       Invoice        MediaType      PlaylistTrack
Track

If sqlite3 is not installed, install it via your package manager:

  • macOS: brew install sqlite
  • Ubuntu/Debian: sudo apt install sqlite3
  • Windows: Download from sqlite.org

Verify Node.js version

MCP servers require Node.js 18.17 or later:

node --version

If below 18.17, upgrade before proceeding.

Phase 1: Install the SQLite MCP server

Add the SQLite MCP server to Claude Code.

Add via CLI

claude mcp add sqlite -- npx -y @modelcontextprotocol/server-sqlite \
  "$(pwd)/ChinookDatabase/DataSources/Chinook_Sqlite.sqlite"

The absolute path is required—relative paths cause "database not found" errors.

Verify installation

claude mcp list
claude mcp get sqlite

The output should show the sqlite server with its command and arguments.

Start Claude Code and verify

claude

Inside Claude Code, check server status:

/mcp

The sqlite server should appear as connected with available tools listed.

If the server shows as disconnected, check:

  • Node.js version (18.17+)
  • Absolute path to the database file
  • On Windows, wrap the command with cmd /c: claude mcp add sqlite -- cmd /c npx -y @modelcontextprotocol/server-sqlite "C:/path/to/database.sqlite"

Phase 2: Explore the database

Use the MCP server to understand the database structure before querying it.

Schema exploration

Ask Claude Code to describe the database:

Using the sqlite MCP server, describe the database schema.
What tables exist and how are they related?

Record the response. The agent should identify the core entities: artists create albums containing tracks, customers purchase through invoices, employees manage sales.

Verify the schema description

Ask for specifics to validate understanding:

What columns are in the Invoice table?
What is the relationship between Invoice and InvoiceLine?

Compare the response to direct inspection:

sqlite3 ChinookDatabase/DataSources/Chinook_Sqlite.sqlite ".schema Invoice"
sqlite3 ChinookDatabase/DataSources/Chinook_Sqlite.sqlite ".schema InvoiceLine"

The agent's description should match the actual schema.

Document what you learned

Table       | Purpose | Key relationships
----------- | ------- | -----------------
Artist      |         |
Album       |         |
Track       |         |
Customer    |         |
Invoice     |         |
InvoiceLine |         |

Fill in this table based on your exploration.

Phase 3: Answer business questions

Now use the MCP server for actual analysis. These questions require multi-table joins that would take time to write manually.

Question 1: Top genres by revenue

Using the sqlite server, find the top 5 genres by total sales revenue.
Show the genre name and total revenue.

Record the results:

Genre | Revenue
----- | -------

Verify one result manually:

sqlite3 ChinookDatabase/DataSources/Chinook_Sqlite.sqlite \
  "SELECT g.Name, SUM(il.UnitPrice * il.Quantity) as Revenue
   FROM Genre g
   JOIN Track t ON g.GenreId = t.GenreId
   JOIN InvoiceLine il ON t.TrackId = il.TrackId
   GROUP BY g.GenreId
   ORDER BY Revenue DESC
   LIMIT 5;"

Does Claude's answer match?

Question 2: Sales representative performance

Which sales support agent has the highest total invoice amount?
Show the employee name and their total sales.

Record the result and verify it.
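One possible verification query, following the same pattern as Question 1. This is a sketch assuming the standard Chinook schema, in which Customer.SupportRepId references the supporting employee's EmployeeId:

```sql
sqlite3 ChinookDatabase/DataSources/Chinook_Sqlite.sqlite \
  "SELECT e.FirstName || ' ' || e.LastName AS Agent, SUM(i.Total) AS TotalSales
   FROM Employee e
   JOIN Customer c ON e.EmployeeId = c.SupportRepId
   JOIN Invoice i ON c.CustomerId = i.CustomerId
   GROUP BY e.EmployeeId
   ORDER BY TotalSales DESC
   LIMIT 1;"
```

If the agent's answer differs, compare its SQL to this join path before assuming either is wrong.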

Question 3: Customer analysis

Which country has the most customers?
And which country generates the most revenue?
Are they the same?

This question tests whether the agent reasons about the data or just runs queries mechanically.
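Two quick manual checks, sketched against the standard Chinook schema (Customer stores a Country column; Invoice stores a BillingCountry per invoice):

```sql
# Country with the most customers
sqlite3 ChinookDatabase/DataSources/Chinook_Sqlite.sqlite \
  "SELECT Country, COUNT(*) AS Customers
   FROM Customer
   GROUP BY Country
   ORDER BY Customers DESC
   LIMIT 1;"

# Country with the most revenue
sqlite3 ChinookDatabase/DataSources/Chinook_Sqlite.sqlite \
  "SELECT BillingCountry, SUM(Total) AS Revenue
   FROM Invoice
   GROUP BY BillingCountry
   ORDER BY Revenue DESC
   LIMIT 1;"
```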

Question 4: Open-ended analysis

Based on the data, which customers should receive a loyalty discount?
Explain your reasoning and show the supporting data.

Evaluate: Did the agent define reasonable criteria and support them with queries?

Phase 4: Add a second MCP server

MCP servers can work together. Add the filesystem server to export analysis results.

Install the filesystem server

Create an output directory first:

mkdir -p ~/mcp-exercise-output

Add the server:

claude mcp add filesystem -- npx -y @modelcontextprotocol/server-filesystem ~/mcp-exercise-output

Verify both servers are running:

/mcp

Combined workflow

Ask Claude Code to use both servers:

Query the top 10 customers by total purchase amount from the sqlite server.
Then use the filesystem server to save the results as a CSV file called top_customers.csv.

Verify the file was created:

cat ~/mcp-exercise-output/top_customers.csv

This demonstrates MCP servers composing: one provides data, another handles output.

When multiple servers are available, Claude Code automatically selects the appropriate one based on the task. You can be explicit ("using the sqlite server") or let the agent decide.

Phase 5: Build a custom MCP server

Understanding how servers work deepens your ability to use them. Build a minimal server that exposes a single tool.

Create the server

Create a new directory:

mkdir ~/mcp-exercise-server
cd ~/mcp-exercise-server
npm init -y
npm install @modelcontextprotocol/sdk

Create server.js:

#!/usr/bin/env node
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  {
    name: "exercise-server",
    version: "1.0.0"
  },
  {
    capabilities: {
      tools: {}
    }
  }
);

// Define a single tool (the SDK matches handlers by request schema,
// not by method-name string)
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "get_greeting",
      description: "Returns a greeting message for the given name",
      inputSchema: {
        type: "object",
        properties: {
          name: {
            type: "string",
            description: "The name to greet"
          }
        },
        required: ["name"]
      }
    }
  ]
}));

// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "get_greeting") {
    const name = request.params.arguments?.name || "World";
    return {
      content: [
        {
          type: "text",
          text: `Hello, ${name}! This response came from your custom MCP server.`
        }
      ]
    };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

// Start the server
const transport = new StdioServerTransport();
await server.connect(transport);

// Log to stderr (stdout is reserved for JSON-RPC messages)
console.error("Exercise MCP server running");

Update package.json to enable ES modules:

{
  "name": "mcp-exercise-server",
  "version": "1.0.0",
  "type": "module",
  "main": "server.js",
  "bin": {
    "mcp-exercise-server": "./server.js"
  }
}

Test the server manually

chmod +x server.js
echo '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}' | node server.js

The server should respond with a JSON-RPC message listing the get_greeting tool.
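The reply arrives on stdout as a single JSON-RPC line. Its rough shape is sketched below; exact field order and metadata vary by SDK version, and some newer SDK versions expect an initialize exchange first, in which case an error reply still confirms the server is reading stdin:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "get_greeting",
        "description": "Returns a greeting message for the given name",
        "inputSchema": {
          "type": "object",
          "properties": {
            "name": { "type": "string", "description": "The name to greet" }
          },
          "required": ["name"]
        }
      }
    ]
  }
}
```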

Add to Claude Code

claude mcp add exercise -- node ~/mcp-exercise-server/server.js

Verify in Claude Code:

/mcp

Use the custom server

Use the exercise server to greet "MCP Developer"

The response should come from your custom server.

The critical rule: all logs go to stderr, never stdout. Stdout is exclusively for JSON-RPC messages; a console.log() call in a server breaks the protocol.
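To see why, consider what a client does with each stdout line: it must parse as a JSON-RPC frame. A small illustration (parsesAsJsonRpc is a hypothetical helper for demonstration, not part of the SDK):

```javascript
// A client reading a stdio server's stdout expects every line to be a
// JSON-RPC frame. A stray log line fails to parse and corrupts the stream.
function parsesAsJsonRpc(line) {
  try {
    return JSON.parse(line).jsonrpc === "2.0";
  } catch {
    return false;
  }
}

const frame = JSON.stringify({ jsonrpc: "2.0", id: 1, result: {} });
console.log(parsesAsJsonRpc(frame));                         // true
console.log(parsesAsJsonRpc("Exercise MCP server running")); // false
```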

Phase 6: Cleanup

Remove the exercise servers:

claude mcp remove sqlite
claude mcp remove filesystem
claude mcp remove exercise

Verify removal:

claude mcp list

Analysis

MCP server usage

Question                                    | Your answer
------------------------------------------- | -----------
How long did server installation take?      |
Did you encounter any configuration errors? |
How did MCP compare to manual SQL queries?  |
Were the query results accurate?            |

Multi-server workflow

Question                                            | Your answer
--------------------------------------------------- | -----------
Did the agent correctly choose which server to use? |
How natural was the combined workflow?              |
What other server combinations would be useful?     |

Custom server development

Question                                | Your answer
---------------------------------------- | -----------
How long did building the server take?  |
Did you encounter stdout/stderr issues? |
What tools would you add for your work? |

Success criteria

  • SQLite MCP server installed and verified
  • Database schema explored through MCP
  • Business questions answered (4 queries)
  • At least one query result verified manually
  • Filesystem MCP server added
  • Multi-server workflow completed (query + export)
  • Custom MCP server built and tested
  • Custom server added to Claude Code and used
  • All exercise servers removed after completion
  • Analysis section completed

Variations

Variation A: PostgreSQL instead of SQLite

If you have PostgreSQL available:

# Using DBHub for PostgreSQL
claude mcp add postgres -- npx -y @bytebase/dbhub \
  --dsn "postgresql://user:pass@localhost:5432/mydb"

Run the same business questions against your own database. Does the workflow differ between SQLite and PostgreSQL?

Variation B: GitHub MCP server

Add the GitHub server and explore a repository:

claude mcp add github --transport http https://api.githubcopilot.com/mcp/

Authenticate with /mcp, then:

Using the GitHub server, find all open issues labeled "bug" in the modelcontextprotocol/servers repository.
Summarize the top 5 by age.

Compare repository exploration via MCP versus gh CLI.

Variation C: Project configuration

Instead of user-scoped configuration, create a .mcp.json for the chinook-database project:

{
  "mcpServers": {
    "chinook": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sqlite", "./ChinookDatabase/DataSources/Chinook_Sqlite.sqlite"]
    }
  }
}

Commit this to version control. How would team members benefit from shared MCP configuration?

Variation D: Extend the custom server

Add a second tool to your custom server:

{
  name: "count_words",
  description: "Counts the words in a given text",
  inputSchema: {
    type: "object",
    properties: {
      text: { type: "string", description: "The text to analyze" }
    },
    required: ["text"]
  }
}

Implement the handler and test it. How would you add tools relevant to your daily work?
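One way to sketch the implementation (countWords and handleCountWords are hypothetical helper names; splitting on runs of whitespace is one simple definition of a word):

```javascript
// Word counter for the count_words tool: trim, split on whitespace runs,
// and treat the empty string as zero words.
function countWords(text) {
  const trimmed = text.trim();
  return trimmed === "" ? 0 : trimmed.split(/\s+/).length;
}

// The matching branch inside the tools/call handler mirrors get_greeting:
function handleCountWords(args) {
  const text = args?.text ?? "";
  return {
    content: [{ type: "text", text: `Word count: ${countWords(text)}` }]
  };
}

console.log(handleCountWords({ text: "model context protocol" }).content[0].text);
// Word count: 3
```

In the real server, this logic would live in a second `if` branch of the CallToolRequest handler, keyed on `request.params.name === "count_words"`.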

Variation E: MCP Inspector debugging

Use the MCP Inspector to debug your custom server:

npx @modelcontextprotocol/inspector node ~/mcp-exercise-server/server.js

The Inspector provides a web UI for testing tools and viewing raw JSON-RPC messages. What does the message flow look like for a tool call?

What this exercise teaches

MCP servers turn Claude Code from a code assistant into a data access tool. The SQLite server demonstrates this: questions that would require writing SQL, running queries, and interpreting results become natural language conversations. The agent handles the translation.

Installation matters. The difference between a working server and a "connection closed" error is often a path format, a Node.js version, or a Windows wrapper. The troubleshooting you did (or didn't need) reflects real-world MCP deployment.

Multiple servers compose naturally. The filesystem export demonstrated how one server can handle data retrieval while another handles output. Production workflows often chain multiple MCP servers: database access, API calls, file operations, and notifications.

Building a server demystifies the protocol. The custom server had maybe 50 lines of code. The complexity is in what the server exposes, not in the protocol itself. If an existing server doesn't do what you need, building one is approachable.

The stdout/stderr rule is not optional. Many first-time server developers debug "server won't connect" issues that trace back to a console.log on stdout. Understanding the protocol—stdin for requests, stdout for responses, stderr for everything else—prevents this class of errors.

Configuration scope has workflow implications. User-scoped servers follow you across projects. Project-scoped servers (.mcp.json) travel with the repository. The choice affects how teams share MCP configurations and how you manage secrets.

Verification matters. The manual SQL checks confirmed (or would have caught) agent errors. MCP doesn't guarantee correct queries—it enables them. The human still validates the results.
