Applied Intelligence
Module 2: The Agent Mental Model

Writing Effective CLAUDE.md Content

What to include (commands, conventions, gotchas); what to exclude; templates and examples

Given a budget of roughly 150-200 instructions before cognitive saturation sets in, every line must earn its place. This section examines what content maximizes agent effectiveness and what content wastes precious tokens.

The WHAT, WHY, HOW framework

  • WHAT: Establishes technical reality: framework, language, critical dependencies with versions. An agent cannot reliably generate Next.js 14 App Router code if it assumes Pages Router conventions.
  • WHY: Provides architectural rationale, context that falls squarely beyond the inference boundary. Why does the API layer use a specific authentication pattern? The code shows the pattern; only documentation can supply the reason.
  • HOW: Documents workflows: build commands, test commands, development servers. Agents cannot discover that `pnpm run build:staging` exists by reading code.
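
Applied to a concrete project, the three layers might look like the fragment below. The stack, rationale, and command are illustrative, not prescriptive:

```markdown
# Tech Stack (WHAT)
- Framework: Next.js 14 (App Router)
- Language: TypeScript 5.x

# Architecture Notes (WHY)
- API routes authenticate via signed session cookies, not bearer tokens,
  because all requests originate from the same-site frontend

# Commands (HOW)
- `pnpm run build:staging`: Build with staging environment config
```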

What to include

The highest-value content addresses what agents cannot infer.

Essential commands with descriptions

# Commands
- `npm run build`: Build production bundle
- `npm run typecheck`: Run TypeScript compiler (no emit)
- `npm run test`: Run full test suite with coverage
- `npm run test:unit -- path/to/test.ts`: Run single test file
- `npm run lint:fix`: Auto-fix linting issues

Including the single-test syntax is particularly valuable. Agents running verification loops should target specific tests rather than executing the full suite.

Critical code patterns

Document the surprising, not everything: conventions that differ from defaults or from what an agent would otherwise assume:

# Code patterns
- Use ES modules (import/export), not CommonJS (require)
- Destructure imports: `import { foo } from 'bar'` not `import bar from 'bar'`
- All new components must be function components with hooks
- Prefer arrow functions for component definitions
- API responses always wrapped in `Result<T, Error>` type
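
The `Result<T, Error>` convention above might look like this in practice. This is a sketch, not the project's actual definition, and `parsePort` is a hypothetical example:

```typescript
// A sketch of a Result wrapper; real projects may shape the
// discriminated union differently (e.g. `status` instead of `ok`).
type Result<T, E = Error> =
  | { ok: true; value: T }
  | { ok: false; error: E };

const ok = <T>(value: T): Result<T, never> => ({ ok: true, value });
const err = <E>(error: E): Result<never, E> => ({ ok: false, error });

// Hypothetical API helper: callers must check `ok` before touching `value`.
function parsePort(raw: string): Result<number, Error> {
  const n = Number(raw);
  return Number.isInteger(n) && n > 0 && n < 65536
    ? ok(n)
    : err(new Error(`invalid port: ${raw}`));
}
```

Returning errors as values forces call sites to handle failure explicitly instead of relying on thrown exceptions, which is exactly the kind of convention an agent cannot guess from defaults.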

Project structure with purpose

# Structure
- `src/app/`: Next.js App Router pages and layouts
- `src/components/`: Reusable React components (organized by domain)
- `src/lib/`: Core utilities, API clients, shared types
- `src/server/`: Server-only code (database, auth, external APIs)
- `tests/`: Integration and e2e tests (unit tests colocated with source)

Explicit boundaries and prohibitions

Telling agents what NOT to do is as important as telling them what to do.

# IMPORTANT: Do not modify
- `src/legacy/`: Deprecated code pending removal (do not extend or fix)
- `migrations/`: Database migrations must be created via CLI, never manually
- `.env*`: Never read or reference environment files

# Restrictions
- Do not commit directly to main branch
- Do not add dependencies without explicit approval
- Do not modify shared types without updating all consumers

Verification requirements

# Before completing any task
1. Run `npm run typecheck` and resolve all errors
2. Run affected tests: `npm run test:unit -- <changed-files>`
3. Ensure `npm run lint` passes (auto-fix with `npm run lint:fix`)

As Boris Cherny notes: "If Claude has that feedback loop, it will 2-3x the quality of the final result."

What to exclude

  • Linter-enforceable rules: Never send an agent to do a linter's job. Write "Run `npm run lint:fix` after changes" instead of cataloging formatting rules.
  • Code snippets: Embedding actual code creates a maintenance burden; when the code changes, the CLAUDE.md snippet becomes stale guidance.
  • API documentation: Large documentation dumps overwhelm the context window. If the agent needs reference material, it can retrieve it through navigation tools.
  • Task-specific instructions: Guidance needed only for certain kinds of work shouldn't consume permanent token budget. Create separate files and reference them.

Progressive disclosure pattern

# Task documentation
For detailed guidance, see:
- `docs/agent/testing.md`: Testing patterns and fixtures
- `docs/agent/database.md`: Migration and query conventions
- `docs/agent/api.md`: Endpoint design and error handling

This keeps the always-present CLAUDE.md lean while making detailed guidance available on demand.

Template: minimal effective CLAUDE.md

Production CLAUDE.md files in mature teams average 485-535 words. Some highly effective files are under 60 lines.

# Project Name
[One-sentence description of what this project does]

# Tech Stack
- Framework: [Name] [Version]
- Language: [Name] [Version]
- Key dependencies: [List critical libraries]

# Commands
- `[build command]`: [Description]
- `[test command]`: [Description]
- `[dev command]`: [Description]
- `[lint command]`: [Description]

# Structure
- `[directory]`: [Purpose]
- `[directory]`: [Purpose]

# Code Patterns
- [Pattern 1 - specific and actionable]
- [Pattern 2 - specific and actionable]

# IMPORTANT: Do Not
- [Prohibition 1]
- [Prohibition 2]

# Verification
Before completing: [Required checks]

Advanced patterns

Skills-based organization

Large projects benefit from extracting detailed guidance into skill files:

# Skills
For detailed guidance, see `.claude/skills/`:
- `clojure-write`: Clojure development with REPL-driven workflow
- `typescript-write`: TypeScript development and best practices
- `docs-write`: Documentation with project style guide

Path-specific rules

For codebases where different directories follow different conventions, use `.claude/rules/` with YAML frontmatter:

---
paths:
  - "src/api/**/*.ts"
---

# API Development Rules
- All endpoints must include input validation with zod
- Error responses follow RFC 7807 Problem Details format
- Rate limiting required for public endpoints
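
The error-format rule above refers to RFC 7807 Problem Details. A minimal sketch of such a response body follows; the helper function and the `type` URI are hypothetical, and the zod validation step is omitted:

```typescript
// RFC 7807 "Problem Details" body; the field names come from the RFC.
interface ProblemDetails {
  type: string;      // URI identifying the problem type
  title: string;     // short, human-readable summary
  status: number;    // HTTP status code
  detail?: string;   // explanation specific to this occurrence
  instance?: string; // URI identifying this occurrence
}

// Hypothetical helper for validation failures (HTTP 400).
function validationProblem(detail: string): ProblemDetails {
  return {
    type: "https://api.example.com/problems/validation-error", // hypothetical
    title: "Request validation failed",
    status: 400,
    detail,
  };
}
```

Per the RFC, such bodies are served with the `application/problem+json` media type.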

Evolving the file over time

CLAUDE.md is not write-once documentation. When an agent makes a mistake repeatedly, add a rule addressing that specific failure. When you find yourself repeatedly correcting the same issue in code review, that correction belongs in CLAUDE.md.

Periodically audit the file:

  • Check that instructions are actually being followed; spot-check recent agent output against each rule
  • Remove guidance that no longer applies
  • Rewrite or remove instructions agents consistently ignore; persistent noncompliance suggests the wording is vaguer than it seems

A well-crafted CLAUDE.md transforms every agent session: instead of starting with generic knowledge and discovering conventions through trial and error, the agent begins with curated context that reflects accumulated team knowledge.
