Applied Intelligence
Module 2: The Agent Mental Model

Session Transition Patterns

Recognizing context degradation; handoff summaries; starting fresh vs continuing; parallel session management

Extended sessions eventually require transitions either to fresh context or to parallel workstreams. This section addresses the practical patterns for recognizing when transitions are needed and executing them cleanly.

Recognizing context degradation

Observable symptoms of degradation:

  • Agent cycles through the same suggestions repeatedly
  • Responses become generic where they were previously specific
  • Earlier decisions get contradicted or forgotten
  • Agent makes up file names, APIs, or patterns that don't exist
  • Explicit instructions fail to stick

These symptoms reflect the "lost in the middle" phenomenon: as context accumulates, information positioned between the beginning and end receives less attention.

Technical indicators

The /context command displays a visual representation of token usage. As the indicator approaches 90% capacity, degradation risk increases. Auto-compaction triggers at approximately 95%, but performance typically degrades before that threshold.

The 80% guideline: Meaningful quality degradation begins before technical limits are reached. Proactive management at 70-80% capacity prevents subtle errors.
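
A lightweight habit supports this guideline: check usage at natural pauses and plan the transition before the limit forces one. A minimal sketch of the check, assuming an active Claude Code session (the exact /context output format varies by version; the # lines are annotations, not output):

/context

# If reported usage sits above roughly 70-80%:
#   - request a handoff summary while the agent still has full context
#   - then compact or clear the context (both covered later in this section)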

Context poisoning

Distinct from gradual degradation, context poisoning occurs when errors or confabulations enter the conversation and propagate.

Signs of poisoning:

  • Agent repeatedly references a non-existent file or function
  • Debugging suggestions point to problems that don't exist
  • Same incorrect pattern appears across multiple files
  • Corrections produce acknowledgment but no behavioral change

Poisoned context requires the /clear command. Continuing with poisoned context compounds problems.

Handoff summaries

Before ending a productive session, request a structured summary:

"Write a handoff summary for another engineer who will continue this work. Include: goal and current progress, key decisions made and their rationale, files modified and why, what approaches failed, open questions and next steps."

The request for failed approaches is often the most valuable component. New sessions tend to repeat unsuccessful patterns unless explicitly warned.

Handoff template

# Session Handoff

## Goal
[The intended outcome]

## Current Progress
- [x] Completed: Authentication middleware implemented
- [ ] In progress: Rate limiting integration
- [ ] Pending: Token refresh endpoint

## Key Decisions
- JWT with RS256 signing (not HS256) for production security
- Refresh tokens stored in Redis with 7-day TTL
- Rate limiting at 10 requests/minute on refresh endpoint

## Files Modified
- src/middleware/auth.ts - New JWT validation middleware
- src/routes/auth.ts - Login and refresh endpoints
- src/config/redis.ts - Connection configuration

## What Did Not Work
- Initial attempt with HS256 failed security review
- Storing tokens in PostgreSQL caused latency issues

## Open Questions
- Should token blacklist use Redis SET or SORTED SET?
- Clarify rate limit behavior for authenticated vs anonymous requests

## Next Steps
1. Complete rate limiting integration
2. Add token blacklist for logout
3. Write integration tests

This structure transfers maximum context in minimum tokens: typically 500-800 words, compared to the 10,000+ tokens required to replay the full conversation.

Starting fresh versus continuing

| Condition | Action |
| --- | --- |
| Task complete, context clean | Commit and continue |
| Task complete, context cluttered | Commit, clear, fresh start |
| Task incomplete, context clean | Continue |
| Task incomplete, context degraded | Handoff, clear, resume with summary |
| Context poisoned | Clear immediately, start fresh |

Continue when the work stays on the same feature, builds on established decisions, and the agent demonstrates memory of earlier decisions without prompting.

Start fresh when topics switch, extended debugging sessions have accumulated failed attempts, or repeated correction patterns appear in recent history.

Compact versus clear

/compact preserves a summary of context while freeing space. Use when context contains decisions that should survive, work will continue on the same task, or a complete restart would require extensive recreation.

/clear removes all history. Use when switching to unrelated work, context has become poisoned, or starting with a handoff summary provides sufficient context.
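
In practice the choice reduces to two commands. A minimal sketch, assuming a Claude Code version where /compact accepts optional focus instructions; the instruction text is illustrative:

# Same task continues: compact, steering the summary toward what must survive
/compact Keep the rate-limiting decisions, files modified, and open questions

# Unrelated work next, or the context is poisoned: remove all history
/clear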

Parallel session management

Git worktrees for isolation

Each agent session requires exclusive file access. Git worktrees provide isolated working directories:

# Create worktree with feature branch
git worktree add ../project-auth-feature -b feat/auth

# Navigate to worktree
cd ../project-auth-feature

# Start Claude Code session
claude

# Clean up when done
git worktree remove ../project-auth-feature

Each worktree has independent file state. Changes in one worktree do not affect others until explicitly merged. The shared git history enables coordination while isolation prevents conflicts.

Parallel session workflow

  1. Create worktrees for independent features
  2. Start sessions in separate terminal windows or tabs
  3. Work interleaved: while one session runs tests, switch to another
  4. Merge completed work back to the main branch
  5. Remove worktrees after merging
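
The workflow condenses into a handful of commands. A minimal sketch with hypothetical directory and branch names (project-auth, project-search, feat/auth, feat/search):

# Steps 1-2: create isolated worktrees and start a session in each (separate terminals)
git worktree add ../project-auth -b feat/auth
git worktree add ../project-search -b feat/search
cd ../project-auth && claude      # terminal 1
cd ../project-search && claude    # terminal 2

# Steps 4-5: merge completed work back to main, then remove the worktree
cd ../project && git switch main
git merge feat/auth
git worktree remove ../project-auth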

Practitioner experience suggests that 3-5 parallel sessions typically outperform a larger number. The merge complexity and context-switching overhead of managing 8-10 sessions often exceed the throughput benefits.

Subagents for investigation

When the main session needs information that requires extensive exploration, delegate to subagents rather than consuming the main context. Subagents return condensed results: summaries and key findings rather than full investigation histories.
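
A delegation prompt might read as follows (the investigation target is hypothetical):

"Use a subagent to find every place the refresh token is validated. Report back only the file paths, the functions involved, and a one-sentence summary of each, not the full search transcript."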

Executing the transition

Pre-transition checklist

  1. Commit any in-progress work, even partial implementations, to a feature branch
  2. Generate handoff summary if work will continue
  3. Update CLAUDE.md with persistent learnings
  4. Run /compact if preserving some context, /clear if starting completely fresh
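
A minimal sketch of steps 1 through 3, with a hypothetical branch name and commit message:

# Step 1: checkpoint in-progress work, even if incomplete
git switch -c feat/auth-rate-limiting    # or stay on the existing feature branch
git add -A
git commit -m "WIP: rate limiting integration, pre-handoff checkpoint"

# Steps 2-3 happen inside the session: request the handoff summary,
# then move durable learnings into CLAUDE.md before running /compact or /clear.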

Post-transition verification

After starting a fresh session or loading a handoff:

  1. Verify the agent understands current state: "What do we have so far?"
  2. Confirm understanding of next steps: "What should we do next?"
  3. Check for any context gaps: "What questions do you have before proceeding?"

The continuous improvement loop

Session transitions offer natural reflection points:

  • Did the handoff capture necessary context?
  • Did the fresh session perform better than the degraded one?
  • Should any decisions move into CLAUDE.md for permanence?
  • Were parallel sessions actually independent, or did overlap cause problems?

Module conclusion

The agent mental model established across this module provides the foundation for productive Agentic Software Development. Understanding how agents perceive codebases, the practical reality of context windows, and the essential role of project documentation transforms the developer from user to collaborator.

These fundamentals prepare for the next phase: context engineering, where the abstract understanding of agent perception becomes practical technique for structuring information that produces superior outcomes.
