Building Complex Outcomes Across Sessions
From single sessions to orchestrated workflows
The previous page covered recovery mechanisms for individual sessions. Large-scale work requires something different: coordinating multiple sessions working in parallel, isolating their changes, and debugging context issues when they arise.
Enterprise development rarely fits within a single agent session. A refactor touching 50 files, a feature requiring database changes plus API updates plus frontend modifications, a migration spanning multiple services: these exceed what one context window handles well. The answer is decomposition: break large work into independent pieces that parallel sessions can tackle simultaneously.
The parallel session strategy
Running multiple agent sessions in parallel multiplies throughput. Where a single session might complete one task per hour, five parallel sessions can complete five if you manage them correctly. The hard part is coordination: preventing sessions from conflicting, managing their outputs, and merging their work cleanly.
Practitioners tend to land on three to five parallel sessions as the practical limit. Below three, you don't gain much from parallelization. Above five, merge complexity and coordination overhead eat the gains. Boris Cherny runs five Claude Code sessions in terminal tabs numbered 1-5, plus additional sessions on claude.ai for tasks that work better in the web interface.
Why the limit exists
The constraint has nothing to do with human attention span; you can track more than five things. The limit comes from codebase overlap. Each additional session increases the probability of conflicting changes: two sessions editing the same file, three sessions modifying interdependent modules, four sessions touching shared configuration. Merge complexity grows geometrically with overlap.
Effective parallelization requires task decomposition with minimal overlap:
- Session 1: Authentication refactor (auth module only)
- Session 2: Database migration (schema and data layer)
- Session 3: API endpoint updates (API routes only)
- Session 4: Test suite updates (test files only)
- Session 5: Documentation updates (docs directory)
Each session owns a distinct portion of the codebase. Conflicts become rare. Merges stay trivial.
Task assignment principles
Decompose work along natural boundaries:
| Boundary type | Example assignment |
|---|---|
| Directory | Each session owns a directory tree |
| Layer | Frontend, backend, database as separate sessions |
| Feature | Independent features to separate sessions |
| File type | Code, tests, docs as separate sessions |
The worst decomposition assigns overlapping concerns. Two sessions both "improving" the same utility module will produce conflicting changes. One session refactoring error handling while another modifies the same functions creates merge nightmares.
Before starting parallel sessions, map the work explicitly. Identify file boundaries. Assign ownership. Write down which session handles what.
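A lightweight way to do this is an ownership map committed before any session starts, so every session reads the same boundaries. A minimal sketch (the filename, branch names, and directory split are illustrative, not prescribed):

```markdown
# Parallel Session Plan

| Session | Branch                 | Owns                  | Must not touch         |
|---------|------------------------|-----------------------|------------------------|
| 1       | feature/auth-refactor  | src/auth/             | src/api/, migrations/  |
| 2       | feature/db-migration   | migrations/, src/db/  | src/auth/              |
| 3       | feature/api-updates    | src/api/              | src/db/                |
| 4       | feature/test-expansion | tests/                | src/                   |
| 5       | feature/docs           | docs/                 | src/, tests/           |
```

Pointing each session at this file in its opening prompt turns the ownership boundaries into instructions the agent can follow.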
Git worktrees for session isolation
Multiple sessions need multiple working directories. Git worktrees provide this without duplicating repository history.
A worktree is an additional working directory attached to the same repository. Each worktree has its own HEAD, index, and working files. All worktrees share the same object database: commits, branches, and history exist once. Creating a worktree takes milliseconds, not the minutes that cloning requires.
Creating worktrees for parallel sessions
```bash
# From your main repository directory
git worktree add ../myproject-auth -b feature/auth-refactor
git worktree add ../myproject-api -b feature/api-updates
git worktree add ../myproject-tests -b feature/test-expansion
```

Each command creates a new directory with a fresh checkout of the specified branch.
The -b flag creates the branch; omit it to check out an existing branch.
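Once the worktrees exist, git worktree list confirms the layout, showing each working directory, its checked-out commit, and its branch (the paths and hashes below are illustrative):

```bash
# List every working directory attached to this repository
git worktree list
# ~/projects/myproject         1a2b3c4 [main]
# ~/projects/myproject-auth    1a2b3c4 [feature/auth-refactor]
# ~/projects/myproject-api     1a2b3c4 [feature/api-updates]
# ~/projects/myproject-tests   1a2b3c4 [feature/test-expansion]
```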
Directory organization matters for clarity:
```text
~/projects/
├── myproject/           # Main worktree (main branch)
├── myproject-auth/      # Auth refactor session
├── myproject-api/       # API updates session
└── myproject-tests/     # Test expansion session
```

Or keep worktrees in a subdirectory:
```text
~/projects/myproject/
├── .git/
├── src/
└── .trees/              # Add to .gitignore
    ├── auth/
    ├── api/
    └── tests/
```

Running sessions in worktrees
Start each agent session in its designated worktree:
```bash
# Terminal 1
cd ~/projects/myproject-auth
claude

# Terminal 2
cd ~/projects/myproject-api
claude

# Terminal 3
cd ~/projects/myproject-tests
claude
```

Each Claude instance sees only its worktree's files. Changes in one worktree do not affect others until explicitly merged. Sessions cannot accidentally overwrite each other's work.
Worktree lifecycle management
When work completes, clean up worktrees properly:
```bash
# Merge the feature branch first
cd ~/projects/myproject
git merge feature/auth-refactor

# Then remove the worktree
git worktree remove ../myproject-auth

# Delete the branch if no longer needed
git branch -d feature/auth-refactor
```

Never delete worktree directories manually with rm -rf.
This leaves orphaned references in Git's internal tracking.
Always use git worktree remove.
If you do accidentally delete a directory, repair with:
```bash
git worktree prune
```

Port and resource conflicts
Worktrees are isolated at the Git level but share the same machine, so runtime resources still collide.
Each worktree that runs a development server needs its own port.
Each worktree's node_modules or virtual environment is also independent: run npm install or the equivalent in each.
For complex projects with many services, coordinate port assignments:
```text
myproject/        PORT=3000   # Main development
myproject-auth/   PORT=3001   # Auth feature testing
myproject-api/    PORT=3002   # API feature testing
```

Document port assignments in each worktree's session context to prevent agents from conflicting.
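Both chores can be scripted once when the worktrees are created. A minimal sketch, assuming the dev server reads PORT from a .env.local file and dependencies install with npm (adjust filenames and commands to your stack):

```bash
#!/usr/bin/env bash
# Give each worktree its own port and its own dependency install.
set -euo pipefail

port=3001
for tree in ../myproject-auth ../myproject-api ../myproject-tests; do
  echo "PORT=${port}" > "${tree}/.env.local"   # unique port per worktree
  (cd "${tree}" && npm install)                # independent node_modules
  port=$((port + 1))
done
```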
Multi-agent communication patterns
Parallel sessions sometimes need coordination beyond branch isolation. Scratchpad files enable communication without polluting any session's context.
The shared-read pattern
Create a coordination file that all sessions read but only you write:
```text
.claude/coordination/
├── decisions.md           # You write; all sessions read
├── session-1-status.md    # Session 1 writes; others read
├── session-2-status.md    # Session 2 writes; others read
└── session-3-status.md    # Session 3 writes; others read
```

The decisions file contains cross-cutting choices:
```markdown
# Cross-Session Decisions

## API Response Format
- Standard: { data: T, error?: string, meta?: object }
- All sessions must use this format

## Error Codes
- AUTH_001: Invalid token
- AUTH_002: Expired token
- DB_001: Connection failed
```

Status files allow sessions to broadcast progress:
```markdown
# Session 1 Status: Auth Refactor

## Completed
- Migrated token validation to new library
- Updated all auth middleware

## In Progress
- Updating refresh token logic

## Blocked On
- Need decision on token expiry duration
```

You monitor status files and update the decisions file as cross-session issues arise.
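Reviewing the status files doesn't need tooling; a quick pass over the directory gives a consolidated view between check-ins. A minimal sketch, assuming the layout shown above:

```bash
# Print every session's current status in one pass
for f in .claude/coordination/session-*-status.md; do
  echo "=== ${f} ==="
  cat "${f}"
  echo
done
```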
When to use coordination files
Coordination has overhead. Each file a session reads consumes context. Each check for updates takes agent attention.
Use coordination files when:
- Sessions must agree on shared interfaces
- Decisions in one session affect others
- Work depends on another session's output
Skip coordination files when:
- Sessions are truly independent
- Work can be reconciled at merge time
- Communication overhead exceeds benefit
For most parallel work, clean task decomposition eliminates coordination needs. Reserve coordination patterns for genuinely interdependent tasks.
Context debugging techniques
When sessions produce unexpected results, context issues are often the cause. Debugging context requires recognizing symptoms and applying systematic diagnosis.
Recognizing context problems
Context degradation shows up as:
- Forgotten instructions: Agent ignores rules you established
- Inconsistent behavior: Same prompt produces different results
- Repetitive errors: Agent keeps making mistakes you've already corrected
- Generic responses: Specific context replaced by generic patterns
- Circular reasoning: Agent argues in circles without progress
These symptoms point to context problems, not capability limits. The agent could handle the task with proper context; accumulated pollution blocks it.
The diagnostic sequence
When context problems appear, diagnose systematically:
1. Check context utilization
```text
# Use /context to see current usage
/context
```

High utilization (above 70%) correlates with degraded performance. If utilization is high, compact or reset before debugging further.
2. Review recent turns
Look at the last 5-10 turns. Did you provide contradictory instructions? Did error messages pile up? Did the agent's responses become progressively less relevant?
Context pollution often begins at an identifiable turn. Finding that turn enables targeted recovery.
3. Test with fresh context
Start a new session with the same prompt. If the fresh session succeeds where the polluted one failed, context was the problem.
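A cheap way to run this comparison is a one-shot, non-interactive run with the same prompt, assuming your CLI offers a print mode (claude -p in Claude Code; the prompt and output file here are illustrative):

```bash
# Re-run the same request against a clean context and compare the result
claude -p "Refactor the auth middleware to use the new token validation helper" > fresh-attempt.md
```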
4. Examine what was lost
After compaction or reset, check whether critical context survived:
```text
Confirm your understanding of:
1. The project's error handling conventions
2. The current task and its requirements
3. Constraints we established earlier
```

If the agent cannot recall essential context, it was lost in compression. Restore it explicitly before continuing.
Strategic resets for large refactors
Large refactors span many sessions. Each session benefits from strategic resets at natural boundaries:
- Phase completion: Reset after completing a distinct phase
- Error accumulation: Reset when fix attempts compound
- Topic shift: Reset when moving to unrelated work
- Daily boundaries: Reset at the start of each working day
The handoff summary pattern from earlier pages applies here. Before resetting, capture the current state:
```text
Summarize this session's progress:
- What was completed
- What remains
- Key decisions made
- Known issues
```

Copy this summary into the fresh session's initial context. The new session inherits understanding without inheriting pollution.
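Writing the summary to a file keeps the handoff durable and easy to reference from the new session. A minimal sketch, assuming a HANDOFF.md at the repository root (the filename is arbitrary):

```bash
# Before resetting: save the agent's summary to a handoff file
cat > HANDOFF.md <<'EOF'
## Session handoff
- Completed: ...
- Remaining: ...
- Key decisions: ...
- Known issues: ...
EOF

# First instruction in the fresh session:
#   "Read HANDOFF.md before doing anything else."
```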
When to abandon and restart
Not every session reaches useful completion. Boris Cherny reports discarding 10-20% of sessions that "end up nowhere." Recognizing unproductive sessions early saves time.
Signs a session should be abandoned:
- Three or more fix iterations without progress
- Agent consistently misunderstands requirements despite clarification
- Accumulated context prevents clear reasoning
- The approach itself is wrong, not just the implementation
Abandonment is resource management, not failure. Starting fresh with lessons learned often succeeds where continued iteration would not.
Orchestrating large refactors
Put these techniques together for large-scale work. A multi-thousand-line refactor might proceed as follows:
1. Decomposition phase
Map the refactor into independent pieces. Identify file boundaries and dependencies. Create a plan document listing each piece and its scope.
2. Setup phase
Create worktrees for parallel work:
```bash
git worktree add ../project-phase1 -b refactor/phase1
git worktree add ../project-phase2 -b refactor/phase2
git worktree add ../project-phase3 -b refactor/phase3
```

Start sessions in each worktree with appropriate initial context.
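Seeding that initial context can follow the same terminal-per-worktree pattern as before. A hedged sketch, assuming the phase scopes live in a shared PLAN.md and that your claude CLI accepts an opening prompt as an argument (otherwise paste the prompt after launch):

```bash
# Terminal 1
cd ../project-phase1
claude "Read PLAN.md and execute only the Phase 1 section. Stay within the files it lists."

# Terminal 2
cd ../project-phase2
claude "Read PLAN.md and execute only the Phase 2 section. Stay within the files it lists."

# Terminal 3
cd ../project-phase3
claude "Read PLAN.md and execute only the Phase 3 section. Stay within the files it lists."
```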
3. Execution phase
Run sessions in parallel. Monitor progress through status files or periodic checks. Provide cross-session decisions as needed. Reset individual sessions at phase boundaries.
4. Integration phase
Merge completed branches:
```bash
cd ~/projects/myproject
git merge refactor/phase1
git merge refactor/phase2
git merge refactor/phase3
```

Resolve any conflicts. Run the full test suite to validate the integration.
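Merging one branch at a time and testing between merges localizes failures to the branch that introduced them. A minimal sketch, assuming npm test runs your suite (substitute your own test command):

```bash
# Merge each phase branch, validating before taking the next one
cd ~/projects/myproject
for branch in refactor/phase1 refactor/phase2 refactor/phase3; do
  git merge --no-edit "${branch}" || break                     # stop on merge conflict
  npm test || { echo "Tests failed after ${branch}"; break; }  # stop on regression
done
```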
5. Cleanup phase
Remove worktrees and temporary branches:
```bash
git worktree remove ../project-phase1
git worktree remove ../project-phase2
git worktree remove ../project-phase3
git branch -d refactor/phase1 refactor/phase2 refactor/phase3
```

This workflow scales to refactors of any size because each phase is independent, each session is isolated, and each merge stays straightforward.
What this module covered
Building complex outcomes requires orchestration beyond single-session work. Parallel sessions multiply throughput when decomposed along non-overlapping boundaries. Git worktrees provide the isolation parallel sessions need. Coordination files enable communication when sessions must share decisions. Context debugging identifies and resolves degradation before it derails progress.
The 3-5 session range balances parallelization gains against coordination costs.
This module covered the advanced mechanics of context management: how context accumulates and degrades, when to reset, how different tools handle compaction, how to persist context across sessions, and how to orchestrate parallel work. The exercise applies these techniques to a refactor that would overwhelm any single session.