Project Context Fundamentals
Files and folders that shape agent behavior; code organization as implicit context; naming conventions that help agents
The previous pages established the context hierarchy and examined failure patterns. This page begins the detailed exploration of each layer, starting with project context: the persistent information that shapes every agent interaction before a single prompt is written.
Project context is the most leveraged form of context engineering. Information placed here loads automatically into every conversation, eliminating repetitive explanation and establishing the constraints that enable focused work.
The codebase as implicit context
Before considering explicit documentation files, recognize that the codebase itself provides context. Agents read files, navigate directories, and infer patterns from what they observe. Every architectural decision, every naming choice, every folder structure becomes context that shapes agent behavior.
Practitioners report that AI "mistakes" often reveal organizational problems: "When AI struggles to locate files or understand patterns, humans likely will too. Human readability and AI interpretability overlap significantly."
This overlap means improving code organization for agents simultaneously improves it for human developers. The investment pays dividends beyond agentic workflows.
What agents perceive
When an agent begins work on a codebase, it gathers context through several mechanisms:
File system exploration. Agents traverse directories, reading file names and folder structures. This navigation builds a mental map of where different types of code live.
Code reading. Reading source files reveals patterns: how components are structured, what libraries are imported, how functions are named.
Configuration files. Package manifests, build configurations, and linter rules provide metadata about project conventions.
Documentation files. README files, inline comments, and dedicated documentation directories offer explicit guidance.
Each mechanism contributes to the agent's working model of the project. Gaps in any area force the agent to infer, which introduces opportunities for confabulation.
Folder structure as context
Directory organization communicates architectural intent. Agents use folder structure to determine where code belongs, what components relate to each other, and where to find specific functionality.
Principles for agent-friendly structure
Predictability over novelty. Agents perform best with conventional structures that follow established patterns. A React project with src/components/, src/hooks/, and src/utils/ matches expectations agents bring from thousands of training examples. Novel organizational schemes require more explicit documentation to compensate.
Flat over deeply nested. Agents trace paths more effectively through shallow hierarchies. A structure three levels deep is easier to navigate than one six levels deep. When agents must traverse many directories to locate related files, they spend tokens on navigation rather than comprehension.
```
# Agent-friendly: flat and predictable
src/
├── components/
│   ├── Button.tsx
│   ├── Modal.tsx
│   └── Form.tsx
├── hooks/
├── utils/
└── types/

# Harder to navigate: deep and fragmented
src/
└── features/
    └── user/
        └── components/
            └── profile/
                └── forms/
                    └── EditProfileForm.tsx
```

Colocation over distribution. Related files placed together provide context through proximity. When a component lives next to its tests, styles, and types, the agent discovers the complete picture by reading one directory. When these files scatter across the tree, the agent must assemble context from multiple locations.
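For contrast with the fragmented layout above, a colocated layout keeps everything an agent needs in one directory. The file names below are illustrative, not prescribed:

```
src/components/Button/
├── Button.tsx
├── Button.test.tsx
├── Button.module.css
└── Button.types.ts
```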
Studies of effective AI configuration files show that 72.6% prioritize software architecture specifications. Agents benefit most from understanding how the system is organized before attempting modifications.
Monorepo considerations
Large codebases with multiple packages require additional structure. A single CLAUDE.md at the root cannot address package-specific conventions.
Hierarchical configuration solves this problem:
```
project-root/
├── CLAUDE.md                 # Project-wide standards
└── packages/
    ├── frontend/
    │   └── CLAUDE.md         # Frontend-specific guidance
    └── backend/
        └── CLAUDE.md         # Backend-specific guidance
```

Agents load context files progressively, with the closest file to the current work taking precedence. Root-level context establishes shared conventions; package-level context provides specifics.
Self-contained configurations within packages reduce cross-package dependencies that agents must trace. Overly interdependent configurations force agents to load multiple files to understand single operations.
Naming conventions as context
Names communicate intent. Agents derive substantial context from how files, functions, and variables are named. Research confirms the impact: descriptive naming yields 34.2% exact match rates in code completion tasks compared to 16.6% for obfuscated names.
File naming
Consistent file naming enables agents to predict where code lives before reading it.
Use predictable patterns:
| Convention | Example | Benefit |
|---|---|---|
| Kebab-case files | user-profile.ts | Consistent, readable |
| Component-named files | UserProfile.tsx | Direct component-to-file mapping |
| Test suffix | user-profile.test.ts | Obvious test discovery |
| Type suffix | user-profile.types.ts | Clear type definition location |
Avoid mixed conventions:
A codebase where some files use UserProfile.ts, others use user_profile.ts, and still others use userProfile.ts forces agents to learn three patterns.
Each inconsistency increases the probability of misplacement.
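As an illustration (file names hypothetical), suppose the kebab-case and suffix conventions above are applied consistently. An agent can then derive every related path from a single feature name instead of searching the tree:

```typescript
// Hypothetical feature "user-profile": related files are predictable from the name alone
import { UserProfile } from "./user-profile";                 // implementation
import type { UserProfileData } from "./user-profile.types";  // type definitions
// Tests are expected at ./user-profile.test.ts; no directory search is required
```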
Function and variable naming
Agents extract "literal features," the semantic content of names, rather than fully understanding logical behavior. This makes descriptive naming critical for accurate code generation.
Effective names reveal purpose:
```typescript
// Agent understands intent immediately
function calculateMonthlyPayment(principal: number, rate: number, months: number): number

// Agent must infer from context
function calc(p: number, r: number, m: number): number
```

The descriptive version provides context that persists through code generation. The terse version requires the agent to maintain inference across multiple operations, increasing error probability.
Type information amplifies naming: Statically typed languages provide additional context through type signatures. Research shows developers provide less explicit context when working with Go, Java, or TypeScript because type systems enforce structural correctness automatically. Dynamically typed languages require more extensive naming and documentation to compensate.
Python relies more heavily on naming than Java because Python lacks static typing. Descriptive function and variable names become critical context in dynamically typed codebases.
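To make the point concrete, here is a small sketch (the Invoice type is invented for illustration) of how a static type signature carries structural context, leaving the name free to convey intent:

```typescript
// Hypothetical Invoice type: the signature documents structure, the name documents intent
interface Invoice {
  lineItems: { unitPrice: number; quantity: number }[];
}

function calculateInvoiceTotal(invoice: Invoice): number {
  return invoice.lineItems.reduce(
    (total, item) => total + item.unitPrice * item.quantity,
    0,
  );
}
```

In a dynamically typed codebase, the equivalent function would need a name and docstring descriptive enough to carry the structure of its argument as well.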
Naming patterns that help agents
Certain naming conventions provide particularly strong signals:
Intent-revealing prefixes:
- is_, has_, should_ for boolean values
- get_, set_, update_ for accessor patterns
- handle_, on_ for event handlers
Domain-specific vocabulary: Consistent terminology within a codebase helps agents recognize related concepts. If authentication functions consistently use "auth" while authorization uses "authz," agents learn the distinction.
Framework conventions: Following framework-specific patterns leverages training data. React components named with PascalCase, hooks prefixed with use, and constants in SCREAMING_SNAKE_CASE match patterns agents have seen extensively.
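A brief TypeScript sketch of these naming signals; the User type and function bodies are hypothetical and exist only to show the conventions in use:

```typescript
// Hypothetical User type used to illustrate naming conventions
interface User {
  email: string;
  status: "active" | "disabled";
  roles: string[];
}

// Boolean values read as predicates (the is/has prefixes, camelCase in TypeScript)
const isActive = (user: User): boolean => user.status === "active";
const hasAdminRole = (user: User): boolean => user.roles.includes("admin");

// Accessor patterns use get/update verbs
const getUserEmail = (user: User): string => user.email;
const updateUserEmail = (user: User, email: string): User => ({ ...user, email });

// Constants in SCREAMING_SNAKE_CASE; event handlers use handle* and on* naming
const MAX_LOGIN_ATTEMPTS = 3;
function handleLogin(user: User, onSuccess: () => void): void {
  if (isActive(user)) onSuccess();
}
```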
Configuration files as context
Build tools, linters, and package managers generate configuration files that provide implicit context about project conventions.
What agents learn from configuration
Package manifests (package.json, requirements.txt, Cargo.toml) reveal dependencies.
Agents check these files to determine what libraries are available before generating imports.
Build configurations (tsconfig.json, webpack.config.js) establish compilation rules.
Path aliases defined here guide import statement generation.
Linter configurations (.eslintrc, .prettierrc) encode style rules.
Agents reading these files can match formatting expectations without explicit instruction.
Editor configurations (.editorconfig, .vscode/settings.json) standardize formatting.
While not directly consumed by agents, consistent formatting reduces diff noise in generated code.
Configuration files serve dual purposes: they control tools and provide context for agents.
A well-documented .eslintrc tells agents what style rules to follow even before running the linter.
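As a hedged example, suppose tsconfig.json maps a "@/*" path alias to "./src/*" (a common setup, assumed here rather than taken from any particular project). An agent that has read that mapping can generate stable aliased imports instead of guessing relative paths:

```typescript
// Assumes tsconfig.json defines "paths": { "@/*": ["./src/*"] } under compilerOptions
import { Button } from "@/components/Button";
import { formatCurrency } from "@/utils/format-currency";
// ...rather than brittle relative paths such as "../../components/Button"
```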
Configuration pitfalls
Complex inheritance chains obscure effective configuration. When a TypeScript config extends three other configs, agents must trace the chain to understand the final result. Self-contained configurations are easier to parse.
Configuration-as-code patterns complicate matters further. A Webpack configuration that dynamically generates rules based on environment provides little static context. Agents benefit from configurations that can be read directly.
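A minimal sketch of the contrast, with invented alias names: the first form can be read as data, while the second only yields its effective value at runtime:

```typescript
// Statically readable: the effective configuration is visible as plain data
export const pathAliases = {
  "@components": "./src/components",
  "@utils": "./src/utils",
};

// Configuration-as-code: the final result depends on runtime branching,
// so an agent reading the file cannot know the effective value without executing it
export function buildPathAliases(env: string): Record<string, string> {
  const base = { "@components": "./src/components" };
  return env === "production" ? base : { ...base, "@mocks": "./test/mocks" };
}
```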
The implicit-explicit balance
Project context divides into implicit context (patterns that must be inferred from code) and explicit context (guidance written in documentation files).
Implicit context:
- Folder structure
- File naming conventions
- Code patterns and idioms
- Configuration files
- Test organization
Explicit context:
- CLAUDE.md / AGENTS.md files
- Architecture documentation
- Inline comments and docstrings
- README files
The optimal balance depends on codebase complexity. Simple projects with conventional structures need minimal explicit documentation. Complex projects with non-obvious patterns require more extensive explicit guidance.
The rule of thumb: if a human developer joining the project would need explanation, an agent needs explicit documentation. Implicit context suffices for obvious patterns; explicit context addresses the non-obvious.
Building project context
Effective project context develops incrementally:
- Establish conventions early. Initial architectural decisions propagate through all future work. Documenting these decisions creates context that compounds over time.
- Codify rather than explain. When the same instruction appears in multiple prompts, move it to project documentation. Repetition signals missing project context.
- Treat AI confusion as signal. When agents misunderstand patterns, improve documentation rather than prompts. Each confusion resolved at the project level prevents recurrence.
- Verify through fresh sessions. Test project context by starting new conversations. If agents require extensive setup instructions, project context is insufficient.
The goal is to minimize the context work required at conversation and prompt levels. Strong project context enables concise conversations and focused prompts. Weak project context forces repeated explanation that consumes tokens and introduces inconsistency.
The foundation for documentation
This page has examined how codebase structure, naming, and configuration provide implicit project context. The next page addresses explicit context: documentation written specifically to guide agent behavior. Understanding implicit context clarifies what explicit documentation must cover: the gaps between what can be inferred and what must be stated.