Prompt patterns that work
The previous lesson examined prompt anatomy: structure, specificity, and verbosity. This lesson catalogs specific patterns that consistently produce good results across common development scenarios.
These patterns are not arbitrary conventions. Each emerged from observing what distinguishes successful agent interactions from unsuccessful ones. The patterns encode principles (separation of concerns, explicit verification, structured decomposition) that apply whether working with Claude Code, Codex, or any capable coding agent.
The explore-plan-code-verify workflow
The most reliable pattern for non-trivial tasks separates the work into distinct phases, each with a clear purpose and completion criteria.
Phase 1: Explore
Before writing code, the agent reads and analyzes relevant portions of the codebase without making changes. This phase uses read-only operations (searching, reading files, examining structure) to build understanding.
```
Read the authentication module in src/auth/ and identify:
- How user sessions are currently managed
- What middleware patterns are in use
- How errors are handled and reported
Don't write code yet.
```
The explicit instruction "don't write code yet" prevents premature implementation. Without this constraint, agents often begin coding before understanding the existing patterns, producing output that contradicts established conventions.
Phase 2: Plan
With understanding established, the agent creates a concrete implementation plan before writing any code.
```
Based on your analysis, create a plan for adding two-factor authentication.
List the files to modify, the order of changes, and any new files needed.
Include how you'll handle backward compatibility during rollout.
```
The plan serves as a contract. Both developer and agent can review it before committing to execution. Misunderstandings surface during planning, when they're cheap to correct, rather than during implementation.
Boris Cherny, creator of Claude Code, describes his workflow: "If my goal is to write a Pull Request, I will use Plan mode, and go back and forth with Claude until I like its plan. A good plan is really important!"
Phase 3: Code
Implementation proceeds incrementally, following the validated plan.
```
Implement step 1 of the plan: add the TOTP library and create
the TwoFactorService class with generateSecret() and verifyCode() methods.
```
Each coding step stays small enough for the agent to handle within context. Smaller steps allow course correction without abandoning the overall approach.
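To make the step concrete, here is a minimal sketch of what this prompt might produce, assuming otplib as the TOTP library (the prompt deliberately leaves that choice to the agent):

```typescript
// A possible result of step 1, assuming otplib as the TOTP library.
import { authenticator } from "otplib";

export class TwoFactorService {
  // Create a per-user secret, stored server-side and shared with the
  // user's authenticator app (typically via a QR code).
  generateSecret(): string {
    return authenticator.generateSecret();
  }

  // Check a user-supplied code against the stored secret; otplib
  // handles the time-window comparison internally.
  verifyCode(secret: string, code: string): boolean {
    return authenticator.verify({ token: code, secret });
  }
}
```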
Phase 4: Verify
The final phase confirms the implementation works correctly.
```
Run the authentication test suite and verify all existing tests pass.
Then create tests for the new two-factor functionality covering:
- Successful code verification
- Expired code rejection
- Invalid code handling
```
Verification closes the loop. An implementation that passes tests provides stronger confidence than one that merely looks correct.
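As a sketch, the requested tests might look like this (jest-style, building on the hypothetical TwoFactorService from the coding phase):

```typescript
// Illustrative tests for the three cases the prompt lists. The
// TwoFactorService here is the hypothetical class sketched earlier.
import { authenticator } from "otplib";
import { TwoFactorService } from "./two-factor-service";

describe("TwoFactorService.verifyCode", () => {
  const service = new TwoFactorService();
  const secret = service.generateSecret();

  test("accepts a freshly generated code", () => {
    const code = authenticator.generate(secret);
    expect(service.verifyCode(secret, code)).toBe(true);
  });

  test("rejects an expired code", () => {
    jest.useFakeTimers();
    const code = authenticator.generate(secret);
    jest.advanceTimersByTime(5 * 60 * 1000); // well past the 30-second TOTP window
    expect(service.verifyCode(secret, code)).toBe(false);
    jest.useRealTimers();
  });

  test("rejects a malformed code", () => {
    expect(service.verifyCode(secret, "not-a-code")).toBe(false);
  });
});
```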
A Plan-Do-Check-Act study comparing structured versus unstructured AI coding found structured approaches used fewer tokens and produced superior test coverage, while the unstructured approach required extensive post-implementation troubleshooting. The explore-plan-code-verify workflow invests effort upfront to avoid costly corrections later.
Test-as-specification
Tests serve as executable specifications that provide concrete, unambiguous requirements for code generation.
Rather than describing behavior in prose, the test-as-specification pattern provides failing tests that define success criteria:
```
Here are failing tests that define the required behavior:

test('calculateTax returns correct amount for standard rate', () => {
  expect(calculateTax(100, 'standard')).toBe(20);
});

test('calculateTax returns zero for exempt items', () => {
  expect(calculateTax(100, 'exempt')).toBe(0);
});

test('calculateTax throws for invalid rate type', () => {
  expect(() => calculateTax(100, 'invalid')).toThrow('Unknown rate type');
});

Implement calculateTax() to make these tests pass.
```
Research demonstrates the effectiveness of this approach. Studies found test-driven prompts outperform narrative descriptions: one comparison showed 231 successful attempts versus 222 for narrative specifications. User studies report task correctness improvements from 40% to 84% when using test-validated code generation. Tests eliminate ambiguity: either the code passes or it does not.
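For reference, a minimal implementation that satisfies these tests might look like the following. Note how much the tests pin down: the 20% standard rate is inferred from the first test, not stated anywhere in prose.

```typescript
// Rates inferred from the tests: calculateTax(100, 'standard') === 20
// implies a 20% standard rate; exempt items are taxed at zero.
const RATES: Record<string, number> = {
  standard: 0.2,
  exempt: 0,
};

function calculateTax(amount: number, rateType: string): number {
  const rate = RATES[rateType];
  if (rate === undefined) {
    throw new Error("Unknown rate type"); // matches toThrow('Unknown rate type')
  }
  return amount * rate;
}
```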
Test-as-specification works best when tests are written or reviewed by humans. AI-generated tests validating AI-generated code can create circular validation that misses actual requirements.
The pattern extends beyond unit tests. API contracts, type definitions, and interface specifications all function as testable constraints that reduce ambiguity in code generation.
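For example, a type definition can play the same role as a failing test. The following PreferenceStore interface is a hypothetical illustration:

```typescript
// An interface acting as an executable specification: any implementation
// must satisfy this contract, and the compiler enforces it.
interface PreferenceStore {
  get(userId: string, key: string): Promise<string | null>;
  set(userId: string, key: string, value: string): Promise<void>;
  delete(userId: string, key: string): Promise<void>;
}
```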
Decomposition patterns
Complex tasks exceed what a single prompt can handle reliably. Decomposition breaks large requests into manageable sub-tasks.
Horizontal decomposition
Divide work by component or concern:
```
We need to add user preferences to the application.
First task: Create the database schema for storing preferences.
I'll ask for the API endpoints afterward.
```
```
Now create the API endpoints for preferences CRUD operations.
Follow the patterns established in the user-preferences schema.
```
Each prompt addresses one layer of the stack, producing focused output that integrates with subsequent work.
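A sketch of where the second prompt might land, assuming an Express-style router and a hypothetical ./preference-store module produced from the first prompt's schema:

```typescript
// Illustrative CRUD endpoints layered on the schema from the first prompt.
// Express and the ./preference-store module are assumptions, not a
// prescribed stack.
import { Router } from "express";
import { store } from "./preference-store"; // hypothetical wrapper over the schema

export const preferencesRouter = Router();

preferencesRouter.get("/users/:userId/preferences/:key", async (req, res) => {
  const value = await store.get(req.params.userId, req.params.key);
  if (value === null) {
    res.sendStatus(404);
  } else {
    res.json({ value });
  }
});

preferencesRouter.put("/users/:userId/preferences/:key", async (req, res) => {
  await store.set(req.params.userId, req.params.key, req.body.value);
  res.sendStatus(204);
});

preferencesRouter.delete("/users/:userId/preferences/:key", async (req, res) => {
  await store.delete(req.params.userId, req.params.key);
  res.sendStatus(204);
});
```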
Vertical decomposition
Divide work by feature slice:
Add the "dark mode" preference:
1. Add the field to the preferences schema
2. Create the API endpoint to toggle it
3. Update the frontend to read and apply the preference
Start with step 1.Vertical decomposition keeps related changes together, producing complete vertical slices that can be tested in isolation.
Chain-of-thought prompting, a form of explicit decomposition, reduces logical errors in code by up to 25%. Few-shot prompts with step-by-step examples outperform zero-shot approaches by 25-40% in accuracy.
When decomposition helps
Decomposition proves most valuable when tasks involve:
- Multiple files or components
- Dependencies between changes
- Significant complexity in any single step
- Risk of conflating unrelated concerns
Single-file changes, well-defined refactoring, and simple bug fixes typically do not require explicit decomposition.
Skeleton priming
Skeleton priming provides structural scaffolding that the agent fills in with implementation details.
```
Implement this service following this structure:

class NotificationService {
  constructor(emailClient, pushClient) {
    // Initialize clients
  }

  async sendNotification(userId, message, channels) {
    // 1. Validate inputs
    // 2. Fetch user preferences
    // 3. Send to enabled channels
    // 4. Record delivery status
  }

  private async sendEmail(userId, message) {
    // Implementation
  }

  private async sendPush(userId, message) {
    // Implementation
  }
}
```
The skeleton constrains the solution space while leaving implementation decisions to the agent. This pattern works particularly well when:
- The developer knows the desired structure but not implementation details
- Consistency with existing patterns is important
- The task involves filling in a known template
Function signatures with detailed type annotations function as lightweight skeletons. Providing `async function processOrder(order: Order): Promise<OrderResult>` constrains output more effectively than prose descriptions of the same interface.
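As a sketch, with illustrative types filled in, the signature alone fixes the inputs and outputs and leaves only the body to the agent:

```typescript
// Order and OrderResult are illustrative type names; the signature
// constrains what the agent must consume and produce without dictating how.
interface Order {
  id: string;
  items: { sku: string; quantity: number }[];
}

interface OrderResult {
  orderId: string;
  status: "accepted" | "rejected";
  total: number;
}

async function processOrder(order: Order): Promise<OrderResult> {
  // Implementation left to the agent
  throw new Error("not implemented");
}
```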
The Skeleton-of-Thought technique, where the agent first generates an outline and then expands each point, produces quality results while reducing generation time. The structure focuses agent effort on implementation rather than design decisions.
Persona-based prompting
Persona patterns establish a perspective that shapes how the agent approaches tasks.
```
Review this authentication code from a security auditor's perspective.
Focus on potential vulnerabilities, input validation gaps, and areas
where defensive coding practices are missing.
```
The persona frames evaluation criteria without exhaustively listing them. A "security auditor" perspective implies attention to input validation, injection risks, and authorization checks. A "performance engineer" perspective would emphasize different concerns entirely.
Effective persona prompts specify:
- The role being adopted
- The scope of concern
- The type of output expected
```
As a senior developer reviewing a junior's PR, identify issues in this
code that would block approval. Explain each concern in terms a less
experienced developer would understand.
```
The persona sets expectations for both what to find and how to communicate it.
Pattern selection
Different scenarios favor different patterns:
| Scenario | Recommended Pattern |
|---|---|
| New feature implementation | Explore-plan-code-verify |
| Behavior already defined by tests | Test-as-specification |
| Complex multi-component work | Decomposition |
| Known structure, unknown details | Skeleton priming |
| Evaluation or review tasks | Persona-based |
Patterns combine naturally. A complex feature might use the explore-plan-code-verify workflow with decomposition applied at the coding phase and test-as-specification for verification. The patterns are tools, not constraints: select and combine them based on task requirements.
Summary
Effective prompt patterns encode principles that transcend specific tools or models. The explore-plan-code-verify workflow separates concerns across phases. Test-as-specification provides unambiguous success criteria. Decomposition converts overwhelming complexity into manageable steps. Skeleton priming constrains structure while enabling implementation flexibility. Persona-based prompting shapes perspective without exhaustive criteria lists.
The next lesson examines the inverse: prompt anti-patterns and smells that reliably produce poor results, and how to recognize and avoid them.