Applied Intelligence
Module 12: Knowing When Not to Use Agents

Exercise: Design Your Personal ASD Workflow

Audit your development workflow, identify high-impact integration points, and build a sustainable agentic software development practice.

This exercise pulls together everything from the course into a workflow you can use tomorrow. You'll audit what you actually do all day, figure out where AI agents help most, write some agent documentation, and try the spec-first pattern on real code.

Overview

The GitHub CLI (cli/cli) is a mature open source project—42,000+ stars, 10,000+ commits, clean Go codebase. It has no CLAUDE.md or agent documentation, which makes it a realistic scenario: bringing ASD practices to a project that wasn't built with them in mind.

You will:

  1. Audit your current development workflow and spot improvement opportunities
  2. Create agent documentation for an unfamiliar codebase
  3. Apply spec-first development to a real feature
  4. Measure what happened and reflect on it

This isn't a step-by-step tutorial. It's a framework for building your own ASD practice.

Setup

Clone the repository

```bash
git clone https://github.com/cli/cli.git
cd cli
```

Verify Go installation

The GitHub CLI requires Go 1.22 or later:

```bash
go version
```

If Go isn't installed, install it before continuing.

Build the project

Make sure the codebase compiles:

```bash
make
```

The binary ends up at bin/gh.
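
A quick smoke test confirms the build worked (assuming the default `make` target produced `bin/gh`):

```bash
# Confirm the freshly built binary runs and prints its version
bin/gh --version
```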

You don't need to know Go deeply. The focus is workflow methodology, not the language. These patterns work regardless of what you're coding in.

Phase 1: Workflow audit

Before bolting AI tools onto your workflow, understand what your workflow actually is.

Time allocation assessment

Estimate how you spend your development hours. Be honest—this baseline determines where AI can help.

| Activity | Hours/week | Notes |
| --- | --- | --- |
| Writing new code |  |  |
| Reading/understanding existing code |  |  |
| Debugging |  |  |
| Writing tests |  |  |
| Code review (giving) |  |  |
| Code review (receiving/addressing) |  |  |
| Documentation |  |  |
| Meetings/coordination |  |  |
| Build/deploy/ops tasks |  |  |
| Other |  |  |

Identify repetitive tasks

List tasks you do over and over that follow predictable patterns:

1. _________________________________________
2. _________________________________________
3. _________________________________________
4. _________________________________________
5. _________________________________________

Examples: boilerplate generation, test scaffolding, doc updates, commit messages, PR descriptions.

Identify judgment-heavy tasks

List tasks where your expertise and context are irreplaceable:

1. _________________________________________
2. _________________________________________
3. _________________________________________
4. _________________________________________
5. _________________________________________

Examples: architecture decisions, security review, performance optimization, UX design, tech debt prioritization.

Rate your current workflow

| Dimension | Score (1-5) | Notes |
| --- | --- | --- |
| Time efficiency |  |  |
| Consistency (same task, same approach) |  |  |
| Knowledge capture (can others follow?) |  |  |
| Tool integration (minimal context switching?) |  |  |
| Error prevention (catch mistakes early?) |  |  |

Research on AI-assisted development consistently reports 30-60% time savings for test generation, documentation, and boilerplate; reading unfamiliar code benefits as well. Architecture decisions and performance optimization show minimal or even negative returns. Use these findings when prioritizing where to integrate.

Phase 2: Create agent documentation

The cli/cli repo has no CLAUDE.md. Write one that captures what an agent needs to work effectively.

Explore the codebase

Use Claude Code to understand the project:

```bash
claude
```

In Claude Code:

```
Explore this codebase. Describe the project structure, key directories, and how commands are organized. Don't make any changes.
```

Record what you learn:

| Aspect | Observation |
| --- | --- |
| Main entry point |  |
| Command structure |  |
| Package organization |  |
| Test patterns |  |
| Build system |  |
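
It's worth cross-checking the agent's answers by hand. A few generic shell commands cover most of the table; the paths below follow common Go conventions, so adjust to what you actually find:

```bash
# Top-level layout
ls

# Entry point: Go CLIs conventionally keep main.go under cmd/<binary>/
find . -maxdepth 3 -name main.go -not -path './vendor/*'

# Test patterns: where test files cluster
find . -name '*_test.go' -not -path './vendor/*' | head
```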

Create CLAUDE.md

Based on your exploration, create a CLAUDE.md in the repository root.

Follow the structure from Module 2:

````markdown
# GitHub CLI (gh)

## Project overview

[One paragraph: what is this project]

## Architecture

[Key directories and what they're for]

## Development commands

```bash
# Build
make

# Test
go test ./...

# Run
bin/gh
```

## Code conventions

[Patterns you observed]

## Boundaries

### Always

- [Things the agent should do]

### Ask first

- [Things needing human approval]

### Never

- [Things the agent must not do]
````

Write the file from inside Claude Code:

```
Create a CLAUDE.md file based on what we learned about this codebase
```

Or write it manually.

Verify the documentation

Start a fresh Claude Code session:

```bash
claude
```

Test if your CLAUDE.md actually helps:

```
Based on the project documentation, where would I add a new command?
```

Does the agent's answer match reality?

| Question | Agent correct? | Notes |
| --- | --- | --- |
| Where to add new command |  |  |
| How to run tests |  |  |
| Code style expectations |  |  |

Refine the CLAUDE.md if the agent got things wrong.
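
You can also ground-truth the "how to run tests" row directly by running what your CLAUDE.md documents. For most Go projects that boils down to the standard invocations below; the package path is illustrative, so substitute one from your checkout:

```bash
# Full suite
go test ./...

# A single package, for a faster signal (path is an example)
go test ./pkg/cmdutil/...
```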

Phase 3: Spec-first development

Now apply the spec-first pattern to actual work.

Choose a task

Pick one appropriate for cli/cli:

Option A: Add a command flag. Give a command that lacks one a `--json` output flag.

Option B: Improve an error message. Find an unhelpful error message and make it better.

Option C: Add a test. Find untested code and add coverage.

Option D: Improve documentation. Polish a command's help text.

Selected task: _________________________________________
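
If you're torn between options, a quick coverage pass surfaces candidates for Option C: packages that report low coverage, or no test files at all.

```bash
# Per-package statement coverage; packages without tests print "[no test files]"
go test ./... -cover
```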

Create the specification

Before writing code, create a spec.md file.

In Claude Code:

```
I want to [describe your task]. Help me create a detailed specification before we implement. Let's brainstorm:

1. What exactly should change?
2. What files are involved?
3. What are the edge cases?
4. How will we test this?
5. What could go wrong?

Don't write code yet. Output the results as spec.md.
```

The spec should include:

  • Objective: What we're building
  • Scope: What's in and what's out
  • Files affected: Specific paths
  • Implementation approach: How we'll do it
  • Testing strategy: How we'll verify it
  • Risks: What could break
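
For concreteness, here is what a minimal spec might look like for Option B. Every detail below is illustrative, not a real cli/cli issue:

```markdown
# Spec: Clearer error for missing authentication

## Objective
Replace a terse auth failure message with one that names the cause and the fix.

## Scope
In: the single error string and its test. Out: auth logic, other commands.

## Files affected
[path to the command file], [path to its test file]

## Implementation approach
Locate the error, reword it, keep the error type unchanged.

## Testing strategy
Update the test that asserts on the message; run the package tests.

## Risks
Other tests or scripts may match on the old string.
```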

Save the spec:

```bash
# Create in repository
cat > spec.md << 'EOF'
[paste spec content]
EOF
```

Review the spec

Before implementing, review it yourself:

| Check | Pass? | Notes |
| --- | --- | --- |
| Objective is clear and specific |  |  |
| Scope boundaries defined |  |  |
| Files identified are correct |  |  |
| Implementation approach is sound |  |  |
| Testing strategy is sufficient |  |  |
| Risks are realistic |  |  |

Revise the spec if any check fails.

Spec-first fails when developers skip review. A polished spec that's wrong produces confidently wrong code. Your judgment is the quality gate.

Create the implementation plan

Convert the spec into sequential tasks:

```
Based on spec.md, create a step-by-step implementation plan. Each step should be:
- Small enough to complete and verify independently
- Ordered by dependencies
- Clear on what "done" looks like

Output as tasks.md.
```

Example format:

```markdown
# Implementation Tasks

## Task 1: [description]
- Files: [list]
- Done when: [criteria]

## Task 2: [description]
- Files: [list]
- Depends on: Task 1
- Done when: [criteria]
```

Execute the plan

Work through tasks one at a time:

```
Let's implement Task 1 from tasks.md. Follow the spec exactly.
```

After each task:

  1. Verify it works (go test ./...)
  2. Commit if tests pass
  3. Move to the next task
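
Steps 1 and 2 collapse into one guarded command if you want to keep the loop tight; nothing lands in history unless the suite is green (the commit message is a placeholder):

```bash
# Run the tests; stage and commit only if they pass
go test ./... && git add -A && git commit -m "Task 1: <short description>"
```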

Track progress:

| Task | Completed | Tests pass | Committed |
| --- | --- | --- | --- |
| 1 |  |  |  |
| 2 |  |  |  |
| 3 |  |  |  |
| 4 |  |  |  |

Compare with direct approach

Think about it: how would this have gone without the spec?

| Approach | Estimated result |
| --- | --- |
| Direct prompting ("add feature X") |  |
| Spec-first pattern |  |
| Fully manual implementation |  |

Phase 4: Measure and reflect

Quantitative assessment

Record metrics from this session:

| Metric | Value |
| --- | --- |
| Total time spent |  |
| Time on spec creation |  |
| Time on implementation |  |
| Number of iterations needed |  |
| Tests passing on first try? |  |
| Commits made |  |

Qualitative assessment

| Question | Your answer |
| --- | --- |
| Did the CLAUDE.md improve agent accuracy? |  |
| Did the spec prevent wasted iterations? |  |
| Where did you intervene to correct the agent? |  |
| What would you do differently? |  |

Identify your high-impact integration points

Based on the exercise and workflow audit, where should AI integration focus?

High value (use AI regularly):

1. _________________________________________
2. _________________________________________
3. _________________________________________

Moderate value (use AI selectively):

1. _________________________________________
2. _________________________________________

Low value (prefer manual):

1. _________________________________________
2. _________________________________________

Define your personal workflow

Document your ASD workflow as a repeatable process:

```markdown
# My ASD Workflow

## For new codebases
1. [your step]
2. [your step]

## For feature work
1. [your step]
2. [your step]

## For bug fixes
1. [your step]
2. [your step]

## Quality gates I always apply
- [your gate]
- [your gate]

## Red flags that mean I should work manually
- [your trigger]
- [your trigger]
```

Save it somewhere you'll actually reference it.

Success criteria

  • Workflow audit completed with honest time estimates
  • Repetitive and judgment-heavy tasks identified
  • cli/cli repository cloned and building
  • Codebase explored with Claude Code
  • CLAUDE.md created and tested
  • Task selected from options
  • spec.md created through brainstorming
  • Spec reviewed and validated before implementation
  • tasks.md created from spec
  • At least one task implemented following the plan
  • Quantitative metrics recorded
  • Qualitative reflection completed
  • Personal workflow documented

Variations

Variation A: Cross-tool comparison

Create an AGENTS.md file (tool-agnostic format) alongside CLAUDE.md. Try the same task with Claude Code and Codex if you have both. Compare how each tool interprets the documentation.

Variation B: Team workflow

Design a workflow for a team instead of yourself. Include:

  • Onboarding steps for new team members
  • Review requirements for AI-generated code
  • Shared conventions

Variation C: Metrics baseline

Complete a task manually first and record time, files touched, iterations, bugs. Then use spec-first on an equivalent task. Compare the numbers.
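
A scratch log keeps the comparison honest. The file name and fields here are just one way to structure it:

```bash
# One row per attempt: approach, minutes, files touched, iterations, bugs found
echo "manual,<minutes>,<files>,<iterations>,<bugs>" >> asd-metrics.csv
echo "spec-first,<minutes>,<files>,<iterations>,<bugs>" >> asd-metrics.csv
```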

Variation D: Complex multi-file change

Pick a task spanning multiple files—a refactoring or feature across packages. See whether spec-first scales. Note where coordination gets difficult.

Variation E: Failure mode documentation

Intentionally trigger failures:

  • Give a vague spec and watch the agent struggle
  • Skip verification steps
  • Ask for changes the agent can't handle

Document how failures show up and how to catch them early.

Takeaways

Workflow audits reveal patterns invisible during daily work. Developers consistently underestimate repetitive task time and overestimate creative work time. The audit gives you data for targeted improvement.

CLAUDE.md files work when they capture what the agent actually needs—not generic project descriptions but specific commands, conventions, and boundaries. The test is straightforward: does a fresh session with the documentation produce correct answers?

Spec-first trades upfront time for fewer iterations. Writing a spec feels slower than diving into code. But specification errors caught before implementation cost less than errors caught in review or production.

Judgment stays the rate limiter. This exercise forces deliberate review at each stage. Is the spec right? Is the plan sound? Did implementation match intent? Skipping these checks just produces wrong code faster.

Personal workflows are personal. This course provided patterns. Your workflow will combine, adapt, and throw out patterns based on your context. A workflow you built beats one you copied.

The documentation you create outlives the session. CLAUDE.md improves every future session on that codebase. Your workflow guide improves every future task. Documentation investment compounds.

The course ends where it began: with you, making decisions about how to work. Tools will change. The discipline of deliberate practice, explicit documentation, and honest self-assessment doesn't.
