AI Catchup

Claude Code Subagent Patterns: 10 Production Workflows That Actually Work

12 min read

Claude Code subagents are the underused unlock for keeping a long session productive. The pattern is always the same -- delegate a noisy task to a child agent with its own clean context window, get back only the conclusion -- but the workflows people actually run with it cluster into a small number of high-value patterns. This guide walks the 10 patterns we use weekly: verifier, codebase scout, spec-to-tests translator, refactor proposer, PR reviewer, migration planner, bug hunter, docs generator, convention detective, and library briefer. Each one ships with a copy-paste prompt template and the failure mode to watch.

The single most underused pattern in Claude Code is delegation. Most users know subagents exist; few treat them as routine production tools. What follows is the practical playbook: 10 specific subagent patterns, each with the workflow it solves, a copy-paste prompt template, and the failure mode to watch, drawn from weekly use across multiple production codebases.

For the underlying mechanics of subagents -- when to spawn vs when to stay, how compaction interacts, what exactly the parent gets back -- the Claude Code context management guide is the prerequisite. This guide assumes you already understand the architecture and want patterns you can put to work tomorrow.

Key Takeaways

  • Pattern 1 (The Verifier) -- delegate verification of work to a fresh context to catch errors the parent missed.
  • Pattern 2 (The Codebase Scout) -- read an unfamiliar codebase without polluting main context.
  • Pattern 3 (The Spec-to-Tests Translator) -- convert prose specs into testable assertions.
  • Pattern 4 (The Refactor Proposer) -- explore + propose, never execute.
  • Pattern 5 (The PR Reviewer) -- review a diff with no conversation bias.
  • Pattern 6 (The Migration Planner) -- plan a library/framework migration from scratch.
  • Pattern 7 (The Bug Hunter) -- investigate a symptom in fresh context.
  • Pattern 8 (The Docs Generator) -- write docs from git changes.
  • Pattern 9 (The Convention Detective) -- extract patterns from existing code.
  • Pattern 10 (The Library Briefer) -- summarize external docs for the parent agent.

Read the patterns linearly the first time. Pin the two or three most relevant to your work. Add the others as the situations show up.

Pattern 1: The Verifier

Use when: Claude has just finished a non-trivial chunk of work and you want a second opinion before declaring it done.

Why this beats not using a subagent: the parent agent that wrote the code has a confirmation bias toward its own work. A fresh subagent reads the same code with no prior commitment and catches things the parent's attention skips.

Prompt template:

Spin up a subagent to verify the work I just completed. Give the subagent: the spec I started from (paste it), the final code (point to the files), and the test command. Have it review whether the implementation actually meets the spec, run the tests, and return a list of any gaps, regressions, or questionable choices. Do not make the changes yet -- just return the findings.

Failure mode to watch: the subagent returns a vague "looks fine" without actually checking. If you do not see specific file references and concrete observations in the report, the subagent did not do real work; re-prompt with more explicit verification criteria.

This is the highest-ROI pattern in the list. Use it after every significant agent-driven change before you ship.
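If you reach for the Verifier often, it is worth saving it as a reusable subagent definition instead of re-typing the delegation prompt. A minimal sketch, assuming Claude Code's project-level `.claude/agents/` convention for custom subagents (the agent name, description wording, and tool list here are illustrative, not prescribed):

```markdown
---
name: verifier
description: Reviews just-completed work against its spec and runs the
  tests. Use proactively after any non-trivial change, before declaring
  the work done.
tools: Read, Grep, Glob, Bash
---

You are a verification agent. You will be given a spec, pointers to the
final code, and a test command. Review whether the implementation
actually meets the spec, run the tests, and return a list of gaps,
regressions, or questionable choices. Every finding must cite specific
files and concrete observations. Do not edit anything -- report only.
```

With a definition like this in place, the parent can delegate with a one-liner ("use the verifier subagent on the work we just finished") instead of restating the full template each time.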

Pattern 2: The Codebase Scout

Use when: you need to understand how something works in a codebase you do not actively maintain (a vendored library, an internal tool, a reference implementation).

Why this beats not using a subagent: reading 30 files into the parent context to understand one auth flow burns context on noise. A subagent reads them all, compresses the answer to a half-page summary, and returns it.

Prompt template:

Spin off a subagent to read through {repo path or URL} and return a summary of how {specific subsystem} works. The subagent should follow the actual code (not just the README), trace the entry points, and identify any non-obvious decisions. Return a 1-2 page summary that I can use as context to implement something similar in our codebase.

Failure mode: the subagent returns generic descriptions instead of specifics. Add explicit requirements: "include 3-5 short code excerpts that illustrate the key pattern", "name the files where each concept lives", "call out the design decisions a careful reader would not catch on first read".

Pattern 3: The Spec-to-Tests Translator

Use when: you have a written spec or PRD and want to start with the test suite before writing implementation.

Why this beats not using a subagent: turning prose into test cases is a focused task with a clear deliverable. The subagent does the translation in a clean context where it is not tempted to start implementing.

Prompt template:

Spin up a subagent to read the spec at {path} and translate it into a list of test cases. For each test case, return: the exact test name (in our project's naming convention), the inputs, the expected output, and a one-line description of what behavior it covers. Do not write the test code yet -- just produce the list. Group the tests by which acceptance criterion in the spec they cover.

Failure mode: the subagent invents requirements not in the spec. Anchor it: "do not infer requirements not stated in the spec; if the spec is ambiguous, list the ambiguity rather than picking an answer".

Pattern 4: The Refactor Proposer

Use when: you suspect a refactor would help but want to evaluate the proposal before committing to it.

Why this beats not using a subagent: asking the same Claude session that is mid-feature-work to "also propose a refactor" usually produces either a half-hearted suggestion or an over-eager one. A fresh subagent comes in with no investment in the current direction and proposes more honestly.

Prompt template:

Spin off a subagent to examine these files: {list}. Have it propose a refactor that would address {specific concern: duplicated logic / unclear responsibilities / fragile coupling / etc.}. The subagent should: identify the specific problems, propose the smallest refactor that addresses them, list the files that would need to change, and call out any risks. Do not perform the refactor -- just return the proposal. Format as a markdown report I can review before deciding.

Failure mode: the subagent proposes a refactor that touches half the codebase. Constrain it: "the proposal must touch fewer than 5 files", "if a clean refactor is not possible at this scope, return that finding".

Pattern 5: The PR Reviewer

Use when: you want a code review on a diff before opening the PR (or after, for a second pass).

Why this beats not using a subagent: the parent session that wrote the code remembers the design discussion and will rationalize the choices. A subagent that reads only the diff is the closest you can get to a fresh reviewer.

Prompt template:

Spin up a subagent to review the changes in {git diff or PR URL}. The subagent should: identify any bugs or edge cases the diff introduces, flag any places where the change is hard to understand, note any naming or convention violations against {our CLAUDE.md or AGENTS.md}, and identify any tests that should exist but do not. Return a structured review with severity (must-fix / should-fix / nit) for each finding.

Failure mode: the subagent praises everything. Add: "Be skeptical. Assume there is at least one issue worth raising. If after careful review there genuinely are none, say so explicitly."

This pattern works particularly well with Claude Code's /ultrareview slash command introduced with Opus 4.7. Use /ultrareview for the formal pre-PR pass and a custom-prompted subagent reviewer for ad-hoc deeper looks.

Pattern 6: The Migration Planner

Use when: you need to migrate a codebase from one library or framework to another and want a real plan before touching anything.

Why this beats not using a subagent: migration planning is exploration-heavy. The subagent surveys the surface area of the dependency, maps every call site, and returns a phased plan -- all in its own context, so the parent never absorbs the file-by-file scanning.

Prompt template:

Spin off a subagent to plan our migration from {library X} to {library Y}. The subagent should: catalog every file that uses {library X} (with line numbers), classify each usage by complexity (trivial rename / non-trivial transform / requires manual decision), produce a phased migration plan with concrete first steps, and identify any usages that require human input before they can be migrated. Return as a markdown plan I can hand to the team.

Failure mode: the subagent gives up on the survey halfway through. Constrain: "the survey must be exhaustive across the whole repo; if the search is too large, say so and propose how to scope it".

Pattern 7: The Bug Hunter

Use when: there is a reproducible bug and you want to investigate without polluting your current session with the trail of dead ends.

Why this beats not using a subagent: bug investigation is by definition exploratory and most of the exploration is wrong. Doing it in a subagent means the parent only sees the conclusion, not the 20 turns of "let me check this hypothesis" that came before.

Prompt template:

Spin up a subagent to investigate this bug: {repro steps + observed behavior + expected behavior}. The subagent should: form hypotheses, test each one against the code, narrow to the most likely cause, and return a finding with: the file and line where the bug lives, why it produces this behavior, the smallest change that would fix it, and the test that should be added to prevent regression. Do not apply the fix -- just return the analysis.

Failure mode: the subagent commits to the first plausible hypothesis. Anchor it: "consider at least three hypotheses before settling; if the evidence is genuinely insufficient to choose, return the top hypotheses with their evidence rather than picking arbitrarily".

Pattern 8: The Docs Generator

Use when: you have shipped a feature and need to write docs for it.

Why this beats not using a subagent: writing docs requires reading the changes carefully and synthesizing them. The subagent does that reading in its own context and returns the finished doc; your main session does not have to load all the changed files.

Prompt template:

Spin off a subagent to write user-facing docs for the changes in this PR / commit range / set of files: {pointer}. The subagent should: identify what user-visible behavior changed, write the doc as a single markdown page following the structure in {existing doc URL or path as a template}, include a "what you can do with this" section with concrete use cases, and add example code or commands where useful. Return the finished markdown.

Failure mode: the docs are written in a different voice than your existing docs. Always pass an existing doc as a style template; a subagent that has never seen your docs will produce generic prose.

Pattern 9: The Convention Detective

Use when: you joined a new codebase or your team is finally formalizing conventions and you want to extract the patterns that already exist.

Why this beats not using a subagent: extracting conventions requires reading a large amount of code looking for patterns. The subagent does that reading and returns the patterns; the parent never has to load the full survey.

Prompt template:

Spin up a subagent to read through {directory or file pattern} and identify the conventions this codebase uses. The subagent should: report on naming conventions (functions, files, variables), error handling patterns, testing patterns, import organization, and any non-obvious idioms it spots. For each, include 2-3 example file references. Return as a markdown list I can use as the basis for our CLAUDE.md or AGENTS.md.

Failure mode: the subagent reports the obvious and misses the subtle. Add: "include at least three non-obvious idioms that a new contributor would not catch by reading style guides".

This pattern is particularly valuable when consolidating multiple agent tools' configuration files; for the cross-tool comparison see Codex CLI vs Claude Code vs Cursor architecture.

Pattern 10: The Library Briefer

Use when: you need to use a library you have not used before, and you want a focused summary instead of reading the full docs.

Why this beats not using a subagent: the parent agent's training data reflects whatever the library looked like months ago. A subagent that fetches the live docs and returns a focused brief is more current and more relevant.

Prompt template:

Spin off a subagent to read the docs at {URL} for {library name}. The subagent should: produce a focused brief covering {specific concern: how to do X, the most idiomatic patterns, common pitfalls, etc.}. Include code examples for the patterns I will most likely need. Cite the docs URL section for each claim. Return as markdown I can paste into a working note.

Failure mode: the brief is too generic. Anchor it tightly to your actual concern: "I am specifically trying to do {very specific thing}. Optimize the brief for that concrete use case, not for a general overview".

This pattern compounds with the Cursor self-documentation feature -- the same architectural primitive (subagent fetches live docs) is becoming the cross-tool default for keeping AI tools current with the products they explain.

Combining the Patterns

The patterns work alone but compound when combined. Two combinations worth knowing:

  • Pattern 4 → Pattern 5. Refactor Proposer plans the change, you accept the plan, then PR Reviewer reviews the implemented diff. Two subagents, one human approval, no parent-context pollution.
  • Pattern 7 → Pattern 1. Bug Hunter finds the bug, you (or the parent) implement the fix, Verifier checks the fix actually addresses the cause. The Bug Hunter's analysis is part of the Verifier's input -- the subagents share artifacts even though they have separate contexts.

The general orchestration pattern: a parent session manages the flow and the human approvals, subagents do the bounded research/verification work, and the parent's context stays clean throughout. A real production day might run 3-8 subagent spawns across two or three of these patterns.

When Not to Use Subagents

Subagents are not free. They take time, they consume model budget, and they require you to write a clear delegation prompt. Skip them when:

  • The task is small enough to do inline (rename a function, fix a one-line bug).
  • The work depends heavily on context the subagent would not have (an in-progress design discussion, a half-finished refactor).
  • You are in a fast iteration loop where round-tripping to a subagent breaks flow.
  • The deliverable is not a written report -- subagents pass back text, not direct edits, so anything that needs to mutate files is usually better done in the parent.

The honest rule: if the answer to "would I want this done by a separate teammate so I do not have to context-switch?" is yes, it is a subagent task. If you would rather just do it yourself in 30 seconds, stay in the parent.

A Realistic First Week With These Patterns

Pick three patterns and try them on real work this week:

  • Day 1-2: The Verifier (Pattern 1) on every non-trivial chunk of agent work. The friction is low and the value is immediate.
  • Day 3-4: The PR Reviewer (Pattern 5) on at least two of your own PRs before opening them. Notice what it catches that you missed.
  • Day 5-7: Pick one of the Bug Hunter (7), Refactor Proposer (4), or Codebase Scout (2) and use it on real work that surfaces. Notice the difference in your main session's clarity at end of day.

By end of week one you will have a felt sense of which patterns fit your workflow. By end of week two, two or three of them will be habits. The point of subagents is not to use them on everything; it is to have them ready for the moments when they are the right tool.

For the broader Claude Code workflow stack -- routines, ultrareview, auto mode, effort levels -- our Claude Opus 4.7 best practices guide is the companion piece.

Frequently Asked Questions

What is a Claude Code subagent?

A subagent is a child Claude Code session spawned from your main session. It runs with its own clean context window, performs a delegated task, and returns only its final report to the parent. The intermediate tool output, exploration, and dead-end reasoning stay walled off in the subagent's context, so your main session never has to absorb the noise of finding the answer.

When should I use a subagent vs continuing in the main session?

Use a subagent when the work would generate significant noise (tool output, exploration, failed attempts) you do not want in your main context, when you want fresh judgment without prior bias, when the work is parallelizable, or when the work scope is clearly bounded with a well-defined deliverable. Stay in the main session when the work builds incrementally on conversation context the subagent would not have.

How do I spawn a subagent in Claude Code?

Tell Claude what to delegate and to whom. Plain language works: 'spin up a subagent to read the auth module and return a summary of how session validation works'. Claude Code interprets this as a subagent spawn, executes the delegated task in a fresh context, and returns the subagent's final report. You can also use explicit slash commands when the spawn pattern is part of a saved workflow.
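As a sketch of the saved-workflow route: Claude Code reads markdown files in `.claude/commands/` as custom slash commands, with `$ARGUMENTS` substituted from whatever follows the command name. The `/scout` name and prompt wording below are illustrative, adapted from Pattern 2:

```markdown
Spin off a subagent to read through this codebase and return a summary
of how $ARGUMENTS works. The subagent should follow the actual code
(not just the README), trace the entry points, name the files where
each concept lives, and return a 1-2 page summary with file references.
```

Saved as `.claude/commands/scout.md`, this runs as `/scout the session validation flow`.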

Can subagents themselves spawn more subagents?

Yes, though the practical depth is shallow. A subagent can delegate sub-tasks to its own children, but going more than two levels deep usually produces diminishing returns -- the cost of orchestration overtakes the benefit of context isolation. The most useful pattern is parent + one layer of subagents, with each subagent doing one well-scoped piece of work.
