How to Use Development Workflow Templates (2026)

Building effective Claude Code projects requires structured workflows that use skills, tools, and agent patterns. This guide provides practical templates you can adapt for different development scenarios, whether you’re automating documentation with the pdf skill, implementing test-driven development with tdd, or managing complex project contexts with supermemory.

Why Workflow Templates Matter

Without a consistent structure, Claude Code projects tend to accumulate technical debt in the form of ad-hoc skill invocations, missing context files, and undocumented decisions. A workflow template is not a rigid script; it is a repeatable starting point that you adapt to your project's needs while ensuring nothing important gets skipped.

The templates in this guide fall into three categories:

| Category | Templates | Best For |
| --- | --- | --- |
| Foundation | Project initialization, memory management | All projects |
| Development | TDD, skill chaining, code review | Active feature work |
| Operations | Deployment pipeline, multi-agent coordination | Production systems |

Start with the foundation templates, add development templates as you write code, and layer in operations templates when you’re ready to ship.

Core Project Initialization Template

Every Claude Code project starts with a consistent initialization workflow. The following template establishes project structure, configures essential skills, and sets up the development environment:

Project initialization workflow
1. Create project directory structure
2. Initialize git repository with proper .gitignore
3. Install required Claude skills (frontend-design, pdf, tdd)
4. Configure skill-specific settings in _skills/ or config/
5. Set up supermemory context for project documentation

This template ensures all projects begin with the same baseline configuration. The pdf skill becomes particularly useful here for generating project requirement documents automatically from initial conversations.
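
Steps 1, 2, and 4 are easy to script so that every project starts identically. The following is a minimal sketch, assuming the directory convention described in the next section; the paths, placeholder file contents, and the `init_project` helper are illustrative rather than a fixed tool.

```python
# A minimal initialization sketch -- directory names and seed contents are
# illustrative; adapt them to your own project convention.
import subprocess
from pathlib import Path

def init_project(root: str) -> None:
    base = Path(root)
    for d in [".claude", "_skills", "src/ui", "src/api", "docs", "tests"]:
        (base / d).mkdir(parents=True, exist_ok=True)

    # Seed the project-level instruction file Claude reads automatically.
    claude_md = base / ".claude" / "CLAUDE.md"
    if not claude_md.exists():
        claude_md.write_text("# Project: TODO\n\n## Stack\n\n## Conventions\n")

    (base / ".gitignore").write_text("__pycache__/\n.env\n*.pdf\n")
    subprocess.run(["git", "init"], cwd=base, check=True)

init_project("my-project")
```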

Directory Structure Convention

A well-organized project directory makes skill configuration and context retrieval predictable:

```
my-project/
  .claude/
    CLAUDE.md              # Project-level Claude instructions
    frontend-context.md
    backend-context.md
  _skills/
    custom-skill.md        # Any project-specific skill definitions
  src/
    ui/
    api/
  docs/
  tests/
```

The .claude/CLAUDE.md file is especially important. Claude reads this automatically when you open a project, so put your coding conventions, architectural decisions, and skill preferences here rather than re-explaining them every session.

Sample CLAUDE.md Template

```markdown
# Project: [Name]

## Stack
- Language: Python 3.11
- Framework: FastAPI
- Database: PostgreSQL via SQLAlchemy
- Testing: pytest

## Conventions
- All functions must have docstrings
- API endpoints require integration tests
- Use type hints throughout

## Active Skills
- tdd: Generate test-first implementations
- pdf: Export API documentation
- supermemory: Track architectural decisions

## Key Context
[Brief description of what this project does and its current state]
```

This upfront investment in CLAUDE.md saves significant context-setting time across every subsequent session.

Skill Chaining Workflow

Complex tasks often require multiple skills working together. The skill chaining pattern orchestrates sequential skill execution where each skill’s output feeds into the next:

User Request → [frontend-design] → Design Tokens + Components
 ↓
 [pdf] → Component Documentation
 ↓
 [tdd] → Test Files + Implementation

The key to effective skill chaining is clear output expectations. Each skill should produce artifacts that the next skill can consume directly. For example, when frontend-design generates component specifications, it should output structured JSON or Markdown that tdd can parse to generate corresponding test files.

Designing Clean Handoffs Between Skills

The most common failure mode in skill chaining is ambiguous handoff formats. Skill A produces output, and Skill B doesn’t know how to parse it. Prevent this by specifying the expected output format explicitly when invoking each skill:

/frontend-design Generate a Button component spec. Output as JSON with keys:
 - componentName (string)
 - props (array of {name, type, required, defaultValue})
 - variants (array of variant names)
 - accessibility (object with role, aria attributes)

Then the next step in the chain has a predictable input:

/tdd Given this component spec [paste JSON], generate:
 1. A Jest test file covering all props and variants
 2. The minimal TypeScript implementation to pass those tests

Explicit format contracts between steps make chains reliable and repeatable.
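
One way to keep the contract honest is to validate the intermediate artifact before handing it to the next skill. The following is a minimal sketch, assuming the JSON keys requested above; the `validate_component_spec` helper is illustrative.

```python
# Lightweight contract check between chain steps -- adjust the required keys
# to whatever format your skills actually emit.
import json

REQUIRED_KEYS = {"componentName", "props", "variants", "accessibility"}

def validate_component_spec(raw: str) -> dict:
    spec = json.loads(raw)
    missing = REQUIRED_KEYS - spec.keys()
    if missing:
        raise ValueError(f"Spec is missing keys: {sorted(missing)}")
    for prop in spec["props"]:
        if not {"name", "type", "required"} <= prop.keys():
            raise ValueError(f"Malformed prop entry: {prop}")
    return spec

# Run this on the frontend-design output before passing it to /tdd.
```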

Parallel vs. Sequential Chains

Not all skill chains need to be sequential. When steps are independent, run them in parallel:

User Request
 → [tdd] → Backend tests + implementation
 → [frontend-design] → UI components
 ↓ (both complete)
 [pdf] → Full feature documentation

Parallelism reduces total wall-clock time significantly on larger features. Identify dependencies between steps and only enforce sequencing where the output of one step genuinely feeds into the next.
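
The fan-out/fan-in shape is easy to express with asyncio. In the sketch below, `run_skill` is a placeholder for however you invoke a headless Claude Code step in your environment; only the coordination logic is the point here.

```python
# Sketch of parallel independent steps followed by a dependent step.
import asyncio

async def run_skill(name: str, prompt: str) -> str:
    ...  # placeholder: invoke a headless Claude Code session and return its output
    return f"{name} output"

async def build_feature(request: str) -> str:
    # Independent steps run concurrently.
    backend, ui = await asyncio.gather(
        run_skill("tdd", f"Backend tests + implementation for: {request}"),
        run_skill("frontend-design", f"UI components for: {request}"),
    )
    # The documentation step depends on both, so it runs last.
    return await run_skill("pdf", f"Document this feature:\n{backend}\n{ui}")

asyncio.run(build_feature("user profile page"))
```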

Test-Driven Development Workflow with tdd Skill

The tdd skill transforms how you approach implementation. Rather than writing code then tests, you define behavior through tests first:

```python
# Step 1: Define expected behavior in a test file
from datetime import datetime

def test_user_authentication():
    """User should be authenticated via JWT token."""
    token = generate_token(user_id="123")
    claims = validate_token(token)
    assert claims["user_id"] == "123"
    assert claims["expires_at"] > datetime.now()
```

The tdd skill analyzes these specifications and generates the minimal implementation code needed to pass tests. This workflow particularly excels when combined with the supermemory skill, which maintains a persistent context of your test suite across sessions.
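
For the test above, the generated implementation might look something like the sketch below. This is illustrative only: a real system would use a proper JWT library such as PyJWT rather than hand-rolled encoding.

```python
# A minimal implementation sufficient to pass the test above -- not production
# JWT handling, just the shape the tdd skill would aim for.
import json, base64
from datetime import datetime, timedelta

def generate_token(user_id: str, ttl_minutes: int = 30) -> str:
    payload = {
        "user_id": user_id,
        "expires_at": (datetime.now() + timedelta(minutes=ttl_minutes)).isoformat(),
    }
    return base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()

def validate_token(token: str) -> dict:
    claims = json.loads(base64.urlsafe_b64decode(token.encode()).decode())
    claims["expires_at"] = datetime.fromisoformat(claims["expires_at"])
    return claims
```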

The Red-Green-Refactor Loop with Claude

The classic TDD loop maps naturally onto Claude Code sessions:

  1. Red: Describe the feature in plain language, ask /tdd to generate a failing test
  2. Green: Ask /tdd to generate the minimal implementation that passes that test
  3. Refactor: Ask Claude Code to review the implementation for code quality, then apply improvements while keeping tests green

```python
# Red: failing test generated by the tdd skill
def test_calculate_discount():
    """Premium users get 20% off, standard users get 10% off."""
    assert calculate_discount(price=100, tier="premium") == 80.0
    assert calculate_discount(price=100, tier="standard") == 90.0
    assert calculate_discount(price=100, tier="basic") == 100.0
```

```python
# Green: minimal implementation
def calculate_discount(price: float, tier: str) -> float:
    discounts = {"premium": 0.20, "standard": 0.10, "basic": 0.0}
    return price * (1 - discounts.get(tier, 0.0))
```

```python
# Refactor: Claude suggests adding validation
def calculate_discount(price: float, tier: str) -> float:
    if price < 0:
        raise ValueError(f"Price cannot be negative: {price}")
    discounts = {"premium": 0.20, "standard": 0.10, "basic": 0.0}
    if tier not in discounts:
        raise ValueError(f"Unknown tier: {tier}")
    return price * (1 - discounts[tier])
```

Each iteration stays focused on a single behavior, preventing the sprawl that often accompanies AI-assisted code generation.

Documentation Generation Workflow

Documentation often becomes outdated because it requires manual maintenance. The pdf skill combined with code analysis creates an automated documentation pipeline:

Code Changes → Skill Analysis → Content Generation → pdf Renderer → Documentation Artifact

This workflow runs as part of your CI/CD pipeline, ensuring documentation always reflects current code. The supermemory skill contributes by tracking which documentation sections need updates based on recent changes.

Structuring Docstrings for Automatic Export

To get high-quality documentation from the pdf skill, structure your docstrings consistently:

```python
def process_payment(
    amount: float,
    currency: str,
    customer_id: str,
    idempotency_key: str,
) -> PaymentResult:
    """
    Process a payment for a customer.

    Args:
        amount: Payment amount in the smallest currency unit (e.g., cents for USD)
        currency: ISO 4217 currency code (e.g., "USD", "EUR")
        customer_id: Internal customer identifier
        idempotency_key: Unique key to prevent duplicate charges on retry

    Returns:
        PaymentResult with status, transaction_id, and timestamp

    Raises:
        InsufficientFundsError: If the customer account has insufficient balance
        InvalidCurrencyError: If the currency code is not supported

    Examples:
        result = process_payment(1000, "USD", "cust_123", "order_456")
        print(result.transaction_id)  # "txn_789"
    """
```

The pdf skill can then extract this structured content and generate clean API reference documentation without manual intervention.
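
Under the hood, the extraction step amounts to walking a module and collecting its docstrings. The following is a minimal sketch using the standard library's inspect module; the module path in the usage comment is illustrative.

```python
# Collect docstrings from every function defined in a module -- a sketch of
# the content-gathering step that feeds the documentation renderer.
import importlib
import inspect

def collect_docstrings(module_name: str) -> dict[str, str]:
    module = importlib.import_module(module_name)
    docs = {}
    for name, obj in inspect.getmembers(module, inspect.isfunction):
        if obj.__module__ == module_name and obj.__doc__:
            docs[name] = inspect.getdoc(obj)
    return docs

# Example: dump every documented function in a (hypothetical) payments module.
# for name, doc in collect_docstrings("src.payments").items():
#     print(f"## {name}\n{doc}\n")
```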

Integrating Documentation into CI/CD

Add a documentation generation step to your GitHub Actions workflow:

```yaml
# .github/workflows/docs.yml
name: Generate Documentation

on:
  push:
    branches: [main]

jobs:
  docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Generate API docs
        run: |
          # Run Claude Code with pdf skill in headless mode
          claude-code --skill pdf --input src/ --output docs/api-reference.pdf

      - name: Commit updated docs
        run: |
          git config user.name "Claude Code Bot"
          git config user.email "claude-code-bot@example.com"  # any no-reply address works
          git add docs/
          git commit -m "Auto-update API documentation" || exit 0
          git push
```

This ensures documentation is always in sync with the latest merged code.

Multi-Agent Coordination Pattern

Large projects benefit from dividing work among specialized agents. This template coordinates multiple Claude Code instances:

```yaml
# agent-coordination.yaml
agents:
  - name: frontend
    skills: [frontend-design, canvas-design]
    scope: "src/ui/**/*"
    context_file: ".claude/frontend-context.md"
  - name: backend
    skills: [tdd, database]
    scope: "src/api/**/*"
    context_file: ".claude/backend-context.md"
  - name: docs
    skills: [pdf, memory]
    scope: "docs/**/*"
    context_file: ".claude/docs-context.md"
```

Each agent operates within defined boundaries, reporting progress to a central coordinator. The supermemory skill stores coordination state, enabling agents to resume interrupted work smoothly.

Defining Agent Boundaries

The most important aspect of multi-agent coordination is clearly defined scopes. Overlapping scopes cause conflicts where two agents modify the same files in incompatible ways. Use these principles:

  • Scope by directory, not by file type: `src/api/` rather than `**/*.py`
  • Never let two agents own the same directory (a quick overlap check is sketched below)
  • Shared interfaces (API contracts, database schemas) belong to a neutral "contract" agent or are locked files that require explicit coordinator approval to change
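
The overlap check is small enough to run as a pre-flight step before launching agents. This is a sketch, assuming scopes are directory prefixes as in agent-coordination.yaml; the `find_scope_conflicts` helper is illustrative.

```python
# Verify that no two agents own overlapping directories.
from pathlib import PurePosixPath

def find_scope_conflicts(scopes: dict[str, str]) -> list[tuple[str, str]]:
    conflicts = []
    names = list(scopes)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            pa, pb = PurePosixPath(scopes[a]), PurePosixPath(scopes[b])
            # A conflict exists if the scopes are equal or one contains the other.
            if pa == pb or pa in pb.parents or pb in pa.parents:
                conflicts.append((a, b))
    return conflicts

print(find_scope_conflicts({
    "frontend": "src/ui",
    "backend": "src/api",
    "docs": "docs",
}))  # [] -- no overlaps
```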

Coordinator Context File

The coordinator agent needs a high-level view of all sub-agent states:

```markdown
# Coordinator Context

## Current Sprint
Feature: User authentication flow

## Agent Status
- frontend: Implementing login form (src/ui/auth/). 60% complete
- backend: JWT endpoint tests passing, implementing refresh token. 40% complete
- docs: Waiting on backend endpoint specs before generating API docs

## Blockers
- frontend waiting on: POST /auth/login response schema from backend
- backend waiting on: final decision on token expiry policy (product decision)

## Integration Points
- Login form (frontend) → POST /auth/login (backend): schema TBD
- Token refresh (backend) → localStorage handling (frontend): agreed on httpOnly cookies
```

Update this file after each agent session. The supermemory skill can maintain and query this coordinator context automatically.

Memory Management Workflow

Effective context management prevents token limit issues while maintaining project awareness. The supermemory skill provides several memory patterns:

  • Session Memory: Stores conversation context for retrieval within the current session
  • Project Memory: Maintains project-wide knowledge, including architecture decisions and coding standards
  • Long-term Memory: Persists across projects for reusable patterns and solutions

Memory hierarchy in practice
1. Active Context (current conversation)
 ↓
2. Project Memory (architecture, standards, current tasks)
 ↓
3. Long-term Memory (reusable patterns, solved problems)

When starting new work, first query supermemory for relevant past solutions before building from scratch. This avoids duplicate work and maintains consistency across projects.

What to Store in Each Memory Layer

| Memory Layer | Store Here | Avoid Storing |
| --- | --- | --- |
| Session Memory | Current task state, intermediate outputs, open questions | Complete file contents (use file references instead) |
| Project Memory | Architecture decisions, API contracts, coding standards, known bugs | Implementation details that change frequently |
| Long-term Memory | Reusable patterns, solutions to recurring problems, skill configurations | Project-specific details that won't generalize |

A practical rule of thumb: if you would write it in a CLAUDE.md file, it belongs in project memory. If you would write it in a personal code notebook, it belongs in long-term memory.
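
The lookup order matters as much as the layering. The sketch below illustrates the fall-through from session to project to long-term memory; the dicts stand in for whatever store the supermemory skill actually uses, and the `LayeredMemory` class is purely illustrative.

```python
# Toy illustration of the layered lookup order: most specific layer wins.
from typing import Optional

class LayeredMemory:
    def __init__(self) -> None:
        self.session: dict[str, str] = {}     # current task state
        self.project: dict[str, str] = {}     # architecture decisions, standards
        self.long_term: dict[str, str] = {}   # reusable patterns across projects

    def recall(self, key: str) -> Optional[str]:
        for layer in (self.session, self.project, self.long_term):
            if key in layer:
                return layer[key]
        return None

memory = LayeredMemory()
memory.project["auth_strategy"] = "JWT with httpOnly refresh cookies"
print(memory.recall("auth_strategy"))
```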

Memory Query Patterns

Start every new session with a structured memory query rather than diving straight into work:

Before we start: query project memory for:
1. Current task status (what was I working on last session?)
2. Any open blockers or decisions
3. Relevant architectural constraints for today's work area

This 30-second habit saves significant time re-establishing context that Claude already has stored.

Code Review Workflow

Automated code review using Claude skills catches issues before human review:

Developer submits PR → [Claude Code: /review-skill] → Analysis Report
 ↓
 Issues Found → Assign to Developer
 ↓
 No Issues → Merge Approval

The review skill examines code against project standards, checks for common vulnerabilities, and verifies test coverage. Integrate this workflow through GitHub Actions or similar CI systems.

Structuring the Review Prompt

A generic “review this code” prompt produces inconsistent results. Use a structured review template:

Review the changes in this PR against these criteria:
Security:
- No hardcoded secrets or API keys
- SQL queries use parameterized inputs (no string interpolation)
- User inputs are validated before processing
Code quality:
- Functions are under 50 lines
- All public functions have docstrings
- No duplicated logic (DRY)
Test coverage:
- New functions have corresponding tests
- Edge cases (empty input, null values) are tested
- Tests are isolated (no shared state between tests)
Project standards (from .claude/CLAUDE.md):
[paste relevant sections]
Flag any violations with: file, line number, issue category, and suggested fix.

This structured prompt produces actionable, categorized feedback rather than vague suggestions.
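
Some of these criteria can be checked mechanically before the Claude review, which cuts noise in the analysis report. The following is a sketch of a local pre-check for two of the criteria above (hardcoded secrets and over-long functions); the regex, threshold, and `precheck` helper are illustrative.

```python
# Local pre-check: flag likely hardcoded secrets and functions over 50 lines.
import ast
import re
from pathlib import Path

SECRET_PATTERN = re.compile(
    r"(api[_-]?key|secret|password)\s*=\s*['\"][^'\"]+['\"]", re.I
)

def precheck(path: str, max_lines: int = 50) -> list[str]:
    issues = []
    for file in Path(path).rglob("*.py"):
        source = file.read_text()
        for i, line in enumerate(source.splitlines(), start=1):
            if SECRET_PATTERN.search(line):
                issues.append(f"{file}:{i} possible hardcoded secret")
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.FunctionDef):
                length = (node.end_lineno or node.lineno) - node.lineno + 1
                if length > max_lines:
                    issues.append(f"{file}:{node.lineno} {node.name} is {length} lines")
    return issues

print("\n".join(precheck("src")))
```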

Deployment Pipeline Template

Automating deployments requires careful skill orchestration:

Deployment workflow
1. tdd skill: Verify all tests pass
2. security skill: Scan for vulnerabilities
3. build skill: Compile and bundle application
4. deploy skill: Push to target environment
5. verify skill: Confirm deployment success
6. supermemory: Update deployment log and rollback procedures

Each step produces artifacts consumed by the next. If any step fails, the pipeline halts and supermemory records the failure context for debugging.
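
The halt-and-record behavior can be expressed as a small runner. In the sketch below, each step is a callable that consumes the accumulated artifacts, and the failure record stands in for what the supermemory skill would store; the step functions themselves are project-specific.

```python
# Sketch of a halt-on-failure pipeline runner.
from typing import Callable

def run_pipeline(steps: list[tuple[str, Callable[[dict], dict]]]) -> dict:
    context: dict = {}
    for name, step in steps:
        try:
            context[name] = step(context)   # each step consumes prior artifacts
        except Exception as exc:
            # Record the failure context for debugging, then stop.
            context["failure"] = {"step": name, "error": str(exc)}
            print(f"Pipeline halted at '{name}': {exc}")
            break
    return context

# Example wiring (step functions are hypothetical):
# run_pipeline([("tests", run_tests), ("security", scan), ("build", build),
#               ("deploy", deploy), ("verify", verify)])
```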

Rollback Context Template

When a deployment fails, having structured context in supermemory accelerates recovery:

```markdown
# Deployment Log Entry

Deploy: v2.4.1 → v2.4.2
- Date: [timestamp]
- Deployer: [agent or user]
- Environment: production

## Changes Included
- Fix: Payment retry logic (PR #234)
- Feature: Dark mode toggle (PR #231)

## Rollback Procedure
1. Run: `kubectl rollout undo deployment/api-server`
2. Verify: Check /health endpoint returns 200
3. Notify: Post to #incidents Slack channel

## Known Issues in This Release
- Migration 0024 is irreversible; do not roll back the database if >1h has elapsed
```

Store this in supermemory at deploy time. If something goes wrong, the rollback procedure is immediately accessible in the next Claude Code session without hunting through Slack or runbooks.

Custom Skill Development Pattern

When existing skills don’t meet requirements, build custom skills following this template:

custom-skill.md
```markdown
---
name: project-scaffolder
description: "Generates project structure from specifications"
---

## Input Format
Describe your project using this structure:
- Project name
- Tech stack (language, framework)
- Required features

## Output
Creates complete project structure with:
- Configuration files
- Basic directory layout
- Starter code files
```

The skill development workflow includes iterative testing using the tdd skill to verify skill behavior matches expectations.

Testing Custom Skills

Treat your custom skill definitions as code that needs tests. Write a test specification that describes expected skill behavior, then use the tdd skill to verify that your skill definition produces the right outputs:

```markdown
## Skill Test: project-scaffolder

### Test Case 1: Python FastAPI project
Input: "Create a Python FastAPI project called task-manager with REST endpoints and PostgreSQL"

Expected output must include:
- [ ] requirements.txt with fastapi, uvicorn, sqlalchemy, psycopg2
- [ ] main.py with FastAPI app initialization
- [ ] models/ directory with base model class
- [ ] routes/ directory with at least one example endpoint
- [ ] .env.example with DATABASE_URL variable

### Test Case 2: Minimal project (no database)
Input: "Create a Python script project called data-processor with no web framework"

Expected output must NOT include:
- [ ] Any web framework dependencies
- [ ] Database configuration files
```

Running these test cases manually and refining the skill definition until all cases pass gives you confidence before deploying the skill to your team.
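
Once the scaffolder's output lands in a directory, the checklist can also be automated with pytest. The sketch below covers Test Case 1; the `scaffold-output/` path is a hypothetical location for the generated project.

```python
# Automated checks for the project-scaffolder output (Test Case 1).
from pathlib import Path

OUTPUT = Path("scaffold-output/task-manager")

def test_fastapi_scaffold_has_required_files():
    for expected in ["requirements.txt", "main.py", ".env.example"]:
        assert (OUTPUT / expected).exists(), f"missing {expected}"

def test_requirements_include_core_dependencies():
    requirements = (OUTPUT / "requirements.txt").read_text().lower()
    for package in ["fastapi", "uvicorn", "sqlalchemy"]:
        assert package in requirements, f"missing dependency {package}"

def test_env_example_defines_database_url():
    assert "DATABASE_URL" in (OUTPUT / ".env.example").read_text()
```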

Choosing the Right Workflow

Select workflow templates based on project characteristics:

| Project Type | Recommended Templates |
| --- | --- |
| Single-task scripts | Core initialization + one specialized skill |
| Multi-file features | Add skill chaining for sequential transformations |
| Long-running projects | Implement memory management from the start |
| Team projects | Multi-agent coordination with defined boundaries |
| Maintenance projects | Prioritize documentation and review workflows |
| Production systems | Full deployment pipeline with rollback context |

Start with simpler workflows and add complexity as project needs demand it. The combination of pdf for documentation, tdd for implementation, and supermemory for context management provides a foundation for most development scenarios.

Workflow Evolution Path

A healthy project typically evolves through these stages:

  1. Solo exploration (days 1-3): Core initialization template only; focus on learning the domain
  2. Active development (weeks 1-4): Add TDD workflow and skill chaining; establish memory patterns
  3. Team expansion (month 2+): Introduce multi-agent coordination; formalize review workflows
  4. Production operations (ongoing): Full deployment pipeline; regular documentation generation

Trying to implement all workflows on day one creates unnecessary overhead. Let the workflow complexity grow with the project’s actual needs.

