Passing Context Between Claude Code Subagents (2026)
Claude Code enables powerful multi-agent architectures through subagents. For an introduction to subagent coordination, see multi-agent orchestration with Claude subagents. When building complex workflows, you need reliable ways to pass context, share state, and coordinate results between subagents. This guide covers practical patterns for context passing that work in real production workflows.
Understanding Subagent Context Architecture
Claude Code subagents operate as isolated execution units within a parent session. Each subagent receives its own context window, which means data generated in one subagent does not automatically propagate to another. This isolation provides safety and predictability, but requires explicit patterns for sharing information.
The parent agent serves as the coordinator. It holds the master context and decides which pieces of information to pass to each subagent at invocation time. This design gives you fine-grained control over what each subagent sees and can act upon.
Pattern 1: Explicit Context Injection at Invocation
The most straightforward approach involves passing context directly when invoking a subagent. You construct the prompt with all necessary background information included.
```
/subagent:code-review Review this PR focusing on security issues. The codebase uses Python Django and the PR modifies authentication logic in accounts/views.py
```
This pattern works well when the context is relatively small and static. For larger contexts, consider summarizing or extracting only the relevant portions before passing to the subagent.
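One way to sketch that extraction step is a small helper that selects only the relevant keys before building the subagent prompt. The `build_review_prompt` helper and the context keys here are hypothetical, purely for illustration:

```python
# Hypothetical helper: pull only the fields a subagent needs out of a
# larger context dict before injecting them into its prompt.
def build_review_prompt(full_context: dict, relevant_keys: list[str]) -> str:
    selected = {k: full_context[k] for k in relevant_keys if k in full_context}
    background = "\n".join(f"- {k}: {v}" for k, v in selected.items())
    return (
        "Review this PR focusing on security issues.\n"
        f"Background:\n{background}"
    )

prompt = build_review_prompt(
    {"framework": "Django", "changed_file": "accounts/views.py", "ci_logs": "..."},
    relevant_keys=["framework", "changed_file"],
)
```

The noisy `ci_logs` field never reaches the subagent, keeping its context focused.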
Pattern 2: Shared File-Based State
When subagents need to work with larger datasets or maintain state across multiple turns, file-based storage provides a reliable mechanism. The parent agent writes context to a shared file, then instructs subagents to read from it.
```python
import json

# Parent agent creates shared context
context_data = {
    "task_id": "pr_review_042",
    "priority": "high",
    "files_under_review": ["auth.py", "tokens.py", "permissions.py"],
    "previous_review_comments": []
}

# Write to shared file
with open(".claude/context/review_task.json", "w") as f:
    json.dump(context_data, f)
```
Subagents then read this file at startup:
```
/subagent:security-scan Read .claude/context/review_task.json and perform security analysis on the listed files
```
This pattern integrates well with skills like the tdd skill, which might need access to test specifications or requirements documents across multiple subagent invocations.
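On the reading side, a defensive load of the shared file might look like the sketch below. The fallback-to-empty behavior is an assumption about how you want missing context handled, not a Claude Code built-in:

```python
import json
from pathlib import Path

# Parent side: write the shared context file (as in the pattern above)
path = Path(".claude/context/review_task.json")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps({"task_id": "pr_review_042",
                            "files_under_review": ["auth.py"]}))

# Subagent side: read the shared context, tolerating a missing file
try:
    task = json.loads(path.read_text())
except FileNotFoundError:
    task = {}  # fall back to empty context rather than crashing

files = task.get("files_under_review", [])
```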
Pattern 3: Structured Result Passing
For workflows where one subagent’s output becomes another subagent’s input, use structured result objects. Define a clear schema for what each subagent should return, making downstream processing reliable.
```
/subagent:data-extractor Extract all financial metrics from Q4-report.pdf and return as JSON with keys: revenue, expenses, net_income, yoy_change
```
The parent agent then parses this output and passes it to the next subagent:
```
/subagent:financial-analysis The extracted metrics are: {extracted_json}. Create a visualization comparing these to industry benchmarks
This pattern works particularly well with the pdf skill for extraction workflows and the xlsx skill when generating reports from extracted data.
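The parse-and-forward step in the parent agent can be sketched as follows. The raw JSON string stands in for whatever the extractor subagent actually returned:

```python
import json

# Hypothetical raw output from the data-extractor subagent
raw_output = (
    '{"revenue": 1200000, "expenses": 900000, '
    '"net_income": 300000, "yoy_change": 0.08}'
)

# Parse into a structured object so downstream code can validate it
metrics = json.loads(raw_output)

# Inject the parsed metrics into the next subagent's prompt
next_prompt = (
    f"The extracted metrics are: {json.dumps(metrics)}. "
    "Create a visualization comparing these to industry benchmarks"
)
```

Parsing before forwarding (rather than passing the raw string through) lets the parent catch malformed output at the boundary between subagents.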
Pattern 4: Context Summarization for Long Workflows
For extended multi-step workflows, context can grow beyond manageable limits. Periodic summarization keeps things efficient. The parent agent periodically asks a subagent to summarize the current state:
```
/subagent:summarizer Review the current project state including all subagent results so far and provide a concise summary of: 1) completed tasks, 2) pending items, 3) blockers
```
Store this summary in a dedicated context file that gets passed to all subsequent subagents. This pattern prevents context window exhaustion in long-running workflows.
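Persisting that summary might look like this minimal sketch; the file path and summary keys are illustrative, not a fixed Claude Code convention:

```python
import json
from pathlib import Path

# Rolling summary produced by the summarizer subagent (illustrative)
summary = {
    "completed_tasks": ["extraction", "security-scan"],
    "pending_items": ["report generation"],
    "blockers": [],
}

# Persist it where every subsequent subagent can find it
summary_path = Path(".claude/context/summary.json")
summary_path.parent.mkdir(parents=True, exist_ok=True)
summary_path.write_text(json.dumps(summary, indent=2))
```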
Pattern 5: Event-Driven Coordination
For more complex architectures, consider an event-based approach where subagents publish results to named channels, and other subagents subscribe to those channels.
```yaml
# In skill definition or workflow config
channels:
  - name: code_analysis
    subscribers: [security-review, performance-check]
  - name: test_results
    subscribers: [report-generator]
```
The frontend-design skill might publish component specifications to a channel that the pdf skill then consumes for documentation generation.
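A minimal in-process sketch of the channel model above follows. Claude Code has no built-in channel bus, so `ChannelBus` is a hypothetical illustration; a real setup would back the inboxes with files or a message queue:

```python
from collections import defaultdict

class ChannelBus:
    """Toy publish/subscribe bus mapping channels to subscriber inboxes."""

    def __init__(self, channels: dict[str, list[str]]):
        self.channels = channels          # channel name -> subscriber names
        self.inboxes = defaultdict(list)  # subscriber name -> pending messages

    def publish(self, channel: str, message: dict):
        # Deliver the message to every subscriber of this channel
        for subscriber in self.channels.get(channel, []):
            self.inboxes[subscriber].append(message)

bus = ChannelBus({"code_analysis": ["security-review", "performance-check"]})
bus.publish("code_analysis", {"finding": "unsafe eval in utils.py"})
```

Each subscriber then drains its own inbox on its next invocation, decoupling producers from consumers.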
Best Practices for Context Passing
Keep context payloads focused. Include only what each subagent needs rather than dumping entire conversation histories. This improves both reliability and performance.
Use consistent naming conventions for shared files and channels. A clear structure like .claude/context/{workflow_name}/{agent_name}.json makes debugging much easier.
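A small helper that encodes this convention keeps path construction in one place; `context_path` is a hypothetical name for illustration:

```python
from pathlib import Path

def context_path(workflow_name: str, agent_name: str) -> Path:
    # Mirrors the .claude/context/{workflow_name}/{agent_name}.json convention
    return Path(".claude/context") / workflow_name / f"{agent_name}.json"

p = context_path("pr_review", "security-scan")
```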
Log context passing points in your workflow. When a subagent fails, knowing exactly what context it received helps identify the problem quickly.
Handle partial failures gracefully. If one subagent fails, ensure downstream subagents can still operate with the context they received up to that point.
Combining Patterns in Production
Real-world workflows often combine multiple patterns. A typical production pipeline might use:
- File-based state for persistent configuration
- Explicit injection for immediate task context
- Structured results for inter-agent data flow
- Periodic summarization for long workflows
For example, a documentation generation pipeline might invoke the pdf skill to extract content, pass that to a writing subagent, then use the xlsx skill to generate supporting data tables, and finally invoke the frontend-design skill for visual components.
Troubleshooting Context Issues
If subagents appear to miss context, verify the parent agent actually received and stored the information first. Check file permissions if using file-based storage. Review whether the context exceeded token limits, which can cause truncation.
When results seem inconsistent across subagents, ensure they’re reading from the same version of shared files. Timestamp your context files to detect stale data issues.
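One way to sketch that staleness check is shown below. The `stored_at` field and the fifteen-minute freshness budget are assumptions for illustration:

```python
import json
import time
from pathlib import Path

MAX_AGE_SECONDS = 15 * 60  # hypothetical freshness budget

def load_if_fresh(path: Path, max_age: float = MAX_AGE_SECONDS):
    """Return parsed context, or None if it is older than max_age."""
    data = json.loads(path.read_text())
    if time.time() - data.get("stored_at", 0) > max_age:
        return None  # stale: force the parent to regenerate context
    return data

path = Path(".claude/context/task.json")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps({"stored_at": time.time(), "task_id": "t1"}))
task = load_if_fresh(path)
```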
Implementing a Context Registry for Complex Pipelines
When multi-agent workflows grow beyond 3-4 subagents, ad-hoc file-based context passing becomes difficult to reason about. A lightweight context registry, a structured JSON store with a consistent read/write interface, brings order to complex pipelines.
The registry pattern gives each subagent a named context slot and enforces a consistent structure:
```python
import json
from datetime import datetime, timezone
from pathlib import Path


class ContextRegistry:
    """Simple file-backed context registry for multi-agent workflows."""

    def __init__(self, registry_path: str = ".claude/context/registry.json"):
        self.registry_path = Path(registry_path)
        self.registry_path.parent.mkdir(parents=True, exist_ok=True)
        if not self.registry_path.exists():
            self._write({})

    def _read(self) -> dict:
        with open(self.registry_path) as f:
            return json.load(f)

    def _write(self, data: dict):
        with open(self.registry_path, "w") as f:
            json.dump(data, f, indent=2)

    def store(self, agent_name: str, key: str, value, ttl_minutes: int = 60):
        """Store a value in the registry with an expiry (ttl_minutes=0 means no expiry)."""
        registry = self._read()
        if agent_name not in registry:
            registry[agent_name] = {}
        registry[agent_name][key] = {
            "value": value,
            "stored_at": datetime.now(timezone.utc).isoformat(),
            "expires_at": None if ttl_minutes == 0 else (
                datetime.now(timezone.utc).timestamp() + (ttl_minutes * 60)
            ),
        }
        self._write(registry)

    def retrieve(self, agent_name: str, key: str):
        """Retrieve a value, returning None if missing or expired."""
        registry = self._read()
        entry = registry.get(agent_name, {}).get(key)
        if not entry:
            return None
        # Check expiry
        if entry.get("expires_at"):
            if datetime.now(timezone.utc).timestamp() > entry["expires_at"]:
                return None  # Expired
        return entry["value"]

    def list_context(self, agent_name: str) -> dict:
        """List all non-expired context for an agent."""
        registry = self._read()
        now = datetime.now(timezone.utc).timestamp()
        return {
            k: v["value"]
            for k, v in registry.get(agent_name, {}).items()
            if not v.get("expires_at") or v["expires_at"] > now
        }
```

```python
# Usage in parent agent orchestration
registry = ContextRegistry()

# Store output from the first subagent
registry.store("data-extractor", "requirements", extracted_requirements, ttl_minutes=120)
registry.store("data-extractor", "schema_version", "2.1")

# Second subagent reads what it needs
context = registry.list_context("data-extractor")
print(f"Available context: {list(context.keys())}")
```
The TTL (time-to-live) on stored context prevents stale data from persisting across workflow runs. Set shorter TTLs for intermediate results that shouldn’t be reused, and longer TTLs (or 0 for no expiry) for stable configuration data.
The registry also serves as an audit log. The stored_at timestamps let you reconstruct the exact sequence of context-passing events when debugging a failed pipeline run.
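Reconstructing that timeline from a registry snapshot can be sketched as below; the snapshot contents are illustrative, but the JSON shape matches what the ContextRegistry writes:

```python
# Snapshot in the same shape the ContextRegistry persists
registry_snapshot = {
    "data-extractor": {
        "requirements": {"value": ["parse pdf"],
                         "stored_at": "2026-01-10T09:00:00+00:00"},
    },
    "financial-analysis": {
        "chart_spec": {"value": {"type": "bar"},
                       "stored_at": "2026-01-10T09:05:00+00:00"},
    },
}

# ISO-8601 UTC timestamps sort lexicographically, so a plain sort
# yields the chronological sequence of context-passing events.
events = sorted(
    (entry["stored_at"], agent, key)
    for agent, entries in registry_snapshot.items()
    for key, entry in entries.items()
)
```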
Validating Context Integrity Before Subagent Invocation
Context corruption is subtle: a subagent receiving malformed JSON or truncated data will produce incorrect results without obvious errors. Adding validation before passing context to a subagent catches these issues early.
A simple schema validation pattern using Python’s typing module:
```python
from typing import TypedDict, Optional


class ReviewTaskContext(TypedDict):
    task_id: str
    priority: str  # "high" | "medium" | "low"
    files_under_review: list[str]
    previous_review_comments: list[dict]
    reviewer: Optional[str]


def validate_review_context(data: dict) -> tuple[bool, list[str]]:
    """
    Validate that a context dict matches the ReviewTaskContext schema.
    Returns (is_valid, list_of_errors).
    """
    errors = []
    required_fields = ["task_id", "priority", "files_under_review",
                       "previous_review_comments"]
    for field in required_fields:
        if field not in data:
            errors.append(f"Missing required field: {field}")
    if "priority" in data and data["priority"] not in ("high", "medium", "low"):
        errors.append(f"Invalid priority value: {data['priority']}")
    if "files_under_review" in data:
        if not isinstance(data["files_under_review"], list):
            errors.append("files_under_review must be a list")
        elif len(data["files_under_review"]) == 0:
            errors.append("files_under_review cannot be empty")
    return len(errors) == 0, errors
```

```python
# Before invoking the subagent
context_data = load_context_from_registry()
is_valid, validation_errors = validate_review_context(context_data)
if not is_valid:
    print(f"Context validation failed: {validation_errors}")
    # Handle gracefully: use defaults, skip the subagent, or raise
else:
    invoke_subagent("code-review", context_data)
```
This validation layer prevents the cascading failures that occur when a subagent at step 3 fails because of malformed context set at step 1: a failure mode that is frustrating to diagnose without explicit validation.
Related Reading
- Advanced Usage Guide. Power user techniques and advanced patterns
- How to Pass State Between AI Agents. Framework-agnostic patterns for state sharing applicable to Claude, LangChain, CrewAI, and other agent runtimes.
- Multi-Agent Orchestration with Claude Subagents Guide. Understand the subagent coordination model before implementing context-passing patterns.
- Claude Code Multi-Agent Subagent Communication Guide. Implement clear communication protocols alongside context passing for reliable workflows.
- Claude SuperMemory Skill: Persistent Context Guide. Use the supermemory skill to persist and share context across subagent sessions.
- Claude Skills Advanced Hub. Explore advanced context management and subagent coordination patterns.