# Claude Code for LangChain — Workflow Guide

## The Setup
You are building LLM-powered applications with LangChain, a framework for chaining AI model calls with tools, retrievers, and memory. Claude Code can generate LangChain chains and agents, but its training lags the rapidly evolving library: it produces outdated API patterns and mixes up LangChain.js with LangChain Python.
## What Claude Code Gets Wrong By Default
- **Uses deprecated chain classes.** Claude writes `new LLMChain({ llm, prompt })`, which is deprecated. Modern LangChain uses LCEL (LangChain Expression Language): `prompt.pipe(model).pipe(outputParser)`.
- **Mixes Python and JavaScript APIs.** Claude generates `from langchain import ...` Python imports in a TypeScript project. LangChain.js uses `import { ChatOpenAI } from '@langchain/openai'`, a completely different module path.
- **Uses the legacy `langchain` package.** Claude imports from `langchain`. LangChain.js has been split into scoped packages: use `@langchain/core`, `@langchain/openai`, `@langchain/anthropic`, etc. The monolithic package is deprecated.
- **Creates manual prompt templating.** Claude concatenates strings for prompts. LangChain provides `ChatPromptTemplate.fromMessages()`, which handles variable substitution, message roles, and type safety.
## The CLAUDE.md Configuration
```markdown
# LangChain Application

## Framework
- LangChain: @langchain/core + provider packages
- Model: @langchain/anthropic (Claude) or @langchain/openai
- Expression: LCEL (pipe-based chain composition)
- Vector store: depends on project (@langchain/pinecone, etc.)

## LangChain Rules
- Use LCEL: prompt.pipe(model).pipe(parser)
- Import from @langchain/core, NOT from 'langchain'
- Models: new ChatAnthropic({ model: 'claude-sonnet-4-20250514' })
- Prompts: ChatPromptTemplate.fromMessages([...])
- Output: StringOutputParser, JsonOutputParser
- Tools: tool() helper or DynamicStructuredTool for function calling
- Retrieval: map { context: retriever + formatDocs, question } piped into prompt and model
- Memory: use RunnableWithMessageHistory for chat memory

## Conventions
- Chains in src/chains/ directory
- Tools in src/tools/ directory
- Prompts in src/prompts/ directory
- Use LCEL pipe syntax, not legacy Chain classes
- Streaming: .stream() for real-time output
- Error handling: .withFallbacks() for model fallback chains
- Never import from 'langchain' directly
```
## Workflow Example
You want to create a RAG (Retrieval-Augmented Generation) chain. Prompt Claude Code:
“Create a LangChain RAG chain that searches a Pinecone vector store for relevant documents, formats them as context, and generates an answer using Claude. Use LCEL pipe syntax and support streaming responses.”
Claude Code should build the chain with LCEL: a runnable map that sends the question to the retriever (through a `formatDocs` helper) to fill `{context}` and passes the question through unchanged for `{question}`, piped into a `ChatPromptTemplate` (system message with `{context}`, user message with `{question}`), a `ChatAnthropic` model, and a `StringOutputParser`, with `.stream()` used for the response. Note that a bare `retriever.pipe(prompt)` would lose the question; the map step carries both values forward.
## Common Pitfalls
- **Importing from the wrong package scope.** Claude uses `import { ChatPromptTemplate } from 'langchain/prompts'`. The correct import is `import { ChatPromptTemplate } from '@langchain/core/prompts'`. The scoped packages are required since LangChain's package split.
- **Not handling streaming correctly.** Claude collects the entire response with `.invoke()`. For chat applications, use `.stream()`, which returns an async iterable of chunks. Each chunk is a partial response that can be sent to the client immediately.
- **Memory management across requests.** Claude creates a new memory instance per request, losing conversation history. Use `RunnableWithMessageHistory` with a session ID-based message store (Redis, a database) to persist chat history across requests.