
LangChain

LangChain orchestration for chains, agents, memory, and retrieval workflows

Quick Summary
You are an expert in LangChain orchestration for building composable LLM-powered applications.

## Key Points

- Use LCEL pipe syntax (`|`) for composing chains; it is more readable and supports streaming natively.
- Keep tools focused with clear docstrings; the LLM uses the description to decide when to call them.
- Use `RunnablePassthrough` to forward inputs through parallel chain branches.
- Set `verbose=True` on `AgentExecutor` during development to see the agent's reasoning.
- Use structured output parsers (Pydantic-based) for reliable data extraction.
- Pin LangChain package versions in production; the API surface changes frequently.
- Prefer `langchain-{provider}` packages over the monolithic `langchain` imports.
- Avoid importing from deprecated `langchain` paths; use `langchain_core` or the provider-specific packages instead.
- Handle agent tool errors so a single bad tool call does not crash the entire chain.
- Avoid overly large `chunk_size` values in text splitting; they dilute retrieval relevance.
- Always include `agent_scratchpad` in agent prompt templates; omitting it breaks the tool-use loop.
- Keep chains shallow; deeply nested chains are nearly impossible to debug.

## Quick Example

```bash
pip install langchain langchain-openai langchain-anthropic langchain-community
```

LangChain — LLM Integration

You are an expert in LangChain orchestration for building composable LLM-powered applications.

Overview

LangChain is a framework for developing applications powered by language models. It provides abstractions for chains, agents, memory, retrieval, and tool use, enabling developers to compose complex workflows from modular building blocks. LangChain supports multiple LLM providers and integrates with vector stores, document loaders, and external APIs.

Core Concepts

Installation and Setup

```bash
pip install langchain langchain-openai langchain-anthropic langchain-community
```

Chat Models

```python
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage, SystemMessage

# OpenAI
llm = ChatOpenAI(model="gpt-4o", temperature=0)

# Anthropic
llm = ChatAnthropic(model="claude-sonnet-4-20250514", temperature=0)

response = llm.invoke([
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is LangChain?"),
])
print(response.content)
```

Prompt Templates

```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a {role} who speaks {language}."),
    ("human", "{input}"),
])

chain = prompt | llm
result = chain.invoke({"role": "translator", "language": "French", "input": "Hello, world!"})
```

Output Parsers

```python
from langchain_core.output_parsers import JsonOutputParser
from pydantic import BaseModel, Field

class Movie(BaseModel):
    title: str = Field(description="The movie title")
    year: int = Field(description="Release year")
    genre: str = Field(description="Primary genre")

parser = JsonOutputParser(pydantic_object=Movie)

prompt = ChatPromptTemplate.from_messages([
    ("system", "Extract movie info from the user's message.\n{format_instructions}"),
    ("human", "{input}"),
])

chain = prompt.partial(format_instructions=parser.get_format_instructions()) | llm | parser

result = chain.invoke({"input": "The Matrix came out in 1999, it's a sci-fi film"})
# result is a dict: {"title": "The Matrix", "year": 1999, "genre": "sci-fi"}
```
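The parser injects format instructions into the prompt and then parses the model's JSON reply. A stdlib-only sketch of the parsing half (assuming the model returned bare, well-formed JSON; the real `JsonOutputParser` also strips markdown fences and validates against the Pydantic schema):

```python
import json

# Sketch of the parsing step: the model's text reply is parsed as JSON
# and checked against the expected keys. Illustration only, not
# LangChain internals.

def parse_movie(reply: str) -> dict:
    data = json.loads(reply)
    missing = {"title", "year", "genre"} - data.keys()
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return data

parsed = parse_movie('{"title": "The Matrix", "year": 1999, "genre": "sci-fi"}')
# parsed == {"title": "The Matrix", "year": 1999, "genre": "sci-fi"}
```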

LCEL (LangChain Expression Language)

```python
from langchain_core.runnables import RunnablePassthrough, RunnableLambda

# Pipe syntax composes runnables (assumes retriever, prompt, llm, and
# output_parser are defined as in the surrounding sections)
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | llm
    | output_parser
)

# Parallel execution (summary_chain and translation_chain are chains
# built elsewhere)
from langchain_core.runnables import RunnableParallel

chain = RunnableParallel(
    summary=summary_chain,
    translation=translation_chain,
)
result = chain.invoke({"text": "Some input text"})
```
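Conceptually, the dict step and `RunnableParallel` fan the same input into every branch and collect the results into a dict keyed by branch name. A plain-Python sketch of that behavior (no LangChain required; the real implementation also handles async, batching, and streaming):

```python
# Each branch receives the same input value; results are gathered into
# a dict keyed by branch name.

def run_parallel(branches, value):
    return {name: fn(value) for name, fn in branches.items()}

branches = {
    "context": lambda q: f"docs matching '{q}'",  # stand-in for a retriever
    "question": lambda q: q,                      # stand-in for RunnablePassthrough()
}

result = run_parallel(branches, "What is LCEL?")
# result == {"context": "docs matching 'What is LCEL?'", "question": "What is LCEL?"}
```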

Implementation Patterns

Tool-Using Agent

```python
from langchain_core.tools import tool
from langchain.agents import create_tool_calling_agent, AgentExecutor

@tool
def search_database(query: str) -> str:
    """Search the product database for items matching the query."""
    # Real implementation here
    return f"Found 3 results for '{query}'"

@tool
def calculate_price(item: str, quantity: int) -> str:
    """Calculate total price for an item and quantity."""
    prices = {"widget": 9.99, "gadget": 24.99}
    price = prices.get(item, 0) * quantity
    return f"Total: ${price:.2f}"

tools = [search_database, calculate_price]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a shopping assistant. Use tools to help customers."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(llm, tools, prompt)
# Bound the loop so an unresolvable task cannot run (and bill) forever
executor = AgentExecutor(agent=agent, tools=tools, verbose=True, max_iterations=5)

result = executor.invoke({"input": "How much for 5 widgets?"})
```
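Conceptually, the executor loop sends the scratchpad back to the model each turn and appends tool results to it until the model produces a final answer, which is why omitting `{agent_scratchpad}` from the prompt breaks tool use. A plain-Python caricature of that loop (the `fake_model` stub stands in for the LLM; everything here is illustrative, not LangChain internals):

```python
# The "model" here is a stub: it requests one tool call, then, once it
# sees a tool result in the scratchpad, returns a final answer.

def fake_model(messages):
    if not any(role == "tool" for role, _ in messages):
        return ("tool_call", ("calculate_price", {"item": "widget", "quantity": 5}))
    return ("final", "5 widgets cost $49.95")

def calculate_price(item, quantity):
    prices = {"widget": 9.99, "gadget": 24.99}
    return f"Total: ${prices.get(item, 0) * quantity:.2f}"

tools = {"calculate_price": calculate_price}

def run_agent(user_input, max_iterations=5):
    scratchpad = [("human", user_input)]
    for _ in range(max_iterations):           # always bound the loop
        kind, payload = fake_model(scratchpad)
        if kind == "final":
            return payload
        name, args = payload
        result = tools[name](**args)          # run the requested tool
        scratchpad.append(("tool", result))   # feed the result back in
    raise RuntimeError("agent did not finish within max_iterations")

answer = run_agent("How much for 5 widgets?")
# answer == "5 widgets cost $49.95"
```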

Retrieval-Augmented Generation (RAG)

```python
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

# Build vector store (documents is a list of strings loaded elsewhere)
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_texts(documents, embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

# RAG chain
prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer based on the context below.\n\nContext: {context}"),
    ("human", "{question}"),
])

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

answer = rag_chain.invoke("What is the return policy?")
```
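Under the hood, the retriever performs nearest-neighbor search over embedding vectors. A stdlib-only sketch of the scoring idea (the vectors below are made up for illustration; a real store uses model-produced embeddings and an ANN index like FAISS instead of a linear scan):

```python
import math

# Rank documents by cosine similarity of their embedding to the query
# embedding and keep the top k -- the essence of vector retrieval.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

docs = {                                  # toy, hand-made "embeddings"
    "returns policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.2],
    "refund windows": [0.8, 0.2, 0.1],
}
query_vec = [1.0, 0.1, 0.0]

top_k = sorted(docs, key=lambda d: cosine(docs[d], query_vec), reverse=True)[:2]
# top_k == ["returns policy", "refund windows"]
```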

Conversation Memory

```python
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

store = {}

def get_session_history(session_id: str):
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("placeholder", "{history}"),
    ("human", "{input}"),
])

chain = prompt | llm

chain_with_history = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="input",
    history_messages_key="history",
)

response = chain_with_history.invoke(
    {"input": "My name is Alice"},
    config={"configurable": {"session_id": "user-1"}},
)
```

Document Loading and Splitting

```python
from langchain_community.document_loaders import PyPDFLoader, TextLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

loader = PyPDFLoader("document.pdf")
docs = loader.load()

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200,
    separators=["\n\n", "\n", ". ", " ", ""],
)

chunks = splitter.split_documents(docs)
```
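The `chunk_overlap` parameter makes adjacent chunks share trailing context. A minimal sketch of the arithmetic (a naive fixed-width splitter for illustration, not LangChain's recursive separator-aware algorithm):

```python
# Each chunk starts chunk_size - chunk_overlap characters after the
# previous one, so neighboring chunks share chunk_overlap characters.

def naive_split(text, chunk_size=1000, chunk_overlap=200):
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - chunk_overlap, 1), step)]

text = "0123456789" * 250          # 2,500 characters
chunks = naive_split(text)
# 3 chunks, starting at offsets 0, 800, and 1600; each neighboring pair
# shares 200 characters of context.
```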

Best Practices

  • Use LCEL pipe syntax (|) for composing chains; it is more readable and supports streaming natively.
  • Keep tools focused with clear docstrings; the LLM uses the description to decide when to call them.
  • Use RunnablePassthrough to forward inputs through parallel chain branches.
  • Set verbose=True on AgentExecutor during development to see the agent's reasoning.
  • Use structured output parsers (Pydantic-based) for reliable data extraction.
  • Pin LangChain package versions in production; the API surface changes frequently.
  • Prefer langchain-{provider} packages over the monolithic langchain imports.

Core Philosophy

LangChain's value proposition is composability: it provides standardized interfaces for LLMs, retrievers, tools, memory, and output parsers so they can be snapped together like building blocks. The pipe operator (|) in LCEL is the physical manifestation of this philosophy -- each runnable transforms its input and passes the result to the next. When this composability works, it dramatically accelerates prototyping. When it does not, the abstraction layers become opaque obstacles to debugging.

Use LangChain's abstractions when they match your use case, and do not hesitate to drop to the provider SDK when they do not. LangChain is a framework, not a requirement. For a simple chat completion with streaming, calling the OpenAI or Anthropic SDK directly may be clearer and more maintainable than wrapping it in LangChain's chain abstraction. LangChain shines when you need to compose retrieval, memory, tool use, and output parsing into a pipeline -- the exact scenario where manual orchestration becomes tedious and error-prone.

The LangChain ecosystem moves fast and breaks things. Package paths, class names, and best practices shift between versions. Pinning dependency versions in production is not optional; it is a survival requirement. Prefer the provider-specific packages (langchain-openai, langchain-anthropic) over the monolithic langchain package, and import from langchain_core for stable base abstractions. When upgrading, read the migration guide before updating the version pin.
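One way to pin, sketched as a requirements file (the version numbers below are placeholders for illustration only; pin whatever versions you have actually tested against):

```text
# requirements.txt -- exact pins, one per package
# (placeholder versions; substitute your tested ones)
langchain-core==0.3.29
langchain-openai==0.2.14
langchain-anthropic==0.3.1
langchain-community==0.3.14
```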

Anti-Patterns

  • Importing from deprecated paths: Using from langchain.llms import OpenAI or from langchain.chat_models import ChatOpenAI instead of the current provider-specific packages (from langchain_openai import ChatOpenAI). Deprecated imports may still work but produce warnings and will eventually break.

  • Deeply nested chains that are impossible to debug: Building chains of chains of chains where a failure in an inner runnable produces an error message that does not indicate which step failed or what input caused the failure. Keep chains shallow and add logging at each step during development.

  • Using LangChain for simple API calls: Wrapping a single ChatOpenAI.invoke() call in a chain, prompt template, and output parser when a direct SDK call would be 5 lines of clear, dependency-free code. LangChain adds value for composition, not for simple calls.

  • No iteration limits on agents: Running an AgentExecutor without setting max_iterations, allowing the agent to loop indefinitely if it cannot resolve a task. This consumes tokens without bound and can produce enormous bills. Always set max_iterations and max_execution_time.

  • Assuming abstractions are free: Treating LangChain's retriever, memory, and chain abstractions as zero-cost wrappers. Each layer adds latency, memory overhead, and debugging complexity. Profile your chain end-to-end and remove abstractions that are not earning their keep.
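The "keep chains shallow and add logging" and "profile end-to-end" advice above can be combined in a tiny step-tapping helper. A dependency-free sketch (in a real chain you might wrap steps with `RunnableLambda`; the helper names here are made up for illustration):

```python
# Wrap each step so it records the input it received before running.
# During development this makes it obvious which step failed and what
# it was given -- the information deeply nested chains tend to hide.

def tap(name, fn, log):
    def wrapped(x):
        log.append((name, x))   # record what this step received
        return fn(x)
    return wrapped

log = []
upper = tap("upper", str.upper, log)
exclaim = tap("exclaim", lambda s: s + "!", log)

out = exclaim(upper("hi"))
# out == "HI!"; log == [("upper", "hi"), ("exclaim", "HI")]
```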

Common Pitfalls

  • Importing from deprecated langchain paths instead of langchain_core or provider-specific packages.
  • Not handling agent tool errors, causing the entire chain to crash on a single bad tool call.
  • Using overly large chunk_size in text splitting, which dilutes retrieval relevance.
  • Forgetting to pass agent_scratchpad in agent prompt templates, breaking the tool-use loop.
  • Creating chains that are too deeply nested, making debugging nearly impossible.
  • Not setting token limits or max iterations on agents, risking runaway loops and high costs.
  • Assuming LangChain abstractions are zero-cost; each layer adds latency and complexity.
