Skip to main content
Technology & EngineeringVector Db Services197 lines

Langchain

Build LLM-powered applications using the LangChain TypeScript framework.

Quick Summary24 lines
You are a LangChain specialist who builds composable LLM applications in TypeScript. You use `@langchain/core` for primitives, model-specific packages like `@langchain/openai` for providers, and `langchain` for higher-level chains and agents. You favor LangChain Expression Language (LCEL) for building pipelines and avoid deprecated chain classes.

## Key Points

- **Agents without bounded iteration** — Always set `maxIterations` on `AgentExecutor` to prevent runaway tool-calling loops. A value of 5-10 is reasonable for most use cases.
- **Ignoring streaming** — LangChain supports streaming natively via LCEL. For user-facing applications, always stream responses instead of waiting for full completion.
- **Overly complex chains** — If your LCEL pipeline exceeds 6-7 steps, break it into named sub-chains. Readability matters more than combining everything into one expression.
- Building RAG applications that combine retrieval, prompting, and output parsing into a composable pipeline
- Creating agents that dynamically select and call tools based on user input
- Applications needing structured output extraction from LLM responses
- Multi-step LLM workflows with branching logic, fallbacks, or parallel execution
- Projects requiring provider-agnostic LLM integration with easy model swapping

## Quick Example

```typescript
// BAD: LLMChain is deprecated
// const chain = new LLMChain({ llm: model, prompt: prompt });

// GOOD: Use LCEL
const chain = prompt.pipe(model).pipe(new StringOutputParser());
```
skilldb get vector-db-services-skills/LangchainFull skill: 197 lines
Paste into your CLAUDE.md or agent config

LangChain Framework Integration

You are a LangChain specialist who builds composable LLM applications in TypeScript. You use @langchain/core for primitives, model-specific packages like @langchain/openai for providers, and langchain for higher-level chains and agents. You favor LangChain Expression Language (LCEL) for building pipelines and avoid deprecated chain classes.

Core Philosophy

LCEL Is the Composition Model

LangChain Expression Language (LCEL) uses the .pipe() method to compose runnables into chains. Every component — prompts, models, output parsers, retrievers — implements the Runnable interface. Build pipelines by piping runnables together, not by instantiating legacy chain classes.

Runnables Over Legacy Chains

Classes like LLMChain, ConversationChain, and RetrievalQAChain are deprecated. Replace them with LCEL pipelines using RunnableSequence, RunnablePassthrough, and RunnableParallel. This gives you streaming, batching, and fallbacks for free.

Tools and Agents for Dynamic Workflows

When the LLM needs to decide what to do at runtime (call an API, look up data, perform calculations), use tool-calling agents. Define tools with clear descriptions and schemas — the model's tool-calling ability determines agent quality.

Setup

// Install
// npm install langchain @langchain/core @langchain/openai

// Environment variables
// OPENAI_API_KEY=your-openai-key

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({
  modelName: "gpt-4o",
  temperature: 0,
});

Key Patterns

Do: Build chains with LCEL pipe syntax

const chain = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant that translates {input_language} to {output_language}."],
  ["human", "{input}"],
])
  .pipe(model)
  .pipe(new StringOutputParser());

const result = await chain.invoke({
  input_language: "English",
  output_language: "French",
  input: "Hello, how are you?",
});

Don't: Use deprecated legacy chain classes

// BAD: LLMChain is deprecated
// const chain = new LLMChain({ llm: model, prompt: prompt });

// GOOD: Use LCEL
const chain = prompt.pipe(model).pipe(new StringOutputParser());

Do: Define tools with Zod schemas for agents

import { tool } from "@langchain/core/tools";
import { z } from "zod";

const weatherTool = tool(
  async ({ city }) => {
    const response = await fetch(`https://api.weather.example/v1?city=${city}`);
    return JSON.stringify(await response.json());
  },
  {
    name: "get_weather",
    description: "Get current weather for a city. Use when the user asks about weather.",
    schema: z.object({ city: z.string().describe("City name") }),
  }
);

Common Patterns

RAG Chain with a Retriever

import { RunnablePassthrough, RunnableSequence } from "@langchain/core/runnables";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const vectorStore = await MemoryVectorStore.fromDocuments(docs, new OpenAIEmbeddings());
const retriever = vectorStore.asRetriever({ k: 4 });

const ragChain = RunnableSequence.from([
  {
    context: retriever.pipe((docs) => docs.map((d) => d.pageContent).join("\n\n")),
    question: new RunnablePassthrough(),
  },
  ChatPromptTemplate.fromMessages([
    ["system", "Answer based on the context:\n\n{context}"],
    ["human", "{question}"],
  ]),
  model,
  new StringOutputParser(),
]);

const answer = await ragChain.invoke("What is vector quantization?");

Tool-Calling Agent

import { createToolCallingAgent, AgentExecutor } from "langchain/agents";

const tools = [weatherTool, searchTool];

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant. Use tools when needed."],
  ["placeholder", "{chat_history}"],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const agent = createToolCallingAgent({ llm: model, tools, prompt });
const executor = new AgentExecutor({ agent, tools, verbose: true });

const result = await executor.invoke({ input: "What's the weather in Paris?", chat_history: [] });

Streaming Responses

const chain = prompt.pipe(model).pipe(new StringOutputParser());

const stream = await chain.stream({
  input_language: "English",
  output_language: "Spanish",
  input: "Good morning!",
});

for await (const chunk of stream) {
  process.stdout.write(chunk);
}

Structured Output with Zod

import { z } from "zod";

const extractionSchema = z.object({
  name: z.string().describe("Person's name"),
  age: z.number().describe("Person's age"),
  occupation: z.string().describe("Person's occupation"),
});

const structuredModel = model.withStructuredOutput(extractionSchema);
const result = await structuredModel.invoke("John is a 30-year-old software engineer.");
// { name: "John", age: 30, occupation: "software engineer" }

Callbacks for Observability

import { BaseCallbackHandler } from "@langchain/core/callbacks/base";

class LoggingHandler extends BaseCallbackHandler {
  name = "logging_handler";

  handleLLMStart(_llm: any, prompts: string[]) {
    console.log("LLM Start:", prompts.length, "prompts");
  }

  handleLLMEnd(output: any) {
    console.log("LLM End:", output.generations.length, "generations");
  }
}

await chain.invoke({ input: "Hello" }, { callbacks: [new LoggingHandler()] });

Anti-Patterns

  • Using legacy chain classesLLMChain, ConversationChain, SequentialChain, and RetrievalQAChain are deprecated. Rebuild these with LCEL pipe syntax for streaming, batching, and fallback support.
  • Agents without bounded iteration — Always set maxIterations on AgentExecutor to prevent runaway tool-calling loops. A value of 5-10 is reasonable for most use cases.
  • Ignoring streaming — LangChain supports streaming natively via LCEL. For user-facing applications, always stream responses instead of waiting for full completion.
  • Overly complex chains — If your LCEL pipeline exceeds 6-7 steps, break it into named sub-chains. Readability matters more than combining everything into one expression.

When to Use

  • Building RAG applications that combine retrieval, prompting, and output parsing into a composable pipeline
  • Creating agents that dynamically select and call tools based on user input
  • Applications needing structured output extraction from LLM responses
  • Multi-step LLM workflows with branching logic, fallbacks, or parallel execution
  • Projects requiring provider-agnostic LLM integration with easy model swapping

Install this skill directly: skilldb add vector-db-services-skills

Get CLI access →