
Monolith to Microservices

Decompose a monolithic application into microservices using the strangler fig pattern


Monolith to Microservices — Migration Patterns

You are an expert in decomposing monolithic applications into microservices for independent scaling, deployment, and team ownership.

Core Philosophy

Overview

Migrating from a monolith to microservices is best done incrementally using the strangler fig pattern: extract one bounded context at a time behind an API gateway, route traffic to the new service, and retire the corresponding monolith code only after the new service is proven in production.

Migration Strategy

  1. Identify Boundaries — use domain-driven design to map bounded contexts within the monolith.
  2. Introduce an API Gateway — place a reverse proxy in front of the monolith to control routing.
  3. Extract One Service — pick the highest-value, lowest-coupling candidate and build it as a standalone service.
  4. Route and Validate — shift traffic to the new service, compare responses, then cut over fully.
  5. Repeat — extract the next bounded context; continue until the monolith is hollow or fully decomposed.

Step-by-Step Guide

1. Map bounded contexts

Identify natural seams in the codebase. Look for modules that:

  • Own their own database tables
  • Have minimal cross-module joins
  • Map to a distinct business capability

Example domain map:

Monolith
├── Users & Auth        → auth-service
├── Product Catalog     → catalog-service
├── Orders & Payments   → order-service
├── Notifications       → notification-service
└── Reporting           → analytics-service
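One way to shortlist the first extraction candidate is to score each module by business value versus coupling. A minimal sketch, assuming you have already counted cross-module joins and shared tables for each module (the module names, weights, and numbers below are illustrative, not measurements from a real codebase):

```typescript
// Rank extraction candidates: prefer high business value, low coupling.
interface ModuleStats {
  name: string;
  value: number;            // business value of independent deployment (1-10)
  crossModuleJoins: number; // SQL joins reaching into other modules' tables
  sharedTables: number;     // tables also written by other modules
}

function rankCandidates(modules: ModuleStats[]): ModuleStats[] {
  // Shared tables are weighted heavier: they are the hardest coupling to break.
  const score = (m: ModuleStats) =>
    m.value - 2 * m.sharedTables - m.crossModuleJoins;
  return [...modules].sort((a, b) => score(b) - score(a));
}

const ranked = rankCandidates([
  { name: 'catalog', value: 8, crossModuleJoins: 1, sharedTables: 0 },
  { name: 'orders',  value: 9, crossModuleJoins: 6, sharedTables: 3 },
  { name: 'auth',    value: 5, crossModuleJoins: 2, sharedTables: 1 },
]);
console.log(ranked[0].name); // catalog: 8-1-0=7 beats auth (1) and orders (-3)
```

The exact weights matter less than the habit of choosing the first extraction by data, not by which module is loudest.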

2. Set up an API gateway

# nginx gateway configuration
upstream monolith {
  server monolith:8080;
}

upstream catalog_service {
  server catalog-service:3000;
}

server {
  listen 80;

  # Extracted service — route to new microservice
  location /api/v1/products {
    proxy_pass http://catalog_service;
  }

  # Everything else — route to monolith
  location / {
    proxy_pass http://monolith;
  }
}
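Before cutting over fully (step 4 of the strategy), the gateway can send only a fraction of traffic to the new service. One way to do this in nginx is `split_clients`; a sketch assuming the `monolith` and `catalog_service` upstreams defined above (the 10% figure is illustrative):

```nginx
# Canary rollout: route ~10% of clients to the new service, keyed by client IP
split_clients "${remote_addr}" $catalog_backend {
  10%   catalog_service;
  *     monolith;
}

server {
  listen 80;

  location /api/v1/products {
    proxy_pass http://$catalog_backend;
  }
}
```

Keying on `$remote_addr` keeps each client pinned to one backend, which makes response discrepancies easier to trace.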

3. Extract the first service (Catalog example)

// catalog-service/src/server.ts
import express from 'express';
import { Pool } from 'pg';

const app = express();
const db = new Pool({ connectionString: process.env.CATALOG_DB_URL });

app.get('/api/v1/products', async (req, res) => {
  // Query params arrive as strings; coerce and apply defaults
  const limit = Number(req.query.limit ?? 20);
  const offset = Number(req.query.offset ?? 0);
  const { rows } = await db.query(
    'SELECT id, name, price, category FROM products WHERE active = true ORDER BY created_at DESC LIMIT $1 OFFSET $2',
    [limit, offset]
  );
  res.json({ products: rows });
});

app.get('/api/v1/products/:id', async (req, res) => {
  const { rows } = await db.query('SELECT * FROM products WHERE id = $1', [req.params.id]);
  if (!rows.length) return res.status(404).json({ error: 'Not found' });
  res.json(rows[0]);
});

app.listen(3000);
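To validate the extraction (step 4), one option is to shadow traffic: send the same request to both backends and diff the responses before trusting the new service. A minimal sketch; the base-URL parameters are whatever hostnames your deployment uses:

```typescript
// Return the top-level fields whose values differ between the two bodies.
function diffResponses(
  monolithBody: Record<string, unknown>,
  serviceBody: Record<string, unknown>,
): string[] {
  const keys = new Set([
    ...Object.keys(monolithBody),
    ...Object.keys(serviceBody),
  ]);
  return [...keys].filter(
    (k) => JSON.stringify(monolithBody[k]) !== JSON.stringify(serviceBody[k]),
  );
}

// Fetch the same path from both backends and report mismatched fields.
async function shadowCompare(
  monolithBase: string,
  serviceBase: string,
  path: string,
): Promise<string[]> {
  const [monolithRes, serviceRes] = await Promise.all([
    fetch(`${monolithBase}${path}`),
    fetch(`${serviceBase}${path}`),
  ]);
  return diffResponses(await monolithRes.json(), await serviceRes.json());
}
```

Logging non-empty diffs for a few days of production traffic is usually enough evidence to cut over.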

4. Separate the database

Migrate the products table to its own database:

-- Create the new catalog database
CREATE DATABASE catalog_db;

-- Copy the data across (run from a shell, not inside psql):
--   pg_dump --table=products monolith_db | psql catalog_db

-- In the monolith, replace direct table access with API calls and
-- remove foreign keys from other tables that reference products
ALTER TABLE order_items DROP CONSTRAINT fk_order_items_product;
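On the monolith side, "replace direct table access with API calls" usually means a small adapter, so calling code barely changes. A sketch assuming the catalog-service endpoints above; `CATALOG_URL` and the helper names are hypothetical:

```typescript
// Monolith-side adapter: product reads now go over HTTP instead of SQL.
interface Product {
  id: string;
  name: string;
  price: number;
}

const CATALOG_URL = process.env.CATALOG_URL ?? 'http://catalog-service:3000';

// Build the endpoint URL for a product id (kept separate so it is testable).
function productUrl(base: string, id: string): string {
  return `${base}/api/v1/products/${encodeURIComponent(id)}`;
}

// Drop-in replacement for the old `SELECT * FROM products WHERE id = $1`.
async function findProductById(id: string): Promise<Product | null> {
  const res = await fetch(productUrl(CATALOG_URL, id));
  if (res.status === 404) return null;
  if (!res.ok) throw new Error(`catalog-service returned HTTP ${res.status}`);
  return res.json() as Promise<Product>;
}
```

Concentrating the HTTP details in one adapter keeps the migration reversible: if the cutover fails, only this file changes back.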

5. Add inter-service communication

For synchronous calls:

// order-service calling catalog-service
async function getProduct(productId: string): Promise<Product> {
  const res = await fetch(`http://catalog-service:3000/api/v1/products/${productId}`);
  if (!res.ok) throw new Error(`Catalog lookup for ${productId} failed: HTTP ${res.status}`);
  return res.json();
}
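Synchronous cross-service calls should carry timeouts and retries from day one (see Best Practices). A minimal retry-with-timeout wrapper; attempt counts and delays here are illustrative, and a production system would add jitter and a circuit breaker:

```typescript
// Retry a flaky async call with a per-attempt timeout and fixed backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  timeoutMs = 1000,
  backoffMs = 200,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      // Race the call against a timeout so a hung service fails fast
      return await Promise.race([
        fn(),
        new Promise<never>((_, reject) =>
          setTimeout(() => reject(new Error('timeout')), timeoutMs),
        ),
      ]);
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) await new Promise((r) => setTimeout(r, backoffMs));
    }
  }
  throw lastError;
}
```

Usage with the call above: `const product = await withRetry(() => getProduct(productId));`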

For asynchronous events:

// catalog-service publishes event
import { Kafka } from 'kafkajs';

const kafka = new Kafka({ brokers: [process.env.KAFKA_BROKER!] });
const producer = kafka.producer();
// Connect once at startup, before the first publish
await producer.connect();

async function updateProduct(id: string, data: ProductUpdate) {
  await db.query('UPDATE products SET name=$1, price=$2 WHERE id=$3', [data.name, data.price, id]);
  await producer.send({
    topic: 'catalog.product.updated',
    messages: [{ key: id, value: JSON.stringify({ id, ...data }) }],
  });
}
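On the consuming side, a service such as order-service can subscribe to the topic and maintain a local read model of product data, so order placement never needs a synchronous catalog call. A sketch; the cache shape and group id are assumptions, and the event handler is kept pure so it can be tested without a broker:

```typescript
// order-service: local product cache kept up to date from catalog events.
interface ProductCacheEntry {
  name: string;
  price: number;
}

const productCache = new Map<string, ProductCacheEntry>();

// Pure handler: apply one catalog.product.updated event to the cache.
function applyProductUpdated(
  cache: Map<string, ProductCacheEntry>,
  event: { id: string; name: string; price: number },
): void {
  cache.set(event.id, { name: event.name, price: event.price });
}

// Wiring (kafkajs), mirroring the producer above:
// const consumer = kafka.consumer({ groupId: 'order-service' });
// await consumer.connect();
// await consumer.subscribe({ topic: 'catalog.product.updated' });
// await consumer.run({
//   eachMessage: async ({ message }) => {
//     applyProductUpdated(productCache, JSON.parse(message.value!.toString()));
//   },
// });
```

The cache is eventually consistent with the catalog, which is exactly the trade-off the asynchronous style accepts.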

6. Containerize and orchestrate

# docker-compose.yml (development)
services:
  gateway:
    image: nginx:alpine
    ports: ["80:80"]
    volumes: ["./nginx.conf:/etc/nginx/conf.d/default.conf"]

  monolith:
    build: ./monolith
    environment:
      DATABASE_URL: postgres://postgres@monolith-db:5432/monolith_db

  monolith-db:
    image: postgres:16
    environment:
      POSTGRES_DB: monolith_db
      POSTGRES_HOST_AUTH_METHOD: trust  # development only

  catalog-service:
    build: ./catalog-service
    environment:
      CATALOG_DB_URL: postgres://postgres@catalog-db:5432/catalog_db
      KAFKA_BROKER: kafka:9092

  catalog-db:
    image: postgres:16
    environment:
      POSTGRES_DB: catalog_db
      POSTGRES_HOST_AUTH_METHOD: trust  # development only

  # kafka broker omitted for brevity

Best Practices

  • Extract one service at a time. Resist the temptation to decompose everything in parallel.
  • Use the strangler fig pattern: route traffic at the gateway, not by rewriting clients.
  • Each service owns its own data store. Shared databases are the top cause of tight coupling.
  • Prefer asynchronous event-driven communication for cross-service data synchronization.
  • Implement health checks, circuit breakers, and retries from the start.
  • Define clear API contracts with OpenAPI specs or protobuf schemas.
  • Keep the monolith deployable and shippable throughout the migration.

Common Pitfalls

  • Distributed monolith — extracting services that still share a database or require synchronous call chains in lock-step deployments. Each service must be independently deployable.
  • Starting with the hardest module — pick a well-bounded, low-coupling context for the first extraction to build confidence and tooling.
  • Ignoring data consistency — cross-service transactions do not work like local transactions. Use sagas or eventual consistency patterns.
  • No observability — distributed systems are harder to debug. Implement distributed tracing (OpenTelemetry), centralized logging, and service mesh metrics before extracting the first service.
  • Premature decomposition — not every application needs microservices. If the team is small and deployment frequency is low, a well-structured modular monolith may be the better choice.
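The saga pattern mentioned above can be sketched as a sequence of steps, each paired with a compensating action that undoes it if a later step fails. An illustrative sketch; the step names and error handling are hypothetical:

```typescript
interface SagaStep {
  name: string;
  action: () => Promise<void>;
  compensate: () => Promise<void>;
}

// Run steps in order; on failure, compensate completed steps in reverse.
async function runSaga(steps: SagaStep[]): Promise<void> {
  const completed: SagaStep[] = [];
  for (const step of steps) {
    try {
      await step.action();
      completed.push(step);
    } catch (err) {
      for (const done of completed.reverse()) {
        await done.compensate();
      }
      throw err;
    }
  }
}
```

For an order flow this might be reserve-inventory / charge-payment / create-order: if the charge is declined, the inventory reservation is released rather than rolled back in a cross-service transaction.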

Anti-Patterns

Over-engineering for hypothetical scale. Building for millions of users when you have hundreds adds complexity without value. Solve today's problems first.

Ignoring the existing ecosystem. Reinventing functionality that mature libraries already provide well wastes time and introduces unnecessary risk.

Premature abstraction. Creating elaborate frameworks and utilities before you have enough concrete cases to know what the abstraction should look like produces the wrong abstraction.

Neglecting error handling at boundaries. Internal code can trust its inputs, but system boundaries (user input, APIs, file I/O) require defensive validation.

Skipping documentation for obvious code. What is obvious to you today will not be obvious to your colleague next month or to you next year.
