
Caching Patterns

Cache-aside, write-through, and write-behind caching strategies with Redis


Caching Patterns — Redis

You are an expert in caching strategies using Redis, including cache-aside, write-through, and write-behind patterns for application-level caching.

Core Philosophy

Overview

Caching is the most common Redis use case. The choice of caching pattern determines consistency guarantees, latency characteristics, and failure behavior. There is no universally best pattern; the right choice depends on read/write ratios, tolerance for stale data, and system complexity budget.

Core Concepts

Cache-Aside (Lazy Loading)

The application checks Redis first. On a miss, it reads from the primary database, writes the result to Redis, and returns it. The cache is populated on demand.

Write-Through

Every write to the primary database is paired with a write to Redis. Reads always hit the cache (after the first write). This guarantees cache freshness at the cost of higher write latency.

Write-Behind (Write-Back)

Writes go to Redis first, and an asynchronous process flushes changes to the primary database. Provides the lowest write latency but introduces a durability risk window.

TTL-Based Expiration

All patterns benefit from time-to-live settings. TTLs prevent unbounded cache growth and provide a staleness ceiling even when explicit invalidation is missed.
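One refinement worth sketching (our suggestion, not part of the skill itself): jitter TTLs so that keys cached in a burst do not all expire at the same instant, which softens the thundering-herd pitfall covered under Common Pitfalls. The `jitteredTtl` name and the 10% jitter fraction are illustrative choices.

```typescript
// Jitter a base TTL by ±10% so keys written together expire spread out
// rather than all at once. Returns whole seconds, clamped to at least 1.
function jitteredTtl(baseSeconds: number, jitterFraction = 0.1): number {
  const delta = baseSeconds * jitterFraction;
  const jitter = (Math.random() * 2 - 1) * delta; // uniform in [-delta, +delta]
  return Math.max(1, Math.round(baseSeconds + jitter));
}

// e.g. redis.set(key, value, "EX", jitteredTtl(3600));
```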

Implementation Patterns

Cache-aside

import Redis from "ioredis";
import db, { User } from "./db"; // assumes ./db exports the User type used below

const redis = new Redis();
const CACHE_TTL = 3600; // 1 hour

async function getUser(userId: string) {
  const cacheKey = `user:${userId}`;

  // 1. Check cache
  const cached = await redis.get(cacheKey);
  if (cached) {
    return JSON.parse(cached);
  }

  // 2. Cache miss — read from DB
  const user = await db.users.findById(userId);
  if (!user) return null;

  // 3. Populate cache
  await redis.set(cacheKey, JSON.stringify(user), "EX", CACHE_TTL);

  return user;
}

async function updateUser(userId: string, data: Partial<User>) {
  // 1. Update DB
  const user = await db.users.update(userId, data);

  // 2. Invalidate cache (do NOT set — avoids race conditions)
  await redis.del(`user:${userId}`);

  return user;
}
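The reader above deliberately does not cache misses; the Common Pitfalls section below recommends caching nulls with a short TTL. A minimal sketch of that variant follows. The `__null__` sentinel, `NULL_TTL`, and helper names are our own conventions, not ioredis or Redis features.

```typescript
// Negative caching: store a sentinel for misses so repeated lookups of
// non-existent IDs don't bypass the cache and hammer the database.
const NULL_SENTINEL = "__null__";
const NULL_TTL = 60;    // seconds — short, so real rows appear soon after creation
const VALUE_TTL = 3600; // normal TTL for real rows

type Row = Record<string, unknown>;

function encodeCacheValue(row: Row | null): string {
  return row === null ? NULL_SENTINEL : JSON.stringify(row);
}

function decodeCacheValue(raw: string): Row | null {
  return raw === NULL_SENTINEL ? null : (JSON.parse(raw) as Row);
}

// With the ioredis client from the example above, the read path becomes:
//
//   const cached = await redis.get(cacheKey);
//   if (cached !== null) return decodeCacheValue(cached);
//   const user = await db.users.findById(userId);
//   await redis.set(cacheKey, encodeCacheValue(user ?? null), "EX",
//                   user ? VALUE_TTL : NULL_TTL);
//   return user;
```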

Write-through

async function saveProduct(product: Product) {
  const cacheKey = `product:${product.id}`;

  // Write to DB and cache in a "transaction" (best-effort)
  await db.products.upsert(product);
  await redis.set(cacheKey, JSON.stringify(product), "EX", CACHE_TTL);

  return product;
}

async function getProduct(productId: string) {
  const cacheKey = `product:${productId}`;
  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);

  // Fallback for cold cache or evicted entries
  const product = await db.products.findById(productId);
  if (product) {
    await redis.set(cacheKey, JSON.stringify(product), "EX", CACHE_TTL);
  }
  return product;
}
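If the DB write in `saveProduct` succeeds but the Redis write fails, the cache serves stale data until the TTL expires. The Common Pitfalls section suggests wrapping both writes in a retry; a hedged sketch of such a wrapper (the `withRetry` name and backoff schedule are illustrative, not from any library):

```typescript
// Retry an async operation with exponential backoff. If all attempts
// fail, the last error is rethrown so the caller can fall back — for a
// write-through cache, a sensible fallback is deleting the key so
// readers repopulate it from the database.
async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      await new Promise((r) => setTimeout(r, 50 * 2 ** i)); // 50ms, 100ms, ...
    }
  }
  throw lastErr;
}

// Usage sketch:
//   await db.products.upsert(product);
//   await withRetry(() =>
//     redis.set(cacheKey, JSON.stringify(product), "EX", CACHE_TTL)
//   ).catch(() => redis.del(cacheKey)); // give up: fall back to cache-aside
```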

Write-behind with a queue

// Writer: push changes to Redis immediately, enqueue for DB persistence
async function updateInventory(sku: string, quantity: number) {
  const cacheKey = `inventory:${sku}`;

  // Immediate Redis update
  await redis.set(cacheKey, String(quantity));

  // Enqueue for async DB write
  await redis.rpush(
    "queue:db-writes",
    JSON.stringify({ table: "inventory", key: sku, value: quantity, ts: Date.now() })
  );
}

// Background worker: flush queued writes to the database
async function flushWorker() {
  while (true) {
    const item = await redis.blpop("queue:db-writes", 5);
    if (!item) continue;

    const payload = JSON.parse(item[1]);
    try {
      // Table name comes from our own queue payload (trusted source);
      // values are passed as parameterized placeholders
      await db.query(
        `UPDATE ${payload.table} SET quantity = $1 WHERE sku = $2`,
        [payload.value, payload.key]
      );
    } catch (err) {
      // Re-enqueue on failure (with retry limit in production)
      await redis.rpush("queue:db-writes", item[1]);
    }
  }
}

Multi-tier cache with stale-while-revalidate

async function getWithSWR(key: string, fetcher: () => Promise<any>) {
  const cached = await redis.get(key);
  const meta = await redis.get(`${key}:meta`);

  if (cached) {
    const { expiresAt } = meta ? JSON.parse(meta) : { expiresAt: 0 };

    if (Date.now() < expiresAt) {
      return JSON.parse(cached); // Fresh
    }

    // Stale — return immediately but revalidate in background
    setImmediate(async () => {
      try {
        const fresh = await fetcher();
        await redis.set(key, JSON.stringify(fresh), "EX", 7200);
        await redis.set(`${key}:meta`, JSON.stringify({ expiresAt: Date.now() + 3600_000 }), "EX", 7200);
      } catch {
        // Refresh failed; keep serving the stale value and let a later read retry.
        // Without this catch, a rejected fetcher is an unhandled rejection.
      }
    });

    return JSON.parse(cached);
  }

  // Full miss
  const data = await fetcher();
  await redis.set(key, JSON.stringify(data), "EX", 7200);
  await redis.set(`${key}:meta`, JSON.stringify({ expiresAt: Date.now() + 3600_000 }), "EX", 7200);
  return data;
}

Best Practices

  • Invalidate, don't update, on cache-aside writes. Deleting the key avoids race conditions where two concurrent writes leave the cache with stale data.
  • Always set a TTL. Even write-through caches should have TTLs as a safety net against bugs in invalidation logic.
  • Use consistent serialization. Pick JSON or MessagePack and stick with it. Mixed formats cause subtle deserialization bugs.
  • Add cache key versioning (e.g., v2:user:42) so schema changes can roll out without flushing the entire cache.
  • Monitor hit/miss ratios. A hit ratio below 80% usually means TTLs are too short or the working set is too large for the cache.
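The key-versioning practice above can be captured in a small helper. The `CACHE_VERSION` constant and `cacheKey` name are illustrative, not part of any library:

```typescript
// Bump CACHE_VERSION when the cached value's shape changes; old keys
// simply stop being read and age out via their TTLs — no flush needed.
const CACHE_VERSION = "v2";

function cacheKey(entity: string, id: string): string {
  return `${CACHE_VERSION}:${entity}:${id}`;
}
```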

Common Pitfalls

  • Thundering herd on cache miss. When a popular key expires, hundreds of requests simultaneously query the database. Mitigate with a lock (SET NX with short TTL) so only one request repopulates the cache.
  • Cache-database inconsistency on write-through. If the DB write succeeds but the Redis write fails, the cache is stale. Use delete-on-write (cache-aside) or wrap both in a retry.
  • Write-behind data loss. If Redis restarts before queued writes are flushed to the database, those writes are lost. Use Redis persistence (AOF) or an external queue (e.g., Kafka) for critical data.
  • Caching null results. If you don't cache misses, repeated queries for non-existent keys bypass the cache entirely. Cache nulls with a short TTL.
  • Over-caching. Caching data that changes every second or is only accessed once wastes memory and adds complexity. Profile before caching.
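The thundering-herd mitigation above (SET NX with a short TTL) can be sketched as follows. The client is typed minimally so the logic can be exercised without a live Redis; the real ioredis client satisfies this interface. The `getWithLock` and `LOCK_TTL` names are our own:

```typescript
interface CacheClient {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ...args: (string | number)[]): Promise<string | null>;
  del(key: string): Promise<number>;
}

const LOCK_TTL = 10; // seconds — short, so a crashed holder can't wedge the key

async function getWithLock(
  client: CacheClient,
  key: string,
  fetcher: () => Promise<string>
): Promise<string> {
  const cached = await client.get(key);
  if (cached !== null) return cached;

  // Only the request that wins SET NX repopulates the cache.
  const gotLock = await client.set(`${key}:lock`, "1", "EX", LOCK_TTL, "NX");
  if (gotLock === "OK") {
    try {
      const fresh = await fetcher();
      await client.set(key, fresh, "EX", 3600);
      return fresh;
    } finally {
      await client.del(`${key}:lock`);
    }
  }

  // Losers wait briefly and re-read; in production, poll with backoff.
  await new Promise((r) => setTimeout(r, 100));
  const retry = await client.get(key);
  return retry !== null ? retry : fetcher(); // last resort: hit the DB directly
}
```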

Anti-Patterns

Over-engineering for hypothetical scale. Building for millions of users when you have hundreds adds complexity without value. Solve today's problems first.

Ignoring the existing ecosystem. Reinventing functionality that mature libraries already provide well wastes time and introduces unnecessary risk.

Premature abstraction. Creating elaborate frameworks and utilities before you have enough concrete cases to know what the abstraction should look like produces the wrong abstraction.

Neglecting error handling at boundaries. Internal code can trust its inputs, but system boundaries (user input, APIs, file I/O) require defensive validation.

Skipping documentation for obvious code. What is obvious to you today will not be obvious to your colleague next month or to you next year.
