# Redis
Integrate Redis caching into Node.js/TypeScript applications using ioredis.
You are a Redis caching specialist who integrates Redis via ioredis into TypeScript applications. You implement cache-aside patterns, TTL-based expiration, pub/sub messaging, and pipelining to reduce database load and improve response times.
## Core Philosophy

### Cache-Aside Is the Default Strategy
Cache-aside (lazy loading) means the application checks the cache first, and on a miss, fetches from the source and populates the cache. This keeps your cache lean — only data that is actually requested gets cached. Always set a TTL so stale entries expire naturally rather than persisting indefinitely.
### Connections Are Precious Resources

Redis connections should be reused, not created per request. Use a singleton ioredis client or a small connection pool. Enable `lazyConnect` in serverless contexts so cold starts don't block on a connection that may not be needed. Always handle connection errors and implement a reconnect strategy.
### Serialization Must Be Explicit

Redis stores strings. Never assume `JSON.stringify`/`JSON.parse` is free: large objects serialize slowly and consume memory. Store only the fields you need, use compact keys, and consider MessagePack or protocol buffers for high-throughput scenarios.
## Setup

### Install
```bash
npm install ioredis
```
### Environment Variables
```env
REDIS_URL=redis://localhost:6379
REDIS_PASSWORD=
REDIS_TLS=false
```
## Key Patterns

### 1. Cache-Aside with TTL
Do:
```typescript
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");

async function getUser(id: string) {
  const cached = await redis.get(`user:${id}`);
  if (cached) return JSON.parse(cached);
  const user = await db.users.findById(id);
  // Only cache real data; caching null would hide actual records
  if (user) await redis.set(`user:${id}`, JSON.stringify(user), "EX", 3600);
  return user;
}
```
Not this:
```typescript
// No TTL — data stays forever and goes stale
await redis.set(`user:${id}`, JSON.stringify(user));

// Creating a new connection per request
const redis = new Redis(process.env.REDIS_URL);
const data = await redis.get(key);
await redis.quit();
```
### 2. Pipeline Batch Operations
Do:
```typescript
async function getMultipleUsers(ids: string[]) {
  const pipeline = redis.pipeline();
  ids.forEach((id) => pipeline.get(`user:${id}`));
  const results = await pipeline.exec();
  // Each entry is [err, value]; treat per-command errors as misses
  return results?.map(([err, val]) =>
    !err && val ? JSON.parse(val as string) : null
  );
}
```
Not this:
```typescript
// Sequential gets — N round trips instead of 1
const users = [];
for (const id of ids) {
  const user = await redis.get(`user:${id}`);
  users.push(user ? JSON.parse(user) : null);
}
```
### 3. Cache Invalidation on Writes
Do:
```typescript
async function updateUser(id: string, data: Partial<User>) {
  await db.users.update(id, data);
  await redis.del(`user:${id}`);
  // Let the next read repopulate the cache
}

async function invalidatePattern(pattern: string) {
  // Iterate with SCAN (non-blocking) rather than KEYS, which blocks Redis
  const stream = redis.scanStream({ match: pattern, count: 100 });
  for await (const keys of stream) {
    if (keys.length > 0) await redis.del(...keys);
  }
}
```
Not this:
```typescript
// Writing to cache AND database without an atomic guarantee
await redis.set(`user:${id}`, JSON.stringify(newData), "EX", 3600);
await db.users.update(id, newData);
// If db.update fails, the cache holds wrong data
```
## Common Patterns

### Pub/Sub for Cache Invalidation Across Instances
```typescript
// A connection in subscriber mode cannot issue regular commands,
// so use dedicated clients for subscribing and publishing
const subscriber = new Redis(process.env.REDIS_URL);
const publisher = new Redis(process.env.REDIS_URL);

subscriber.subscribe("cache:invalidate");
subscriber.on("message", async (channel, key) => {
  await redis.del(key);
});

async function invalidateAcrossInstances(key: string) {
  await publisher.publish("cache:invalidate", key);
}
```
### Sorted Set Leaderboard
```typescript
async function addScore(userId: string, score: number) {
  await redis.zadd("leaderboard", score, userId);
}

async function getTopPlayers(count: number) {
  return redis.zrevrange("leaderboard", 0, count - 1, "WITHSCORES");
}
```
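Note that `WITHSCORES` replies arrive as a flat array of alternating members and scores. A small helper like the one below (`parseWithScores` is an illustrative assumption, not part of ioredis) pairs them back up:

```typescript
// WITHSCORES replies arrive flat: ["alice", "120", "bob", "95", ...].
// Pair each member with its score, converting the score to a number.
export function parseWithScores(
  flat: string[]
): { member: string; score: number }[] {
  const entries: { member: string; score: number }[] = [];
  for (let i = 0; i < flat.length; i += 2) {
    entries.push({ member: flat[i], score: Number(flat[i + 1]) });
  }
  return entries;
}
```

For example, `parseWithScores(await getTopPlayers(10))` would yield an array of `{ member, score }` objects ready to render.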
### Rate Limiting with Sliding Window
```typescript
async function isRateLimited(ip: string, limit: number, windowSec: number) {
  const key = `rate:${ip}`;
  const now = Date.now();
  const pipeline = redis.pipeline();
  // Drop entries older than the window, then record this request
  pipeline.zremrangebyscore(key, 0, now - windowSec * 1000);
  // Random suffix keeps members unique when requests share a millisecond
  pipeline.zadd(key, now, `${now}-${Math.random()}`);
  pipeline.zcard(key);
  pipeline.expire(key, windowSec);
  const results = await pipeline.exec();
  const count = results?.[2]?.[1] as number;
  return count > limit;
}
```
## Anti-Patterns

- **No TTL on cache entries**: Data goes stale silently; always set an expiration
- **KEYS in production**: `KEYS *` blocks Redis; use SCAN for iteration
- **Connection-per-request**: Exhausts file descriptors; reuse a singleton client
- **Caching undefined/null**: A cache miss that returns null gets cached, hiding real data
## When to Use
- Database query results that are read-heavy and change infrequently
- Session storage across multiple application instances
- Rate limiting and throttling at the API gateway layer
- Real-time leaderboards, counters, and pub/sub messaging
- Distributed locks for coordinating multi-instance workflows
Install this skill directly: `skilldb add caching-services-skills`
## Related Skills
- **Apache Ignite**: Integrate Apache Ignite, a high-performance, fault-tolerant distributed in-memory data grid.
- **Cloudflare KV**: Integrate Cloudflare Workers KV for globally distributed edge key-value storage.
- **Dragonfly**: Integrate Dragonfly, a high-performance, in-memory data store compatible with Redis and Memcached APIs.
- **Garnet**: Integrate Garnet, Microsoft's high-performance, open-source remote cache and storage system.
- **Hazelcast**: Integrate Hazelcast, an open-source in-memory data grid (IMDG) that provides distributed caching, data partitioning, and stream processing.
- **KeyDB**: Integrate KeyDB, a high-performance, multi-threaded in-memory data store compatible with the Redis API.