
Data Structures

Redis core data structures including strings, hashes, sets, sorted sets, and lists


Data Structures — Redis

You are an expert in Redis data structures and their optimal use cases for building performant applications.

Overview

Redis provides five primary data structures: strings, hashes, lists, sets, and sorted sets. Each structure has distinct time complexities and memory characteristics that make it suited to specific access patterns. Choosing the right structure is the single most impactful decision when designing a Redis-backed system.

Core Concepts

Strings

The simplest Redis type. Stores text, integers, or binary data up to 512 MB. Supports atomic increment/decrement, making it ideal for counters.

Hashes

A map of field-value pairs attached to a single key. Memory-efficient for objects with many fields due to ziplist encoding for small hashes (controlled by hash-max-ziplist-entries and hash-max-ziplist-value).
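As a sketch, the relevant redis.conf settings look like this at their long-standing defaults (Redis 7 renames them to hash-max-listpack-entries and hash-max-listpack-value; verify against your version):

```conf
# Hashes keep the compact ziplist/listpack encoding only while BOTH
# limits hold; exceeding either converts the hash to a hashtable,
# which costs more memory but keeps O(1) field access.
hash-max-ziplist-entries 128   # max number of fields
hash-max-ziplist-value 64      # max length of any field or value, in bytes
```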

Lists

Doubly-linked lists of strings. O(1) push/pop at head or tail. Useful for queues, activity feeds, and bounded collections via LTRIM.

Sets

Unordered collections of unique strings. Support intersection, union, and difference operations. Ideal for tagging, unique visitor tracking, and membership checks.

Sorted Sets

Sets where each member has a floating-point score. Members are ordered by score, enabling range queries, leaderboards, and priority queues. Operations like ZADD run in O(log N); range queries like ZRANGE and ZRANGEBYSCORE add O(M) for the M elements returned.

Implementation Patterns

String counters and flags

import Redis from "ioredis";
const redis = new Redis();

// Atomic counter
await redis.incr("page:views:/home");
await redis.incrby("page:views:/home", 10);

// String with expiry (SET with EX)
await redis.set("otp:user:42", "839201", "EX", 300);

// GET and conditional logic
const views = await redis.get("page:views:/home");

Hash-based object storage

// Store a user profile as a hash
await redis.hset("user:1001", {
  name: "Alice",
  email: "alice@example.com",
  plan: "pro",
  loginCount: "0",
});

// Increment a single field atomically
await redis.hincrby("user:1001", "loginCount", 1);

// Fetch specific fields
const [name, plan] = await redis.hmget("user:1001", "name", "plan");

// Fetch entire hash
const user = await redis.hgetall("user:1001");

List-based queue

// Producer pushes to the tail
await redis.rpush("queue:emails", JSON.stringify({ to: "bob@example.com", subject: "Hi" }));

// Consumer pops from the head (blocking)
const [key, value] = await redis.blpop("queue:emails", 30); // 30s timeout
const job = JSON.parse(value);

// Bounded list: keep only the last 100 entries
await redis.lpush("feed:user:42", JSON.stringify(event));
await redis.ltrim("feed:user:42", 0, 99);

Set operations

// Tag system
await redis.sadd("tags:article:55", "redis", "database", "caching");
await redis.sadd("tags:article:78", "redis", "performance");

// Tags common to both articles
await redis.sinter("tags:article:55", "tags:article:78");
// => ["redis"]

// Unique visitor tracking
await redis.sadd("visitors:2026-03-17", "user:42");
const uniqueCount = await redis.scard("visitors:2026-03-17");

Sorted set leaderboard

// Add scores
await redis.zadd("leaderboard:march", 1500, "player:1");
await redis.zadd("leaderboard:march", 2300, "player:2");
await redis.zadd("leaderboard:march", 1800, "player:3");

// Top 10 (highest scores first)
const top10 = await redis.zrevrange("leaderboard:march", 0, 9, "WITHSCORES");

// Rank of a specific player (0-indexed, highest first)
const rank = await redis.zrevrank("leaderboard:march", "player:1");

// Score range query
const midTier = await redis.zrangebyscore("leaderboard:march", 1000, 2000);

Best Practices

  • Choose hashes over many string keys when storing object fields. A hash with 100 fields uses significantly less memory than 100 individual string keys.
  • Use SCAN instead of KEYS in production. KEYS * blocks the server; SCAN is cursor-based and non-blocking.
  • Set TTLs aggressively on ephemeral data. Unmanaged key growth is the most common cause of Redis memory exhaustion.
  • Use pipelines to batch multiple independent commands in a single round trip.
  • Prefer NX/XX flags on SET for conditional writes instead of separate GET-then-SET logic, which introduces race conditions.
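The SCAN advice above can be wrapped in a small cursor loop. The scanKeys helper below is a hypothetical sketch (not part of ioredis); it works with any client whose scan(cursor, "MATCH", pattern, "COUNT", n) resolves to [nextCursor, keys]:

```javascript
// Cursor-based iteration: each SCAN call does a bounded amount of work,
// so the server stays responsive, unlike KEYS, which walks the whole
// keyspace in one blocking pass. The cursor returns to "0" when done.
async function scanKeys(client, pattern, count = 100) {
  const keys = [];
  let cursor = "0";
  do {
    const [next, batch] = await client.scan(
      cursor, "MATCH", pattern, "COUNT", count
    );
    cursor = next;
    keys.push(...batch);
  } while (cursor !== "0");
  return keys;
}
```

With an ioredis client this would be called as `await scanKeys(redis, "session:*")`. Note that SCAN may return the same key more than once during a scan; deduplicate with a Set if exact results matter.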

Common Pitfalls

  • Storing large blobs in strings. Strings over a few KB hurt performance and memory. Consider external storage with Redis holding only a reference.
  • Using LINDEX or LINSERT on long lists. These are O(N). If random access is needed, a sorted set or hash is a better choice.
  • Ignoring encoding thresholds. When a hash or set exceeds its ziplist/intset threshold, Redis converts it to a hashtable, sharply increasing memory use. Tune *-max-ziplist-entries and *-max-ziplist-value for your workload.
  • Forgetting that sorted set scores are doubles. Floating-point precision limits apply. Avoid using sorted set scores for exact equality checks on very large integers.
  • Not namespacing keys. Without a consistent prefix convention (e.g., service:entity:id), key collisions and debugging difficulty increase rapidly.

Anti-Patterns

Over-engineering for hypothetical scale. Building for millions of users when you have hundreds adds complexity without value. Solve today's problems first.

Ignoring the existing ecosystem. Reinventing functionality that mature libraries already provide well wastes time and introduces unnecessary risk.

Premature abstraction. Creating elaborate frameworks and utilities before you have enough concrete cases to know what the abstraction should look like produces the wrong abstraction.

Neglecting error handling at boundaries. Internal code can trust its inputs, but system boundaries (user input, APIs, file I/O) require defensive validation.

Skipping documentation for obvious code. What is obvious to you today will not be obvious to your colleague next month or to you next year.
