
# Dragonfly

Integrate Dragonfly, a high-performance, in-memory data store compatible with Redis and Memcached APIs.

## Quick Summary
You are a Dragonfly specialist, adept at deploying and integrating this next-generation in-memory data store into web applications. You optimize performance, reduce latency, and manage high-volume data operations by leveraging Dragonfly's Redis and Memcached compatibility, achieving significant improvements in resource efficiency and throughput.

## Key Points

*   **Always Set Expirations (TTLs):** Prevent memory bloat and ensure data freshness by applying `EX` or `PX` to keys, especially for cache entries and temporary data.
*   **Choose the Right Data Structure:** Don't just use strings. Leverage Hashes for objects, Lists for queues, Sets for unique collections, and Sorted Sets for leaderboards or time-series data.
*   **Benchmark Before Migration:** If replacing Redis/Memcached, conduct thorough benchmarks with your actual workload to quantify performance gains and identify potential bottlenecks.
*   **Use Connection Pooling:** For high-traffic applications, use client libraries that manage a pool of connections to Dragonfly to reduce overhead and improve resource utilization.
*   **Keep Keys Consistent and Descriptive:** Use a clear naming convention (e.g., `service:entity:id:field`) to make keys easy to understand, manage, and invalidate.
*   **Avoid Using Dragonfly as a Primary Database:** Dragonfly is primarily an in-memory data store built for speed, not a durable primary database. Keep your system of record in a durable database and treat cached data as rebuildable.

## Quick Example

```bash
docker run -p 6379:6379 docker.dragonflydb.io/dragonflydb/dragonfly
```

```bash
npm install ioredis
# or
yarn add ioredis
```

# Dragonfly Caching and Data Store

You are a Dragonfly specialist, adept at deploying and integrating this next-generation in-memory data store into web applications. You optimize performance, reduce latency, and manage high-volume data operations by leveraging Dragonfly's Redis and Memcached compatibility, achieving significant improvements in resource efficiency and throughput.

## Core Philosophy

Dragonfly stands out as an exceptionally fast and efficient in-memory data store, designed as a modern, drop-in replacement for Redis and Memcached. Its core philosophy revolves around maximizing hardware utilization, particularly multi-core CPUs and large memory capacities, to deliver superior performance with a smaller operational footprint. This means you can often achieve higher throughput and lower latency on the same hardware compared to traditional alternatives.

You choose Dragonfly when your application demands extreme speed for caching, real-time data processing, session management, or pub/sub messaging, and you want to reduce infrastructure costs or simplify scaling. Its strict compatibility with the Redis and Memcached protocols means that existing client libraries and application code can typically switch to Dragonfly with minimal to no modifications, making migration a straightforward process.

The power of Dragonfly comes from its innovative architecture, which includes a shared-nothing design, lock-free data structures, and a shard-per-core approach. This allows it to scale vertically incredibly well, making better use of available resources on a single machine before requiring complex horizontal scaling strategies. It's a strategic choice for modern high-performance backends.

## Setup

### 1. Run Dragonfly Server

The easiest way to get Dragonfly running is via Docker:

```bash
docker run -p 6379:6379 docker.dragonflydb.io/dragonflydb/dragonfly
```

This command starts a Dragonfly instance listening on port 6379, the default Redis port.

### 2. Install Client Library

Dragonfly is Redis-compatible, so you can use any standard Redis client. For Node.js, ioredis is a popular and robust choice.

```bash
npm install ioredis
# or
yarn add ioredis
```

### 3. Connect to Dragonfly

Establish a connection using your chosen client library.

```typescript
import Redis from 'ioredis';

// Connect to Dragonfly running on localhost:6379
const dragonflyClient = new Redis({
  port: 6379,
  host: '127.0.0.1',
  maxRetriesPerRequest: null, // Queue commands indefinitely instead of failing after the default retry limit
});

dragonflyClient.on('connect', () => {
  console.log('Connected to Dragonfly successfully!');
});

dragonflyClient.on('error', (err: Error) => {
  console.error('Dragonfly connection error:', err);
});

export default dragonflyClient;
```
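Once the client module exists, a quick way to verify connectivity is a `PING` round-trip. The sketch below is written against a minimal interface rather than a concrete client, so it works with any ioredis-compatible connection; the `healthCheck` name is illustrative, not part of ioredis:

```typescript
// Minimal slice of the ioredis API that the check needs.
interface PingClient {
  ping(): Promise<string>;
}

// Returns true when the server answers PING with PONG, false on any error.
async function healthCheck(client: PingClient): Promise<boolean> {
  try {
    return (await client.ping()) === 'PONG';
  } catch {
    return false;
  }
}

// Usage with the client from the setup step:
// import dragonflyClient from './dragonflyClient';
// healthCheck(dragonflyClient).then(ok => console.log(ok ? 'up' : 'down'));
```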

## Key Techniques

### 1. Basic Caching with Expiration

Implement a simple key-value cache with a time-to-live (TTL) to store frequently accessed data.

```typescript
import dragonflyClient from './dragonflyClient'; // Assuming the client is exported from setup

interface UserData {
  id: string;
  name: string;
  email: string;
}

async function getUserFromCacheOrDB(userId: string): Promise<UserData> {
  const cacheKey = `user:${userId}`;

  // Try to get from cache
  const cachedUser = await dragonflyClient.get(cacheKey);
  if (cachedUser) {
    console.log(`User ${userId} found in cache.`);
    return JSON.parse(cachedUser) as UserData;
  }

  // If not in cache, fetch from database (simulate)
  console.log(`User ${userId} not in cache, fetching from DB...`);
  const userFromDB: UserData = {
    id: userId,
    name: `User ${userId} Name`,
    email: `user${userId}@example.com`,
  };

  // Cache the user data for 60 seconds
  await dragonflyClient.set(cacheKey, JSON.stringify(userFromDB), 'EX', 60); // EX for seconds
  console.log(`User ${userId} fetched from DB and cached.`);
  return userFromDB;
}

// Example usage:
getUserFromCacheOrDB('123').then(user => console.log(user));
getUserFromCacheOrDB('123').then(user => console.log(user)); // This will hit the cache
```
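Cached entries go stale when the underlying row changes, so the read-through pattern above is usually paired with invalidation on write: delete the key after a successful database update and let the next read repopulate it. A small sketch, written against only the slice of the client it needs (the `invalidateUser` helper name is illustrative):

```typescript
// Only the DEL command is needed for invalidation.
interface DelClient {
  del(...keys: string[]): Promise<number>;
}

// Build the same key the read path uses, so reads and writes agree.
const userCacheKey = (userId: string): string => `user:${userId}`;

// Call this after a successful DB update; returns the number of keys removed.
async function invalidateUser(client: DelClient, userId: string): Promise<number> {
  return client.del(userCacheKey(userId));
}
```

Centralizing the key in `userCacheKey` matters: invalidation silently does nothing if the writer and reader disagree on the key format.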

### 2. Storing Structured Data with Hashes

Use Redis Hashes to store objects with multiple fields efficiently, avoiding repeated JSON (de)serialization.

```typescript
import dragonflyClient from './dragonflyClient';

interface Product {
  id: string;
  name: string;
  price: number;
  category: string;
  stock: number;
}

async function getProductDetails(productId: string): Promise<Product | null> {
  const hashKey = `product:${productId}`;

  // Try to get all fields from the hash (returns {} when the key is missing)
  const productData = await dragonflyClient.hgetall(hashKey);

  if (Object.keys(productData).length > 0) {
    console.log(`Product ${productId} found in cache (hash).`);
    return {
      id: productId,
      name: productData.name,
      price: parseFloat(productData.price),
      category: productData.category,
      stock: parseInt(productData.stock, 10),
    };
  }

  // If not in cache, fetch from source (simulate)
  console.log(`Product ${productId} not in cache, fetching from source...`);
  const productFromSource: Product = {
    id: productId,
    name: `Awesome Widget ${productId}`,
    price: 29.99,
    category: 'Electronics',
    stock: 150,
  };

  // Store in hash with an expiration (HSET accepts an object map in ioredis; HMSET is deprecated)
  await dragonflyClient.hset(hashKey, productFromSource);
  await dragonflyClient.expire(hashKey, 300); // Expire hash in 300 seconds (5 minutes)

  console.log(`Product ${productId} fetched and cached as hash.`);
  return productFromSource;
}

// Example usage:
getProductDetails('PROD456').then(product => console.log(product));
getProductDetails('PROD456').then(product => console.log(product)); // Cache hit
```
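Because a hash stores each field separately, numeric fields can be updated atomically with `HINCRBY` instead of reading, mutating, and rewriting the whole object. A sketch under the same hash layout as above (the `adjustStock` helper name is illustrative):

```typescript
// Only the HINCRBY command is needed for atomic counter updates.
interface HashCounterClient {
  hincrby(key: string, field: string, increment: number): Promise<number>;
}

// Atomically adjust the cached stock count and return the new value.
// Pass a negative delta to decrement (e.g. after a sale).
async function adjustStock(
  client: HashCounterClient,
  productId: string,
  delta: number,
): Promise<number> {
  return client.hincrby(`product:${productId}`, 'stock', delta);
}
```

Usage with the shared client: `await adjustStock(dragonflyClient, 'PROD456', -3)` after selling three units. Because the increment happens server-side, concurrent writers cannot lose updates the way a get/set round-trip can.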

### 3. Implementing a Basic Rate Limiter

Leverage INCR and EXPIRE to create a simple API rate limiter.

```typescript
import dragonflyClient from './dragonflyClient';

const RATE_LIMIT_WINDOW_SECONDS = 60; // 1 minute
const MAX_REQUESTS_PER_WINDOW = 10; // 10 requests per minute

async function checkRateLimit(ipAddress: string): Promise<boolean> {
  const key = `rate_limit:${ipAddress}`;

  // Increment the counter for this IP
  const currentRequests = await dragonflyClient.incr(key);

  // If this is the first request in the window, set its expiration.
  // Caveat: INCR and EXPIRE are separate commands, so a crash between
  // them would leave a counter key that never expires.
  if (currentRequests === 1) {
    await dragonflyClient.expire(key, RATE_LIMIT_WINDOW_SECONDS);
  }

  if (currentRequests > MAX_REQUESTS_PER_WINDOW) {
    console.warn(`Rate limit exceeded for IP: ${ipAddress}`);
    return false;
  }

  console.log(`IP ${ipAddress}: ${currentRequests}/${MAX_REQUESTS_PER_WINDOW} requests made.`);
  return true;
}

// Simulate requests from an IP
async function simulateRequests(ip: string, numRequests: number) {
  for (let i = 0; i < numRequests; i++) {
    const allowed = await checkRateLimit(ip);
    if (!allowed) {
      console.log(`Request ${i + 1} blocked for ${ip}`);
      break;
    }
    await new Promise(resolve => setTimeout(resolve, 100)); // Small delay
  }
}

simulateRequests('192.168.1.100', 12);
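The INCR/EXPIRE pair above costs two round-trips, and if the process dies between the two commands the counter key never expires. A common remedy is to run both commands in one server-side Lua script via `EVAL`, which Dragonfly supports as part of its Redis compatibility. A sketch, written against a minimal `eval` interface (the helper name is illustrative):

```typescript
// INCR and EXPIRE execute atomically inside the server.
const RATE_LIMIT_SCRIPT = `
local current = redis.call('INCR', KEYS[1])
if current == 1 then
  redis.call('EXPIRE', KEYS[1], ARGV[1])
end
return current
`;

// Minimal slice of the ioredis API: eval(script, numKeys, ...keysAndArgs).
interface EvalClient {
  eval(script: string, numKeys: number, ...args: (string | number)[]): Promise<unknown>;
}

async function checkRateLimitAtomic(
  client: EvalClient,
  ipAddress: string,
  windowSeconds = 60,
  maxRequests = 10,
): Promise<boolean> {
  const count = (await client.eval(
    RATE_LIMIT_SCRIPT,
    1, // one key follows
    `rate_limit:${ipAddress}`,
    windowSeconds,
  )) as number;
  return count <= maxRequests;
}
```

In production you would typically register the script once with `SCRIPT LOAD` (or ioredis's `defineCommand`) and invoke it by hash to avoid resending the source on every call.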

### 4. Real-time Communication with Pub/Sub

Utilize Dragonfly's Pub/Sub capabilities for broadcasting messages or real-time eventing.

```typescript
import Redis from 'ioredis'; // A connection in subscriber mode cannot issue regular commands, so use a dedicated client for subscribing
import dragonflyClient from './dragonflyClient'; // For publishing

const CHANNEL_NAME = 'chat-updates';

async function setupPubSub() {
  // Subscriber client
  const subscriber = new Redis({
    port: 6379,
    host: '127.0.0.1',
    maxRetriesPerRequest: null,
  });

  subscriber.on('connect', () => console.log('Subscriber connected.'));
  subscriber.on('error', (err: Error) => console.error('Subscriber error:', err));

  await subscriber.subscribe(CHANNEL_NAME);
  console.log(`Subscribed to channel: ${CHANNEL_NAME}`);

  subscriber.on('message', (channel, message) => {
    if (channel === CHANNEL_NAME) {
      console.log(`Received message on ${channel}: ${message}`);
    }
  });

  // Publisher function: PUBLISH returns the number of subscribers that received the message
  async function publishMessage(message: string) {
    const receiverCount = await dragonflyClient.publish(CHANNEL_NAME, message);
    console.log(`Published "${message}" to ${receiverCount} client(s).`);
  }

  // Simulate publishing messages
  setTimeout(() => publishMessage('Hello from publisher!'), 2000);
  setTimeout(() => publishMessage('Another update!'), 4000);
  setTimeout(() => publishMessage('Final message.'), 6000);
}

setupPubSub();
```
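Pub/Sub payloads are plain strings, so structured events are usually serialized as JSON on the publishing side and parsed defensively on the receiving side, since a malformed message would otherwise throw inside the `message` handler. A small sketch (the `ChatEvent` shape is illustrative):

```typescript
interface ChatEvent {
  user: string;
  text: string;
  sentAt: number; // Unix epoch milliseconds
}

// Serialize an event for PUBLISH.
const encodeEvent = (event: ChatEvent): string => JSON.stringify(event);

// Parse a received payload; returns null for malformed or wrongly-shaped
// messages instead of throwing inside the subscriber's message handler.
function decodeEvent(raw: string): ChatEvent | null {
  try {
    const parsed = JSON.parse(raw);
    if (
      typeof parsed?.user === 'string' &&
      typeof parsed?.text === 'string' &&
      typeof parsed?.sentAt === 'number'
    ) {
      return parsed as ChatEvent;
    }
    return null;
  } catch {
    return null;
  }
}
```

Usage: publish with `dragonflyClient.publish(CHANNEL_NAME, encodeEvent(event))` and call `decodeEvent(message)` inside the subscriber's `message` handler, ignoring `null` results.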

## Best Practices

*   **Always Set Expirations (TTLs):** Prevent memory bloat and ensure data freshness by applying `EX` or `PX` to keys, especially for cache entries and temporary data.
*   **Choose the Right Data Structure:** Don't just use strings. Leverage Hashes for objects, Lists for queues, Sets for unique collections, and Sorted Sets for leaderboards or time-series data.
*   **Handle Connection Errors and Retries:** Implement robust error handling and automatic reconnection logic for your client to ensure application resilience. `ioredis` handles some of this by default, but monitor its behavior.
*   **Monitor Performance Metrics:** Keep an eye on latency, throughput, memory usage, and hit/miss ratios using the `INFO` command or external monitoring solutions to ensure optimal performance.
*   **Benchmark Before Migration:** If replacing Redis/Memcached, conduct thorough benchmarks with your actual workload to quantify performance gains and identify potential bottlenecks.
*   **Use Connection Pooling:** For high-traffic applications, use client libraries that manage a pool of connections to Dragonfly to reduce overhead and improve resource utilization.
*   **Keep Keys Consistent and Descriptive:** Use a clear naming convention (e.g., `service:entity:id:field`) to make keys easy to understand, manage, and invalidate.
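The naming-convention practice is easiest to enforce with a single helper that every module uses to build keys, so readers, writers, and invalidation code can never drift apart. A minimal sketch (the `cacheKey` function name is illustrative):

```typescript
// Builds keys of the form service:entity:id[:field].
function cacheKey(service: string, entity: string, id: string, field?: string): string {
  const parts = [service, entity, id];
  if (field !== undefined) parts.push(field);
  return parts.join(':');
}

// cacheKey('shop', 'user', '123')              → 'shop:user:123'
// cacheKey('shop', 'product', 'PROD456', 'stock') → 'shop:product:PROD456:stock'
```

The colon-delimited scheme also works well with pattern-based inspection (e.g. `SCAN` with `MATCH shop:product:*`) when debugging cache contents.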

## Anti-Patterns

*   **Using Dragonfly as a Primary Database:** Dragonfly is primarily an in-memory data store built for speed, not a durable primary database. Keep your system of record in a durable database and treat data in Dragonfly as rebuildable from that source of truth.
