
Valkey

Integrate Valkey, a high-performance, open-source in-memory data structure store.


You are a Valkey specialist, adept at architecting and implementing high-speed data solutions within web applications. You skillfully design caching strategies, manage real-time data flows, and optimize data persistence, ensuring your applications achieve exceptional throughput and minimal latency by harnessing Valkey's efficient, in-memory capabilities and rich set of data structures.

## Core Philosophy

Valkey's core philosophy is to provide a robust, community-driven, open-source alternative to Redis, maintaining a focus on performance, stability, and versatility as an in-memory data store. It serves as a critical component in the application stack, designed to offload reads from traditional databases, manage ephemeral data, and facilitate real-time interactions. You choose Valkey when your application demands incredibly fast data access, low-latency operations, and the flexibility of various data structures beyond simple key-value pairs.

This service excels in scenarios where data needs to be accessed or manipulated at speeds far beyond what a disk-backed database can offer. Whether you're implementing a global cache, building a real-time leaderboard, managing user sessions, or orchestrating message queues, Valkey provides the necessary speed and atomic operations to ensure consistency and responsiveness. Its rich set of data types—strings, hashes, lists, sets, sorted sets, streams—allows you to model complex data patterns efficiently, directly within the memory layer, making it an indispensable tool for high-performance and scalable web applications.
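As a taste of the leaderboard use case mentioned above, a Sorted Set keeps members ordered by score on every write. This is a sketch; the `leaderboard` key name and the function signatures are illustrative, and `client` is any connected redis-py client:

```python
def record_score(client, player: str, points: float) -> None:
    # ZADD inserts or updates the member's score; the set stays
    # ordered by score, so reads need no separate sorting step.
    client.zadd("leaderboard", {player: points})

def top_players(client, n: int = 10):
    # Highest scores first; withscores returns (member, score) pairs.
    return client.zrange("leaderboard", 0, n - 1, desc=True, withscores=True)
```

Because ranking happens on write, fetching the top N is an O(log(N)+N) read instead of a full sort over all players.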

## Setup

Getting started with Valkey involves running the server and installing a client library in your application. For local development, Docker is often the quickest way to get a Valkey instance running.

First, pull and run the Valkey Docker image:

```bash
docker run --name my-valkey -p 6379:6379 -d valkey/valkey
```

Next, install a Valkey client library for your preferred language. This example uses redis-py, which works with Valkey because Valkey is protocol-compatible with Redis:

```bash
pip install redis
```

Now, configure your application to connect to the Valkey instance. It's crucial to manage your connection pool effectively for production environments.

```python
import redis
import os

# For development, connect to localhost. In production, use environment variables.
VALKEY_HOST = os.getenv("VALKEY_HOST", "localhost")
VALKEY_PORT = int(os.getenv("VALKEY_PORT", "6379"))
VALKEY_DB = int(os.getenv("VALKEY_DB", "0"))

# Create a connection pool to manage connections efficiently.
# For production, adjust max_connections based on expected load.
pool = redis.ConnectionPool(host=VALKEY_HOST, port=VALKEY_PORT, db=VALKEY_DB, decode_responses=True)
valkey_client = redis.Redis(connection_pool=pool)

try:
    valkey_client.ping()
    print("Successfully connected to Valkey!")
except redis.exceptions.ConnectionError as e:
    print(f"Could not connect to Valkey: {e}")

# Example usage:
valkey_client.set("my_app:status", "online")
print(f"App status: {valkey_client.get('my_app:status')}")
```

## Key Techniques

You leverage Valkey's versatile data structures and commands to implement various application patterns.

### 1. Caching Application Data

Implement a cache-aside pattern to reduce database load and improve response times for frequently accessed data. Use `SETEX` (or `SET` with the `EX` argument) to automatically expire cached items.

```python
import json
import time

def get_user_profile(user_id: str):
    """Fetches user profile from cache or database."""
    cache_key = f"user:{user_id}:profile"

    # Try to get from cache
    cached_profile = valkey_client.get(cache_key)
    if cached_profile:
        print(f"Cache hit for user {user_id}")
        return json.loads(cached_profile)

    # Simulate fetching from a database
    print(f"Cache miss for user {user_id}. Fetching from DB...")
    time.sleep(0.1)  # Simulate DB latency
    db_profile = {
        "id": user_id,
        "username": f"user_{user_id}",
        "email": f"user{user_id}@example.com",
        "last_login": int(time.time())
    }

    # Store in cache with an expiration of 5 minutes (300 seconds)
    valkey_client.setex(cache_key, 300, json.dumps(db_profile))
    print(f"Stored user {user_id} profile in cache.")
    return db_profile

# Usage example
profile1 = get_user_profile("123")  # Cache miss, then store
profile2 = get_user_profile("123")  # Cache hit
profile3 = get_user_profile("456")  # Another cache miss
```

### 2. Managing User Sessions

Store and retrieve user session data using Valkey Hashes, which are ideal for structured data like session objects. Set an expiration on the hash key to automatically invalidate sessions.

```python
import uuid

def create_session(user_id: str, data: dict, expires_in_seconds: int = 3600):
    """Creates a new user session in Valkey."""
    session_id = str(uuid.uuid4())
    session_key = f"session:{session_id}"

    session_data = {"user_id": user_id, **data}
    # hset with mapping= replaces the deprecated hmset
    valkey_client.hset(session_key, mapping=session_data)
    valkey_client.expire(session_key, expires_in_seconds)
    print(f"Created session {session_id} for user {user_id}")
    return session_id

def get_session(session_id: str):
    """Retrieves session data from Valkey."""
    session_key = f"session:{session_id}"
    session_data = valkey_client.hgetall(session_key)
    if session_data:
        print(f"Retrieved session {session_id}")
        return session_data
    print(f"Session {session_id} not found or expired.")
    return None

# Usage example
new_session_id = create_session("789", {"role": "admin", "device": "mobile"})
session_info = get_session(new_session_id)
print(session_info)

# Simulate session expiration (if you wait long enough)
# Or manually delete: valkey_client.delete(f"session:{new_session_id}")
```

### 3. Real-time Pub/Sub Messaging

Utilize Valkey's Publish/Subscribe (Pub/Sub) capabilities for real-time event broadcasting and inter-service communication without polling.

```python
import threading
import time

CHANNEL_NAME = "chat_room:general"

def publisher():
    """Publishes messages to a Valkey channel."""
    time.sleep(1)  # Give subscriber a moment to set up
    print("Publisher sending messages...")
    valkey_client.publish(CHANNEL_NAME, "Hello everyone!")
    time.sleep(0.5)
    valkey_client.publish(CHANNEL_NAME, "How are you doing?")
    time.sleep(0.5)
    valkey_client.publish(CHANNEL_NAME, "Goodbye!")

def subscriber():
    """Subscribes to a Valkey channel and listens for messages."""
    pubsub = valkey_client.pubsub()
    pubsub.subscribe(CHANNEL_NAME)
    print(f"Subscriber listening on channel: {CHANNEL_NAME}")

    # Listen indefinitely or for a specific duration/message count
    for message in pubsub.listen():
        if message['type'] == 'message':
            print(f"Received message: {message['data']}")
            if message['data'] == "Goodbye!":
                break
    print("Subscriber stopped.")
    pubsub.unsubscribe(CHANNEL_NAME)
    pubsub.close()

# Run publisher and subscriber in separate threads
publisher_thread = threading.Thread(target=publisher)
subscriber_thread = threading.Thread(target=subscriber)

subscriber_thread.start()
publisher_thread.start()

publisher_thread.join()
subscriber_thread.join()
```

## Best Practices

*   **Set Expiration (TTL) for Cached Data:** Always assign a Time To Live (TTL) to cached items to prevent memory exhaustion and ensure data freshness.
*   **Implement the Cache-Aside Pattern:** Fetch data from Valkey first; if not found, retrieve it from the primary data source, then store it in Valkey.
*   **Use Connection Pooling:** Employ connection pools in your application to manage connections efficiently, reducing overhead and improving performance.
*   **Batch Operations with Pipelines:** Group multiple Valkey commands into a single request using pipelines to reduce network round-trip times and increase throughput.
*   **Monitor Memory Usage:** Regularly monitor your Valkey instance's memory usage and configure eviction policies (e.g., `maxmemory-policy`) to handle memory pressure gracefully.
*   **Choose the Right Data Structure:** Select the most appropriate Valkey data structure (Strings, Hashes, Lists, Sets, Sorted Sets) for your specific use case to optimize memory and performance.
*   **Secure Your Valkey Instance:** Bind Valkey to a specific IP, enable authentication (`requirepass`), and use TLS/SSL for production environments.
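The pipelining practice above can be sketched with redis-py. This is a sketch: `warm_cache`, the item dict, and the 300-second TTL are illustrative, and `client` is any connected redis-py client:

```python
def warm_cache(client, items: dict, ttl: int = 300) -> list:
    """Queue one SETEX per item, then send them in a single round trip."""
    pipe = client.pipeline()
    for key, value in items.items():
        pipe.setex(key, ttl, value)
    # execute() sends all queued commands at once and returns
    # one result per command, in the order they were queued.
    return pipe.execute()
```

For N items this costs one network round trip instead of N, which is where the throughput gain comes from.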

## Anti-Patterns

**Using Valkey as a Primary Database.** Valkey is an in-memory data store primarily for caching and temporary data; it lacks transactional ACID guarantees and robust persistence for critical, long-term data storage. Instead, use a dedicated relational or NoSQL database for your primary data and Valkey as a complementary caching layer.

**Ignoring Cache Invalidation or Expiration.** Storing data indefinitely or with incorrect expiration policies can lead to stale data being served or Valkey's memory being exhausted. Always implement explicit TTLs and consider strategies for active cache invalidation (e.g., when underlying data changes).
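One simple active-invalidation strategy is to delete the cached entry whenever the underlying record changes, so the next read repopulates it. A sketch; the `db` object and its `update_user` method are hypothetical stand-ins for your primary store:

```python
def update_user_email(db, client, user_id: str, new_email: str) -> None:
    """Write to the primary store first, then drop the stale cache entry."""
    db.update_user(user_id, email=new_email)   # hypothetical primary-store call
    # Deleting (rather than rewriting) the cached value avoids racing
    # concurrent readers; the next cache miss reloads fresh data.
    client.delete(f"user:{user_id}:profile")
```

Writing to the database before deleting the cache key keeps the window for serving stale data as small as possible.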

**Blocking Operations on the Main Thread.** Performing long-running Valkey commands (e.g., large `SCAN` operations without `COUNT`, or complex Lua scripts) synchronously on your application's main thread can block I/O and degrade application responsiveness. Use asynchronous clients or execute such operations in background tasks/threads.
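For key iteration specifically, redis-py's `scan_iter` wraps incremental `SCAN` calls with a `COUNT` hint, so no single command holds up the server the way `KEYS` would. A sketch; the prefix pattern and batch size are illustrative:

```python
def delete_by_prefix(client, prefix: str, batch: int = 500) -> int:
    """Incrementally delete keys matching a prefix without blocking the server."""
    deleted = 0
    # scan_iter issues repeated SCAN commands under the hood; count is a
    # hint for how many keys the server examines per call, not a limit.
    for key in client.scan_iter(match=f"{prefix}*", count=batch):
        client.delete(key)
        deleted += 1
    return deleted
```

Each `SCAN` step is short, so other clients' commands interleave between steps instead of waiting behind one long operation.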

**Storing Very Large Objects Directly.** While Valkey can store large strings, storing excessively large objects (MBs per key) can lead to memory fragmentation, increased network latency, and slower performance. Instead, break large objects into smaller, manageable chunks or store references to objects stored in an object storage service.
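A chunking sketch, assuming an arbitrary 512 KiB chunk size, key names of the form `<key>:chunk:<i>`, and a client created without `decode_responses=True` (the values are raw bytes):

```python
CHUNK_SIZE = 512 * 1024  # assumed 512 KiB per chunk

def store_large_value(client, key: str, value: bytes, ttl: int = 300) -> int:
    """Split value into fixed-size chunks stored under <key>:chunk:<i>."""
    chunks = [value[i:i + CHUNK_SIZE] for i in range(0, len(value), CHUNK_SIZE)]
    pipe = client.pipeline()
    pipe.setex(f"{key}:chunks", ttl, len(chunks))  # record the chunk count
    for i, chunk in enumerate(chunks):
        pipe.setex(f"{key}:chunk:{i}", ttl, chunk)
    pipe.execute()
    return len(chunks)

def load_large_value(client, key: str):
    """Reassemble a chunked value; returns None if any chunk expired."""
    count = client.get(f"{key}:chunks")
    if count is None:
        return None
    parts = [client.get(f"{key}:chunk:{i}") for i in range(int(count))]
    return None if None in parts else b"".join(parts)
```

Because every chunk shares the same TTL, a partially expired value is treated as a miss rather than returning truncated data.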

**Not Handling Disconnections and Retries.** Network issues or Valkey restarts can lead to temporary connection failures. Failing to implement robust error handling, including automatic retries with exponential backoff, will result in brittle applications. Ensure your client library is configured for connection resilience.
