
# Upstash QStash

"Upstash QStash: serverless message queue, HTTP-based, scheduled messages, retries, callbacks, topics, DLQ"

## Quick Summary
QStash is a serverless HTTP-based message queue and task scheduler from Upstash. Unlike traditional message queues that require persistent consumers, QStash delivers messages by calling HTTP endpoints. This makes it uniquely suited to serverless architectures where there are no long-running worker processes. You publish a message to QStash with a destination URL, and QStash reliably delivers it with retries, scheduling, and deduplication.

## Key Points

- **Always verify request signatures** in your receiver endpoints. Without verification, anyone can call your endpoints pretending to be QStash.
- **Use callbacks for workflow orchestration.** The success callback lets you chain jobs without the producer needing to know about downstream steps.
- **Set appropriate retry counts.** Transient failures need 3-5 retries; permanent failures (bad data) should fail fast. Use the failure callback to handle exhausted retries.
- **Use deduplication IDs** for operations triggered by user actions that might fire multiple times (button clicks, webhook retries).
- **Use topics for fan-out** instead of publishing multiple individual messages. A single publish fans out to every endpoint in the topic, each with its own independent retries.
- **Keep payloads under 1MB.** QStash has payload size limits. Store large data elsewhere and pass a reference.
- **Monitor the DLQ regularly.** Dead-lettered messages indicate systemic issues that need attention.
- **Use the `notBefore` parameter** for precise scheduling instead of computing delay durations manually.
- **Avoid skipping signature verification in production.** Your endpoints are public URLs. Without signature verification, they are open to abuse.
- **Avoid using QStash for real-time communication.** QStash has delivery latency measured in seconds. Use WebSockets or server-sent events for real-time needs.
- **Avoid ignoring the dead letter queue.** Failed messages accumulate silently. Set up a failure callback or periodically review the DLQ.
- **Avoid publishing to `localhost` URLs.** QStash calls your endpoints over the public internet. Use a tunneling tool (ngrok) for local development or the QStash local development server.

## Quick Example

```bash
# .env.local
QSTASH_TOKEN=your-qstash-token
QSTASH_CURRENT_SIGNING_KEY=sig_current_xxx
QSTASH_NEXT_SIGNING_KEY=sig_next_xxx
QSTASH_URL=https://qstash.upstash.io  # Optional, defaults to this value
```

# Upstash QStash

## Core Philosophy

QStash is a serverless HTTP-based message queue and task scheduler from Upstash. Unlike traditional message queues that require persistent consumers, QStash delivers messages by calling HTTP endpoints. This makes it uniquely suited to serverless architectures where there are no long-running worker processes. You publish a message to QStash with a destination URL, and QStash reliably delivers it with retries, scheduling, and deduplication.

The mental model is simple: QStash is a managed, durable HTTP caller. You tell it "call this URL with this body, retry up to N times, and if everything fails, notify this callback URL." It handles the rest — persistence, retry backoff, scheduling, and dead letter queues. Since the delivery mechanism is HTTP, your consumers can be Vercel Functions, Cloudflare Workers, AWS Lambda, or any publicly reachable endpoint.
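Concretely, a publish is a single call to QStash's REST API. A rough sketch of what the SDK sends on your behalf (the destination URL, callback URL, and body here are illustrative; header names follow QStash's `Upstash-*` convention):

```http
POST https://qstash.upstash.io/v2/publish/https://example.com/api/jobs/email HTTP/1.1
Authorization: Bearer <QSTASH_TOKEN>
Upstash-Retries: 3
Upstash-Failure-Callback: https://example.com/api/jobs/failed
Content-Type: application/json

{"userId": "u_123"}
```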

## Setup

### Installation and Client Configuration

```typescript
// npm install @upstash/qstash

// lib/qstash/client.ts
import { Client } from "@upstash/qstash";

export const qstash = new Client({
  token: process.env.QSTASH_TOKEN!,
});

// For Next.js receiver verification
// lib/qstash/receiver.ts
import { Receiver } from "@upstash/qstash";

export const qstashReceiver = new Receiver({
  currentSigningKey: process.env.QSTASH_CURRENT_SIGNING_KEY!,
  nextSigningKey: process.env.QSTASH_NEXT_SIGNING_KEY!,
});
```

### Environment Variables

```bash
# .env.local
QSTASH_TOKEN=your-qstash-token
QSTASH_CURRENT_SIGNING_KEY=sig_current_xxx
QSTASH_NEXT_SIGNING_KEY=sig_next_xxx
QSTASH_URL=https://qstash.upstash.io  # Optional, defaults to this value
```

### Next.js Receiver Endpoint

```typescript
// app/api/qstash/process/route.ts
import { qstashReceiver } from "@/lib/qstash/receiver";
import { NextRequest } from "next/server";

export async function POST(req: NextRequest) {
  // Verify the request comes from QStash
  const body = await req.text();
  const signature = req.headers.get("upstash-signature");

  if (!signature) {
    return new Response("Missing signature", { status: 401 });
  }

  const isValid = await qstashReceiver.verify({
    body,
    signature,
    url: `${process.env.NEXT_PUBLIC_APP_URL}/api/qstash/process`,
  });

  if (!isValid) {
    return new Response("Invalid signature", { status: 401 });
  }

  const payload = JSON.parse(body);
  await handleJob(payload); // handleJob: your application-specific job handler

  return new Response("OK", { status: 200 });
}
```

## Key Techniques

### Publishing Messages

```typescript
// app/api/orders/route.ts
import { qstash } from "@/lib/qstash/client";

export async function POST(req: Request) {
  const order = await req.json();
  const savedOrder = await db.order.create({ data: order });

  // Simple publish — QStash calls the endpoint with the payload
  await qstash.publishJSON({
    url: `${process.env.NEXT_PUBLIC_APP_URL}/api/qstash/process-order`,
    body: { orderId: savedOrder.id, action: "process" },
    retries: 3,
  });

  return Response.json({ orderId: savedOrder.id });
}

// Publish with delay
async function scheduleFollowup(userId: string) {
  await qstash.publishJSON({
    url: `${process.env.NEXT_PUBLIC_APP_URL}/api/qstash/send-followup`,
    body: { userId },
    delay: 3 * 24 * 60 * 60, // 3 days in seconds
  });
}

// Publish with specific delivery time
async function scheduleForTime(payload: unknown, deliverAt: Date) {
  await qstash.publishJSON({
    url: `${process.env.NEXT_PUBLIC_APP_URL}/api/qstash/scheduled-task`,
    body: payload,
    notBefore: Math.floor(deliverAt.getTime() / 1000), // Unix timestamp
  });
}
```

### Scheduled / Cron Messages

```typescript
// lib/qstash/setup-schedules.ts
import { qstash } from "./client";

export async function setupSchedules() {
  // Create a recurring schedule
  const schedule = await qstash.schedules.create({
    destination: `${process.env.NEXT_PUBLIC_APP_URL}/api/qstash/daily-digest`,
    cron: "0 9 * * *",          // Daily at 9 AM UTC
    body: JSON.stringify({ type: "daily-digest" }),
    headers: { "Content-Type": "application/json" },
    retries: 3,
  });

  console.log("Schedule created:", schedule.scheduleId);

  // List all schedules
  const schedules = await qstash.schedules.list();
  for (const s of schedules) {
    console.log(`${s.scheduleId}: ${s.cron} -> ${s.destination}`);
  }

  // Delete a schedule
  // await qstash.schedules.delete("sched_xxx");
}
```
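As a sanity check on what `"0 9 * * *"` means, here is a small hypothetical helper (not part of the QStash SDK) that computes the next UTC fire time of a daily `"M H * * *"` expression:

```typescript
// Hypothetical helper for daily "M H * * *" cron expressions only.
// "0 9 * * *" -> minute 0, hour 9, every day (UTC).
export function nextDailyRun(cron: string, from: Date): Date {
  const [minute, hour] = cron.split(" ").map(Number);
  const next = new Date(Date.UTC(
    from.getUTCFullYear(), from.getUTCMonth(), from.getUTCDate(),
    hour, minute, 0, 0,
  ));
  // If today's fire time has already passed, roll over to tomorrow
  if (next <= from) next.setUTCDate(next.getUTCDate() + 1);
  return next;
}
```

Note that QStash evaluates cron expressions in UTC, so "9 AM" here is 9 AM UTC, not local time.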

### Callbacks and Dead Letter Queues

```typescript
// Using callbacks to get notified of success/failure
import { qstash } from "@/lib/qstash/client";

async function publishWithCallback(orderId: string) {
  await qstash.publishJSON({
    url: `${process.env.NEXT_PUBLIC_APP_URL}/api/qstash/process-order`,
    body: { orderId },
    retries: 5,
    // Called after successful delivery
    callback: `${process.env.NEXT_PUBLIC_APP_URL}/api/qstash/callbacks/success`,
    // Called after all retries exhausted
    failureCallback: `${process.env.NEXT_PUBLIC_APP_URL}/api/qstash/callbacks/failure`,
  });
}

// app/api/qstash/callbacks/success/route.ts
export async function POST(req: Request) {
  const result = await req.json();
  // The callback payload references the original message via sourceMessageId,
  // and carries the original endpoint's response body base64-encoded
  const responseBody = Buffer.from(result.body ?? "", "base64").toString();
  console.log("Job succeeded:", result.sourceMessageId);

  await db.jobLog.create({
    data: {
      messageId: result.sourceMessageId,
      status: "completed",
      response: responseBody,
    },
  });

  return new Response("OK");
}

// app/api/qstash/callbacks/failure/route.ts
export async function POST(req: Request) {
  const result = await req.json();
  console.error("Job permanently failed:", result.sourceMessageId);

  await db.jobLog.create({
    data: {
      messageId: result.sourceMessageId,
      status: "dead-letter",
      error: Buffer.from(result.body ?? "", "base64").toString(),
    },
  });

  // Alert the team
  await slack.postMessage({
    channel: "#alerts",
    text: `QStash job failed permanently: ${result.sourceMessageId}`,
  });

  return new Response("OK");
}
```

### Topics for Fan-Out

```typescript
// Topics (called "URL Groups" in newer QStash versions) deliver a single
// message to multiple endpoints
import { qstash } from "@/lib/qstash/client";

async function setupTopics() {
  // Create a topic with multiple endpoints
  await qstash.topics.addEndpoints({
    name: "order-events",
    endpoints: [
      { url: `${process.env.NEXT_PUBLIC_APP_URL}/api/qstash/update-inventory` },
      { url: `${process.env.NEXT_PUBLIC_APP_URL}/api/qstash/send-notification` },
      { url: `${process.env.NEXT_PUBLIC_APP_URL}/api/qstash/update-analytics` },
      { url: "https://partner-api.example.com/webhook" },
    ],
  });
}

// Publish to topic — every endpoint receives the message,
// each with its own independent retries
async function publishOrderEvent(orderId: string, event: string) {
  await qstash.publishJSON({
    topic: "order-events",
    body: { orderId, event, timestamp: Date.now() },
    retries: 3,
  });
}
```

### Deduplication

```typescript
// Prevent duplicate message processing
import { qstash } from "@/lib/qstash/client";

async function syncUser(userId: string) {
  await qstash.publishJSON({
    url: `${process.env.NEXT_PUBLIC_APP_URL}/api/qstash/sync-user`,
    body: { userId },
    deduplicationId: `sync-user-${userId}`,
    // QStash deduplicates within a rolling window
    // If a message with the same ID was sent recently, this one is dropped
  });
}
```
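When the same logical payload can arrive through several code paths, a deterministic ID derived from the content works well. A minimal sketch using Node's crypto module (the `dedupId` helper and its prefix scheme are illustrative, not part of the SDK):

```typescript
import { createHash } from "node:crypto";

// Hypothetical helper: derive a stable deduplication ID from the payload
// itself, so identical publishes collapse into one delivery.
export function dedupId(prefix: string, payload: unknown): string {
  const digest = createHash("sha256")
    .update(JSON.stringify(payload))
    .digest("hex")
    .slice(0, 32); // 32 hex chars is plenty to avoid collisions here
  return `${prefix}-${digest}`;
}
```

QStash can also do this server-side: its content-based deduplication option derives the ID from the message body for you.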

### Offloading Long-Running AI Tasks

```typescript
// Offload long AI tasks from serverless function timeout limits
// app/api/generate/route.ts
import { qstash } from "@/lib/qstash/client";

export async function POST(req: Request) {
  const { prompt, documentId } = await req.json();

  // Immediately queue the work — don't block the HTTP response
  await qstash.publishJSON({
    url: `${process.env.NEXT_PUBLIC_APP_URL}/api/qstash/ai-generate`,
    body: { prompt, documentId },
    retries: 2,
    callback: `${process.env.NEXT_PUBLIC_APP_URL}/api/qstash/ai-complete`,
  });

  return Response.json({ status: "queued", documentId });
}

// app/api/qstash/ai-generate/route.ts
export async function POST(req: Request) {
  const { prompt, documentId } = await req.json();

  // openai: a pre-configured OpenAI client instance
  const result = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: prompt }],
  });

  await db.document.update({
    where: { id: documentId },
    data: {
      content: result.choices[0].message.content,
      status: "generated",
    },
  });

  return Response.json({ documentId, status: "generated" });
}
```

### Managing the Dead Letter Queue

```typescript
// lib/qstash/dlq.ts
import { qstash } from "./client";

export async function reviewDeadLetterQueue() {
  const dlqMessages = await qstash.dlq.listMessages();

  for (const msg of dlqMessages.messages) {
    console.log({
      messageId: msg.messageId,
      url: msg.url,
      responseStatus: msg.responseStatus,
      responseBody: msg.responseBody,
      maxRetries: msg.maxRetries,
    });
  }

  return dlqMessages;
}

// Clear a handled DLQ entry. After fixing the underlying issue,
// re-publish the saved body to its original URL, then delete the entry.
export async function retryDlqMessage(dlqId: string) {
  await qstash.dlq.delete(dlqId);
}
```
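A re-publish sweep over the DLQ can be sketched as follows. The `DlqClient` interface is a hand-rolled stand-in for the subset of the QStash client used here, so it can be stubbed in tests; check the field names against the SDK before relying on this:

```typescript
// Hypothetical DLQ sweep: re-publish every dead-lettered message to its
// original URL, then remove it from the DLQ.
interface DlqMessage { dlqId: string; url: string; body: string; }
interface DlqClient {
  dlq: {
    listMessages(): Promise<{ messages: DlqMessage[] }>;
    delete(id: string): Promise<void>;
  };
  publish(opts: { url: string; body: string }): Promise<unknown>;
}

export async function requeueDeadLetters(client: DlqClient): Promise<number> {
  const { messages } = await client.dlq.listMessages();
  for (const msg of messages) {
    await client.publish({ url: msg.url, body: msg.body }); // re-deliver
    await client.dlq.delete(msg.dlqId);                     // then clear the entry
  }
  return messages.length;
}
```

Only delete the DLQ entry after the re-publish succeeds, so a crash mid-sweep cannot lose messages.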

## Best Practices

- Always verify request signatures in your receiver endpoints. Without verification, anyone can call your endpoints pretending to be QStash.
- Use callbacks for workflow orchestration. The success callback lets you chain jobs without the producer needing to know about downstream steps.
- Set appropriate retry counts. Transient failures need 3-5 retries; permanent failures (bad data) should fail fast. Use the failure callback to handle exhausted retries.
- Use deduplication IDs for operations triggered by user actions that might fire multiple times (button clicks, webhook retries).
- Use topics for fan-out instead of publishing multiple individual messages. A single publish fans out to every endpoint in the topic, each with its own independent retries.
- Keep payloads under 1MB. QStash has payload size limits. Store large data elsewhere and pass a reference.
- Monitor the DLQ regularly. Dead-lettered messages indicate systemic issues that need attention.
- Use the `notBefore` parameter for precise scheduling instead of computing delay durations manually.
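The "store large data elsewhere and pass a reference" rule can be sketched like this; the Map stands in for your real database or object store, and the key scheme is made up for illustration:

```typescript
// In-memory stand-in for a database or object store (illustrative only)
export const blobStore = new Map<string, string>();

// Persist the large blob and return a small key for it
export function storePayload(data: string): string {
  const key = `payload-${blobStore.size + 1}`; // hypothetical key scheme
  blobStore.set(key, data);
  return key;
}

// The QStash message body carries only the small reference,
// staying well under the payload size limit regardless of blob size
export function messageBodyFor(data: string): { payloadKey: string } {
  return { payloadKey: storePayload(data) };
}
```

The receiver then loads the blob by key before processing, so the queued message stays tiny no matter how large the underlying data grows.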

## Anti-Patterns

- Not verifying signatures in production. Your endpoints are public URLs. Without signature verification, they are open to abuse.
- Using QStash for real-time communication. QStash has delivery latency measured in seconds. Use WebSockets or server-sent events for real-time needs.
- Ignoring the dead letter queue. Failed messages accumulate silently. Set up a failure callback or periodically review the DLQ.
- Publishing to `localhost` URLs. QStash calls your endpoints over the public internet. Use tunneling tools (ngrok) for local development or use the QStash local development server.
- Relying on message ordering. QStash does not guarantee ordered delivery. If order matters, include a sequence number and handle reordering in your consumer.
- Embedding secrets in message bodies. Messages are stored on QStash servers. Pass references to secrets stored in your own database or secret manager.
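For the ordering point, a consumer-side reorder buffer over an explicit sequence number might look like this (illustrative, not an SDK feature):

```typescript
// Buffers out-of-order messages and releases them strictly in
// sequence order, starting from sequence number 0.
export class SequenceBuffer<T> {
  private pending = new Map<number, T>();
  private nextSeq = 0;

  // Accept a message; return every message now deliverable in order.
  push(seq: number, msg: T): T[] {
    this.pending.set(seq, msg);
    const ready: T[] = [];
    while (this.pending.has(this.nextSeq)) {
      ready.push(this.pending.get(this.nextSeq)!);
      this.pending.delete(this.nextSeq);
      this.nextSeq++;
    }
    return ready;
  }
}
```

In a real consumer, `pending` would need to live in shared storage (e.g. your database) rather than in memory, since serverless instances do not share state.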
