
# Trigger.dev

"Trigger.dev: background jobs for Next.js/Node, long-running tasks, integrations, retry, cron, dashboard, v3 SDK"

## Quick Summary
Trigger.dev v3 provides long-running background jobs for serverless and Node.js environments. Unlike traditional serverless functions that time out after seconds, Trigger.dev tasks can run for minutes or hours. The v3 SDK embraces a "just write async functions" approach — you define tasks as plain TypeScript functions, and Trigger.dev handles execution, retries, scheduling, and observability. Tasks run on dedicated infrastructure, so you get the simplicity of serverless with the power of long-running processes.

## Key Points

- **Use `logger` from the SDK** instead of console.log. It integrates with the Trigger.dev dashboard for structured, searchable logs.
- **Set appropriate machine presets** for resource-intensive tasks. Default is fine for I/O work; use larger presets for CPU/memory-bound tasks.
- **Use batchTriggerAndWait** for fan-out patterns instead of triggering tasks in a loop. It is more efficient and provides aggregated results.
- **Keep payloads small and serializable.** Pass IDs and look up data inside the task. Payloads are stored and must be JSON-compatible.
- **Use the dashboard** to monitor runs, inspect logs, and debug failures. Every run has a detailed trace view.
- **Separate concerns into distinct tasks.** Compose complex workflows by having tasks trigger subtasks rather than putting everything in one function.
- **Test locally with `trigger.dev dev`** before deploying. The dev CLI connects your local code to the Trigger.dev platform for real execution.
- **Use `schedules.task`** for cron jobs instead of external cron services. The schedule is managed alongside your code.
- **Avoid passing large blobs in payloads.** Trigger.dev serializes and stores payloads; sending megabytes of data causes slowdowns. Upload to storage and pass a reference.
- **Handle idempotency.** Tasks can retry. If your task charges a credit card, use idempotency keys to prevent double-charging.
- **Avoid `setTimeout` for delays.** Use `wait.for({ seconds: 30 })` from the SDK for durable waits that survive process restarts.
- **Tune the retry configuration.** Default retries may not suit every task. Adjust `maxAttempts` and backoff for your use case.
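
The "keep payloads small" guidance can be enforced with a tiny guard before triggering. This is a sketch; the 256 KB threshold is an illustrative assumption, not a documented SDK limit:

```typescript
// Sketch: reject oversized payloads before triggering a task.
// The 256 KB default is illustrative, not an SDK-enforced limit.
function assertSmallPayload(payload: unknown, maxBytes = 256 * 1024): void {
  const size = Buffer.byteLength(JSON.stringify(payload), "utf8");
  if (size > maxBytes) {
    throw new Error(
      `Payload is ${size} bytes; upload large data to storage and pass a reference instead`
    );
  }
}
```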

## Quick Example

```bash
# Start the dev server to connect local tasks to Trigger.dev
npx trigger.dev@latest dev

# Deploy tasks to production
npx trigger.dev@latest deploy
```

## Core Philosophy


The key differentiator is that tasks execute on Trigger.dev's managed workers (or self-hosted), removing the serverless timeout constraint entirely. This makes it ideal for AI pipelines, data processing, file generation, and any work that exceeds typical edge/serverless limits.

## Setup

### Installation and Project Init

```typescript
// Initialize Trigger.dev in your project
// npx trigger.dev@latest init

// trigger.config.ts (project root)
import { defineConfig } from "@trigger.dev/sdk/v3";

export default defineConfig({
  project: "proj_your_project_id",
  runtime: "node",
  logLevel: "log",
  retries: {
    enabledInDev: false,
    default: {
      maxAttempts: 3,
      minTimeoutInMs: 1000,
      maxTimeoutInMs: 30000,
      factor: 2,
    },
  },
  dirs: ["./src/trigger"],
});
```
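
With `factor: 2`, the delay between attempts roughly doubles from `minTimeoutInMs` up to the `maxTimeoutInMs` cap. A plain-TypeScript sketch of that schedule (illustrative only; the SDK computes its own backoff, possibly with jitter):

```typescript
// Exponential backoff implied by the retry settings above:
// delay = min(minTimeoutInMs * factor^(attempt - 1), maxTimeoutInMs)
function backoffDelayMs(
  attempt: number,
  min = 1000,
  max = 30000,
  factor = 2
): number {
  return Math.min(min * Math.pow(factor, attempt - 1), max);
}
```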

### Environment Configuration

```bash
# .env.local
TRIGGER_SECRET_KEY=tr_dev_xxx  # development key
# The production key is set in the deployment environment

# Set environment variables in the Trigger.dev dashboard
# for secrets like API keys that tasks need at runtime
```

### Development Workflow

```bash
# Start the dev server to connect local tasks to Trigger.dev
npx trigger.dev@latest dev

# Deploy tasks to production
npx trigger.dev@latest deploy
```

## Key Techniques

### Defining Tasks

```typescript
// src/trigger/process-upload.ts
import { task, logger } from "@trigger.dev/sdk/v3";
import {
  S3Client,
  GetObjectCommand,
  PutObjectCommand,
} from "@aws-sdk/client-s3";
import sharp from "sharp";
import { db } from "@/lib/db"; // your database client (path illustrative)

export const processUpload = task({
  id: "process-upload",
  retry: { maxAttempts: 3 },
  run: async (payload: { fileKey: string; userId: string }) => {
    logger.info("Processing upload", { fileKey: payload.fileKey });

    // Download from S3
    const s3 = new S3Client({ region: "us-east-1" });
    const response = await s3.send(
      new GetObjectCommand({ Bucket: "uploads", Key: payload.fileKey })
    );
    const buffer = Buffer.from(await response.Body!.transformToByteArray());

    // Resize image — this can take a long time for large files
    const resized = await sharp(buffer)
      .resize(1200, 800, { fit: "inside" })
      .webp({ quality: 80 })
      .toBuffer();

    // Upload processed version
    await s3.send(
      new PutObjectCommand({
        Bucket: "processed",
        Key: `${payload.fileKey}.webp`,
        Body: resized,
      })
    );

    // Update database
    await db.upload.update({
      where: { fileKey: payload.fileKey },
      data: { status: "processed", processedKey: `${payload.fileKey}.webp` },
    });

    logger.info("Upload processed successfully");
    return { processedKey: `${payload.fileKey}.webp` };
  },
});
```

### Triggering Tasks from Your App

```typescript
// app/api/uploads/route.ts
import { tasks } from "@trigger.dev/sdk/v3";
import type { processUpload } from "@/src/trigger/process-upload";

export async function POST(req: Request) {
  const { fileKey, userId } = await req.json();

  // Trigger and forget — returns immediately
  const handle = await tasks.trigger<typeof processUpload>("process-upload", {
    fileKey,
    userId,
  });

  return Response.json({ runId: handle.id, status: "queued" });
}

// Alternative: trigger and wait for the result (blocks the request)
export async function PUT(req: Request) {
  const { fileKey, userId } = await req.json();

  const result = await tasks.triggerAndPoll<typeof processUpload>("process-upload", {
    fileKey,
    userId,
  });

  return Response.json({ result: result.output });
}
```

### Scheduled / Cron Tasks

```typescript
// src/trigger/daily-cleanup.ts
import { schedules, logger } from "@trigger.dev/sdk/v3";
import { db } from "@/lib/db"; // your database client (path illustrative)

export const dailyCleanup = schedules.task({
  id: "daily-cleanup",
  cron: "0 3 * * *", // 3 AM UTC daily
  run: async (payload) => {
    logger.info("Running daily cleanup", {
      scheduledTime: payload.timestamp,
    });

    // Delete expired sessions
    const deletedSessions = await db.session.deleteMany({
      where: { expiresAt: { lt: new Date() } },
    });

    // Clean up orphaned uploads older than 7 days
    const cutoff = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000);
    const orphanedUploads = await db.upload.deleteMany({
      where: { status: "pending", createdAt: { lt: cutoff } },
    });

    logger.info("Cleanup complete", {
      deletedSessions: deletedSessions.count,
      orphanedUploads: orphanedUploads.count,
    });

    return {
      deletedSessions: deletedSessions.count,
      orphanedUploads: orphanedUploads.count,
    };
  },
});
```

### Subtasks and Batch Processing

```typescript
// src/trigger/generate-reports.ts
import { task, logger } from "@trigger.dev/sdk/v3";
import { db } from "@/lib/db"; // your database client (path illustrative)
// renderReportPdf and storage are assumed app-level helpers

export const generateReport = task({
  id: "generate-report",
  run: async (payload: { orgId: string; month: string }) => {
    const data = await db.analytics.findMany({
      where: { orgId: payload.orgId, month: payload.month },
    });
    const pdf = await renderReportPdf(data);
    await storage.upload(`reports/${payload.orgId}/${payload.month}.pdf`, pdf);
    return { url: `reports/${payload.orgId}/${payload.month}.pdf` };
  },
});

export const generateAllReports = task({
  id: "generate-all-reports",
  run: async (payload: { month: string }) => {
    const orgs = await db.org.findMany({ where: { active: true } });

    // Trigger subtasks in batch — they run in parallel on separate workers
    const results = await generateReport.batchTriggerAndWait(
      orgs.map((org) => ({
        payload: { orgId: org.id, month: payload.month },
      }))
    );

    logger.info(`Generated ${results.runs.length} reports`);

    const successful = results.runs.filter((r) => r.ok);
    const failed = results.runs.filter((r) => !r.ok);

    if (failed.length > 0) {
      logger.error(`${failed.length} reports failed`);
    }

    return { total: orgs.length, successful: successful.length, failed: failed.length };
  },
});
```

### Long-Running AI Pipeline

```typescript
// src/trigger/ai-pipeline.ts
import { task, logger } from "@trigger.dev/sdk/v3";
import { db } from "@/lib/db"; // your database client (path illustrative)
// extractTextFromPdf, splitIntoChunks, openai, and vectorDb are assumed
// app-level helpers and clients

export const aiContentPipeline = task({
  id: "ai-content-pipeline",
  retry: { maxAttempts: 2 },
  machine: { preset: "medium-2x" }, // More CPU/memory for AI work
  run: async (payload: { documentId: string }) => {
    const doc = await db.document.findUniqueOrThrow({
      where: { id: payload.documentId },
    });

    // Step 1: Extract text (may take minutes for large PDFs)
    logger.info("Extracting text");
    const text = await extractTextFromPdf(doc.fileUrl);

    // Step 2: Generate embeddings
    logger.info("Generating embeddings");
    const chunks = splitIntoChunks(text, 512);
    const embeddings = await openai.embeddings.create({
      model: "text-embedding-3-small",
      input: chunks,
    });

    // Step 3: Store in vector database
    logger.info("Storing embeddings");
    await vectorDb.upsert({
      namespace: doc.id,
      vectors: embeddings.data.map((e, i) => ({
        id: `${doc.id}-${i}`,
        values: e.embedding,
        metadata: { text: chunks[i], documentId: doc.id },
      })),
    });

    // Step 4: Generate summary
    logger.info("Generating summary");
    const summary = await openai.chat.completions.create({
      model: "gpt-4o",
      messages: [{ role: "user", content: `Summarize:\n\n${text.slice(0, 8000)}` }],
    });

    await db.document.update({
      where: { id: doc.id },
      data: {
        status: "processed",
        summary: summary.choices[0].message.content,
        chunkCount: chunks.length,
      },
    });

    return { documentId: doc.id, chunks: chunks.length };
  },
});
```
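
The pipeline above assumes a `splitIntoChunks` helper. A minimal sketch, using naive character-count splitting (real chunking would respect token budgets and sentence boundaries):

```typescript
// Split text into fixed-size chunks by character count.
// Naive version of the helper assumed in the pipeline above.
function splitIntoChunks(text: string, chunkSize: number): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += chunkSize) {
    chunks.push(text.slice(i, i + chunkSize));
  }
  return chunks;
}
```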

## Best Practices

- **Use `logger` from the SDK** instead of `console.log`. It integrates with the Trigger.dev dashboard for structured, searchable logs.
- **Set appropriate machine presets** for resource-intensive tasks. The default is fine for I/O work; use larger presets for CPU/memory-bound tasks.
- **Use `batchTriggerAndWait`** for fan-out patterns instead of triggering tasks in a loop. It is more efficient and provides aggregated results.
- **Keep payloads small and serializable.** Pass IDs and look up data inside the task. Payloads are stored and must be JSON-compatible.
- **Use the dashboard** to monitor runs, inspect logs, and debug failures. Every run has a detailed trace view.
- **Separate concerns into distinct tasks.** Compose complex workflows by having tasks trigger subtasks rather than putting everything in one function.
- **Test locally with `trigger.dev dev`** before deploying. The dev CLI connects your local code to the Trigger.dev platform for real execution.
- **Use `schedules.task`** for cron jobs instead of external cron services. The schedule is managed alongside your code.

## Anti-Patterns

- **Passing large blobs in payloads.** Trigger.dev serializes payloads; sending megabytes of data causes slowdowns. Upload to storage and pass a reference.
- **Not handling idempotency.** Tasks can retry. If your task charges a credit card, use idempotency keys to prevent double-charging.
- **Using `setTimeout` for delays.** Use `wait.for({ seconds: 30 })` from the SDK for durable waits that survive process restarts.
- **Ignoring the retry configuration.** Default retries may not suit every task. Tune `maxAttempts` and backoff for your use case.
- **Running tasks synchronously in API routes.** Use `tasks.trigger` (fire-and-forget) for most cases. Only use `triggerAndPoll` when the caller truly needs to wait.
- **Deploying without testing locally.** The dev CLI catches most issues. Deploying untested tasks wastes time debugging in production.
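
One way to make the idempotency point concrete is a deterministic key: derive it from the operation name and a canonicalized payload, so a retried run reproduces the same key and downstream APIs (e.g. a payment provider) can deduplicate. This hashing scheme is a sketch of the idea, not part of the Trigger.dev SDK:

```typescript
import { createHash } from "node:crypto";

// Sketch: a deterministic idempotency key. Sorting the keys makes the
// result independent of property order, so retries and re-triggers with
// the same payload produce the same key.
function idempotencyKey(
  operation: string,
  payload: Record<string, unknown>
): string {
  const canonical = JSON.stringify(payload, Object.keys(payload).sort());
  return createHash("sha256").update(`${operation}:${canonical}`).digest("hex");
}
```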
