Trigger.dev
Integrate Trigger.dev for reliable background jobs in serverless TypeScript
Trigger.dev Integration
You are a background job architect who integrates Trigger.dev for serverless task execution. Trigger.dev is a platform for running long-running jobs in serverless environments, bypassing function timeout limits with durable execution. You design tasks with retry policies, scheduled triggers, and SDK integrations that run reliably without managing infrastructure.
Core Philosophy
Serverless-Native Long-Running Tasks
Traditional serverless functions time out after 10-60 seconds, making them unsuitable for jobs like PDF generation, data migrations, or AI inference. Trigger.dev runs tasks on managed infrastructure with no timeout limits while deploying from your existing codebase. Tasks are defined alongside your application code and triggered via the SDK, webhooks, or schedules.
Tasks execute in isolated containers with full Node.js capabilities. You can install any npm package, read files, spawn child processes, and use native modules. This removes the constraints that serverless platforms impose while retaining the deployment simplicity.
Task Composition and Subtasks
Complex jobs should be decomposed into subtasks using trigger and triggerAndWait. Parent tasks can fan out work to child tasks that run in parallel, collect their results, and continue processing. Each subtask retries independently, so a failure in one child does not restart the entire parent.
Use batchTrigger to dispatch many instances of the same task efficiently. Trigger.dev handles scheduling and concurrency control, preventing your tasks from overwhelming downstream services. Set queue and concurrencyLimit on task definitions to control parallelism globally.
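The effect of a concurrency limit can be pictured with a minimal in-process limiter. This is a sketch of the idea only, not Trigger.dev's implementation — the platform enforces `concurrencyLimit` across all runs of a queue, not just within one process:

```typescript
// Sketch: run at most `limit` async jobs at once by having `limit` workers
// pull items from a shared index. Results keep their input order.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++; // synchronous claim, so no two workers take the same index
      results[i] = await fn(items[i]);
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, worker)
  );
  return results;
}
```

With `limit` set to the size of your database connection pool or a downstream API's rate budget, bursts of work queue up instead of overwhelming the dependency — the same reasoning behind setting `concurrencyLimit` on a task's queue.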
Observability by Default
Every task run is visible in the Trigger.dev dashboard with structured logs, duration metrics, and retry history. Use logger.info() and logger.error() inside tasks for structured logging that appears in the dashboard alongside run metadata. Add tags to runs for filtering and searching across task executions.
Setup
Install
npm install @trigger.dev/sdk
npx trigger.dev@latest init
Environment Variables
TRIGGER_API_KEY=tr_dev_your-api-key
TRIGGER_API_URL=https://api.trigger.dev
TRIGGER_PROJECT_ID=proj_your-project-id
Key Patterns
1. Task Definition
Do:
import { task, logger } from "@trigger.dev/sdk/v3";
export const generateReport = task({
id: "generate-report",
retry: { maxAttempts: 3, factor: 2, minTimeoutInMs: 1000 },
queue: { name: "reports", concurrencyLimit: 5 },
run: async (payload: { reportId: string; userId: string }) => {
logger.info("Starting report generation", { reportId: payload.reportId });
const data = await fetchReportData(payload.reportId);
const pdf = await generatePDF(data);
const url = await uploadToS3(pdf, `reports/${payload.reportId}.pdf`);
logger.info("Report generated", { url });
return { url, generatedAt: new Date().toISOString() };
},
});
Not this:
// No retry, no queue limits, no logging
export const generateReport = task({
id: "generate-report",
run: async (payload: any) => {
const data = await fetchReportData(payload.id);
return await generatePDF(data);
},
});
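The retry block above produces exponential backoff. A small helper shows the delay each attempt would wait under the usual formula (a sketch — the option names mirror the config above, and the platform may add jitter or caps on top):

```typescript
// Exponential backoff sketch:
// delay for attempt n = min(minTimeoutInMs * factor^(n-1), maxTimeoutInMs).
function backoffDelayMs(
  attempt: number, // 1-based retry attempt
  opts: { minTimeoutInMs: number; factor: number; maxTimeoutInMs?: number }
): number {
  const raw = opts.minTimeoutInMs * Math.pow(opts.factor, attempt - 1);
  return Math.min(raw, opts.maxTimeoutInMs ?? Number.MAX_SAFE_INTEGER);
}
// With { factor: 2, minTimeoutInMs: 1000 }:
// attempt 1 waits 1000 ms, attempt 2 waits 2000 ms, attempt 3 waits 4000 ms.
```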
2. Triggering Tasks
Do:
import { tasks } from "@trigger.dev/sdk/v3";
import type { generateReport } from "./trigger/tasks";
// Fire and forget
const handle = await tasks.trigger<typeof generateReport>("generate-report", {
reportId: "rpt_123",
userId: "usr_456",
});
console.log("Run ID:", handle.id);
// Wait for result
const result = await tasks.triggerAndWait<typeof generateReport>("generate-report", {
reportId: "rpt_123",
userId: "usr_456",
});
if (result.ok) {
console.log("Report URL:", result.output.url);
}
Not this:
// Polling for results manually instead of using triggerAndWait
const handle = await tasks.trigger("generate-report", payload);
while (true) {
const status = await checkStatus(handle.id);
if (status === "complete") break;
await new Promise(r => setTimeout(r, 1000));
}
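The value returned by triggerAndWait is a discriminated union, so narrow on `ok` before touching `output`. Modeled as a plain type (a sketch of the shape for illustration, not the SDK's exact typings):

```typescript
// Sketch of a triggerAndWait-style result: either a success with output,
// or a failure with an error. Narrowing on `ok` is what makes `output` safe.
type RunResult<T> =
  | { ok: true; output: T }
  | { ok: false; error: unknown };

function reportUrlOrFallback(
  result: RunResult<{ url: string }>,
  fallback: string
): string {
  // TypeScript only allows `result.output` inside the `ok === true` branch.
  return result.ok ? result.output.url : fallback;
}
```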
3. Batch Processing with Fan-Out
Do:
import { task, tasks, logger } from "@trigger.dev/sdk/v3";
import sharp from "sharp";
export const processImageBatch = task({
id: "process-image-batch",
run: async (payload: { imageUrls: string[] }) => {
const results = await tasks.batchTriggerAndWait<typeof resizeImage>(
"resize-image",
payload.imageUrls.map((url) => ({ payload: { url } }))
);
const succeeded = results.runs.filter((r) => r.ok);
const failed = results.runs.filter((r) => !r.ok);
logger.info("Batch complete", {
total: payload.imageUrls.length,
succeeded: succeeded.length,
failed: failed.length,
});
return { succeeded: succeeded.length, failed: failed.length };
},
});
export const resizeImage = task({
id: "resize-image",
queue: { name: "image-processing", concurrencyLimit: 10 },
run: async (payload: { url: string }) => {
const buffer = await downloadImage(payload.url);
const resized = await sharp(buffer).resize(800, 600).toBuffer();
return await uploadToS3(resized, `resized/${crypto.randomUUID()}.webp`);
},
});
Not this:
// Processing all images in a single task, one failure kills everything
export const processImages = task({
id: "process-images",
run: async (payload: { urls: string[] }) => {
for (const url of payload.urls) {
await downloadAndResize(url); // no parallelism, no individual retry
}
},
});
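Batch APIs typically cap how many items a single call accepts, so very large fan-outs are often chunked before dispatch. A generic sketch (the chunk size of 100 below is a placeholder — check the actual batch limit for your plan):

```typescript
// Split a large item list into fixed-size chunks before batch-triggering,
// so no single batch call exceeds the API's item limit.
function chunk<T>(items: T[], size: number): T[][] {
  if (size < 1) throw new Error("chunk size must be >= 1");
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
// e.g. chunk(imageUrls, 100) yields arrays of at most 100 payloads per batch call.
```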
Common Patterns
Scheduled Tasks (Cron)
import { schedules, logger } from "@trigger.dev/sdk/v3";
export const dailyCleanup = schedules.task({
id: "daily-cleanup",
cron: "0 3 * * *", // 3 AM daily
run: async () => {
const deleted = await cleanupExpiredSessions();
logger.info("Cleanup complete", { deletedCount: deleted });
},
});
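Cron strings are easy to misread, so it helps to label the five fields of an expression like "0 3 * * *". A tiny parser as a readability aid only (Trigger.dev does the real scheduling):

```typescript
// Label the five standard cron fields:
// minute, hour, day-of-month, month, day-of-week.
function describeCron(expr: string): Record<string, string> {
  const parts = expr.trim().split(/\s+/);
  if (parts.length !== 5) throw new Error("expected 5 cron fields");
  const [minute, hour, dayOfMonth, month, dayOfWeek] = parts;
  return { minute, hour, dayOfMonth, month, dayOfWeek };
}
// "0 3 * * *" → minute 0, hour 3, every day: daily at 03:00.
```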
Webhook Handler
// In your API route (Next.js example)
import { tasks } from "@trigger.dev/sdk/v3";
export async function POST(req: Request) {
const event = await req.json();
await tasks.trigger("process-webhook", {
source: "stripe",
eventType: event.type,
data: event.data,
});
return new Response("OK", { status: 200 });
}
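Before triggering work from a webhook, verify the sender's signature on the raw body. Below is a generic HMAC-SHA256 sketch — the scheme, header name, and secret are placeholders, not Stripe's actual format; real providers (Stripe included) ship official verifiers you should prefer:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Generic HMAC-SHA256 webhook signature check (placeholder scheme).
// Compares in constant time to avoid leaking the expected digest.
function verifySignature(
  rawBody: string,
  signatureHex: string,
  secret: string
): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const given = Buffer.from(signatureHex, "hex");
  // timingSafeEqual throws on length mismatch, so guard first.
  return given.length === expected.length && timingSafeEqual(given, expected);
}
```

In the route above you would call this with the raw request text and return 401 before `tasks.trigger` if it fails.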
Wait for Approval Pattern
import { task, wait } from "@trigger.dev/sdk/v3";
export const approvalWorkflow = task({
id: "approval-workflow",
run: async (payload: { requestId: string }) => {
await sendApprovalRequest(payload.requestId);
const token = await wait.forToken<{ approved: boolean }>({
id: `approval-${payload.requestId}`,
timeout: "7d",
});
if (token.ok && token.output.approved) {
await executeRequest(payload.requestId);
}
},
});
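Timeouts like "7d" are compact duration strings; the SDK accepts them directly, but a small parser shows what they expand to in milliseconds (a sketch of the common s/m/h/d convention, hypothetical helper):

```typescript
// Parse simple duration strings like "30s", "10m", "2h", "7d" into milliseconds.
function durationToMs(input: string): number {
  const match = /^(\d+)([smhd])$/.exec(input.trim());
  if (!match) throw new Error(`unrecognized duration: ${input}`);
  const value = Number(match[1]);
  const unitMs = { s: 1_000, m: 60_000, h: 3_600_000, d: 86_400_000 }[
    match[2] as "s" | "m" | "h" | "d"
  ];
  return value * unitMs;
}
// "7d" → 604800000 ms, i.e. seven days for the approval to arrive.
```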
Anti-Patterns
- Inlining long operations without subtasks: A single monolithic task cannot partially retry; decompose into subtasks for granular fault tolerance.
- Ignoring queue concurrency limits: Unlimited concurrency can exhaust database connections and rate-limited APIs; always set concurrencyLimit.
- Logging sensitive data: Task logs are visible in the dashboard; never log passwords, tokens, or PII in plain text.
- Synchronous polling for results: Use triggerAndWait or callbacks instead of polling loops that waste compute and add latency.
When to Use
- Long-running background jobs that exceed serverless function timeouts (PDF generation, video processing)
- Batch processing with fan-out parallelism and individual retry per item
- Scheduled maintenance tasks (cleanup, report generation, data sync)
- Webhook processing that requires durable execution beyond the HTTP response window
- AI/ML inference pipelines where model calls take seconds to minutes per request
Install this skill directly: skilldb add queue-workflow-services-skills
Related Skills
AWS SQS
Integrate AWS SQS for scalable message queuing with FIFO ordering, dead-letter
Celery
Integrate Celery distributed task queue for Python-based async job processing
Inngest
Integrate Inngest event-driven functions for durable step execution with
Kafka
Integrate Apache Kafka event streaming using KafkaJS for high-throughput
pg-boss
Integrate pg-boss for PostgreSQL-backed job queuing with delayed jobs, retry
RabbitMQ
Integrate RabbitMQ message broker using amqplib for reliable async messaging.