Graphile Worker
"Graphile Worker: PostgreSQL-backed job queue, no Redis needed, cron jobs, batch jobs, task isolation, migrations, LISTEN/NOTIFY, Node.js"
Core Philosophy
Graphile Worker is a high-performance job queue for Node.js that uses PostgreSQL as its sole backing store. It leverages PostgreSQL's LISTEN/NOTIFY for near-instant job delivery and SKIP LOCKED for reliable, concurrent job fetching — no Redis or additional infrastructure required. If you already run PostgreSQL, Graphile Worker adds background jobs with zero extra dependencies.
Jobs are stored in the same database as your application data, which means you can enqueue jobs inside the same transaction as your business logic. This transactional enqueuing guarantees that jobs are only created when the transaction commits, eliminating a whole class of consistency bugs common with external queue systems.
Setup
Installation
npm install graphile-worker
# Graphile Worker manages its own schema (graphile_worker).
# Run migrations against your database:
npx graphile-worker --connection postgres://localhost/mydb --schema-only
Configuration File
// graphile.config.js (or .ts with ts-node)
/** @type {import("graphile-worker").RunnerOptions} */
module.exports = {
  connectionString: process.env.DATABASE_URL,
  concurrency: 10,
  pollInterval: 1000, // Fallback poll (LISTEN/NOTIFY handles most delivery)
  noHandleSignals: false, // Graceful shutdown on SIGTERM/SIGINT
  taskDirectory: `${__dirname}/tasks`,
};
Defining Tasks as Files
// tasks/send-email.ts
// Filename becomes the task identifier: "send-email"
import type { Task } from "graphile-worker";
import { sendEmail } from "../lib/email";
const task: Task = async (payload, helpers) => {
  const { to, subject, body } = payload as {
    to: string;
    subject: string;
    body: string;
  };
  helpers.logger.info(`Sending email to ${to}`);
  await sendEmail(to, subject, body);
};
export default task;
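The `payload as {...}` cast above is a compile-time assertion only; a malformed payload would still reach `sendEmail`. A minimal runtime guard (a hypothetical helper, not part of Graphile Worker) might look like:

```typescript
// Hypothetical runtime guard for the send-email payload.
// Graphile Worker delivers payloads as untyped JSON, so validate
// before acting; throwing makes the job retry (up to max_attempts).
interface EmailPayload {
  to: string;
  subject: string;
  body: string;
}

function assertEmailPayload(payload: unknown): EmailPayload {
  const p = payload as Partial<EmailPayload> | null;
  if (
    !p ||
    typeof p.to !== "string" ||
    typeof p.subject !== "string" ||
    typeof p.body !== "string"
  ) {
    throw new Error("Invalid send-email payload");
  }
  return p as EmailPayload;
}
```

The task would then begin with `const { to, subject, body } = assertEmailPayload(payload);` instead of the bare cast.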
Starting the Worker
# CLI — discovers tasks from the tasks/ directory
npx graphile-worker --connection postgres://localhost/mydb
# Or programmatically:
// worker.ts
import { run } from "graphile-worker";
async function main() {
  const runner = await run({
    connectionString: process.env.DATABASE_URL,
    concurrency: 10,
    taskDirectory: `${__dirname}/tasks`,
  });
  // runner.promise resolves when the runner stops;
  // call runner.stop() for a graceful shutdown.
  await runner.promise;
}
main().catch((err) => {
  console.error(err);
  process.exit(1);
});
Core Patterns
Enqueuing Jobs from Application Code
import { quickAddJob } from "graphile-worker";
// Simple job
await quickAddJob(
  { connectionString: process.env.DATABASE_URL },
  "send-email",
  { to: "user@example.com", subject: "Welcome", body: "Hello!" }
);
// Delayed job — runs after 10 minutes
await quickAddJob(
  { connectionString: process.env.DATABASE_URL },
  "send-email",
  { to: "user@example.com", subject: "Follow-up", body: "Hey!" },
  { runAt: new Date(Date.now() + 10 * 60 * 1000) }
);
// Job with a unique key — prevents duplicates
await quickAddJob(
  { connectionString: process.env.DATABASE_URL },
  "generate-report",
  { reportId: "monthly-2025-06" },
  { jobKey: "report-monthly-2025-06" }
);
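Because re-adding a job with the same `jobKey` replaces the pending job (updating its payload and `runAt` under the default key mode), `jobKey` can also implement debouncing: each event reschedules the same job instead of adding another. A sketch, with illustrative names:

```typescript
// Debounce sketch: every call returns the same jobKey with a runAt
// pushed DEBOUNCE_MS into the future. Passing this spec to addJob or
// quickAddJob replaces the pending job, so the task fires once,
// DEBOUNCE_MS after the *last* event. Names here are hypothetical.
const DEBOUNCE_MS = 5 * 60 * 1000;

function reindexSpec(docId: string, now: Date = new Date()) {
  return {
    jobKey: `reindex-doc-${docId}`,
    runAt: new Date(now.getTime() + DEBOUNCE_MS),
  };
}
```

Each edit to document 42 would call `quickAddJob(opts, "reindex-doc", { docId: "42" }, reindexSpec("42"))`, and the reindex runs once, five minutes after the last edit.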
Transactional Enqueuing (the Killer Feature)
import { pool } from "./db";
// Enqueue a job inside a transaction — if the transaction
// rolls back, the job is never created.
const client = await pool.connect();
try {
  await client.query("BEGIN");
  const result = await client.query(
    `INSERT INTO orders (user_id, total) VALUES ($1, $2) RETURNING id`,
    [userId, total]
  );
  const orderId = result.rows[0].id;
  // Same client, same transaction — graphile_worker.add_job is the
  // SQL enqueue function, so the job is atomic with the insert
  await client.query(
    `SELECT graphile_worker.add_job($1, payload := $2::json)`,
    ["process-order", JSON.stringify({ orderId })]
  );
  await client.query("COMMIT");
} catch (err) {
  await client.query("ROLLBACK");
  throw err;
} finally {
  client.release();
}
Direct SQL Enqueuing
-- You can add jobs directly from SQL, triggers, or other services
SELECT graphile_worker.add_job(
  'send-email',
  json_build_object('to', 'user@example.com', 'subject', 'Hello'),
  run_at := now() + interval '5 minutes',
  max_attempts := 5,
  job_key := 'welcome-email-42'
);
Cron Jobs (Recurring Schedules)
// worker.ts
import { run, parseCrontab } from "graphile-worker";
const runner = await run({
  connectionString: process.env.DATABASE_URL,
  taskDirectory: `${__dirname}/tasks`,
  // parseCrontab takes a crontab-format string:
  // min hour dom mon dow | task identifier | ?options | payload
  parsedCronItems: parseCrontab(
    [
      "0 8 * * * daily-digest ?fill=1d {}",
      "*/15 * * * * cleanup-expired ?id=cleanup {}",
    ].join("\n")
  ),
});
# Or put the same lines in a crontab file:
# min hour dom mon dow task_identifier ?options payload
0 8 * * * daily-digest ?fill=1d {}
*/15 * * * * cleanup-expired {}
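Crontab lines are easy to get in the wrong order: the five schedule fields come first, then the task identifier, then optional `?options` and payload. A small formatter, purely illustrative, keeps that straight:

```typescript
// Illustrative helper: assemble a Graphile Worker crontab line in the
// expected order: schedule fields, task identifier, ?options, payload.
function crontabLine(
  schedule: string,
  task: string,
  options?: string,
  payload?: object
): string {
  const parts = [schedule, task];
  if (options) parts.push(`?${options}`);
  if (payload) parts.push(JSON.stringify(payload));
  return parts.join(" ");
}
```

For example, `crontabLine("0 8 * * *", "daily-digest", "fill=1d", {})` yields the first crontab line shown above.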
Using Helpers Inside Tasks
// tasks/process-batch.ts
import type { Task } from "graphile-worker";
const task: Task = async (payload, helpers) => {
  const { batchId } = payload as { batchId: string };
  // Structured logging — includes job ID, task name
  helpers.logger.info(`Processing batch ${batchId}`);
  // Add a follow-up job from inside a task
  await helpers.addJob("send-batch-report", { batchId });
  // Access the database connection used by this worker
  await helpers.query("UPDATE batches SET status = 'done' WHERE id = $1", [
    batchId,
  ]);
};
export default task;
Best Practices
- Always enqueue jobs inside the same database transaction as the triggering business logic. This is Graphile Worker's strongest advantage — use it to avoid ghost jobs or missing jobs after crashes.
- Use `jobKey` for operations that should be deduplicated or replaceable (e.g., a scheduled report). Re-adding a job with the same key updates the existing job rather than creating a duplicate.
- Set `max_attempts` per job type based on whether the operation is idempotent. For non-idempotent work, keep attempts low and alert on permanent failures.
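The idempotency point can be made concrete: record completed work under a unique key so a retried job becomes a no-op. In production the guard would be a unique constraint in Postgres; the in-memory set below is a stand-in for illustration.

```typescript
// Idempotency sketch: `processed` stands in for a database table with
// a unique constraint on the operation key. A retry of an already-
// completed job finds the key and skips the side effect.
const processed = new Set<string>();

function chargeOnce(orderId: string, charge: (id: string) => void): boolean {
  if (processed.has(orderId)) return false; // retry after success: no-op
  charge(orderId);
  processed.add(orderId);
  return true;
}
```

With a guard like this in place, `max_attempts` can safely stay high; without it, a job that crashed after charging but before acknowledging would charge again on retry.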
Common Pitfalls
- Running `quickAddJob` outside of a transaction negates the transactional guarantee. For production use, prefer `makeWorkerUtils` with a shared pool, or call the SQL function `graphile_worker.add_job()` inside your existing transactions.
- Forgetting to run the worker process in production. Graphile Worker is not a serverless callback system — you must run a persistent Node.js process (or container) that stays connected to the database. Use a process manager or container orchestrator to keep it alive.
Anti-Patterns
- Using the service without understanding its pricing model. Cloud services bill differently — per request, per GB, per seat. Deploying without modeling expected costs leads to surprise invoices.
- Hardcoding configuration instead of using environment variables. API keys, endpoints, and feature flags change between environments. Hardcoded values break deployments and leak secrets.
- Ignoring the service's rate limits and quotas. Every external API has throughput limits. Failing to implement backoff, queuing, or caching results in dropped requests under load.
- Treating the service as always available. External services go down. Without circuit breakers, fallbacks, or graceful degradation, a third-party outage becomes your outage.
- Coupling your architecture to a single provider's API. Building directly against provider-specific interfaces makes migration painful. Wrap external services in thin adapter layers.
Install this skill directly: skilldb add background-jobs-services-skills
Related Skills
BullMQ
"BullMQ: Redis-based job queue, workers, delayed jobs, rate limiting, job priorities, repeatable jobs, concurrency, dashboard"
Faktory
"Faktory: polyglot background job server, language-agnostic workers, job priorities, retries, scheduled jobs, batches, middleware, Web UI"
Inngest
"Inngest: event-driven functions, durable workflows, step functions, retries, cron, fan-out, sleep, Next.js integration"
Quirrel
"Quirrel: job queue for serverless/edge, cron jobs, delayed jobs, repeat scheduling, Next.js/Remix/SvelteKit integration, type-safe queues"
Temporal
"Temporal: durable execution, workflows, activities, signals, queries, retries, timers, TypeScript SDK"
Trigger.dev
"Trigger.dev: background jobs for Next.js/Node, long-running tasks, integrations, retry, cron, dashboard, v3 SDK"