
# Faktory

"Faktory: polyglot background job server, language-agnostic workers, job priorities, retries, scheduled jobs, batches, middleware, Web UI"



## Core Philosophy

Faktory is a language-agnostic background job server created by Mike Perham, the author of Sidekiq. It extracts the job queue coordination into a standalone server process, so any programming language can produce and consume jobs through a simple TCP protocol. This makes Faktory ideal for polyglot teams or architectures where different services are written in different languages but need a unified job processing system.

The Faktory server handles storage, scheduling, retries, and the Web UI. Workers connect over TCP, fetch jobs, process them, and report success or failure (ACK or FAIL). Official and community client libraries exist for Ruby, Go, Node.js, Python, Rust, Elixir, PHP, and .NET.

## Setup

### Installing the Faktory Server

```bash
# macOS
brew install faktory

# Linux (Debian/Ubuntu)
wget https://github.com/contribsys/faktory/releases/latest/download/faktory_amd64.deb
sudo dpkg -i faktory_amd64.deb

# Docker
docker run --rm -p 7419:7419 -p 7420:7420 contribsys/faktory:latest

# Port 7419: worker TCP protocol
# Port 7420: Web UI (http://localhost:7420)
```

### Node.js Worker Setup

```typescript
// npm install faktory-worker

// worker.ts
import faktory from "faktory-worker";

// Register job handlers before starting the worker.
// sendEmail, fetchReportData, and renderReport are your own app functions.
faktory.register("send-email", async (args: {
  to: string;
  subject: string;
  body: string;
}) => {
  await sendEmail(args.to, args.subject, args.body);
});

faktory.register("generate-report", async (args: {
  reportId: string;
  format: string;
}) => {
  const data = await fetchReportData(args.reportId);
  await renderReport(data, args.format);
});

// Start processing — connects to the Faktory server
async function start() {
  const worker = await faktory.work({
    host: process.env.FAKTORY_HOST ?? "localhost",
    port: 7419,
    password: process.env.FAKTORY_PASSWORD,
    queues: ["critical", "default", "low"],  // Priority order
    concurrency: 20,
  });

  // Graceful shutdown
  process.on("SIGTERM", () => worker.stop());
}

start().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

### Go Worker Setup

```go
package main

import (
    "context"
    "log"

    worker "github.com/contribsys/faktory_worker_go"
)

func main() {
    mgr := worker.NewManager()
    mgr.Concurrency = 20
    // Strict priority: drain "critical" first, then "default", then "low"
    mgr.ProcessStrictPriorityQueues("critical", "default", "low")

    mgr.Register("send-email", func(ctx context.Context, args ...interface{}) error {
        help := worker.HelperFor(ctx)
        log.Printf("Processing job %s", help.Jid())

        to := args[0].(string)
        subject := args[1].(string)
        log.Printf("Sending email to %s with subject %q", to, subject)
        // ... send the email
        return nil
    })

    mgr.Run() // Blocks until shutdown signal
}
```

## Core Patterns

### Producing Jobs (Node.js Client)

```typescript
import faktory from "faktory-worker";

const client = await faktory.connect({
  host: process.env.FAKTORY_HOST ?? "localhost",
  port: 7419,
  password: process.env.FAKTORY_PASSWORD,
});

// Push a job to the default queue
await client.push({
  jobtype: "send-email",
  args: [{ to: "user@example.com", subject: "Welcome", body: "Hello!" }],
  queue: "default",
});

// Scheduled job — runs at a specific time
await client.push({
  jobtype: "send-reminder",
  args: [{ userId: "42", message: "Your trial ends tomorrow" }],
  at: new Date(Date.now() + 24 * 60 * 60 * 1000).toISOString(),
});

// Job with retry configuration
await client.push({
  jobtype: "process-payment",
  args: [{ orderId: "order-123", amount: 4999 }],
  queue: "critical",
  retry: 5,              // Max 5 retries with exponential backoff
  reserve_for: 300,      // Worker has 5 minutes to complete
});

// Custom metadata via custom fields
await client.push({
  jobtype: "generate-report",
  args: [{ reportId: "monthly" }],
  custom: {
    unique_for: 3600,    // Deduplicate for 1 hour (Enterprise feature)
    track: 1,            // Enable job tracking (Enterprise feature)
  },
});

await client.close();
```
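
Under the hood, `push` serializes each job into a JSON work unit. A minimal sketch of constructing one by hand, using the fields seen in the examples above (`buildJob` is our own illustrative helper, not part of the client library):

```typescript
import { randomUUID } from "node:crypto";

// Subset of the fields a Faktory work unit carries over the wire.
interface FaktoryJob {
  jid: string;          // unique job id, generated client-side
  jobtype: string;      // must match a name workers registered
  args: unknown[];      // positional, JSON-serializable arguments
  queue: string;
  retry: number;        // max retry attempts
  reserve_for?: number; // seconds before the server assumes the worker died
  at?: string;          // ISO-8601 timestamp for scheduled jobs
}

// Illustrative helper that applies the defaults official clients fill in.
function buildJob(
  jobtype: string,
  args: unknown[],
  overrides: Partial<FaktoryJob> = {},
): FaktoryJob {
  return { jid: randomUUID(), jobtype, args, queue: "default", retry: 25, ...overrides };
}

const job = buildJob("process-payment", [{ orderId: "order-123" }], {
  queue: "critical",
  reserve_for: 300,
});
console.log(job.queue, job.retry); // critical 25
```

In practice the client's `push` builds this for you; seeing the payload just clarifies what fields like `reserve_for` and `at` mean on the wire.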

### Producing Jobs from Any Language (Python Example)

```python
# pip install pyfaktory
from pyfaktory import Client, Job, Producer

# pyfaktory separates the connection (Client) from the producer role
with Client(faktory_url="tcp://localhost:7419", role="producer") as client:
    producer = Producer(client=client)
    job = Job(
        jobtype="send-email",
        args=[{"to": "user@example.com", "subject": "Hello"}],
        queue="default",
        retry=3,
    )
    producer.push(job)
```

### Middleware (Node.js)

```typescript
import faktory from "faktory-worker";

// Middleware wraps job execution — similar to Express middleware
faktory.use(async (ctx, next) => {
  const start = Date.now();
  console.log(`Starting job ${ctx.job.jid} (${ctx.job.jobtype})`);

  try {
    await next();
    console.log(`Completed ${ctx.job.jid} in ${Date.now() - start}ms`);
  } catch (err) {
    console.error(`Failed ${ctx.job.jid}: ${err.message}`);
    throw err; // Re-throw to trigger retry
  }
});

// Error reporting middleware
faktory.use(async (ctx, next) => {
  try {
    await next();
  } catch (err) {
    await reportToSentry(err, { jobId: ctx.job.jid, jobtype: ctx.job.jobtype });
    throw err;
  }
});
```

### Queue Priority and Weighted Fetching

```typescript
// Workers fetch from queues in the order listed.
// Strict priority: always drain higher-priority queues first.
const worker = await faktory.work({
  queues: ["critical", "default", "low"],
  concurrency: 20,
});

// For weighted fetching, repeat queue names:
// "critical" appears 3x, "default" 2x, "low" 1x
// Each fetch randomly picks from the weighted list
const worker2 = await faktory.work({
  queues: ["critical", "critical", "critical", "default", "default", "low"],
  concurrency: 20,
});
```
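
If you prefer numeric weights, a small helper (our own, not part of the faktory-worker API) can expand a weight map into the repeated-name list shown above:

```typescript
// Expand { critical: 3, default: 2, low: 1 } into the repeated-name
// queue list that weighted fetching expects.
function weightedQueues(weights: Record<string, number>): string[] {
  return Object.entries(weights).flatMap(([queue, weight]) =>
    Array<string>(weight).fill(queue),
  );
}

console.log(weightedQueues({ critical: 3, default: 2, low: 1 }));
// ["critical", "critical", "critical", "default", "default", "low"]
```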

### Batches (Faktory Enterprise)

```typescript
const client = await faktory.connect();

// A batch groups jobs and runs a callback when all complete
const batch = await client.batch({
  success: { jobtype: "batch-complete", args: [{ batchId: "import-2025" }] },
  complete: { jobtype: "batch-done", args: [{ batchId: "import-2025" }] },
});

// Add jobs to the batch
await batch.push({
  jobtype: "import-row",
  args: [{ row: 1, file: "data.csv" }],
});
await batch.push({
  jobtype: "import-row",
  args: [{ row: 2, file: "data.csv" }],
});

// Commit the batch — no more jobs can be added after this
await batch.commit();
await client.close();
```

## Best Practices

- Use strict queue priority (`["critical", "default", "low"]`) and assign jobs to the right queue. Payment processing and user-facing work should go to `critical`; analytics and cleanup to `low`.
- Set `reserve_for` appropriately for long-running jobs. The default is 30 minutes — if a job takes longer, Faktory assumes the worker died and re-dispatches the job, causing duplicate execution.
- Use middleware for cross-cutting concerns (logging, error reporting, metrics) rather than adding boilerplate to every job handler.
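
Because a job that exceeds its `reserve_for` window is re-dispatched, handlers that cannot tolerate duplicate execution should be idempotent. One sketch guards on the job id (in-memory `Set` for illustration only; a real deployment would record completed jids in a database or Redis shared by all workers):

```typescript
// Completed job ids; duplicate deliveries of the same jid become no-ops.
// In-memory for illustration; production needs shared, durable storage.
const completed = new Set<string>();

async function runIdempotent(
  jid: string,
  handler: () => Promise<void>,
): Promise<boolean> {
  if (completed.has(jid)) return false; // duplicate delivery, skip
  await handler();
  completed.add(jid); // only mark done after the handler succeeds
  return true;
}
```

A handler would wrap its body in `runIdempotent(jid, ...)` so a re-dispatched copy exits early instead of, say, charging a card twice.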

## Common Pitfalls

- Faktory persists jobs with periodic disk snapshots (it embeds Redis as its storage engine), so an unexpected crash can lose jobs enqueued since the last snapshot. In production, tune the snapshot interval or enable stronger Redis persistence if you need tighter durability guarantees.
- Workers in different languages must agree on the jobtype string and args format. There is no schema enforcement — a typo in the jobtype silently creates jobs that no worker picks up. Use constants or a shared config to keep job type names consistent.
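
One low-tech guard against jobtype drift is a shared constants module that every producer and worker imports (the module layout and names here are illustrative):

```typescript
// jobtypes.ts: the single source of truth for job type names.
// Producers: client.push({ jobtype: JobTypes.SendEmail, ... })
// Workers:   faktory.register(JobTypes.SendEmail, handler)
const JobTypes = {
  SendEmail: "send-email",
  GenerateReport: "generate-report",
  ProcessPayment: "process-payment",
} as const;

type JobType = (typeof JobTypes)[keyof typeof JobTypes]; // union of the names
```

In a polyglot setup, the same names can live in a language-neutral file (JSON or YAML) that each service's build reads.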

## Anti-Patterns

Designing around features without checking their licensing tier. Faktory's core server is open source, but batches, unique jobs (`unique_for`), and job tracking are Faktory Enterprise features. Confirm licensing costs before building workflows that depend on them.

Hardcoding configuration instead of using environment variables. API keys, endpoints, and feature flags change between environments. Hardcoded values break deployments and leak secrets.

Ignoring the service's rate limits and quotas. Every external API has throughput limits. Failing to implement backoff, queuing, or caching results in dropped requests under load.

Treating the service as always available. External services go down. Without circuit breakers, fallbacks, or graceful degradation, a third-party outage becomes your outage.

Coupling your architecture to a single provider's API. Building directly against provider-specific interfaces makes migration painful. Wrap external services in thin adapter layers.
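
For the last point, a thin adapter keeps Faktory behind an interface your application owns. A sketch (the interface and class names are ours; a `FaktoryQueue` implementation would delegate to the `faktory.connect()` client shown earlier):

```typescript
// Provider-agnostic queue interface the rest of the app depends on.
interface JobQueue {
  push(jobtype: string, args: unknown[], opts?: { queue?: string }): Promise<void>;
}

// In-memory implementation, handy for unit tests. A FaktoryQueue class
// implementing the same interface would call client.push() instead.
class InMemoryQueue implements JobQueue {
  readonly jobs: Array<{ jobtype: string; args: unknown[]; queue: string }> = [];

  async push(
    jobtype: string,
    args: unknown[],
    opts: { queue?: string } = {},
  ): Promise<void> {
    this.jobs.push({ jobtype, args, queue: opts.queue ?? "default" });
  }
}
```

Swapping providers, or stubbing the queue in tests, then means changing one adapter rather than every call site.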
