Sidekiq
Background job processing with Sidekiq, including job design, error handling, queues, and performance tuning in Rails.
Background Jobs with Sidekiq — Ruby on Rails
You are an expert in Sidekiq for building reliable background job processing systems in Rails applications.
Overview
Sidekiq uses Redis-backed multithreaded job processing to handle work asynchronously. It is the most widely used background job library in the Rails ecosystem, handling everything from email delivery and data imports to scheduled maintenance tasks and webhook processing.
Core Philosophy
Background jobs exist to move work out of the request cycle, but they introduce a fundamentally different execution model. A job may run seconds, minutes, or hours after it was enqueued. The database record it references may have been deleted. The external service it calls may be down. Designing for this uncertainty — not just the happy path — is what separates reliable job processing from a ticking time bomb.
Idempotency is the single most important property of a background job. Because jobs can be retried due to transient failures, process crashes, or Redis hiccups, every job must produce the same result whether it runs once or five times. This means checking whether work has already been done before doing it, using database constraints to prevent duplicates, and never relying on the assumption that a job runs exactly once.
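The guard pattern can be shown without Sidekiq at all. The sketch below is a minimal illustration only: the in-memory Set stands in for what would, in production, be a database unique index or a Redis `SET NX` key, and all names here are hypothetical.

```ruby
# Minimal, Sidekiq-free sketch of an idempotency guard. The in-memory Set
# stands in for a durable "already done" ledger (DB unique index, Redis SET NX).
require "set"

PROCESSED = Set.new
DELIVERIES = []

def deliver_welcome_email(user_id)
  # Set#add? returns nil if the key was already present -- the first caller
  # wins, so a retried job falls through to :skipped instead of sending twice.
  return :skipped unless PROCESSED.add?("welcome-email:#{user_id}")

  DELIVERIES << user_id # stands in for the actual side effect
  :sent
end

deliver_welcome_email(42) # => :sent
deliver_welcome_email(42) # => :skipped -- re-execution is harmless
```

The key design point is that the "check" and the "claim" are a single atomic operation; checking with `include?` and then adding separately would reopen the race between concurrent retries.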
Sidekiq's threading model means your jobs share memory within a process. This is efficient but demands thread-safe code: no mutable global state, no class-level instance variables modified during execution, and careful attention to connection pool sizing for databases and external services. The concurrency that makes Sidekiq fast is also what makes poorly written jobs fail in subtle, hard-to-reproduce ways.
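The hazard is easy to demonstrate with plain Ruby threads; no Sidekiq is required, since the same read-modify-write race applies to any class-level state touched inside `perform`. Both counter classes below are illustrative:

```ruby
# Why class-level mutable state is unsafe under a threaded job processor.
class UnsafeCounter
  @count = 0
  class << self
    attr_accessor :count
  end

  def self.increment!
    # Read-modify-write on shared state: two threads can read the same value
    # and both write back current + 1, losing an increment.
    current = count
    sleep(0) # encourage a thread switch between the read and the write
    self.count = current + 1
  end
end

class SafeCounter
  @count = 0
  @lock = Mutex.new
  class << self
    attr_reader :count
  end

  def self.increment!
    @lock.synchronize { @count += 1 } # atomic under the mutex
  end
end

threads = 20.times.map do
  Thread.new do
    50.times do
      UnsafeCounter.increment!
      SafeCounter.increment!
    end
  end
end
threads.each(&:join)

puts "unsafe: #{UnsafeCounter.count} (frequently below 1000)"
puts "safe:   #{SafeCounter.count}" # always 1000
```

In job code the better fix is usually not a mutex but having no shared mutable state at all: keep everything in local variables inside `perform`.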
Anti-Patterns
- Serializing ActiveRecord Objects: Passing full model instances as job arguments instead of IDs. Sidekiq serializes arguments to JSON and stores them in Redis. Pass user.id, then look up the record inside perform. This also naturally handles the case where the record has changed between enqueue and execution.
- Fire-and-Forget Without Monitoring: Enqueuing jobs without monitoring queue latency, dead set growth, or retry counts. A silently growing queue means work is not getting done, and a growing dead set means jobs are failing permanently without anyone noticing.
- The Unbounded Batch Import: Processing an entire CSV or API response in a single massive job that runs for minutes. If the job fails at row 9,999 of 10,000, it restarts from zero. Break large imports into small, resumable chunks that each complete in seconds.
- Shared Mutable State: Using class-level variables, global caches, or thread-unsafe gems inside job code. Sidekiq runs multiple jobs concurrently in the same process, so any shared state is a race condition waiting to happen.
- Retry Without Backoff Strategy: Accepting Sidekiq's default retry behavior without considering the failure mode. Retrying a job that fails because an external API is down every few seconds for 25 attempts just hammers the already-struggling service. Configure custom retry intervals with exponential backoff and jitter.
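The serialization anti-pattern is easy to see directly. The sketch below uses a plain Struct as a stand-in for an ActiveRecord model (an assumption for illustration); recent Sidekiq versions go further and, by default, raise on arguments that are not JSON-native types.

```ruby
require "json"

# A plain Struct stands in here for an ActiveRecord model.
# Sidekiq stores roughly JSON.dump(args) in Redis.
User = Struct.new(:id, :name)
user = User.new(42, "Ada")

JSON.dump([user.id]) # => "[42]" -- stable, compact, re-fetched at perform time
JSON.dump([user])    # degrades to an opaque inspect string, not a usable object
```

Round-tripping the object through JSON yields a useless string, which is why the record must be looked up fresh inside `perform`.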
Core Concepts
Basic Job (Worker)
# app/sidekiq/hard_worker.rb (Rails 7+ convention)
# or app/workers/hard_worker.rb (traditional)
class HardWorker
  include Sidekiq::Job

  sidekiq_options queue: :default, retry: 5

  def perform(user_id, action)
    user = User.find(user_id)
    user.public_send(action) # public_send avoids invoking private methods
  end
end
# Enqueue
HardWorker.perform_async(42, "activate")
HardWorker.perform_in(5.minutes, 42, "activate")
HardWorker.perform_at(1.hour.from_now, 42, "activate")
ActiveJob Integration
# app/jobs/process_order_job.rb
class ProcessOrderJob < ApplicationJob
  queue_as :critical

  retry_on ActiveRecord::Deadlocked, wait: 5.seconds, attempts: 3
  discard_on ActiveJob::DeserializationError

  def perform(order)
    order.process!
    OrderMailer.confirmation(order).deliver_later
  end
end
# config/application.rb
config.active_job.queue_adapter = :sidekiq
Queue Configuration
# config/sidekiq.yml
:concurrency: 10
:queues:
  - [critical, 6]
  - [default, 3]
  - [low, 1]
# :limits: requires the sidekiq-limit_fetch gem
:limits:
  critical: 10
  default: 5
# Environment-specific overrides
production:
  :concurrency: 25
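The queue weights above set relative polling priority, not strict ordering. A simplified model of how weighted polling behaves (this lottery sketch is an approximation for intuition, not Sidekiq's exact fetch implementation):

```ruby
# Simplified model of weighted queue polling: build a lottery array with one
# entry per weight point and shuffle it for each fetch. With weights 6/3/1,
# `critical` is checked first about 60% of the time, but `low` is never starved.
weights = { "critical" => 6, "default" => 3, "low" => 1 }
lottery = weights.flat_map { |queue, weight| [queue] * weight }

first_checked = Hash.new(0)
10_000.times { first_checked[lottery.shuffle.first] += 1 }

# first_checked lands near {"critical"=>6000, "default"=>3000, "low"=>1000}
```

This is why weighted queues are usually preferable to strictly ordered ones: a flood of critical jobs slows the low queue down but cannot stop it entirely.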
Implementation Patterns
Idempotent Jobs
class ChargeSubscriptionJob
  include Sidekiq::Job

  sidekiq_options queue: :critical, retry: 3

  def perform(subscription_id, billing_period)
    subscription = Subscription.find(subscription_id)
    # Idempotency check: skip if already billed for this period
    return if subscription.billed_for?(billing_period)

    ActiveRecord::Base.transaction do
      charge = subscription.create_charge!(billing_period)
      # An external call inside a DB transaction can still double-charge if the
      # commit fails after the gateway succeeds; pass an idempotency key to the
      # gateway as well when it supports one.
      PaymentGateway.charge(charge)
      subscription.mark_billed!(billing_period)
    end
  end
end
Batch Processing
class ImportUsersJob
  include Sidekiq::Job

  sidekiq_options queue: :low, retry: 2

  BATCH_SIZE = 1000

  def perform(file_path, offset = 0)
    # CSV.read loads the whole file; for very large files, prefer streaming
    # with CSV.foreach and skipping `offset` rows.
    rows = CSV.read(file_path, headers: true)
    batch = rows[offset, BATCH_SIZE]
    return if batch.nil? || batch.empty?

    users = batch.map do |row|
      { name: row["name"], email: row["email"], created_at: Time.current }
    end
    User.insert_all(users)

    # Enqueue next batch if there are more rows
    if offset + BATCH_SIZE < rows.size
      ImportUsersJob.perform_async(file_path, offset + BATCH_SIZE)
    end
  end
end
Error Handling and Retries
class WebhookDeliveryJob
  include Sidekiq::Job

  sidekiq_options retry: 10 # Exponential backoff by default

  sidekiq_retry_in do |count, exception|
    case exception
    when Net::ReadTimeout
      10 * (count + 1) # Linear backoff for timeouts
    else
      (count**4) + 15 + (rand(10) * (count + 1)) # Exponential with jitter
    end
  end

  sidekiq_retries_exhausted do |msg, exception|
    WebhookEndpoint.find(msg["args"].first).mark_failed!
    ErrorTracker.notify(exception, context: msg)
  end

  def perform(endpoint_id, payload_json)
    endpoint = WebhookEndpoint.find(endpoint_id)
    response = HTTP.timeout(10).post(endpoint.url, json: JSON.parse(payload_json))
    unless response.status.success?
      raise "Webhook delivery failed: #{response.status}"
    end
    endpoint.update!(last_delivered_at: Time.current)
  end
end
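To get a feel for the exponential branch of that retry schedule, the same formula can be evaluated standalone (`count` is the retry number, starting at 0):

```ruby
# Same formula as the exponential branch above: polynomial growth plus a
# jitter term so simultaneous failures don't all retry at the same instant.
def retry_delay(count)
  (count**4) + 15 + (rand(10) * (count + 1))
end

(0..5).map { |c| retry_delay(c) }
# base delays without the jitter term: 15s, 16s, 31s, 96s, 271s, 640s
```

The delays start short enough to recover quickly from a blip but grow fast enough to back off from a sustained outage; the jitter term grows with the retry count so later waves are spread more widely.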
Unique Jobs (with sidekiq-unique-jobs or built-in)
class ReindexProductJob
  include Sidekiq::Job

  # Using Sidekiq Enterprise or the sidekiq-unique-jobs gem
  sidekiq_options lock: :until_executed,
                  lock_ttl: 1.hour,
                  on_conflict: :reject

  def perform(product_id)
    product = Product.find(product_id)
    product.reindex!
  end
end
Scheduled / Recurring Jobs (with sidekiq-cron or sidekiq-scheduler)
# config/initializers/sidekiq_cron.rb
Sidekiq::Cron::Job.load_from_hash(
  "daily_report" => {
    "cron"  => "0 9 * * *", # every day at 09:00
    "class" => "DailyReportJob",
    "queue" => "low"
  },
  "cleanup_expired_sessions" => {
    "cron"  => "*/30 * * * *", # every 30 minutes
    "class" => "CleanupSessionsJob"
  }
)
Middleware
# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.redis = { url: ENV.fetch("REDIS_URL", "redis://localhost:6379/0") }

  config.server_middleware do |chain|
    chain.add CustomLoggingMiddleware
  end
end

Sidekiq.configure_client do |config|
  config.redis = { url: ENV.fetch("REDIS_URL", "redis://localhost:6379/0") }
end

# Custom middleware example
class CustomLoggingMiddleware
  def call(job_instance, msg, queue)
    start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    yield
  ensure
    elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
    Rails.logger.info("#{msg['class']} completed in #{elapsed.round(3)}s")
  end
end
Testing
# spec/sidekiq/hard_worker_spec.rb
require "rails_helper"
require "sidekiq/testing" # provides fake!/inline! modes and per-class job arrays

RSpec.describe HardWorker do
  describe "#perform" do
    it "activates the user" do
      user = create(:user, active: false)
      HardWorker.new.perform(user.id, "activate!")
      expect(user.reload).to be_active
    end
  end

  describe "enqueueing" do
    before do
      Sidekiq::Testing.fake!
      HardWorker.clear # drop jobs left over from other examples
    end

    it "enqueues the job" do
      expect {
        HardWorker.perform_async(1, "activate!")
      }.to change(HardWorker.jobs, :size).by(1)
    end

    it "processes the job inline" do
      user = create(:user, active: false)

      Sidekiq::Testing.inline! do
        # Job executes immediately
        HardWorker.perform_async(user.id, "activate!")
      end

      expect(user.reload).to be_active
    end
  end
end
Best Practices
- Pass simple arguments: Only pass primitive types (IDs, strings, numbers) as job arguments, never full Ruby objects. Sidekiq serializes arguments as JSON.
- Make jobs idempotent: Jobs may run more than once due to retries. Design them so re-execution is safe.
- Keep jobs small: Break large tasks into smaller jobs. A single job should ideally complete in under 30 seconds.
- Use appropriate queues: Separate urgent work (emails, payments) from bulk work (reports, imports) using different queues and priorities.
- Monitor your queues: Use the Sidekiq Web UI, and alert on queue latency and dead-set growth.
- Set memory limits: Use MALLOC_ARENA_MAX=2 and consider the sidekiq-limit_fetch gem to prevent memory bloat.
- Handle ActiveRecord::RecordNotFound: Records may be deleted between enqueue and execution. Decide whether to retry or silently discard.
Common Pitfalls
- Passing ActiveRecord objects: Sidekiq serializes to JSON. Pass user.id, not user.
- Non-idempotent jobs: Sending duplicate emails or double-charging because a job retried after a transient failure.
- Unbounded queue growth: Not monitoring queues lets them grow silently until Redis runs out of memory.
- Long-running jobs blocking threads: A single slow job ties up a Sidekiq thread. Use timeouts or break work into smaller units.
- Testing with perform_async in unit tests: Use Sidekiq::Testing.fake! or inline! modes. Do not rely on a running Redis in unit tests.
- Ignoring the dead set: Failed jobs that exhaust retries go to the dead set. Monitor and handle them.
Related Skills
Active Record
ActiveRecord query patterns, associations, validations, callbacks, and performance optimization for Rails applications.
API Mode
Building JSON APIs with Rails API mode, serialization, versioning, authentication, and rate limiting.
Concerns Modules
ActiveSupport::Concern patterns, module design, and code organization strategies for maintainable Rails applications.
Deployment
Deploying Rails applications with Kamal, Docker, and production best practices for infrastructure and operations.
Hotwire Turbo
Hotwire and Turbo Drive, Frames, and Streams for building reactive Rails frontends without heavy JavaScript.
Stimulus
Stimulus.js controller patterns for adding interactive behavior to server-rendered Rails HTML.