
# Queues & Background Jobs

Laravel queues, jobs, and background task processing with Redis, SQS, and Horizon

## Quick Summary
You are an expert in Laravel queues and background jobs for building Laravel applications. You design jobs to be idempotent, observable, and resilient to failure, treating the queue as production infrastructure that demands the same monitoring and operational rigor as the web layer.

## Key Points

- Keep jobs small and focused on a single task; chain or batch when you need orchestration.
- Always set `$tries`, `$timeout`, and `$backoff` so failed jobs do not retry indefinitely.
- Use `ShouldBeUnique` to prevent duplicate processing of the same data.
- Store only serializable identifiers (model IDs) in jobs, not large objects or file contents.
- Use multiple named queues (`high`, `default`, `low`) and prioritize workers accordingly.
- Monitor queue depth and worker health with Horizon, or with CloudWatch for SQS.
- Use `DB::transaction()` in the dispatching code so the job is only dispatched if the transaction commits (or use `afterCommit()`).
- **Long-running jobs exceeding `retry_after`**: If a job runs longer than `retry_after`, the queue re-dispatches it, causing duplicate processing. Set `$timeout` below `retry_after`.
- **Forgetting `--queue` flag**: Workers default to the `default` queue; jobs dispatched to other queues sit unprocessed.
- **Memory leaks in long-running workers**: Use `--max-jobs` or `--max-time` to restart workers periodically.

## Quick Example

```php
Bus::chain([
    new ProcessUpload($file),
    new GenerateThumbnails($file),
    new NotifyUser($user, $file),
])->onQueue('uploads')->dispatch();
```

# Queues & Background Jobs — PHP/Laravel

You are an expert in Laravel queues and background jobs for building Laravel applications. You design jobs to be idempotent, observable, and resilient to failure, treating the queue as production infrastructure that demands the same monitoring and operational rigor as the web layer.

## Core Philosophy

Queues exist to move work out of the request cycle so that users get fast responses while expensive operations happen in the background. But "background" does not mean "unmonitored" or "best-effort." A job that silently fails, retries indefinitely, or processes duplicate data is worse than a slow synchronous request, because the failure is invisible until a user reports missing data or a billing discrepancy. Every job must have explicit retry limits, timeout boundaries, failure handlers, and monitoring. Laravel provides all of these mechanisms; the team must use them intentionally rather than relying on defaults.

Idempotency is the single most important property of a queued job. Queues provide at-least-once delivery, which means a job may execute more than once due to worker crashes, timeout-triggered re-dispatches, or infrastructure failures. If a job sends an email, it may send two. If a job charges a credit card, it may charge twice. Designing for idempotency (using unique constraints, checking state before acting, and using `ShouldBeUnique`) prevents these double-processing bugs. Assuming a job will execute exactly once is a correctness error that becomes a production incident under load.
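As a minimal sketch of this principle (the `PaymentGateway` service and the `paid_at` column are illustrative assumptions, not part of the examples below), an idempotent handler checks persisted state before performing its side effect:

```php
public function handle(PaymentGateway $gateway): void
{
    // Refetch fresh state; bail out if a previous attempt already succeeded.
    if ($this->order->refresh()->paid_at !== null) {
        return;
    }

    $gateway->charge($this->order);

    // Recording completion is what turns a retry into a no-op.
    $this->order->update(['paid_at' => now()]);
}
```

The state check and the completion marker must refer to the same persisted column, so that a retry after a crash between charge and update is the only window left, and a database unique constraint on the gateway's idempotency key can close even that.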

Job design should follow the same principles as function design: small, focused, and composable. A job that downloads a file, parses it, validates every row, inserts records, sends notifications, and updates a dashboard is untestable, unreliable, and impossible to partially retry. Break complex workflows into chains or batches of focused jobs, each responsible for one step. When step three of five fails, only step three needs to be retried, and the team can diagnose the failure by looking at a single, focused job class rather than a 200-line `handle()` method.

## Overview

Laravel's queue system provides a unified API across different backends (Redis, Amazon SQS, database, Beanstalkd) for deferring time-consuming tasks such as sending emails, processing uploads, or syncing with external APIs. Jobs run in separate worker processes, keeping web requests fast.

## Core Concepts

### Defining a Job

```php
namespace App\Jobs;

use App\Models\Order;
use App\Services\InvoiceGenerator;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Log;

class GenerateInvoice implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $tries = 3;
    public int $backoff = 60; // seconds between retries
    public int $timeout = 120;

    public function __construct(
        public readonly Order $order,
    ) {}

    public function handle(InvoiceGenerator $generator): void
    {
        $pdf = $generator->generate($this->order);

        $this->order->update([
            'invoice_path'         => $pdf->store('invoices', 's3'),
            'invoice_generated_at' => now(),
        ]);
    }

    public function failed(\Throwable $exception): void
    {
        // Notify admin, log, or clean up
        Log::error('Invoice generation failed', [
            'order_id'  => $this->order->id,
            'exception' => $exception->getMessage(),
        ]);
    }
}
```

### Dispatching Jobs

```php
use App\Jobs\GenerateInvoice;

// Dispatch to default queue
GenerateInvoice::dispatch($order);

// Dispatch to a specific queue
GenerateInvoice::dispatch($order)->onQueue('invoices');

// Delay execution
GenerateInvoice::dispatch($order)->delay(now()->addMinutes(10));

// Run synchronously after the HTTP response is sent (no worker needed,
// but the user does not wait for it)
GenerateInvoice::dispatchAfterResponse($order);

// Conditional dispatch
GenerateInvoice::dispatchIf($order->isPaid(), $order);

// Dispatch from a controller
class OrderController extends Controller
{
    public function store(StoreOrderRequest $request)
    {
        $order = Order::create($request->validated());
        GenerateInvoice::dispatch($order);

        return redirect()->route('orders.show', $order);
    }
}
```

### Queue Configuration

```php
// config/queue.php
'connections' => [
    'redis' => [
        'driver'      => 'redis',
        'connection'  => env('REDIS_QUEUE_CONNECTION', 'default'),
        'queue'       => env('REDIS_QUEUE', 'default'),
        'retry_after' => 90, // must exceed the longest job's $timeout
        'block_for'   => null,
    ],

    'sqs' => [
        'driver' => 'sqs',
        'key'    => env('AWS_ACCESS_KEY_ID'),
        'secret' => env('AWS_SECRET_ACCESS_KEY'),
        'prefix' => env('SQS_PREFIX', 'https://sqs.us-east-1.amazonaws.com/your-account-id'),
        'queue'  => env('SQS_QUEUE', 'default'),
        'region' => env('AWS_DEFAULT_REGION', 'us-east-1'),
    ],
],
```

## Implementation Patterns

### Job Middleware

```php
namespace App\Jobs\Middleware;

use Illuminate\Support\Facades\Redis;

class RateLimited
{
    public function handle(object $job, callable $next): void
    {
        Redis::throttle('api-calls')
            ->block(0)
            ->allow(30)
            ->every(60)
            ->then(
                fn () => $next($job),
                fn () => $job->release(30), // retry in 30 seconds
            );
    }
}

// Apply in the job class
use App\Jobs\Middleware\RateLimited;
use Illuminate\Queue\Middleware\WithoutOverlapping;

class SyncWithExternalApi implements ShouldQueue
{
    public function middleware(): array
    {
        return [
            new RateLimited(),
            (new WithoutOverlapping($this->account->id))->releaseAfter(60),
        ];
    }
}
```

### Job Batching

```php
use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;

$jobs = $users->map(fn (User $user) => new SendWeeklyDigest($user));

Bus::batch($jobs)
    ->name('weekly-digest')
    ->onQueue('emails')
    ->allowFailures()
    ->then(function (Batch $batch) {
        Log::info("Batch {$batch->id} completed.");
    })
    ->catch(function (Batch $batch, \Throwable $e) {
        Log::error("Batch {$batch->id} had failures.", ['error' => $e->getMessage()]);
    })
    ->finally(function (Batch $batch) {
        Notification::send($admins, new BatchCompletedNotification($batch));
    })
    ->dispatch();

// Inside a batched job, you can add more jobs
public function handle(): void
{
    // ... process
    if ($this->batch()) {
        $this->batch()->add(new FollowUpJob($this->user));
    }
}
```

### Job Chaining

```php
Bus::chain([
    new ProcessUpload($file),
    new GenerateThumbnails($file),
    new NotifyUser($user, $file),
])->onQueue('uploads')->dispatch();
```

### Unique Jobs

```php
use Illuminate\Contracts\Queue\ShouldBeUnique;

class RecalculateMetrics implements ShouldQueue, ShouldBeUnique
{
    public int $uniqueFor = 3600; // unique lock for 1 hour

    public function __construct(
        public readonly int $teamId,
    ) {}

    public function uniqueId(): string
    {
        return (string) $this->teamId;
    }
}
```

## Running Workers

```bash
# Basic worker
php artisan queue:work --queue=high,default,low

# Process a single job (useful for testing)
php artisan queue:work --once

# With memory and timeout limits
php artisan queue:work redis --memory=128 --timeout=60 --tries=3

# Laravel Horizon (Redis-specific dashboard and management)
php artisan horizon
```

## Horizon Configuration

```php
// config/horizon.php
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection'      => 'redis',
            'queue'           => ['high', 'default', 'low'],
            'balance'         => 'auto',
            'minProcesses'    => 1,
            'maxProcesses'    => 10,
            'balanceMaxShift' => 3,
            'tries'           => 3,
            'timeout'         => 300,
        ],
    ],
    'local' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue'      => ['default'],
            'balance'    => 'simple',
            'processes'  => 3,
            'tries'      => 3,
        ],
    ],
],
```

## Best Practices

- Keep jobs small and focused on a single task; chain or batch when you need orchestration.
- Always set `$tries`, `$timeout`, and `$backoff` so failed jobs do not retry indefinitely.
- Use `ShouldBeUnique` to prevent duplicate processing of the same data.
- Store only serializable identifiers (model IDs) in jobs, not large objects or file contents.
- Use multiple named queues (`high`, `default`, `low`) and prioritize workers accordingly.
- Monitor queue depth and worker health with Horizon, or with CloudWatch for SQS.
- Use `DB::transaction()` in the dispatching code so the job is only dispatched if the transaction commits (or use `afterCommit()`).
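The transaction-safe dispatch above can be sketched as follows, reusing the `GenerateInvoice` job and `Order` model from the earlier examples:

```php
use Illuminate\Support\Facades\DB;

DB::transaction(function () use ($request) {
    $order = Order::create($request->validated());

    // afterCommit() holds the dispatch until the surrounding transaction
    // commits, so a rollback never leaves an orphaned job pointing at a
    // row that was never persisted.
    GenerateInvoice::dispatch($order)->afterCommit();
});
```

Alternatively, set `after_commit => true` on the queue connection in `config/queue.php` to make this the default behavior for all dispatches.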

## Common Pitfalls

- **Serialization of deleted models**: If a model is deleted between dispatch and processing, `SerializesModels` throws `ModelNotFoundException`. Set the job's `$deleteWhenMissingModels` property to `true` to discard such jobs quietly, or handle the case in `failed()`.
- **Long-running jobs exceeding `retry_after`**: If a job runs longer than `retry_after`, the queue re-dispatches it, causing duplicate processing. Set `$timeout` below `retry_after`.
- **Forgetting the `--queue` flag**: Workers default to the `default` queue; jobs dispatched to other queues sit unprocessed.
- **Memory leaks in long-running workers**: Use `--max-jobs` or `--max-time` to restart workers periodically.
- **Testing with `Queue::fake()`**: Remember that the fake prevents actual execution; use `Queue::assertPushed()` and test job logic separately via `(new GenerateInvoice($order))->handle(app(InvoiceGenerator::class))`.
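The testing pitfall can be illustrated with a minimal sketch, assuming the `GenerateInvoice` job and order route from the earlier examples (the `validOrderPayload()` helper is hypothetical):

```php
use App\Jobs\GenerateInvoice;
use Illuminate\Support\Facades\Queue;

public function test_order_creation_queues_an_invoice(): void
{
    Queue::fake(); // jobs are recorded, never executed

    $this->post(route('orders.store'), $this->validOrderPayload());

    // This asserts dispatch only; the handle() logic needs its own test
    // that invokes the job directly with its dependencies.
    Queue::assertPushed(GenerateInvoice::class);
}
```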

## Anti-Patterns

- **The fire-and-forget job** — dispatching a job without setting `$tries`, `$timeout`, or a `failed()` method. When the job fails, it retries indefinitely, fills the failed-jobs table, or silently disappears. Every job must define its retry policy, timeout, and failure handling explicitly.

- **Fat jobs with mixed concerns** — a single job class that fetches data from an API, transforms it, writes to the database, generates a PDF, uploads to S3, and sends a notification email. Any failure requires rerunning the entire sequence. Decompose into chained or batched jobs so each step can succeed or fail independently.

- **Non-idempotent side effects** — a job that inserts a database record without checking for duplicates, so a retry creates duplicate rows. Or a job that sends an email on every execution, so a retry sends the email twice. Use database unique constraints, `updateOrCreate`, or state checks to ensure repeated execution produces the same result.

- **Storing large payloads in job properties** — serializing file contents, large arrays, or entire model collections into the job constructor instead of storing an identifier and fetching fresh data in `handle()`. Large payloads slow serialization, bloat the queue backend, and become stale if the underlying data changes before the job executes.

- **Ignoring queue depth monitoring** — running queue workers without monitoring how many jobs are pending. When the dispatch rate exceeds the processing rate, the queue grows silently until jobs are hours or days old. Monitor queue depth and set alerts so the team can scale workers before the backlog becomes a user-facing problem.
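The identifier-only fix for the large-payload anti-pattern can be sketched as follows (the `Report` model and `buildCsv()` helper are illustrative assumptions):

```php
class ExportReport implements ShouldQueue
{
    public function __construct(
        public readonly int $reportId, // store the ID, never the row or file contents
    ) {}

    public function handle(): void
    {
        // Fetch fresh data at execution time, so it is current even if the
        // row changed between dispatch and processing.
        $report = Report::findOrFail($this->reportId);

        Storage::disk('s3')->put(
            "reports/{$report->id}.csv",
            $this->buildCsv($report),
        );
    }
}
```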
