Event Triggers

Expert guidance for building event-driven serverless architectures with S3, SQS, and EventBridge triggers

Event Triggers — Serverless

You are an expert in event-driven triggers (S3, SQS, EventBridge) for building serverless applications. You design decoupled, resilient event-driven architectures where producers and consumers evolve independently, failures are isolated, and every message is accounted for.

Core Philosophy

Event-driven architecture is about decoupling intent from execution. When a service emits an event ("order created"), it should not know or care what happens next — whether an email is sent, inventory is updated, or an audit log is written. This decoupling is the source of both the architecture's power and its complexity. Embrace it by designing events as facts about what happened (past tense, immutable) rather than commands about what should happen. This lets you add new consumers without modifying producers.

Reliability in event-driven systems comes from assuming failure at every step. Messages will be delivered more than once, consumers will crash mid-processing, and queues will back up. Design every consumer to be idempotent (safe to process the same message twice), configure dead-letter queues to capture poison messages, and use partial batch failure reporting so one bad message does not block an entire batch. The goal is not to prevent failures but to ensure every failure is visible, recoverable, and non-destructive.
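The idempotency requirement can be sketched as a wrapper that records processed message IDs and skips duplicates. The in-memory Set here is a stand-in for illustration only; a production consumer would use a durable store such as a DynamoDB conditional write (`attribute_not_exists`) keyed by message ID so duplicates are skipped across invocations and instances.

```typescript
// In-memory stand-in for a durable idempotency store (illustration only).
const processed = new Set<string>();

// Runs the handler once per messageId; duplicate deliveries are skipped.
// Returns true if the handler ran, false if the message was a duplicate.
export async function processOnce(
  messageId: string,
  handler: () => Promise<void>,
): Promise<boolean> {
  if (processed.has(messageId)) {
    return false; // duplicate delivery — side effects already applied
  }
  await handler();
  processed.add(messageId); // record only after the handler succeeds
  return true;
}
```

Recording the ID only after the handler succeeds means a crash mid-processing leads to a retry rather than a lost message — the at-least-once delivery model does the rest.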

Observability is harder in event-driven systems than in synchronous request-response architectures because there is no single request thread to trace. Propagate correlation IDs through every event, log the event source and message ID in every consumer, and set up CloudWatch alarms on DLQ depth and consumer error rates. If you cannot trace an event from producer to every consumer that processed it, your observability is insufficient.
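Correlation-ID propagation can be sketched as a small helper: reuse the ID from the incoming event's detail when present, otherwise mint one, and stamp it on every outgoing event. The `correlationId` field name is a convention assumed here, not an AWS-defined attribute.

```typescript
import { randomUUID } from 'node:crypto';

// Copies the correlation ID from an incoming event's detail onto an
// outgoing event's detail, minting a new one if the chain has none.
export function withCorrelationId<T extends Record<string, unknown>>(
  incomingDetail: Record<string, unknown>,
  outgoingDetail: T,
): T & { correlationId: string } {
  const correlationId =
    typeof incomingDetail.correlationId === 'string'
      ? incomingDetail.correlationId
      : randomUUID();
  return { ...outgoingDetail, correlationId };
}
```

Every consumer then logs the same ID alongside the event source and message ID, which is what makes a producer-to-consumer trace possible without a single request thread.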

Anti-Patterns

  • Recursive event loops — An S3-triggered Lambda that writes back to the same bucket prefix it listens on creates a runaway loop: every write triggers another invocation, generating unbounded invocations and cost until you intervene. Always use separate input and output prefixes or buckets.
  • Missing dead-letter queues — Without a DLQ, messages that repeatedly fail processing are retried until they expire or block the queue. Failed messages disappear silently with no way to inspect, debug, or replay them.
  • Processing entire SQS batches as all-or-nothing — Without ReportBatchItemFailures, a single failed message causes the entire batch to be retried, re-processing messages that already succeeded. This wastes compute, risks side-effect duplication, and can cause cascading failures.
  • Tight coupling through event schemas — Adding required fields to an event schema breaks all existing consumers that do not handle the new field. Use schema evolution practices: new fields should be optional, consumers should ignore unknown fields, and breaking changes require a new event type.
  • Synchronous fan-out via direct Lambda invocations — Calling multiple Lambda functions synchronously from a producer creates tight coupling and means the producer blocks until all consumers finish. Use SNS fan-out or EventBridge rules so the producer fires and forgets.
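The recursive-loop anti-pattern can also be guarded against defensively inside the handler itself, as a second line of defense behind separate prefixes. A sketch, assuming the convention that outputs live under a dedicated prefix such as `thumbnails/`:

```typescript
// Returns true when an S3 key is one of our own outputs and should be
// skipped. The 'thumbnails/' default is an assumed naming convention.
export function isOwnOutput(key: string, outputPrefix = 'thumbnails/'): boolean {
  return key.startsWith(outputPrefix);
}

// Inside an S3 handler's record loop:
//   if (isOwnOutput(record.s3.object.key)) continue;
```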

Overview

Event-driven architecture is fundamental to serverless systems. Instead of synchronous request-response, services emit events that trigger downstream processing. AWS provides several event sources: S3 notifications for object changes, SQS for reliable message queuing with at-least-once delivery, and EventBridge for routing events across services with content-based filtering. These triggers decouple producers from consumers and enable scalable, resilient workflows.

Setup & Configuration

S3 event trigger (SAM)

Resources:
  ImageProcessorFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/image-processor.handler
      Runtime: nodejs20.x
      MemorySize: 1024
      Timeout: 60
      Environment:
        Variables:
          PROCESSED_BUCKET: !Ref ProcessedBucket
      Policies:
        - S3ReadPolicy:
            BucketName: !Ref UploadBucket
        - S3CrudPolicy:
            BucketName: !Ref ProcessedBucket
      Events:
        S3Upload:
          Type: S3
          Properties:
            Bucket: !Ref UploadBucket
            Events: s3:ObjectCreated:*
            Filter:
              S3Key:
                Rules:
                  - Name: suffix
                    Value: .jpg

  UploadBucket:
    Type: AWS::S3::Bucket

  ProcessedBucket:
    Type: AWS::S3::Bucket

SQS trigger (SAM)

Resources:
  OrderProcessorFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/order-processor.handler
      Runtime: nodejs20.x
      Events:
        OrderQueue:
          Type: SQS
          Properties:
            Queue: !GetAtt OrderQueue.Arn
            BatchSize: 10
            MaximumBatchingWindowInSeconds: 5
            FunctionResponseTypes:
              - ReportBatchItemFailures

  OrderQueue:
    Type: AWS::SQS::Queue
    Properties:
      VisibilityTimeout: 300
      RedrivePolicy:
        deadLetterTargetArn: !GetAtt OrderDLQ.Arn
        maxReceiveCount: 3

  OrderDLQ:
    Type: AWS::SQS::Queue

EventBridge rule (SAM)

Resources:
  AuditFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/audit.handler
      Runtime: nodejs20.x
      Events:
        OrderCreated:
          Type: EventBridgeRule
          Properties:
            EventBusName: !Ref AppEventBus
            Pattern:
              source:
                - "myapp.orders"
              detail-type:
                - "OrderCreated"
              detail:
                total:
                  - numeric: [">=", 1000]

  AppEventBus:
    Type: AWS::Events::EventBus
    Properties:
      Name: myapp-events
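To make the content-based filter above concrete, here is an illustrative check of what that rule admits. This mimics a single `numeric` comparison from EventBridge's pattern language for clarity; it is not the service's matcher.

```typescript
// Events routed to AuditFunction by the rule above: matching source and
// detail-type, with detail.total >= 1000.
interface AppEvent {
  source: string;
  detailType: string;
  detail: { total: number };
}

export function matchesAuditRule(event: AppEvent): boolean {
  return (
    event.source === 'myapp.orders' &&
    event.detailType === 'OrderCreated' &&
    event.detail.total >= 1000
  );
}
```

A $1,500 order reaches the audit function; a $500 order is filtered out by EventBridge before any Lambda is invoked, which is the point of content-based filtering — consumers never pay for events they would ignore.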

Core Patterns

S3 event handler

import { S3Event } from 'aws-lambda';
import { S3Client, GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';
import sharp from 'sharp';

const s3 = new S3Client({});

export const handler = async (event: S3Event) => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

    const { Body } = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
    const input = await Body!.transformToByteArray();

    const thumbnail = await sharp(input).resize(300, 300, { fit: 'cover' }).jpeg().toBuffer();

    await s3.send(new PutObjectCommand({
      Bucket: process.env.PROCESSED_BUCKET!,
      Key: `thumbnails/${key}`,
      Body: thumbnail,
      ContentType: 'image/jpeg',
    }));
  }
};

SQS batch processing with partial failure reporting

import { SQSBatchResponse, SQSEvent, SQSRecord } from 'aws-lambda';

export const handler = async (event: SQSEvent): Promise<SQSBatchResponse> => {
  const batchItemFailures: { itemIdentifier: string }[] = [];

  const results = await Promise.allSettled(
    event.Records.map((record) => processRecord(record))
  );

  results.forEach((result, index) => {
    if (result.status === 'rejected') {
      batchItemFailures.push({
        itemIdentifier: event.Records[index].messageId,
      });
    }
  });

  return { batchItemFailures };
};

async function processRecord(record: SQSRecord): Promise<void> {
  const order = JSON.parse(record.body);
  await saveOrder(order);
  await notifyFulfillment(order);
}

Publishing and consuming EventBridge events

import { EventBridgeClient, PutEventsCommand } from '@aws-sdk/client-eventbridge';

// Minimal shape for the examples below
interface Order {
  id: string;
  customerId: string;
  total: number;
  items: unknown[];
}

const eventBridge = new EventBridgeClient({});

// Publisher
export async function publishOrderCreated(order: Order) {
  const result = await eventBridge.send(new PutEventsCommand({
    Entries: [
      {
        EventBusName: 'myapp-events',
        Source: 'myapp.orders',
        DetailType: 'OrderCreated',
        Detail: JSON.stringify({
          orderId: order.id,
          customerId: order.customerId,
          total: order.total,
          items: order.items,
        }),
      },
    ],
  }));

  // PutEvents can partially fail without throwing — surface it explicitly
  if (result.FailedEntryCount && result.FailedEntryCount > 0) {
    throw new Error(`Failed to publish ${result.FailedEntryCount} event(s)`);
  }
}

// Consumer — separate Lambda subscribed via EventBridge rule
import { EventBridgeEvent } from 'aws-lambda';

export const handler = async (
  event: EventBridgeEvent<'OrderCreated', { orderId: string; total: number }>,
) => {
  const detail = event.detail;
  console.log(`Auditing order ${detail.orderId} for $${detail.total}`);
  await writeAuditLog(detail);
};

Fan-out with SNS to multiple SQS queues

Resources:
  OrderTopic:
    Type: AWS::SNS::Topic

  EmailQueue:
    Type: AWS::SQS::Queue
  AnalyticsQueue:
    Type: AWS::SQS::Queue

  EmailSubscription:
    Type: AWS::SNS::Subscription
    Properties:
      TopicArn: !Ref OrderTopic
      Protocol: sqs
      Endpoint: !GetAtt EmailQueue.Arn

  AnalyticsSubscription:
    Type: AWS::SNS::Subscription
    Properties:
      TopicArn: !Ref OrderTopic
      Protocol: sqs
      Endpoint: !GetAtt AnalyticsQueue.Arn
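As written, the subscriptions above will not deliver: SNS also needs permission to send to each queue. A sketch of the missing queue policy, using the resource names from this template:

```yaml
  OrderQueuePolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      Queues:
        - !Ref EmailQueue
        - !Ref AnalyticsQueue
      PolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: sns.amazonaws.com
            Action: sqs:SendMessage
            Resource: "*"
            Condition:
              ArnEquals:
                aws:SourceArn: !Ref OrderTopic
```

The `aws:SourceArn` condition scopes the grant so only this topic can write to the queues, not any SNS topic in any account.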

Best Practices

  • Always configure a dead-letter queue (DLQ) on SQS triggers so that messages that repeatedly fail processing are captured for inspection instead of being retried indefinitely and blocking the queue.
  • Use ReportBatchItemFailures with SQS Lambda triggers to retry only the failed messages in a batch rather than the entire batch — this prevents successfully processed messages from being re-processed.
  • Prefer EventBridge over SNS for new event routing: EventBridge supports content-based filtering, schema discovery, archive and replay, and cross-account routing without managing topic subscriptions.

Common Pitfalls

  • S3 event notifications can trigger recursive loops if a Lambda writes back to the same bucket prefix that triggers it — always use separate prefixes or buckets for input and output.
  • The SQS visibility timeout must be at least the Lambda function timeout, and AWS recommends at least six times it; if the function takes longer than the visibility timeout, the message becomes visible again and is delivered to a second, concurrent invocation — a duplicate.
