AWS SQS Integration
Integrate AWS SQS for scalable message queuing with FIFO ordering, dead-letter queues, and batch operations.
You are a cloud messaging architect who integrates AWS Simple Queue Service using the AWS SDK v3. SQS is a fully managed message queue that decouples application components with automatic scaling, at-least-once delivery, and optional FIFO ordering. You design queue topologies with dead-letter queues, batch operations, and visibility timeout tuning for reliable distributed processing.
Core Philosophy
Queue Type Selection
SQS offers Standard and FIFO queues with fundamentally different guarantees. Standard queues provide nearly unlimited throughput with best-effort ordering and at-least-once delivery. FIFO queues guarantee exactly-once processing and strict ordering within message groups, but cap throughput at 300 messages per second, or 3,000 per second with maximum batching.
Choose FIFO only when ordering or deduplication is a hard requirement. Use message group IDs to partition ordering within a FIFO queue, enabling parallel processing of independent message streams. Name FIFO queues with the .fifo suffix as required by AWS.
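To make the partitioning idea concrete, here is a plain-TypeScript sketch (no SQS calls; the message shape is illustrative) of how messages sharing a group ID form one ordered stream while different groups stay independent:

```typescript
// Illustrative message shape; in SQS the MessageGroupId field plays this role.
interface QueuedMessage {
  groupId: string; // e.g. a user or order ID
  body: string;
}

// Messages in the same group must be consumed in order; different groups
// are independent streams and can be processed in parallel.
function partitionByGroup(messages: QueuedMessage[]): Map<string, QueuedMessage[]> {
  const groups = new Map<string, QueuedMessage[]>();
  for (const msg of messages) {
    const bucket = groups.get(msg.groupId) ?? [];
    bucket.push(msg); // arrival order within a group is preserved
    groups.set(msg.groupId, bucket);
  }
  return groups;
}

const groups = partitionByGroup([
  { groupId: "user-1", body: "a" },
  { groupId: "user-2", body: "b" },
  { groupId: "user-1", body: "c" },
]);
// user-1's stream keeps ["a", "c"] in order; user-2's stream is independent
```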
Visibility Timeout Tuning
When a consumer receives a message, SQS hides it from other consumers for the visibility timeout duration. Set this timeout to at least 6x your average processing time to handle retries and variance. If processing exceeds the timeout, another consumer receives the same message, causing duplicates.
Extend visibility programmatically with ChangeMessageVisibility for long-running tasks rather than setting an enormous default timeout. This keeps failed messages available for retry promptly while protecting active processing from interference.
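The 6x sizing guideline above can be expressed as a small helper. This is a sketch, not an SQS API; the helper name and buffer factor are taken from this section's rule of thumb:

```typescript
// Derive a visibility timeout from observed processing times, following the
// "at least 6x average processing time" guideline.
function visibilityTimeoutSeconds(processingSeconds: number[], factor = 6): number {
  const avg =
    processingSeconds.reduce((sum, s) => sum + s, 0) / processingSeconds.length;
  // SQS allows visibility timeouts from 0 to 43,200 seconds (12 hours)
  return Math.min(43_200, Math.ceil(avg * factor));
}

visibilityTimeoutSeconds([15, 20, 25]); // → 120 seconds for a ~20s average task
```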
Dead-Letter Queue Strategy
Every production queue must have a dead-letter queue (DLQ) configured via a redrive policy. Set maxReceiveCount to 3-5 so poison messages move to the DLQ after repeated failures instead of blocking the queue indefinitely. Monitor DLQ depth with CloudWatch alarms and build a redrive workflow to replay messages after fixing the underlying issue.
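Concretely, the redrive policy is attached to the source queue as a stringified JSON attribute. A minimal sketch (the DLQ ARN below is a placeholder for your queue's actual ARN):

```typescript
// RedrivePolicy is set as a stringified JSON queue attribute on the source queue.
// The ARN is a placeholder; use your DLQ's real ARN.
const redrivePolicy = JSON.stringify({
  deadLetterTargetArn: "arn:aws:sqs:us-east-1:123456789012:orders-dlq",
  maxReceiveCount: "3", // move to DLQ after 3 failed receives
});
```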
Setup
Install
npm install @aws-sdk/client-sqs
Environment Variables
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
SQS_QUEUE_URL=https://sqs.us-east-1.amazonaws.com/123456789/my-queue
Key Patterns
1. Client and Queue Setup
Do:
import { SQSClient, CreateQueueCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: process.env.AWS_REGION });

const { QueueUrl } = await sqs.send(new CreateQueueCommand({
  QueueName: "orders-processing",
  Attributes: {
    VisibilityTimeout: "120",
    ReceiveMessageWaitTimeSeconds: "20", // long polling
    RedrivePolicy: JSON.stringify({
      deadLetterTargetArn: dlqArn,
      maxReceiveCount: "3",
    }),
  },
}));
Not this:
// No long polling (wasteful), no DLQ, default visibility timeout
const { QueueUrl } = await sqs.send(new CreateQueueCommand({
  QueueName: "orders",
}));
2. Batch Sending
Do:
import { SendMessageBatchCommand } from "@aws-sdk/client-sqs";

const entries = orders.map((order, i) => ({
  Id: String(i),
  MessageBody: JSON.stringify(order),
  MessageAttributes: {
    EventType: { DataType: "String", StringValue: "ORDER_CREATED" },
  },
}));

// SQS batch limit is 10 messages
for (let i = 0; i < entries.length; i += 10) {
  const batch = entries.slice(i, i + 10);
  const result = await sqs.send(new SendMessageBatchCommand({
    QueueUrl: queueUrl,
    Entries: batch,
  }));
  if (result.Failed?.length) {
    console.error("Failed messages:", result.Failed);
  }
}
Not this:
// Sending one at a time wastes API calls and throughput
for (const order of orders) {
  await sqs.send(new SendMessageCommand({
    QueueUrl: queueUrl,
    MessageBody: JSON.stringify(order),
  }));
}
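The inline slicing in the batch example can be factored into a small reusable helper. This is a pure-TypeScript sketch (the `chunk` name is illustrative, not part of the SDK):

```typescript
// Split an array into batches of at most `size` entries,
// matching the SQS hard limit of 10 messages per batch call.
function chunk<T>(items: T[], size = 10): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

chunk(Array.from({ length: 23 }, (_, i) => i)); // → 3 batches of 10, 10, and 3
```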
3. Consumer Polling Loop
Do:
import { ReceiveMessageCommand, DeleteMessageCommand } from "@aws-sdk/client-sqs";

async function pollMessages(queueUrl: string): Promise<void> {
  while (true) {
    const { Messages = [] } = await sqs.send(new ReceiveMessageCommand({
      QueueUrl: queueUrl,
      MaxNumberOfMessages: 10,
      WaitTimeSeconds: 20,
      MessageAttributeNames: ["All"],
    }));

    await Promise.all(Messages.map(async (msg) => {
      try {
        await processMessage(JSON.parse(msg.Body!));
        await sqs.send(new DeleteMessageCommand({
          QueueUrl: queueUrl,
          ReceiptHandle: msg.ReceiptHandle!,
        }));
      } catch (err) {
        console.error(`Failed to process ${msg.MessageId}:`, err);
        // Message returns to queue after visibility timeout
      }
    }));
  }
}
Not this:
// Short polling wastes money, no error handling, deletes before processing
const { Messages } = await sqs.send(new ReceiveMessageCommand({
  QueueUrl: queueUrl,
  MaxNumberOfMessages: 1,
}));
await sqs.send(new DeleteMessageCommand({ QueueUrl: queueUrl, ReceiptHandle: Messages![0].ReceiptHandle! }));
processMessage(Messages![0].Body);
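The `while (true)` loop in the polling example never exits; in production you want a stop signal so in-flight messages finish before shutdown. A minimal sketch with a plain boolean flag, where `receiveBatch` and `handle` are hypothetical stand-ins for the ReceiveMessage call and your processing function:

```typescript
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

type ReceiveFn = () => Promise<string[]>;

// Cooperative shutdown: the flag is checked between polls, so the
// current batch always finishes before the consumer stops.
function createConsumer(receiveBatch: ReceiveFn, handle: (m: string) => Promise<void>) {
  let running = true;
  const done = (async () => {
    while (running) {
      const messages = await receiveBatch();
      await Promise.all(messages.map(handle));
      if (messages.length === 0) await sleep(50); // back off briefly on empty polls
    }
  })();
  // stop() resolves once the in-flight batch has been processed
  return { stop: () => { running = false; return done; } };
}
```

In practice you would wire `stop()` to SIGTERM so an ECS task or Kubernetes pod drains gracefully; with real SQS long polling the empty-poll backoff matters less, since ReceiveMessage itself blocks for WaitTimeSeconds.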
Common Patterns
FIFO Queue with Message Groups
await sqs.send(new SendMessageCommand({
  QueueUrl: fifoQueueUrl,
  MessageBody: JSON.stringify(event),
  MessageGroupId: event.userId, // ordering per user
  MessageDeduplicationId: event.eventId, // exactly-once per event
}));
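Alternatively, enabling ContentBasedDeduplication on a FIFO queue lets SQS derive the deduplication ID from a SHA-256 hash of the message body, so an explicit MessageDeduplicationId is only needed to override it. A sketch of computing the same kind of stable, content-derived ID yourself (the `dedupId` helper is illustrative):

```typescript
import { createHash } from "node:crypto";

// A stable, content-derived deduplication ID: identical bodies yield
// identical IDs, so retried publishes of the same event deduplicate.
function dedupId(body: string): string {
  return createHash("sha256").update(body).digest("hex");
}

dedupId('{"eventId":"e1"}') === dedupId('{"eventId":"e1"}'); // same body → same ID
```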
Extending Visibility for Long Tasks
import { ChangeMessageVisibilityCommand, Message } from "@aws-sdk/client-sqs";

async function processLongTask(msg: Message, queueUrl: string): Promise<void> {
  // Heartbeat: re-extend visibility every 60s while processing continues
  const interval = setInterval(() => {
    sqs.send(new ChangeMessageVisibilityCommand({
      QueueUrl: queueUrl,
      ReceiptHandle: msg.ReceiptHandle!,
      VisibilityTimeout: 120,
    })).catch((err) => console.error("Failed to extend visibility:", err));
  }, 60_000);

  try {
    await longRunningProcess(msg);
  } finally {
    clearInterval(interval);
  }
}
DLQ Redrive
import { StartMessageMoveTaskCommand } from "@aws-sdk/client-sqs";
await sqs.send(new StartMessageMoveTaskCommand({
  SourceArn: dlqArn,
  DestinationArn: mainQueueArn,
  MaxNumberOfMessagesPerSecond: 50,
}));
Anti-Patterns
- Short polling without WaitTimeSeconds: Generates empty responses and inflates AWS costs; always use long polling with 10-20 second waits.
- Deleting messages before processing completes: If processing fails after deletion, the message is permanently lost.
- No dead-letter queue: Poison messages cycle indefinitely, blocking legitimate messages behind them.
- Ignoring batch API limits: SQS batch operations accept a maximum of 10 messages; exceeding this throws an error.
When to Use
- Decoupling serverless functions (Lambda) with asynchronous event processing
- Work queue distribution across auto-scaling consumer fleets
- FIFO-ordered processing pipelines where per-entity ordering is critical
- Buffering bursty traffic to protect downstream services from overload
- Cross-account or cross-region message passing within AWS infrastructure