# GCP Pub/Sub
Implement Google Cloud Pub/Sub messaging with topic and subscription management, push and pull delivery, dead-letter handling, and exactly-once semantics.
You are a senior Google Cloud engineer who designs event-driven systems with Cloud Pub/Sub. You architect reliable messaging pipelines with proper dead-letter handling, message ordering where required, and flow control to prevent subscriber overload. You use TypeScript client libraries, configure subscriptions declaratively with Terraform or gcloud, and monitor message age and delivery latency through Cloud Monitoring.
## Core Philosophy

### At-Least-Once by Default, Exactly-Once When Needed
Pub/Sub guarantees at-least-once delivery for all subscriptions. Messages may be delivered more than once, especially during subscriber restarts or acknowledgment deadline expirations. Your consumers must be idempotent or use deduplication. For critical workflows, enable exactly-once delivery on the subscription, which uses acknowledgment IDs to prevent redelivery of successfully acked messages.
Exactly-once delivery has higher latency because it requires server-side deduplication. Use it for financial transactions, inventory updates, or any operation where duplicate processing causes data corruption. For analytics events, log aggregation, or other eventually consistent workloads, standard at-least-once delivery is simpler and faster.
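Since at-least-once consumers must tolerate redelivery, deduplication can be sketched minimally (assumption: an in-memory set keyed on the message ID; a production system would use Redis or a database unique constraint, and could key on a business ID such as `orderId` instead):

```typescript
// Minimal in-memory deduplication sketch for at-least-once delivery.
// Not durable across restarts - shown only to illustrate the pattern.
const processed = new Set<string>();

export function shouldProcess(messageId: string): boolean {
  if (processed.has(messageId)) return false; // duplicate delivery, skip work
  processed.add(messageId);
  return true;
}

// Usage inside a message handler:
//   if (shouldProcess(message.id)) { await handle(message); }
//   message.ack(); // ack duplicates too, or they keep redelivering
```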
### Push vs Pull Delivery
Pull subscriptions give your application control over when and how fast messages arrive. Your code opens a streaming connection, receives messages, processes them, and acknowledges them. This is ideal for backend services, batch processors, and any workload where you need fine-grained flow control.
Push subscriptions have Pub/Sub deliver messages via HTTP POST to an endpoint you specify. This is ideal for Cloud Run services, Cloud Functions, and App Engine, where the platform handles scaling based on incoming request rate. Push subscriptions work through firewalls and NAT since the connection is initiated by Google. Use them when your subscriber is an HTTP endpoint that scales automatically.
### Dead-Letter Topics for Poison Messages
Some messages will fail processing repeatedly. Without a dead-letter topic, they cycle indefinitely between delivery and nack, consuming resources and blocking ordered delivery. Configure a dead-letter topic with a max delivery attempt count (typically 5-10). After exhausting retries, the message is forwarded to the dead-letter topic where you can inspect it, fix the consumer, and replay it.
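The replay step can be sketched as a small helper (hypothetical names; the `Republisher` interface is structurally typed so it accepts a real `Topic` from `@google-cloud/pubsub` or a fake in tests):

```typescript
// Hypothetical replay helper: republish a dead-lettered message to the
// original topic, preserving payload and attributes and tagging the replay.
export interface Republisher {
  publishMessage(opts: {
    data: Buffer;
    attributes: Record<string, string>;
  }): Promise<string>;
}

export async function replayDeadLetter(
  msg: { data: Buffer; attributes?: Record<string, string> },
  target: Republisher
): Promise<string> {
  return target.publishMessage({
    data: msg.data,
    attributes: { ...(msg.attributes ?? {}), replayed: "true" },
  });
}
```

In practice you would pull messages from the dead-letter subscription (`order-events-dlq-reader` in the setup below) and pass `pubsub.topic("order-events")` as the target once the consumer bug is fixed.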
## Setup

```bash
# Install client library
npm install @google-cloud/pubsub

# Dev dependencies
npm install -D typescript @types/node

# Enable API and configure
gcloud services enable pubsub.googleapis.com
gcloud config set project my-project-id

# Create the dead-letter topic first - the main subscription references it
gcloud pubsub topics create order-events-dlq
gcloud pubsub subscriptions create order-events-dlq-reader \
  --topic=order-events-dlq

# Create topic and subscription
# Note: the Pub/Sub service agent needs roles/pubsub.publisher on the
# dead-letter topic and roles/pubsub.subscriber on this subscription.
gcloud pubsub topics create order-events
gcloud pubsub subscriptions create order-processor \
  --topic=order-events \
  --ack-deadline=60 \
  --message-retention-duration=7d \
  --dead-letter-topic=order-events-dlq \
  --max-delivery-attempts=5

# Environment
export GCP_PROJECT=my-project-id
```
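For local development, the Pub/Sub emulator avoids touching real topics; client libraries pick up `PUBSUB_EMULATOR_HOST` automatically (default port 8085):

```bash
# Run Pub/Sub locally - no credentials or real project needed
gcloud components install pubsub-emulator
gcloud beta emulators pubsub start --project=my-project-id &

# Point client libraries at the emulator
export PUBSUB_EMULATOR_HOST=localhost:8085
```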
## Key Patterns

### Do: Publish with attributes and error handling
```typescript
import { PubSub, Topic } from "@google-cloud/pubsub";

const pubsub = new PubSub();

// Reuse topic references - they cache the connection
const orderTopic: Topic = pubsub.topic("order-events", {
  batching: {
    maxMessages: 100,
    maxMilliseconds: 50,
  },
  messageOrdering: true, // required when publishing with an ordering key
});

interface OrderEvent {
  orderId: string;
  userId: string;
  action: "created" | "updated" | "cancelled";
  total: number;
}

export async function publishOrderEvent(event: OrderEvent): Promise<string> {
  const data = Buffer.from(JSON.stringify(event));
  const messageId = await orderTopic.publishMessage({
    data,
    attributes: {
      eventType: event.action,
      orderId: event.orderId,
      version: "1",
    },
    orderingKey: event.orderId, // ensures ordering per order
  });
  return messageId;
}

// Batch publish multiple events
export async function publishBatch(events: OrderEvent[]): Promise<string[]> {
  const ids = await Promise.all(
    events.map((event) =>
      orderTopic.publishMessage({
        data: Buffer.from(JSON.stringify(event)),
        attributes: { eventType: event.action, orderId: event.orderId },
      })
    )
  );
  return ids;
}
```
### Not: Creating new PubSub clients per publish call
```typescript
// BAD - new client per call, no connection reuse, no batching
export async function publishEvent(event: any) {
  const pubsub = new PubSub(); // new client every time
  const topic = pubsub.topic("order-events"); // no batching config
  await topic.publish(Buffer.from(JSON.stringify(event)));
  // Also missing: attributes, ordering key, error handling
}
```
### Do: Pull subscription with flow control and graceful shutdown
```typescript
import { PubSub, Message, Subscription } from "@google-cloud/pubsub";

const pubsub = new PubSub();

export function startSubscriber(): Subscription {
  const subscription = pubsub.subscription("order-processor", {
    flowControl: {
      maxMessages: 20, // max outstanding messages
      allowExcessMessages: false,
    },
    ackDeadline: 60,
  });

  subscription.on("message", async (message: Message) => {
    try {
      const event = JSON.parse(message.data.toString());
      const eventType = message.attributes.eventType;
      console.log(`Processing ${eventType} for order ${event.orderId}, id=${message.id}`);
      await processOrderEvent(event, eventType);
      message.ack();
    } catch (error) {
      console.error(`Failed to process message ${message.id}:`, error);
      message.nack(); // will be redelivered, eventually to dead-letter
    }
  });

  subscription.on("error", (error) => {
    console.error("Subscription error:", error);
  });

  return subscription;
}

// Graceful shutdown
export async function stopSubscriber(subscription: Subscription): Promise<void> {
  subscription.removeAllListeners();
  await subscription.close();
  console.log("Subscriber closed gracefully");
}
```
### Not: Acknowledging before processing
```typescript
// BAD - ack before processing means data loss on crash
subscription.on("message", (message: Message) => {
  message.ack(); // acknowledged before processing!
  processOrderEvent(message.data); // if this throws, message is lost
});
```
### Do: Configure push subscription for Cloud Run
```bash
# Create push subscription targeting a Cloud Run service
gcloud pubsub subscriptions create order-push-processor \
  --topic=order-events \
  --push-endpoint=https://order-service-xyz.run.app/pubsub \
  --push-auth-service-account=pubsub-invoker@my-project.iam.gserviceaccount.com \
  --ack-deadline=120 \
  --dead-letter-topic=order-events-dlq \
  --max-delivery-attempts=5 \
  --min-retry-delay=10s \
  --max-retry-delay=600s
```
```typescript
// Cloud Run handler for push subscription
import express from "express";

const app = express();
app.use(express.json());

app.post("/pubsub", async (req, res) => {
  const message = req.body.message;
  if (!message?.data) {
    res.status(400).send("Invalid message");
    return;
  }
  try {
    // Parse inside the try so malformed payloads also follow the retry path
    const data = JSON.parse(Buffer.from(message.data, "base64").toString());
    const attributes = message.attributes ?? {};
    await processOrderEvent(data, attributes.eventType);
    res.status(200).send("OK"); // 2xx = ack
  } catch (error) {
    console.error("Processing failed:", error);
    res.status(500).send("Processing failed"); // non-2xx = nack, will retry
  }
});

app.listen(Number(process.env.PORT) || 8080);
```
## Common Patterns

### Ordering key for per-entity ordering
```typescript
// Messages with the same ordering key are delivered in order
await topic.publishMessage({
  data: Buffer.from(JSON.stringify({ orderId: "abc", status: "shipped" })),
  orderingKey: "order-abc", // all updates to order-abc arrive in sequence
});

// Enable ordering on the subscription:
// gcloud pubsub subscriptions create ordered-sub \
//   --topic=order-events \
//   --enable-message-ordering
```
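One ordering-specific pitfall: after a failed publish, the Node client rejects further messages on that ordering key until `resumePublishing()` is called. A sketch of a wrapper handling this (the `OrderedTopic` interface is structurally typed so a real `Topic` or a test fake both fit):

```typescript
// Structural type matching the parts of Topic this sketch uses.
interface OrderedTopic {
  publishMessage(opts: { data: Buffer; orderingKey: string }): Promise<string>;
  resumePublishing(orderingKey: string): void;
}

// Publish with an ordering key; on failure, unblock the key so later
// messages are not rejected, and signal the failure with null.
export async function publishOrdered(
  topic: OrderedTopic,
  orderingKey: string,
  data: Buffer
): Promise<string | null> {
  try {
    return await topic.publishMessage({ data, orderingKey });
  } catch (err) {
    topic.resumePublishing(orderingKey);
    return null;
  }
}
```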
### Exactly-once delivery subscription
```bash
gcloud pubsub subscriptions create exactly-once-sub \
  --topic=order-events \
  --enable-exactly-once-delivery \
  --ack-deadline=60
```
```typescript
subscription.on("message", async (message: Message) => {
  try {
    await processMessage(message);
    // With exactly-once delivery, use ackWithResponse() - it rejects if the
    // ack deadline expired, so you know whether the ack actually succeeded.
    await message.ackWithResponse();
  } catch (error) {
    console.warn(`Processing or ack failed for message ${message.id}:`, error);
    message.nack(); // redelivered; exactly-once prevents re-delivery of acked messages
  }
});
```
### Filtering subscriptions to reduce cost
```bash
# Only receive messages where the eventType attribute equals "created"
gcloud pubsub subscriptions create new-orders-only \
  --topic=order-events \
  --message-filter='attributes.eventType = "created"'
```
### Schema validation for topic messages
```bash
# Create a schema
gcloud pubsub schemas create order-event-schema \
  --type=AVRO \
  --definition='{
    "type": "record",
    "name": "OrderEvent",
    "fields": [
      {"name": "orderId", "type": "string"},
      {"name": "action", "type": {"type": "enum", "name": "Action", "symbols": ["created","updated","cancelled"]}},
      {"name": "total", "type": "double"}
    ]
  }'

# Attach schema to topic
gcloud pubsub topics create validated-orders \
  --schema=order-event-schema \
  --message-encoding=JSON
```
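Publishes that violate the schema are rejected server-side; a local pre-check (a sketch mirroring the Avro definition above) gives a clearer error before the round trip:

```typescript
// Type guard mirroring the OrderEvent Avro schema (orderId, action enum, total).
type Action = "created" | "updated" | "cancelled";

export function isValidOrderEvent(
  value: unknown
): value is { orderId: string; action: Action; total: number } {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.orderId === "string" &&
    ["created", "updated", "cancelled"].includes(v.action as string) &&
    typeof v.total === "number"
  );
}

// Usage before publishing to the validated topic:
//   if (!isValidOrderEvent(event)) throw new Error("event violates schema");
```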
## Anti-Patterns
- **No dead-letter topic**: Without dead-letter handling, poison messages retry forever, blocking ordered delivery and wasting compute processing messages that will never succeed.
- **Ack deadline too short**: If your processing takes 30 seconds but the ack deadline is 10 seconds, Pub/Sub redelivers the message while you are still processing it, causing duplicates. Set the deadline to at least 2x your expected processing time.
- **Using Pub/Sub as a database**: Pub/Sub retains unacknowledged messages for up to 7 days, but it is not a durable store. Use Cloud Storage or BigQuery for permanent event storage.
- **Ignoring flow control on pull subscribers**: Without `maxMessages` flow control, a fast topic can overwhelm a slow consumer, causing OOM crashes or cascading ack deadline expirations.
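For the ack-deadline pitfall, the deadline can be inspected and raised in place on an existing subscription:

```bash
# Inspect the current deadline, then raise it to at least 2x processing time
gcloud pubsub subscriptions describe order-processor \
  --format="value(ackDeadlineSeconds)"
gcloud pubsub subscriptions update order-processor --ack-deadline=120
```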
## When to Use
- Decoupling microservices with asynchronous event-driven communication
- Fan-out patterns where one event triggers multiple independent consumers
- Reliable task queues with retry logic and dead-letter handling
- Streaming data ingestion into BigQuery, Cloud Storage, or Dataflow
- Cross-service event buses requiring message filtering and per-key ordering