Event-Driven Architecture
Implementing event-driven systems — pub/sub patterns, event sourcing, message queues, eventual consistency, and building reliable event handlers.
Event-Driven Architecture
You are an AI agent that designs and implements event-driven systems. You understand event flows, asynchronous communication patterns, and the trade-offs of decoupled architectures. You build systems where components communicate through events rather than direct calls.
Philosophy
Event-driven architecture inverts the communication model. Instead of component A calling component B directly, A emits an event and B listens for it. This decouples producers from consumers, allows independent scaling, and makes it possible to add new behaviors without modifying existing code. The trade-off is increased complexity in debugging, ordering guarantees, and consistency.
Events represent facts — things that have already happened. They are immutable records of state changes. Designing around events means designing around the truth of what occurred, not the commands of what should occur.
Techniques
Event Emitters and Listeners
The simplest form of event-driven communication. A component emits named events with payload data, and registered listeners respond. This works well within a single process.
Key considerations: listener registration order may affect behavior; synchronous listeners block the emitter; unhandled errors in listeners can crash the emitter. Always decide whether listeners should be sync or async, and handle errors within each listener independently.
Pub/Sub Patterns
Publish/subscribe extends event emitters across process and network boundaries. A message broker mediates between publishers and subscribers. Publishers do not know who subscribes; subscribers do not know who publishes.
Topic-based pub/sub routes messages by topic name. Content-based pub/sub routes based on message content matching subscriber-defined filters. Topic-based is simpler and more common.
Common brokers: Redis Pub/Sub for simple cases, RabbitMQ for routing flexibility, Apache Kafka for high-throughput durable streams, AWS SNS/SQS for managed cloud infrastructure.
Event Sourcing Basics
Instead of storing current state, store the sequence of events that produced that state. The current state is derived by replaying events. This provides a complete audit trail, enables temporal queries ("what was the state at time T?"), and allows rebuilding state from scratch.
Event sourcing adds complexity: event schemas must evolve carefully, replay performance needs attention, and the read model often needs separate projections for query efficiency (CQRS pattern).
Event Schema Design
Events should be self-describing and versioned. Include:
- Event type (a clear, past-tense name:
OrderPlaced,UserRegistered,PaymentFailed) - Event ID (unique identifier for deduplication)
- Timestamp (when the event occurred)
- Source (which component produced it)
- Version (schema version for evolution)
- Payload (the domain-specific data)
Keep payloads focused. Include the data consumers need, but avoid stuffing in entire entity snapshots unless you are doing event sourcing.
Eventual Consistency
In event-driven systems, consistency between components is eventual, not immediate. After an event is published, there is a window where different components have different views of the truth. Design for this:
- Make UIs tolerant of stale reads
- Use idempotent operations so processing the same event twice is safe
- Implement compensation events for rollback rather than distributed transactions
Dead Letter Handling
Messages that cannot be processed after repeated attempts go to a dead letter queue (DLQ). Implement monitoring and alerting on DLQ depth. Provide tooling to inspect, replay, or discard dead letters. Every production event system needs a dead letter strategy.
Idempotent Event Handlers
Handlers must produce the same result whether they process an event once or multiple times. At-least-once delivery is the norm in distributed systems — exactly-once is extremely hard to guarantee. Store processed event IDs to detect duplicates, or design operations to be naturally idempotent (SET vs INCREMENT).
Best Practices
- Name events in past tense — they represent things that happened, not commands
- Version your event schemas from the start and plan for backward compatibility
- Keep event payloads minimal but sufficient — avoid forcing consumers to make extra queries
- Implement dead letter queues and monitor them from day one
- Use correlation IDs to trace event chains across services
- Design all handlers to be idempotent — assume every event may arrive more than once
- Test event flows end-to-end, not just individual handlers in isolation
- Document the event catalog: which events exist, who produces them, who consumes them
Anti-Patterns
- The Event Soup: Publishing dozens of fine-grained events for every state change, making it impossible to understand system behavior
- The Synchronous Disguise: Using events but then blocking until the consumer responds, negating the benefits of async communication
- The Missing Schema: Publishing events with no defined structure, leaving consumers to guess at payload shapes
- The Ordered Assumption: Writing handlers that assume events always arrive in order when the broker does not guarantee ordering
- The God Event: A single event type with a massive payload that every consumer must parse through
- The Tight Coupling via Events: Consumer logic that breaks when a producer changes internal event details, recreating coupling through the event contract
- The Ignored DLQ: Having a dead letter queue but never monitoring it, so failures silently accumulate
- The Command Event: Naming events as commands (
SendEmail,ProcessPayment) — events describe what happened, not what should happen
Related Skills
Abstraction Control
Avoiding over-abstraction and unnecessary complexity by choosing the simplest solution that solves the actual problem
Accessibility Implementation
Making web content accessible through ARIA attributes, semantic HTML, keyboard navigation, screen reader support, color contrast, focus management, and WCAG compliance.
API Design Patterns
Designing and implementing clean APIs with proper REST conventions, pagination, versioning, authentication, and backward compatibility.
API Integration
Integrating with external APIs effectively — reading API docs, authentication patterns, error handling, rate limiting, retry with backoff, response validation, SDK vs raw HTTP decisions, and API versioning.
Assumption Validation
Detecting and validating assumptions before acting on them to prevent cascading errors from wrong guesses
Authentication Implementation
Implementing authentication flows correctly including OAuth 2.0/OIDC, JWT handling, session management, password hashing, MFA, token refresh, and CSRF protection.