Subscriptions
Real-time GraphQL subscriptions with WebSockets and server-sent events
You are an expert in GraphQL subscriptions, helping developers implement real-time data streaming using WebSockets, server-sent events, and pub/sub patterns with Apollo and graphql-ws.
Overview
GraphQL subscriptions enable clients to receive real-time updates when data changes on the server. Unlike queries and mutations that follow a request-response pattern, subscriptions establish a persistent connection over which the server pushes events. The modern ecosystem uses the graphql-ws protocol (replacing the deprecated subscriptions-transport-ws).
Core Concepts
Schema Definition
Subscriptions are defined as root-level fields in the schema:
type Subscription {
postCreated: Post!
postUpdated(id: ID!): Post!
commentAdded(postId: ID!): Comment!
notificationReceived: Notification!
}
PubSub Mechanism
The server uses a publish-subscribe system to emit events to subscribed clients:
import { PubSub } from "graphql-subscriptions";
// In-memory PubSub — suitable for single-server setups only
const pubsub = new PubSub();
// Event name constants
const EVENTS = {
POST_CREATED: "POST_CREATED",
POST_UPDATED: "POST_UPDATED",
COMMENT_ADDED: "COMMENT_ADDED",
} as const;
Subscription Resolvers
Each subscription resolver provides a subscribe function that returns an async iterable of event payloads:
const resolvers = {
Subscription: {
postCreated: {
subscribe: () => pubsub.asyncIterableIterator([EVENTS.POST_CREATED]),
},
postUpdated: {
subscribe: (_, { id }) => {
return pubsub.asyncIterableIterator([`${EVENTS.POST_UPDATED}.${id}`]);
},
},
commentAdded: {
subscribe: (_, { postId }) => {
return pubsub.asyncIterableIterator([`${EVENTS.COMMENT_ADDED}.${postId}`]);
},
// Optional resolve function to transform the payload
resolve: (payload) => payload.comment,
},
},
};
Implementation Patterns
Server Setup with graphql-ws
import { ApolloServer } from "@apollo/server";
import { expressMiddleware } from "@apollo/server/express4";
import { ApolloServerPluginDrainHttpServer } from "@apollo/server/plugin/drainHttpServer";
import { makeExecutableSchema } from "@graphql-tools/schema";
import { WebSocketServer } from "ws";
import { useServer } from "graphql-ws/lib/use/ws";
import express from "express";
import http from "http";
import cors from "cors";
const schema = makeExecutableSchema({ typeDefs, resolvers });
const app = express();
const httpServer = http.createServer(app);
// WebSocket server for subscriptions
const wsServer = new WebSocketServer({
server: httpServer,
path: "/graphql",
});
const wsServerCleanup = useServer(
{
schema,
context: async (ctx) => {
// Authenticate WebSocket connections via connectionParams
const token = ctx.connectionParams?.authToken as string | undefined;
const currentUser = token ? await verifyToken(token) : null;
return { currentUser, pubsub };
},
onConnect: async (ctx) => {
const token = ctx.connectionParams?.authToken;
if (!token) return false; // Reject unauthenticated connections
},
onDisconnect: () => {
console.log("Client disconnected");
},
},
wsServer
);
const server = new ApolloServer({
schema,
plugins: [
ApolloServerPluginDrainHttpServer({ httpServer }),
{
async serverWillStart() {
return {
async drainServer() {
await wsServerCleanup.dispose();
},
};
},
},
],
});
await server.start();
app.use("/graphql", cors(), express.json(), expressMiddleware(server));
httpServer.listen(4000);
Publishing Events from Mutations
const resolvers = {
Mutation: {
createPost: async (_, { input }, { dataSources, currentUser, pubsub }) => {
const post = await dataSources.posts.create({
...input,
authorId: currentUser.id,
});
// Publish to all subscribers of postCreated
await pubsub.publish(EVENTS.POST_CREATED, { postCreated: post });
return { post, errors: [] };
},
addComment: async (_, { input }, { dataSources, currentUser, pubsub }) => {
const comment = await dataSources.comments.create({
...input,
authorId: currentUser.id,
});
// Publish scoped to the specific post
await pubsub.publish(`${EVENTS.COMMENT_ADDED}.${input.postId}`, {
comment,
});
return { comment, errors: [] };
},
},
};
Production PubSub with Redis
The in-memory PubSub only works with a single server instance. For production, use Redis:
import { RedisPubSub } from "graphql-redis-subscriptions";
import Redis from "ioredis";
const options = {
host: process.env.REDIS_HOST,
port: parseInt(process.env.REDIS_PORT ?? "6379"),
retryStrategy: (times: number) => Math.min(times * 50, 2000),
};
const pubsub = new RedisPubSub({
publisher: new Redis(options),
subscriber: new Redis(options),
});
Filtering Subscriptions
Use withFilter to send events only to relevant subscribers. This variant publishes every event on a single global topic and filters per subscriber, in contrast to the per-entity topics used earlier:
import { withFilter } from "graphql-subscriptions";
const resolvers = {
Subscription: {
commentAdded: {
subscribe: withFilter(
() => pubsub.asyncIterableIterator([EVENTS.COMMENT_ADDED]),
(payload, variables, context) => {
// Only send to clients subscribed to this specific post
return payload.comment.postId === variables.postId;
}
),
resolve: (payload) => payload.comment,
},
notificationReceived: {
subscribe: withFilter(
() => pubsub.asyncIterableIterator(["NOTIFICATION"]),
(payload, _, context) => {
// Only send notifications to the intended recipient
return payload.notification.recipientId === context.currentUser.id;
}
),
},
},
};
Client-Side Subscription with Apollo Client
import { ApolloClient, HttpLink, InMemoryCache, split } from "@apollo/client";
import { GraphQLWsLink } from "@apollo/client/link/subscriptions";
import { createClient } from "graphql-ws";
import { getMainDefinition } from "@apollo/client/utilities";
const httpLink = new HttpLink({ url: "http://localhost:4000/graphql" });
const wsLink = new GraphQLWsLink(
createClient({
url: "ws://localhost:4000/graphql",
connectionParams: {
authToken: getAuthToken(),
},
})
);
// Split traffic: subscriptions over WebSocket, everything else over HTTP
const splitLink = split(
({ query }) => {
const definition = getMainDefinition(query);
return definition.kind === "OperationDefinition" && definition.operation === "subscription";
},
wsLink,
httpLink
);
const client = new ApolloClient({
link: splitLink,
cache: new InMemoryCache(),
});
// React component using useSubscription
import { gql, useQuery, useSubscription } from "@apollo/client";
const COMMENT_SUBSCRIPTION = gql`
subscription OnCommentAdded($postId: ID!) {
commentAdded(postId: $postId) {
id
body
author {
id
displayName
}
createdAt
}
}
`;
function PostComments({ postId }: { postId: string }) {
const { data: queryData } = useQuery(GET_POST_COMMENTS, {
variables: { postId },
});
useSubscription(COMMENT_SUBSCRIPTION, {
variables: { postId },
onData: ({ client, data }) => {
// Update the cache with the new comment
client.cache.modify({
id: client.cache.identify({ __typename: "Post", id: postId }),
fields: {
comments(existing = []) {
const newRef = client.cache.writeFragment({
data: data.data.commentAdded,
fragment: COMMENT_FRAGMENT,
});
return [...existing, newRef];
},
},
});
},
});
return <CommentList comments={queryData?.post.comments ?? []} />;
}
Core Philosophy
Subscriptions bridge the gap between the request-response model of queries and mutations and the real-time nature of user-facing applications. They should be used for data that genuinely changes in response to external events — a new comment on a post, a status update in a workflow, a price tick in a trading system. They are not a replacement for polling when the update frequency is low, and they are not a replacement for queries for initial data loading. The best subscription architectures combine an initial query for current state with a subscription for incremental updates.
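One way to implement this query-plus-subscription combination is Apollo's subscribeToMore, where the subscription event feeds a pure updateQuery merge into the query result. A minimal sketch of that merge, with illustrative `Post`/`Comment` shapes (the real types come from your schema) and the subscribeToMore wiring shown as a comment:

```typescript
// Illustrative shapes; real types would come from the schema or codegen.
interface Comment {
  id: string;
  body: string;
}
interface PostQueryResult {
  post: { id: string; comments: Comment[] };
}

// Pure updateQuery merge: append the subscription's comment to the
// query result, deduplicating by id in case an event is redelivered.
function appendComment(prev: PostQueryResult, incoming: Comment): PostQueryResult {
  if (prev.post.comments.some((c) => c.id === incoming.id)) return prev;
  return {
    ...prev,
    post: { ...prev.post, comments: [...prev.post.comments, incoming] },
  };
}

// Wiring sketch, inside a component that ran useQuery(GET_POST_COMMENTS):
//   subscribeToMore({
//     document: COMMENT_SUBSCRIPTION,
//     variables: { postId },
//     updateQuery: (prev, { subscriptionData }) =>
//       subscriptionData.data
//         ? appendComment(prev, subscriptionData.data.commentAdded)
//         : prev,
//   });
```

Keeping the merge pure and deduplicated makes it safe to run on reconnect, when a broker may redeliver recent events.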
The pub/sub layer is the backbone of subscription scalability, and choosing the right one is a production-critical decision. The in-memory PubSub that ships with graphql-subscriptions works for development and single-process deployments, but it cannot deliver events across multiple server instances. Production systems need an external message broker — Redis, Kafka, or a managed pub/sub service — so that events published by any server instance reach subscribers connected to any other instance. This architectural decision should be made at the start of the project, not after users report missed events.
WebSocket connections are long-lived and stateful, which changes the operational model compared to stateless HTTP queries. Each connection consumes server resources (memory, file descriptors, keepalive traffic), and the server must handle reconnection, authentication refresh, and graceful shutdown differently than it does for HTTP. Connection management — limiting concurrent connections per user, authenticating on connect rather than per message, and multiplexing multiple subscriptions over a single connection — is as important as the subscription logic itself.
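The per-user connection limiting described above can be sketched as a small counter keyed by user id, acquired in `onConnect` and released in `onDisconnect`. The limit value and helper names here are illustrative, not part of any library API:

```typescript
// Assumed limit; tune per deployment.
const MAX_CONNECTIONS_PER_USER = 5;
const activeConnections = new Map<string, number>();

// Call from onConnect after authenticating; returning false there
// rejects the connection.
function tryAcquireConnection(userId: string): boolean {
  const current = activeConnections.get(userId) ?? 0;
  if (current >= MAX_CONNECTIONS_PER_USER) return false;
  activeConnections.set(userId, current + 1);
  return true;
}

// Call from onDisconnect so the slot is returned.
function releaseConnection(userId: string): void {
  const current = activeConnections.get(userId) ?? 0;
  if (current <= 1) activeConnections.delete(userId);
  else activeConnections.set(userId, current - 1);
}
```

Note that an in-process Map has the same limitation as in-memory PubSub: with multiple server instances, the counter would need to live in a shared store such as Redis.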
Anti-Patterns
- Using subscriptions for everything — replacing all polling and refetching with subscriptions creates unnecessary WebSocket connections and server-side state for data that changes infrequently. Reserve subscriptions for genuinely real-time data; use polling or cache-and-network for the rest.
- Broadcasting unfiltered events to all subscribers — publishing a global event and letting every client receive it wastes bandwidth and CPU. Use `withFilter` to narrow delivery to the subscribers who actually need each event, based on their subscription arguments and identity.
- Using in-memory PubSub in a multi-instance deployment — events published on one server instance never reach subscribers connected to other instances. This manifests as intermittently missing updates that are nearly impossible to reproduce in development.
- Authenticating per message instead of per connection — verifying tokens on every subscription event adds latency and load. Authenticate once in the `onConnect` handler and reject unauthenticated connections before any subscriptions begin.
- Publishing entire entity payloads — subscription events that include every field of the mutated entity create large WebSocket frames and tight coupling between the publisher and subscriber schemas. Publish minimal event payloads (ID and change type) and let clients refetch details if needed.
Best Practices
- Use `graphql-ws`, not `subscriptions-transport-ws` — the latter is unmaintained. The `graphql-ws` library implements the newer, more robust protocol.
- Authenticate on connection, not per-message — validate tokens in `onConnect` and reject unauthorized connections early.
- Use Redis PubSub in production — in-memory PubSub fails with multiple server instances. Redis, Kafka, or similar message brokers are required for horizontal scaling.
- Scope subscriptions narrowly — `commentAdded(postId: ID!)` is far more efficient than a global `commentAdded` that clients filter client-side.
- Combine query + subscription — fetch initial data with `useQuery`, then subscribe for updates. Do not rely on subscriptions alone for initial state.
- Handle reconnection — configure the `graphql-ws` client with retry logic and re-authentication on reconnect.
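The reconnection practice above maps directly onto graphql-ws client options: passing a function as connectionParams makes it re-evaluate on every connection attempt, and retryAttempts/retryWait control the retry schedule. A configuration sketch, where getAuthToken is an assumed helper (as in the client setup earlier):

```typescript
import { createClient } from "graphql-ws";

const client = createClient({
  url: "ws://localhost:4000/graphql",
  // Re-evaluated on each (re)connect, so an expired token is refreshed
  // before the new connection authenticates.
  connectionParams: async () => ({ authToken: await getAuthToken() }),
  retryAttempts: 10,
  // Exponential backoff capped at 30 seconds between attempts.
  retryWait: (retries) =>
    new Promise<void>((resolve) =>
      setTimeout(resolve, Math.min(1000 * 2 ** retries, 30_000))
    ),
});
```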
Common Pitfalls
- Using in-memory PubSub in production — events published on one server instance are not received by subscribers connected to another instance.
- Not cleaning up subscriptions — React's `useSubscription` handles cleanup automatically, but manual subscriptions must be unsubscribed to prevent memory leaks.
- Publishing too much data — subscription payloads should be minimal. Clients can use the subscription event as a signal to refetch detailed data if needed.
- Missing the `resolve` function — the payload shape from `pubsub.publish` must match the subscription field name, or you need a custom `resolve` function to extract the correct value.
- WebSocket connection exhaustion — each browser tab opens a new WebSocket. Consider multiplexing subscriptions over a single connection and implementing connection limits.
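The `resolve` pitfall comes from how the default subscription resolver reads the published payload: it looks up the field name as a property on the payload object. A simplified sketch of that lookup rule (this is an illustration of the graphql-js default behavior, not its actual implementation):

```typescript
// Simplified stand-in for the default resolver's payload lookup.
function defaultSubscriptionResolve(
  payload: Record<string, unknown>,
  fieldName: string
): unknown {
  return payload[fieldName];
}

// pubsub.publish(EVENTS.POST_CREATED, { postCreated: post }) matches the
// field name, so the default resolver finds the value:
const matching = defaultSubscriptionResolve({ postCreated: { id: "1" } }, "postCreated");

// pubsub.publish(`${EVENTS.COMMENT_ADDED}.${postId}`, { comment }) does NOT
// match the field name commentAdded, so without a custom
// resolve: (payload) => payload.comment, the resolved value is undefined
// (surfacing to the client as null or a non-nullability error):
const missing = defaultSubscriptionResolve({ comment: { id: "2" } }, "commentAdded");
```

This is why the commentAdded resolver earlier pairs a scoped topic with `resolve: (payload) => payload.comment`.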
Related Skills
- Apollo Client — Apollo Client with React for querying, mutating, and caching GraphQL data
- Apollo Server — Apollo Server setup, configuration, plugins, and production deployment patterns
- Authentication — Authentication and authorization patterns for securing GraphQL APIs
- Code Generation — Type-safe GraphQL development with graphql-codegen for TypeScript
- Pagination — Cursor-based pagination following the Relay connection specification
- Resolvers — Resolver patterns, data loading strategies, and the N+1 problem in GraphQL