# Structured Logging Patterns

Structured logging patterns for TypeScript — correlation IDs, request context, log levels, error serialization, sensitive data redaction, and observability best practices.
Structured logging treats log entries as **data**, not text. Every log line is a machine-parseable object (typically JSON) with consistent fields, enabling reliable search, aggregation, and alerting across distributed systems. The shift from `console.log("User 42 placed order 789")` to `{ level: "info", userId: 42, orderId: 789, msg: "order placed" }` is the single highest-leverage improvement a team can make for production observability.
## Key Points
- **Logs are events, not strings** — every entry is a structured record with typed fields.
- **Context propagates automatically** — correlation IDs, tenant IDs, and user IDs flow through the call stack without manual threading.
- **Static messages, dynamic fields** — the message is a category label; variable data lives in named fields so aggregators can group and filter.
- **Defense in depth for sensitive data** — redaction is systematic and declarative, never ad-hoc.
- **Logs, traces, and metrics converge** — structured logs with correlation IDs bridge the gap to distributed tracing and metric dashboards.
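The "static messages, dynamic fields" tenet is easiest to see in miniature. Below is a sketch with a hypothetical in-memory `sink` standing in for a real transport; the point is that every "order placed" event shares one constant message, so an aggregator can group and count them:

```typescript
// Hypothetical in-memory sink standing in for a real log transport.
type LogRecord = { level: string; msg: string; [key: string]: unknown };
const sink: LogRecord[] = [];

function info(msg: string, fields: Record<string, unknown> = {}): void {
  // The message stays a constant label; variable data goes into named fields.
  sink.push({ level: "info", msg, ...fields });
}

// Both calls emit the same static message, differing only in fields.
info("order placed", { userId: 42, orderId: 789 });
info("order placed", { userId: 7, orderId: 790 });

// Grouping and filtering become trivial queries over structured fields.
const placed = sink.filter((r) => r.msg === "order placed");
```

Contrast this with `info(`order ${id} placed`)`, where every event produces a unique string that no aggregator can group.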
## Setup

### Logger Abstraction Layer
```typescript
// lib/logger.ts — thin abstraction over any structured logger
import pino from "pino";

export interface LogContext {
  requestId?: string;
  userId?: string;
  tenantId?: string;
  traceId?: string;
  spanId?: string;
  [key: string]: unknown;
}

export interface AppLogger {
  debug(msg: string, ctx?: Record<string, unknown>): void;
  info(msg: string, ctx?: Record<string, unknown>): void;
  warn(msg: string, ctx?: Record<string, unknown>): void;
  error(msg: string, ctx?: Record<string, unknown>): void;
  child(bindings: LogContext): AppLogger;
}

// Exported so operational tooling (e.g. a runtime log-level endpoint) can adjust it
export const baseLogger = pino({
  level: process.env.LOG_LEVEL ?? "info",
  timestamp: pino.stdTimeFunctions.isoTime,
  formatters: {
    level(label) {
      return { level: label };
    },
  },
  redact: {
    paths: [
      "password",
      "*.password",
      "token",
      "*.token",
      "authorization",
      "req.headers.authorization",
      "req.headers.cookie",
      "*.ssn",
      "*.creditCard",
    ],
    censor: "[REDACTED]",
  },
});

// Wrap pino to match the AppLogger interface
function wrapLogger(pinoInst: pino.Logger): AppLogger {
  return {
    debug: (msg, ctx) => pinoInst.debug(ctx ?? {}, msg),
    info: (msg, ctx) => pinoInst.info(ctx ?? {}, msg),
    warn: (msg, ctx) => pinoInst.warn(ctx ?? {}, msg),
    error: (msg, ctx) => pinoInst.error(ctx ?? {}, msg),
    child: (bindings) => wrapLogger(pinoInst.child(bindings)),
  };
}

export const logger: AppLogger = wrapLogger(baseLogger);
```
### AsyncLocalStorage for Request Context
```typescript
// lib/context.ts
import { AsyncLocalStorage } from "node:async_hooks";
import { randomUUID } from "node:crypto";
import { logger, type AppLogger, type LogContext } from "./logger";

export interface RequestContext {
  logger: AppLogger;
  requestId: string;
  traceId?: string;
  userId?: string;
  tenantId?: string;
  startTime: number;
}

export const requestStorage = new AsyncLocalStorage<RequestContext>();

export function getRequestLogger(): AppLogger {
  const ctx = requestStorage.getStore();
  return ctx?.logger ?? logger;
}

export function getRequestContext(): RequestContext | undefined {
  return requestStorage.getStore();
}

export function runWithContext(
  context: LogContext,
  fn: () => Promise<void> | void,
): Promise<void> | void {
  const requestId = (context.requestId as string) ?? randomUUID();
  const childLogger = logger.child({ requestId, ...context });
  return requestStorage.run(
    {
      logger: childLogger,
      requestId,
      traceId: context.traceId as string | undefined,
      userId: context.userId as string | undefined,
      tenantId: context.tenantId as string | undefined,
      startTime: Date.now(),
    },
    fn,
  );
}
```
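A dependency-free sketch of the same mechanism (no pino, just a `requestId` in the store) shows why no parameter threading is needed; `innerWork` and `handleRequest` are hypothetical names for illustration:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

const als = new AsyncLocalStorage<{ requestId: string }>();

// Deep in the call stack: no logger or ID is passed in, yet the context is visible.
async function innerWork(): Promise<string> {
  return als.getStore()?.requestId ?? "no-context";
}

async function handleRequest(requestId: string): Promise<string> {
  // Everything awaited inside the callback observes the same store,
  // even across async boundaries.
  return als.run({ requestId }, async () => {
    await Promise.resolve(); // cross an async boundary
    return innerWork();
  });
}
```

Calling `handleRequest("req-123")` resolves to `"req-123"`, while calling `innerWork()` outside any `run` yields the fallback.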
## Key Techniques

### Correlation IDs Across Services
```typescript
import express, { type Request, type Response, type NextFunction } from "express";
import { randomUUID } from "node:crypto";
import { runWithContext, getRequestLogger, getRequestContext } from "./lib/context";

const app = express();

// Middleware: extract or generate correlation IDs, bind them to the logger
app.use((req: Request, res: Response, next: NextFunction) => {
  const requestId = (req.headers["x-request-id"] as string) ?? randomUUID();
  const traceId = (req.headers["x-trace-id"] as string) ?? randomUUID();
  // Echo back so clients and downstream hops can correlate
  res.setHeader("x-request-id", requestId);
  res.setHeader("x-trace-id", traceId);
  runWithContext(
    {
      requestId,
      traceId,
      method: req.method,
      path: req.path,
      userAgent: req.headers["user-agent"],
    },
    () => next(),
  );
});

// When calling another service, forward the correlation headers.
// (globalThis.Response is the fetch Response, distinct from Express's Response.)
async function callDownstreamService(url: string, body: unknown): Promise<globalThis.Response> {
  const log = getRequestLogger();
  const ctx = getRequestContext();
  log.info("calling downstream service", { url });
  return fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-request-id": ctx?.requestId ?? "",
      "x-trace-id": ctx?.traceId ?? "",
    },
    body: JSON.stringify(body),
  });
}
```
### Error Serialization
```typescript
// lib/errors.ts
export interface SerializedError {
  message: string;
  name: string;
  stack?: string;
  code?: string;
  cause?: SerializedError;
  [key: string]: unknown;
}

export function serializeError(error: unknown): SerializedError {
  if (!(error instanceof Error)) {
    return { message: String(error), name: "UnknownError" };
  }
  const serialized: SerializedError = {
    message: error.message,
    name: error.name,
    stack: error.stack,
  };
  // Preserve custom properties (e.g., HTTP status codes)
  for (const key of Object.getOwnPropertyNames(error)) {
    if (!(key in serialized)) {
      serialized[key] = (error as unknown as Record<string, unknown>)[key];
    }
  }
  // Recursively serialize the cause chain
  if (error.cause) {
    serialized.cause = serializeError(error.cause);
  }
  return serialized;
}
```

```typescript
// Usage
import { getRequestLogger } from "./lib/context";
import { serializeError } from "./lib/errors";

declare function chargeCard(orderId: string): Promise<void>; // implemented elsewhere

async function processPayment(orderId: string): Promise<void> {
  const log = getRequestLogger();
  try {
    await chargeCard(orderId);
    log.info("payment processed", { orderId });
  } catch (err) {
    log.error("payment failed", {
      orderId,
      error: serializeError(err),
    });
    throw err;
  }
}
```
### Log Level Strategy
Consistent log-level semantics across the team:

- **FATAL** — Process cannot continue; triggers an immediate page. Example: database pool exhausted, out of memory.
- **ERROR** — An operation failed and needs investigation. Example: payment charge declined, external API 500.
- **WARN** — Something unexpected that the system handled gracefully. Example: retry succeeded on second attempt, cache-miss fallback.
- **INFO** — Business-significant events for operational awareness. Example: order placed, user signed up, deployment started.
- **DEBUG** — Developer-relevant detail for troubleshooting. Example: SQL query text, cache hit/miss, parsed config values.
- **TRACE** — Very fine-grained, typically function entry/exit. Example: entering `parseToken()`, loop iteration count.
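Under the hood, level filtering is just a numeric threshold comparison. A minimal sketch, using the conventional numeric weights that pino assigns to each level by default:

```typescript
// Conventional numeric weights; these match pino's default level values.
const LEVELS = { trace: 10, debug: 20, info: 30, warn: 40, error: 50, fatal: 60 } as const;
type Level = keyof typeof LEVELS;

function shouldLog(configured: Level, candidate: Level): boolean {
  // Emit only when the record's level is at or above the configured threshold
  return LEVELS[candidate] >= LEVELS[configured];
}
```

With the threshold at `"info"`, `shouldLog("info", "warn")` is `true` while `shouldLog("info", "debug")` is `false`, which is exactly why lowering the configured level at runtime surfaces more detail.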
Runtime level adjustment without a restart:

```typescript
import { baseLogger } from "./lib/logger";
import { getRequestLogger } from "./lib/context";

function setLogLevel(level: string): void {
  // Validate
  const validLevels = ["fatal", "error", "warn", "info", "debug", "trace"];
  if (!validLevels.includes(level)) {
    throw new Error(`Invalid log level: ${level}`);
  }
  // In pino, set the level on the base instance
  baseLogger.level = level;
  getRequestLogger().info("log level changed", { newLevel: level });
}

// Expose via an admin endpoint (the Express `app` from the middleware above)
app.post("/admin/log-level", (req, res) => {
  setLogLevel(req.body.level);
  res.json({ level: req.body.level });
});
```
### Sensitive Data Redaction
```typescript
// lib/redaction.ts
const SENSITIVE_PATTERNS: Array<{ pattern: RegExp; replacement: string }> = [
  { pattern: /\b\d{3}-\d{2}-\d{4}\b/g, replacement: "***-**-****" }, // SSN
  { pattern: /\b\d{4}[\s-]?\d{4}[\s-]?\d{4}[\s-]?\d{4}\b/g, replacement: "****-****-****-****" }, // Credit card
  { pattern: /\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b/g, replacement: "[EMAIL]" },
  { pattern: /Bearer\s+[A-Za-z0-9\-._~+/]+=*/g, replacement: "Bearer [REDACTED]" },
];

// Stored lowercase so that the case-insensitive lookup below actually matches
const SENSITIVE_KEYS = new Set([
  "password", "secret", "token", "authorization",
  "apikey", "api_key", "accesstoken", "access_token",
  "refreshtoken", "refresh_token", "ssn", "creditcard",
]);

export function redactObject<T extends Record<string, unknown>>(obj: T): T {
  const result = { ...obj };
  for (const [key, value] of Object.entries(result)) {
    if (SENSITIVE_KEYS.has(key.toLowerCase())) {
      (result as Record<string, unknown>)[key] = "[REDACTED]";
    } else if (typeof value === "string") {
      let redacted = value;
      for (const { pattern, replacement } of SENSITIVE_PATTERNS) {
        redacted = redacted.replace(pattern, replacement);
      }
      (result as Record<string, unknown>)[key] = redacted;
    } else if (Array.isArray(value)) {
      // Recurse into array elements without turning the array into an object
      (result as Record<string, unknown>)[key] = value.map((item) =>
        typeof item === "object" && item !== null
          ? redactObject(item as Record<string, unknown>)
          : item,
      );
    } else if (typeof value === "object" && value !== null) {
      (result as Record<string, unknown>)[key] = redactObject(value as Record<string, unknown>);
    }
  }
  return result;
}
```
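As a quick check of the pattern-based pass, here is a condensed standalone version of the string redaction step (only two of the patterns, applied to a single string):

```typescript
const PATTERNS: Array<{ pattern: RegExp; replacement: string }> = [
  { pattern: /\b\d{3}-\d{2}-\d{4}\b/g, replacement: "***-**-****" }, // SSN
  { pattern: /Bearer\s+[A-Za-z0-9\-._~+/]+=*/g, replacement: "Bearer [REDACTED]" },
];

function redactString(value: string): string {
  // Apply every pattern in sequence; each is global, so all matches are replaced.
  let out = value;
  for (const { pattern, replacement } of PATTERNS) {
    out = out.replace(pattern, replacement);
  }
  return out;
}

const cleaned = redactString("ssn=123-45-6789 auth=Bearer abc.123");
// cleaned === "ssn=***-**-**** auth=Bearer [REDACTED]"
```

Key-based redaction (the `SENSITIVE_KEYS` check) and pattern-based redaction are complementary: the first catches well-named fields, the second catches sensitive values that leak into free-text fields.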
### Observability Integration
```typescript
// Bridging logs, traces, and metrics
import { trace, context, SpanStatusCode } from "@opentelemetry/api";
import { getRequestLogger } from "./lib/context";
import { serializeError } from "./lib/errors";
import type { AppLogger } from "./lib/logger";

// Order-processing steps, implemented elsewhere
declare function validateInventory(orderId: string, log: AppLogger): Promise<void>;
declare function processPayment(orderId: string, log: AppLogger): Promise<void>;
declare function sendConfirmation(orderId: string, log: AppLogger): Promise<void>;

const tracer = trace.getTracer("app");

async function handleOrder(orderId: string): Promise<void> {
  const span = tracer.startSpan("handleOrder");
  const log = getRequestLogger();
  // Inject trace context into log fields so logs and traces are linkable
  const spanContext = span.spanContext();
  const orderLog = log.child({
    traceId: spanContext.traceId,
    spanId: spanContext.spanId,
    orderId,
  });
  try {
    orderLog.info("processing order");
    await context.with(trace.setSpan(context.active(), span), async () => {
      await validateInventory(orderId, orderLog);
      await processPayment(orderId, orderLog);
      await sendConfirmation(orderId, orderLog);
    });
    span.setStatus({ code: SpanStatusCode.OK });
    orderLog.info("order completed");
  } catch (err) {
    span.setStatus({ code: SpanStatusCode.ERROR, message: (err as Error).message });
    span.recordException(err as Error);
    orderLog.error("order failed", { error: serializeError(err) });
    throw err;
  } finally {
    span.end();
  }
}
```
### Request/Response Logging
```typescript
// Structured HTTP logging middleware
import { type Request, type Response, type NextFunction } from "express";
import { getRequestLogger, getRequestContext } from "./lib/context";

export function requestResponseLogger(req: Request, res: Response, next: NextFunction): void {
  const log = getRequestLogger();
  const ctx = getRequestContext();
  // Log the request (omit body for GET, redact for others)
  log.info("request received", {
    method: req.method,
    url: req.originalUrl,
    contentLength: req.headers["content-length"],
    ip: req.ip,
  });
  // Capture the response
  const originalJson = res.json.bind(res);
  res.json = function (body: unknown) {
    const duration = ctx ? Date.now() - ctx.startTime : undefined;
    const level = res.statusCode >= 500 ? "error" : res.statusCode >= 400 ? "warn" : "info";
    log[level]("response sent", {
      statusCode: res.statusCode,
      durationMs: duration,
      contentLength: res.getHeader("content-length"),
    });
    return originalJson(body);
  };
  next();
}
```
## Best Practices

- **Use AsyncLocalStorage for context propagation** — avoids passing a logger through every function signature. The request context (correlation ID, user, tenant) is always available.
- **Keep log messages as static string literals** — `"order placed"`, not `` `order ${id} placed` ``. This enables log grouping and pattern detection in aggregators.
- **Establish team-wide log level semantics** — write them down and enforce via code review. The difference between WARN and ERROR must be unambiguous.
- **Propagate correlation IDs across service boundaries** — forward `x-request-id` and `x-trace-id` headers in every outbound HTTP call.
- **Serialize errors consistently** — use a shared `serializeError` function that captures stack, cause chain, and custom properties.
- **Redact declaratively** — configure redaction paths at the logger level rather than relying on developers to remember per call site.
- **Include timing data** — always log `durationMs` for operations. This turns logs into a lightweight performance monitoring tool.
- **Log at service boundaries** — request received, response sent, outbound call made, outbound call returned. Internal function calls rarely need logging.
- **Add trace IDs when using OpenTelemetry** — injecting `traceId` and `spanId` into log fields links logs to traces in Grafana, Datadog, or Jaeger.
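The timing advice above can be packaged once rather than hand-written at every call site. A sketch, where `withTiming` is a hypothetical helper and `sink` stands in for a real logger:

```typescript
type Fields = Record<string, unknown>;
const sink: Array<{ msg: string } & Fields> = [];
const log = {
  info: (msg: string, fields: Fields = {}): void => {
    sink.push({ msg, ...fields });
  },
};

// Wrap any async operation so durationMs is always attached to its log line.
async function withTiming<T>(
  msg: string,
  fn: () => Promise<T>,
  fields: Fields = {},
): Promise<T> {
  const start = Date.now();
  try {
    return await fn();
  } finally {
    // Logged on success and failure alike
    log.info(msg, { ...fields, durationMs: Date.now() - start });
  }
}
```

A call such as `withTiming("order placed", () => placeOrder(id), { orderId: id })` (with `placeOrder` being whatever operation you wrap) emits one record carrying the static message, the caller's fields, and a numeric `durationMs`.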
## Anti-Patterns

- **Unstructured string logging** — `console.log("Error: " + err.message)` loses context, breaks search, and cannot be filtered. Always emit JSON with named fields.
- **Inconsistent field names** — using `userId` in one service and `user_id` in another prevents cross-service queries. Standardize a field naming convention (camelCase or snake_case) and enforce it.
- **Logging inside tight loops** — generating a log line per array item in a 10,000-element batch saturates transports. Log a summary: `{ msg: "batch processed", count: 10000, failedCount: 3 }`.
- **Missing correlation IDs** — without a request ID threading through every log line, correlating events across services during an incident is nearly impossible.
- **Logging sensitive data** — PII, tokens, and credentials appearing in logs create compliance violations. Redaction must be systematic, not opt-in.
- **Using log levels inconsistently** — if ERROR sometimes means "transient network blip" and sometimes means "data corruption," alert fatigue is inevitable. Reserve ERROR for issues requiring human action.
- **No log rotation or retention policy** — unbounded log files fill disks; unbounded retention in a log platform inflates costs. Define retention tiers: 7 days for DEBUG, 30 days for INFO, 90 days for ERROR.
- **Treating logs as the only observability signal** — logs are one pillar. Pair them with metrics (counters, histograms) and traces for a complete picture. Structured logs with trace IDs bridge all three.
## Related Skills

- **Better Stack / Logtail** — structured log ingestion, live tail, SQL-based querying, alerting, and uptime monitoring
- **Datadog Logging** — agent setup, library integration, log pipelines, facets, monitors, and APM correlation
- **Fluentd** — unified logging: input/output plugins, routing with tags, buffering, Kubernetes DaemonSet, and Fluent Bit
- **Logstash / ELK Stack** — Logstash pipelines, Elasticsearch indexing, Kibana dashboards, and Filebeat shippers
- **Papertrail** — cloud logging: syslog forwarding, live tail, search, alerts, and integration with app frameworks
- **Pino Logger** — fast JSON logger for Node.js: child loggers, serializers, transports (pino-pretty, pino-http), redaction, Next.js integration, and log levels