Statsig
"Statsig: feature gates, dynamic config, experiments/A/B tests, metrics, layers, Next.js SDK, server/client evaluation"
Full skill (371 lines): `skilldb get feature-flags-services-skills/Statsig`
Core Philosophy
Statsig treats every feature as a measurable experiment. Rather than just toggling features on and off, Statsig automatically computes the statistical impact of every gate and experiment on your core metrics. The platform's "pulse" system continuously monitors how changes affect key business metrics — no manual analysis needed. Feature gates, dynamic configs, experiments, and layers form a hierarchy of control: gates for boolean decisions, configs for parameterized settings, experiments for rigorous A/B tests, and layers for mutually exclusive experiment allocation.
Setup
Server SDK Initialization
import Statsig, { StatsigUser } from "statsig-node";
await Statsig.initialize(process.env.STATSIG_SERVER_KEY!, {
environment: { tier: "production" },
rulesetsSyncIntervalMs: 10_000,
loggingIntervalMs: 60_000,
});
// Create a user object for evaluation
const user: StatsigUser = {
userID: "user-123",
email: "alice@example.com",
custom: {
plan: "pro",
signupDate: "2024-06-15",
company: "acme-corp",
},
customIDs: {
companyID: "acme-corp",
teamID: "team-eng",
},
};
// Check a feature gate
const showNewEditor = Statsig.checkGateSync(user, "new_editor_enabled");
if (showNewEditor) {
renderNewEditor();
} else {
renderLegacyEditor();
}
Next.js Integration
// lib/statsig-server.ts
import Statsig, { StatsigUser } from "statsig-node";
let initialized = false;
export async function getStatsigServer() {
if (!initialized) {
await Statsig.initialize(process.env.STATSIG_SERVER_KEY!);
initialized = true;
}
return Statsig;
}
export function buildStatsigUser(
userId: string,
attributes?: Record<string, unknown>
): StatsigUser {
return {
userID: userId,
custom: attributes ?? {},
};
}
// app/layout.tsx — Server Component bootstrap
import { getStatsigServer, buildStatsigUser } from "@/lib/statsig-server";
import { StatsigProvider } from "@/components/statsig-provider";
export default async function RootLayout({
children,
}: {
children: React.ReactNode;
}) {
const statsig = await getStatsigServer();
const user = buildStatsigUser("anonymous");
// Generate client-side initialization values on the server
const initValues = statsig.getClientInitializeResponse(user);
return (
<html>
<body>
<StatsigProvider initValues={initValues} user={user}>
{children}
</StatsigProvider>
</body>
</html>
);
}
// components/statsig-provider.tsx
"use client";
import { StatsigProvider as Provider, StatsigUser } from "statsig-react";
interface Props {
children: React.ReactNode;
initValues: Record<string, unknown>;
user: StatsigUser;
}
export function StatsigProvider({ children, initValues, user }: Props) {
return (
<Provider
sdkKey={process.env.NEXT_PUBLIC_STATSIG_CLIENT_KEY!}
user={user}
initializeValues={initValues}
options={{
environment: { tier: "production" },
}}
>
{children}
</Provider>
);
}
Key Techniques
Feature Gates with Evaluation Reasons
// Server-side gate check with detailed logging
import Statsig from "statsig-node";
import { buildStatsigUser } from "@/lib/statsig-server";
async function handleRequest(userId: string) {
const user = buildStatsigUser(userId);
const gateResult = Statsig.getFeatureGateSync(user, "new_search_algorithm");
console.log({
gate: "new_search_algorithm",
value: gateResult.value,
ruleID: gateResult.ruleID,
evaluationDetails: gateResult.evaluationDetails,
});
if (gateResult.value) {
return performSemanticSearch(userId);
}
return performKeywordSearch(userId);
}
Dynamic Config for Parameterized Features
// Dynamic config returns typed objects — not just booleans
interface PricingConfig {
trialDays: number;
monthlyPrice: number;
annualDiscount: number;
showBanner: boolean;
bannerText: string;
}
function getPricingConfig(user: StatsigUser): PricingConfig {
const config = Statsig.getConfigSync(user, "pricing_page_config");
return {
trialDays: config.get<number>("trialDays", 14),
monthlyPrice: config.get<number>("monthlyPrice", 29),
annualDiscount: config.get<number>("annualDiscount", 20),
showBanner: config.get<boolean>("showBanner", false),
bannerText: config.get<string>("bannerText", ""),
};
}
// Use in a route handler
export async function GET(req: Request) {
const userId = extractUserId(req);
const user = buildStatsigUser(userId);
const pricing = getPricingConfig(user);
return Response.json({ pricing });
}
Experiments and A/B Tests
// Experiments return parameter values per variant
interface OnboardingExperiment {
steps: number;
showVideo: boolean;
ctaText: string;
layout: "single-page" | "wizard" | "progressive";
}
function getOnboardingVariant(user: StatsigUser): OnboardingExperiment {
const experiment = Statsig.getExperimentSync(user, "onboarding_flow_v2");
return {
steps: experiment.get<number>("steps", 3),
showVideo: experiment.get<boolean>("showVideo", false),
ctaText: experiment.get<string>("ctaText", "Get Started"),
layout: experiment.get<string>("layout", "wizard") as OnboardingExperiment["layout"],
};
}
// React component consuming experiment
"use client";
import { useExperiment } from "statsig-react";
function OnboardingFlow() {
const { config: experiment } = useExperiment("onboarding_flow_v2");
const layout = experiment.get<string>("layout", "wizard");
const showVideo = experiment.get<boolean>("showVideo", false);
return (
<div>
{showVideo && <IntroVideo />}
{layout === "wizard" && <WizardOnboarding />}
{layout === "single-page" && <SinglePageOnboarding />}
{layout === "progressive" && <ProgressiveOnboarding />}
</div>
);
}
Layers for Mutually Exclusive Experiments
// Layers ensure users are in only ONE experiment within a layer
function getSearchExperimentParams(user: StatsigUser) {
const layer = Statsig.getLayerSync(user, "search_experiments");
// User might be in "ranking_v2" or "filter_ux" experiment
// but never both — the layer handles allocation
return {
algorithm: layer.get<string>("algorithm", "bm25"),
boostRecent: layer.get<boolean>("boostRecent", false),
filterPosition: layer.get<string>("filterPosition", "sidebar"),
maxResults: layer.get<number>("maxResults", 20),
};
}
Custom Metrics and Event Logging
// Log events that Statsig automatically correlates with experiments
import Statsig from "statsig-node";
import { buildStatsigUser } from "@/lib/statsig-server";
async function trackPurchase(
userId: string,
orderId: string,
revenue: number,
items: string[]
) {
const user = buildStatsigUser(userId);
// Value event for revenue metrics
Statsig.logEvent(user, "purchase_completed", revenue.toString(), {
orderId,
itemCount: items.length.toString(),
currency: "USD",
});
}
async function trackSearchUsage(userId: string, query: string, resultCount: number) {
const user = buildStatsigUser(userId);
Statsig.logEvent(user, "search_performed", query, {
resultCount: resultCount.toString(),
hasResults: (resultCount > 0).toString(),
});
}
// Client-side event logging in React
"use client";
import { Statsig } from "statsig-react";
function ProductCard({ product }: { product: Product }) {
function handleAddToCart() {
addToCart(product.id);
// Log interaction events through the Statsig singleton; useStatsigLogEffect
// only fires once on mount, so it is not suited to click handlers.
Statsig.logEvent("add_to_cart", product.price.toString(), {
productId: product.id,
category: product.category,
});
}
return (
<div>
<h3>{product.name}</h3>
<button onClick={handleAddToCart}>Add to Cart</button>
</div>
);
}
Middleware-Based Gate Evaluation
// middleware.ts — evaluate gates at the edge
import { NextRequest, NextResponse } from "next/server";
import { getStatsigServer, buildStatsigUser } from "@/lib/statsig-server";
export async function middleware(req: NextRequest) {
const statsig = await getStatsigServer();
const userId = req.cookies.get("uid")?.value ?? "anonymous";
const user = buildStatsigUser(userId, {
country: req.geo?.country,
pathname: req.nextUrl.pathname,
});
// Route-level feature gate
if (req.nextUrl.pathname.startsWith("/dashboard")) {
const useNewDashboard = statsig.checkGateSync(user, "new_dashboard_v2");
if (useNewDashboard) {
const url = req.nextUrl.clone();
url.pathname = req.nextUrl.pathname.replace("/dashboard", "/dashboard-v2");
return NextResponse.rewrite(url);
}
}
return NextResponse.next();
}
Best Practices
- **Use `getClientInitializeResponse` for SSR.** This serializes all evaluated gates and configs on the server, eliminating client-side loading flicker and network round-trips at startup.
- **Separate server and client SDK keys.** Server keys evaluate all rules locally and have full access. Client keys are safe for browsers and only receive pre-evaluated results.
- **Define custom IDs for non-user entities.** Use `customIDs` (companyID, teamID, sessionID) to run experiments at the organization or session level rather than per-user.
- **Log events with both string values and metadata.** The `value` parameter feeds into Statsig's metric aggregation; the `metadata` object provides breakdowns for deeper analysis.
- **Use layers when running multiple experiments in the same surface area.** Layers prevent interaction effects that invalidate statistical results.
- **Set up metric guardrails in the Statsig console.** Define metrics that must not regress (latency, error rate) so experiments auto-halt if they cause harm.
- **Flush events on server shutdown.** Call `Statsig.shutdown()` in your process exit handler to ensure all logged events reach Statsig before the process terminates.
- **Use targeting rules in the console, not in code.** Do not wrap gate checks with additional `if` conditions in your application — push that logic into Statsig's targeting rules for centralized control.
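The flush-on-shutdown practice above can be sketched as a small handler. `StatsigLike` and `ProcessLike` are hypothetical minimal interfaces standing in for the statsig-node singleton and Node's `process` object, so the wiring is visible without the SDK installed:

```typescript
// Minimal stand-ins (assumptions) for the statsig-node singleton and the
// parts of Node's process object this sketch touches.
interface StatsigLike {
  shutdown(): Promise<void>;
}
interface ProcessLike {
  once(event: string, listener: () => void): void;
  exit(code: number): void;
}

// Drain buffered events, then exit cleanly.
async function flushAndExit(statsig: StatsigLike, proc: ProcessLike): Promise<void> {
  await statsig.shutdown(); // resolves once queued events are sent
  proc.exit(0);
}

// Register the handler for common termination signals.
function registerShutdownFlush(statsig: StatsigLike, proc: ProcessLike): void {
  for (const signal of ["SIGTERM", "SIGINT"]) {
    proc.once(signal, () => {
      void flushAndExit(statsig, proc);
    });
  }
}
```

In a real server you would pass the imported `Statsig` singleton and `process` directly, e.g. `registerShutdownFlush(Statsig, process)` after `Statsig.initialize(...)`.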
Anti-Patterns
- **Checking gates inside render loops.** Gate evaluation is synchronous after initialization but still involves object lookups. Evaluate once and store the result.
- **Using experiments without defining a primary metric.** Statsig calculates statistical significance automatically, but only if you have events flowing that map to your experiment's success criteria.
- **Passing different user objects for the same user.** If you build the StatsigUser with different attributes on different requests, the user may land in different experiment groups, corrupting results.
- **Ignoring evaluation reasons during debugging.** When a gate does not behave as expected, check `evaluationDetails` to determine whether the issue is stale rules, an unrecognized user, or a targeting mismatch.
- **Running experiments without sufficient sample size.** Statsig will show results as "not enough data" if traffic is too low. Plan experiment duration based on your traffic volume before starting.
- **Hardcoding default values inconsistently.** If one call site uses `get("maxResults", 20)` and another uses `get("maxResults", 50)`, behavior diverges when the config is unavailable. Centralize defaults in a typed function.
- **Forgetting to call `shutdown()` in serverless functions.** In Lambda or edge functions, the process may freeze before events flush. Always call `Statsig.flush()` at the end of request handling.
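The "centralize defaults" advice can be sketched with a single typed accessor. `ConfigLike` is a hypothetical minimal interface mirroring the `config.get(name, fallback)` shape used in the snippets above; the point is that every call site shares one set of fallbacks:

```typescript
// Hypothetical minimal interface matching the get(name, fallback) shape
// that Statsig configs expose in the snippets above.
interface ConfigLike {
  get<T>(name: string, fallback: T): T;
}

interface SearchSettings {
  maxResults: number;
  boostRecent: boolean;
}

// One source of truth for fallbacks: every caller goes through this
// function, so behavior cannot diverge when the config is unavailable.
const SEARCH_DEFAULTS: SearchSettings = { maxResults: 20, boostRecent: false };

function readSearchSettings(config: ConfigLike): SearchSettings {
  return {
    maxResults: config.get("maxResults", SEARCH_DEFAULTS.maxResults),
    boostRecent: config.get("boostRecent", SEARCH_DEFAULTS.boostRecent),
  };
}
```

Call sites would then do something like `readSearchSettings(Statsig.getConfigSync(user, "search_config"))` instead of scattering literal fallbacks.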
Install this skill directly: skilldb add feature-flags-services-skills
Related Skills
ConfigCat
"ConfigCat: feature flags and remote config with percentage rollouts, targeting rules, config-as-code, and cross-platform SDKs"
Flagsmith
"Flagsmith: open-source feature flags, remote config, segments, environments, audit logs, self-hosted, REST API"
Flipt
"Flipt: open-source, self-hosted feature flag platform with GitOps support, boolean and multivariate flags, and GRPC/REST APIs"
GrowthBook
"GrowthBook: open-source feature flags, A/B testing, Bayesian statistics, SDK, targeting, webhooks, self-hosted"
LaunchDarkly
"LaunchDarkly: feature flags, targeting rules, segments, experiments, metrics, Node/React SDK, bootstrap, streaming"
Split.io
"Split.io: feature delivery platform with feature flags, targeting, experimentation, traffic allocation, and metrics integration"