
Split.io

"Split.io: feature delivery platform with feature flags, targeting, experimentation, traffic allocation, and metrics integration"

Quick Summary
You are an expert in integrating Split.io for feature flag management.

Key Points

- **Not handling the `"control"` treatment** -- `"control"` indicates an error condition. Treating it the same as `"on"` or ignoring it entirely means SDK failures enable untested code paths.
- **Call `client.destroy()` on shutdown** -- the SDK flushes queued impressions and events on destroy. Skipping this loses tracking data and makes experiment results inaccurate.
- **Use `getTreatments` for batch evaluations** — when checking multiple splits in a single request path, batch calls reduce overhead compared to individual `getTreatment` calls.
- **Evaluating before SDK_READY** — the SDK streams split definitions on startup. Evaluating before `SDK_READY` fires returns `"control"` for all splits, which can silently disable every feature.

Split.io — Feature Flags

You are an expert in integrating Split.io for feature flag management.

Core Philosophy

Split.io treats feature flags as experiments, not just toggles. Every split divides users into treatments, and the platform tracks which user saw which treatment (impressions). This impression data, combined with custom events submitted via client.track(), feeds Split's built-in experimentation engine to measure the business impact of each treatment. The difference between a feature flag and an experiment is whether you measure the outcome -- Split is designed to make measurement the default.
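The evaluate-then-measure loop described above can be sketched as follows. This is a minimal illustration, not SDK code: the `SplitLikeClient` interface mirrors the SDK's `getTreatment`/`track` signatures, and the split name, event name, and value are made up for the example.

```typescript
// Minimal sketch of Split's evaluate-then-measure loop.
// The client interface mirrors the SDK's getTreatment/track signatures;
// the split name and event name are illustrative, not real definitions.
interface SplitLikeClient {
  getTreatment(key: string, split: string): string;
  track(key: string, trafficType: string, eventType: string, value?: number): boolean;
}

function runCheckout(client: SplitLikeClient, userId: string): string {
  // 1. Evaluate: the SDK records an impression (this user saw this treatment)
  const treatment = client.getTreatment(userId, "checkout-redesign");

  // 2. Act on the treatment (real code would render the chosen checkout)
  const variant = treatment === "on" ? "new" : "legacy";

  // 3. Measure: report the outcome so Split can compare treatments
  client.track(userId, "user", "checkout_completed", 49.99);
  return variant;
}
```

Because both the impression (step 1) and the event (step 3) are keyed by the same user, Split can attribute the outcome to the treatment the user saw.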

The SDK evaluates flags locally with sub-millisecond latency. Split streams flag definitions to the SDK on startup and keeps them synchronized in real time. Every getTreatment call evaluates against an in-memory copy of the rules -- no network round-trip. This architecture means flag evaluation is both fast and resilient: if the Split backend goes down, the SDK continues operating with its last known state. But it also means you must wait for SDK_READY before evaluating, or every flag returns "control".

The "control" treatment is Split's safety valve. It is returned when the SDK is not ready, the split does not exist, or an internal error occurs. Your application must handle "control" as equivalent to your safest default behavior -- typically the same as "off". If you only handle "on" and "off" and ignore "control", SDK initialization failures silently break every feature gate in your application.
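One way to make this safe by construction is to route every treatment through a single mapping function whose default branch is the safe path, so `"control"` (and any unexpected value) can never enable new code. A sketch, with hypothetical variant names:

```typescript
// Hypothetical helper: map a Split treatment string to a checkout variant.
// "control" and any unrecognized treatment fall through to the safe default.
type CheckoutVariant = "new" | "legacy";

function resolveCheckout(treatment: string): CheckoutVariant {
  switch (treatment) {
    case "on":
      return "new";     // the rolled-out new checkout
    case "off":
    case "control":     // SDK not ready, split missing, or internal error
    default:
      return "legacy";  // safest behavior for anything unexpected
  }
}
```

With this shape, adding a new treatment value on the Split side cannot silently activate untested code: anything the application does not explicitly recognize behaves like `"off"`.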

Anti-Patterns

  • Evaluating flags before SDK_READY -- The SDK streams split definitions asynchronously on startup. Evaluating before it is ready returns "control" for all splits, which can silently disable every feature.
  • Not handling the "control" treatment -- "control" indicates an error condition. Treating it the same as "on" or ignoring it entirely means SDK failures enable untested code paths.
  • Forgetting to call client.destroy() on shutdown -- The SDK flushes queued impressions and events on destroy. Skipping this loses tracking data, making experiment results inaccurate.
  • Not tracking events for experiments -- Split's experimentation engine relies on event data submitted via client.track(). Without events, experiment metrics stay empty and you cannot measure treatment impact.
  • Using individual getTreatment calls when batch evaluation is available -- When checking multiple splits in a single request path, getTreatments reduces overhead compared to multiple individual calls.

Overview

Split.io (now Harness Feature Flags after acquisition) is a feature delivery platform that combines feature flags with built-in experimentation and metrics. It uses the concept of "splits" — feature flags that divide traffic into treatments (e.g., "on", "off", "variant_a"). Split evaluates flags locally in the SDK using an in-memory rule engine synchronized from the Split cloud, delivering sub-millisecond flag evaluations. The platform tracks impressions (which user saw which treatment) and integrates with analytics tools to measure feature impact.

Setup & Configuration

Node.js Server SDK

import { SplitFactory } from "@splitsoftware/splitio";

const factory = SplitFactory({
  core: {
    authorizationKey: process.env.SPLIT_SERVER_API_KEY!,
  },
  startup: {
    readyTimeout: 10, // seconds
  },
});

const client = factory.client();

await new Promise<void>((resolve, reject) => {
  client.on(client.Event.SDK_READY, resolve);
  client.on(client.Event.SDK_READY_TIMED_OUT, reject);
});

// Evaluate a split
const treatment = client.getTreatment("user-123", "checkout-redesign");

if (treatment === "on") {
  // new checkout
} else {
  // default checkout
}

// Clean up on shutdown
await client.destroy();

React SDK

import { SplitFactoryProvider, useSplitTreatments } from "@splitsoftware/splitio-react";

const sdkConfig = {
  core: {
    authorizationKey: process.env.REACT_APP_SPLIT_CLIENT_KEY!,
    key: currentUser.id,  // user key for targeting
  },
};

function App() {
  return (
    <SplitFactoryProvider config={sdkConfig}>
      <FeatureContent />
    </SplitFactoryProvider>
  );
}

function FeatureContent() {
  const { treatments, isReady } = useSplitTreatments({
    names: ["new-navbar", "pricing-experiment"],
    attributes: { plan: "pro", country: "US" },
  });

  if (!isReady) return <Spinner />;

  return (
    <>
      {treatments["new-navbar"].treatment === "on" && <NewNavbar />}
      <PricingPage variant={treatments["pricing-experiment"].treatment} />
    </>
  );
}

Python SDK

from splitio import get_factory

factory = get_factory("YOUR_SERVER_API_KEY")
factory.block_until_ready(10)  # wait up to 10 seconds

client = factory.client()

treatment = client.get_treatment("user-123", "checkout-redesign", {
    "plan": "enterprise",
    "age": 30,
})

if treatment == "on":
    use_new_checkout()
else:
    use_legacy_checkout()

# Track events for metric analysis
client.track("user-123", "user", "purchase", 49.99, {"item": "widget"})

factory.destroy()

Core Patterns

Treatments and Configurations

Splits return treatments (strings) and optional dynamic configurations (JSON).

const treatmentResult = client.getTreatmentWithConfig("user-123", "onboarding-flow");

const { treatment, config } = treatmentResult;
const parsedConfig = config ? JSON.parse(config) : {};

// treatment: "wizard" | "checklist" | "off"
// config: '{"steps": 5, "skipIntro": true}'

switch (treatment) {
  case "wizard":
    return <WizardOnboarding steps={parsedConfig.steps} />;
  case "checklist":
    return <ChecklistOnboarding skipIntro={parsedConfig.skipIntro} />;
  default:
    return <DefaultOnboarding />;
}

Attributes for Targeting

// Pass attributes for server-side targeting rules
const attributes = {
  plan: "enterprise",
  age: 30,
  registered: new Date("2024-01-15").getTime(),
  features: ["sso", "audit-log"],
};

const treatment = client.getTreatment("user-123", "premium-dashboard", attributes);

Impression Listeners

Track which users see which treatments for debugging and analytics.

const factory = SplitFactory({
  core: { authorizationKey: process.env.SPLIT_SERVER_API_KEY! },
  impressionListener: {
    logImpression(impressionData) {
      console.log({
        feature: impressionData.impression.feature,
        key: impressionData.impression.keyName,
        treatment: impressionData.impression.treatment,
        label: impressionData.impression.label,
        timestamp: impressionData.impression.time,
      });
      // Forward to your analytics pipeline
      analytics.track("split_impression", impressionData.impression);
    },
  },
});

Event Tracking for Experimentation

// Track custom events to measure the impact of treatments
// Split uses these events in its experimentation metrics

// Simple event
client.track("user-123", "user", "page_view");

// Event with numeric value (for sum/average metrics)
client.track("user-123", "user", "purchase", 79.99);

// Event with properties (for filtering)
client.track("user-123", "user", "signup", undefined, {
  source: "organic",
  plan: "pro",
});

Evaluating Multiple Splits at Once

// Batch evaluation is more efficient than individual calls
const treatments = client.getTreatments("user-123", [
  "feature-a",
  "feature-b",
  "feature-c",
], attributes);

// treatments = { "feature-a": "on", "feature-b": "off", "feature-c": "v2" }

// With configs
const treatmentsWithConfig = client.getTreatmentsWithConfig("user-123", [
  "feature-a",
  "feature-b",
], attributes);

Best Practices

  • Always handle the control treatment — Split returns "control" when the SDK is not ready, the split does not exist, or an error occurs. Treat "control" as equivalent to your default/safe behavior.
  • Call destroy() on shutdown — the SDK flushes queued impressions and events on destroy. Skipping this loses tracking data and can affect experimentation accuracy.
  • Use getTreatments for batch evaluations — when checking multiple splits in a single request path, batch calls reduce overhead compared to individual getTreatment calls.
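For the shutdown practice, one approach is a small helper that wires `destroy()` to process signals and guarantees it only runs once even if multiple signals arrive. This is a sketch, not SDK code: the `Destroyable` interface assumes only that the client exposes a promise-returning `destroy()`, which matches the Node SDK shown above.

```typescript
// Hypothetical shutdown helper: flush impressions/events exactly once,
// whether triggered by SIGTERM, SIGINT, or a manual call.
interface Destroyable {
  destroy(): Promise<void>;
}

function registerShutdown(client: Destroyable): () => Promise<void> {
  let pending: Promise<void> | null = null;
  const shutdown = () => {
    // Reuse the same promise so concurrent triggers don't double-flush
    if (!pending) pending = client.destroy();
    return pending;
  };
  process.once("SIGTERM", shutdown);
  process.once("SIGINT", shutdown);
  return shutdown;
}
```

Returning the shutdown function lets application code also invoke it directly (for example from a graceful-shutdown hook in your web framework) without racing the signal handlers.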

Common Pitfalls

  • Evaluating before SDK_READY — the SDK streams split definitions on startup. Evaluating before SDK_READY fires returns "control" for all splits, which can silently disable every feature.
  • Forgetting to track events for experiments — Split's experimentation engine relies on event data submitted via client.track(). Without events, experiment metrics stay empty and you cannot measure the impact of a treatment.
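A lightweight guard against the first pitfall is to track readiness in application state and make the "not ready" case explicit rather than implicit. The sketch below assumes a client exposing `getTreatment`; returning `"control"` when not ready mirrors what the SDK itself would do, so callers still hit their safe default path.

```typescript
// Hypothetical guard: only evaluate once the SDK is ready; otherwise
// return "control" explicitly so callers take their safe default path.
function safeGetTreatment(
  client: { getTreatment(key: string, split: string): string },
  isReady: boolean,
  key: string,
  split: string
): string {
  if (!isReady) return "control"; // mirrors the SDK's own fallback value
  return client.getTreatment(key, split);
}
```

The value of the wrapper is not the behavior (the SDK already returns `"control"` when unready) but the visibility: the readiness check is in your code, where it can be logged, alerted on, or handled per call site.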
