Load Testing
k6 load testing for API performance, stress testing, and threshold-based CI checks
You are an expert in k6 for load testing, stress testing, and performance validation of APIs.
Core Philosophy
Overview
k6 is an open-source load testing tool by Grafana Labs. Tests are written in JavaScript (ES6 modules), executed by a high-performance Go runtime, and produce metrics that can be exported to time-series databases and dashboards. It supports HTTP, WebSocket, gRPC, and browser-level testing.
Setup & Configuration
Installation
# macOS
brew install k6
# Windows
winget install k6
# Docker
docker run --rm -i grafana/k6 run - <script.js
# npm wrapper (for CI)
npm install -g k6
Basic project structure
load-tests/
  scripts/
    smoke.js
    load.js
    stress.js
    spike.js
  config/
    thresholds.json
  lib/
    helpers.js
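The `config/thresholds.json` file in the layout above can hold shared options so individual scripts stay threshold-free; k6 can load it with the `--config` flag. A minimal sketch (the values are placeholders, not recommendations):

```json
{
  "thresholds": {
    "http_req_duration": ["p(95)<500", "p(99)<1000"],
    "http_req_failed": ["rate<0.05"]
  }
}
```

Loaded with `k6 run --config config/thresholds.json scripts/load.js`.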
Core Patterns
Smoke test (minimal validation)
import http from "k6/http";
import { check, sleep } from "k6";

export const options = {
  vus: 1,
  duration: "30s",
  thresholds: {
    http_req_duration: ["p(99)<1500"],
    http_req_failed: ["rate<0.01"],
  },
};

export default function () {
  const res = http.get("https://api.example.com/health");
  check(res, {
    "status is 200": (r) => r.status === 200,
    "response time < 500ms": (r) => r.timings.duration < 500,
  });
  sleep(1);
}
Load test (sustained traffic)
import http from "k6/http";
import { check, sleep } from "k6";

export const options = {
  stages: [
    { duration: "2m", target: 50 }, // Ramp up to 50 VUs
    { duration: "5m", target: 50 }, // Stay at 50 VUs
    { duration: "2m", target: 0 },  // Ramp down
  ],
  thresholds: {
    http_req_duration: ["p(95)<500", "p(99)<1000"],
    http_req_failed: ["rate<0.05"],
    checks: ["rate>0.95"],
  },
};

const BASE_URL = __ENV.BASE_URL || "https://api.example.com";

export default function () {
  const headers = {
    Authorization: `Bearer ${__ENV.API_TOKEN}`,
    "Content-Type": "application/json",
  };

  // GET list
  const listRes = http.get(`${BASE_URL}/api/users?page=1`, { headers });
  const listBody = JSON.parse(listRes.body); // parse once, reuse in checks
  check(listRes, {
    "list status 200": (r) => r.status === 200,
    "list has results": () => listBody.length > 0,
  });
  sleep(1);

  // POST create
  const payload = JSON.stringify({
    name: `user-${Date.now()}`,
    email: `user-${Date.now()}@test.com`,
  });
  const createRes = http.post(`${BASE_URL}/api/users`, payload, { headers });
  check(createRes, {
    "create status 201": (r) => r.status === 201,
  });
  sleep(2);
}
Stress test (find breaking point)
import http from "k6/http";
import { sleep } from "k6";

const BASE_URL = __ENV.BASE_URL || "https://api.example.com";

export const options = {
  stages: [
    { duration: "2m", target: 100 },
    { duration: "5m", target: 100 },
    { duration: "2m", target: 200 },
    { duration: "5m", target: 200 },
    { duration: "2m", target: 300 },
    { duration: "5m", target: 300 },
    { duration: "5m", target: 0 },
  ],
  thresholds: {
    http_req_duration: ["p(95)<2000"],
  },
};

export default function () {
  http.get(`${BASE_URL}/api/users`); // any representative endpoint
  sleep(1);
}
Scenarios (advanced execution control)
import http from "k6/http";
import { sleep } from "k6";

const BASE_URL = __ENV.BASE_URL || "https://api.example.com";

export const options = {
  scenarios: {
    browse: {
      executor: "constant-vus",
      vus: 20,
      duration: "5m",
      exec: "browseProducts",
    },
    checkout: {
      executor: "ramping-arrival-rate",
      startRate: 1,
      timeUnit: "1s",
      preAllocatedVUs: 50,
      maxVUs: 100,
      stages: [
        { duration: "2m", target: 10 },
        { duration: "3m", target: 10 },
        { duration: "1m", target: 0 },
      ],
      exec: "checkout",
    },
  },
};

export function browseProducts() {
  http.get(`${BASE_URL}/products`);
  sleep(2);
}

export function checkout() {
  http.post(`${BASE_URL}/checkout`, JSON.stringify({ cartId: "abc" }), {
    headers: { "Content-Type": "application/json" },
  });
}
Custom metrics and tags
import http from "k6/http";
import { Trend, Counter, Rate } from "k6/metrics";

const BASE_URL = __ENV.BASE_URL || "https://api.example.com";

const loginDuration = new Trend("login_duration");
const loginFailures = new Counter("login_failures");
const loginSuccess = new Rate("login_success_rate");

export default function () {
  const res = http.post(
    `${BASE_URL}/auth/login`,
    JSON.stringify({ email: "test@test.com", password: "password123" }),
    {
      headers: { "Content-Type": "application/json" },
      tags: { endpoint: "login" },
    }
  );
  loginDuration.add(res.timings.duration);
  if (res.status === 200) {
    loginSuccess.add(1);
  } else {
    loginSuccess.add(0);
    loginFailures.add(1);
  }
}
Running tests
# Basic run
k6 run scripts/load.js
# With environment variables
k6 run -e BASE_URL=https://staging.example.com -e API_TOKEN=abc scripts/load.js
# Override VUs and duration from CLI
k6 run --vus 10 --duration 30s scripts/smoke.js
# Output to InfluxDB for Grafana dashboards
k6 run --out influxdb=http://localhost:8086/k6 scripts/load.js
# Output to JSON for processing
k6 run --out json=results.json scripts/load.js
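Besides `--out json`, a script can shape its own end-of-test report with k6's `handleSummary()` hook, which k6 calls once with the aggregated summary data. A sketch, assuming the default `http_req_duration` metric; the summary object passed at the bottom is a hypothetical stand-in for demonstration, and in a real k6 script the function must be `export`ed:

```javascript
// Sketch of k6's handleSummary hook. Each key in the returned object is a
// destination: "stdout" or a file path k6 should write to.
// In a k6 script this would be: export function handleSummary(data) { ... }
function handleSummary(data) {
  const p95 = data.metrics?.http_req_duration?.values?.["p(95)"];
  return {
    "summary.json": JSON.stringify(data, null, 2), // full machine-readable dump
    stdout: `p95 latency: ${p95 ?? "n/a"} ms\n`,   // short console line
  };
}

// Hypothetical summary shape, only to show the hook in action:
const out = handleSummary({
  metrics: { http_req_duration: { values: { "p(95)": 412.5 } } },
});
console.log(out.stdout); // prints: p95 latency: 412.5 ms
```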
CI integration (GitHub Actions)
This step runs inside a job's `steps:` list:

- name: Run load tests
  uses: grafana/k6-action@v0.3.1
  with:
    filename: load-tests/scripts/smoke.js
  env:
    BASE_URL: ${{ secrets.STAGING_URL }}
Best Practices
- Start with smoke tests (1 VU, short duration) to validate the script before scaling up.
- Set meaningful thresholds — they turn load tests into automated pass/fail checks in CI.
- Use `stages` for gradual ramp-up to simulate realistic traffic patterns, not instant spikes.
- Tag requests to filter and analyze metrics per endpoint in dashboards.
- Separate test scripts by purpose: smoke, load, stress, and spike have different goals and configurations.
- Run load tests against a staging environment that mirrors production, never against production without explicit approval.
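Tags and thresholds combine: k6 lets a threshold target only the samples carrying a given tag, so each endpoint gets its own budget. A sketch (the `endpoint` tag and the numbers are illustrative; in a k6 script this object would be `export const options`):

```javascript
// Per-tag thresholds: the {tag:value} suffix restricts the threshold to
// requests tagged with e.g. tags: { endpoint: "login" } on the request params.
// Values here are placeholders, not recommendations.
const options = {
  thresholds: {
    http_req_duration: ["p(95)<500"],                    // global budget
    "http_req_duration{endpoint:login}": ["p(95)<400"],  // stricter for login
    "http_req_failed{endpoint:checkout}": ["rate<0.01"], // checkout must not fail
  },
};
console.log(Object.keys(options.thresholds).length); // 3
```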
Common Pitfalls
- Forgetting `sleep()` between requests, which creates an unrealistically aggressive request rate that does not represent real user behavior.
- Setting thresholds too tight for a first run — baseline your API performance first, then set thresholds based on measured values.
- Running from a single machine and hitting network bottlenecks — k6 Cloud or distributed execution solves this for large-scale tests.
- Not correlating k6 output with server-side metrics — always monitor server CPU, memory, and error logs alongside k6 results.
- Using `JSON.parse(r.body)` inside `check()` on every iteration — parse once and reuse the result.
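The cost of the last pitfall is easy to see in plain JavaScript (no k6 APIs here; `res` is a fake response object and `countingParse` a stand-in that counts how often the body gets parsed):

```javascript
// Plain-JS illustration of the JSON.parse pitfall.
let parseCalls = 0;
const countingParse = (s) => { parseCalls += 1; return JSON.parse(s); };
const res = { body: '[{"id":1},{"id":2}]' }; // fake k6-style response

// Anti-pattern: every check callback re-parses the body.
const badChecks = {
  "has results": (r) => countingParse(r.body).length > 0,
  "first id set": (r) => countingParse(r.body)[0].id !== undefined,
};
Object.values(badChecks).forEach((fn) => fn(res));
console.log(parseCalls); // 2 — one parse per check

// Better: parse once, let the callbacks close over the result.
parseCalls = 0;
const body = countingParse(res.body);
const goodChecks = {
  "has results": () => body.length > 0,
  "first id set": () => body[0].id !== undefined,
};
Object.values(goodChecks).forEach((fn) => fn(res));
console.log(parseCalls); // 1
```

With many checks and large bodies, that per-check parse cost runs on every VU iteration.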
Anti-Patterns
Over-engineering for hypothetical scale. Building for millions of users when you have hundreds adds complexity without value. Solve today's problems first.
Ignoring the existing ecosystem. Reinventing functionality that mature libraries already provide well wastes time and introduces unnecessary risk.
Premature abstraction. Creating elaborate frameworks and utilities before you have enough concrete cases to know what the abstraction should look like produces the wrong abstraction.
Neglecting error handling at boundaries. Internal code can trust its inputs, but system boundaries (user input, APIs, file I/O) require defensive validation.
Skipping documentation for obvious code. What is obvious to you today will not be obvious to your colleague next month or to you next year.
Install this skill directly: skilldb add api-testing-skills
Related Skills
API Mocking
API mocking with MSW (Mock Service Worker) and Prism for development and testing
Bruno
Bruno API client for git-friendly, offline-first API testing with Bru markup language
Contract Testing
Pact contract testing for consumer-driven API contracts between microservices
Httpie
HTTPie CLI for human-friendly API testing, scripting, and debugging from the terminal
Postman
Postman collections, environments, pre-request scripts, tests, and Newman CLI automation
Supertest
Supertest for Node.js HTTP assertion testing with Express, Koa, and Fastify