# Postman

Postman collections, environments, pre-request scripts, tests, and Newman CLI automation
You are an expert in Postman for building, testing, and automating API workflows.
## Overview
Postman is a collaborative API platform for designing, testing, and documenting APIs. It provides a GUI for crafting requests, organizing them into collections, writing test scripts in JavaScript, and running them via the Newman CLI for CI/CD integration.
## Setup & Configuration

### Installing Newman (CLI runner)

```bash
npm install -g newman

# Or as a dev dependency
npm install --save-dev newman
```
### Environment files

Create environment JSON files to manage variables across stages:

```json
{
  "id": "dev-env",
  "name": "Development",
  "values": [
    { "key": "base_url", "value": "http://localhost:3000/api", "enabled": true },
    { "key": "auth_token", "value": "", "enabled": true }
  ]
}
```
### Collection structure

Organize collections by resource or workflow:

```text
My API Collection/
  Auth/
    POST Login
    POST Refresh Token
  Users/
    GET List Users
    GET Get User
    POST Create User
    PUT Update User
    DELETE Delete User
```
## Core Patterns

### Pre-request scripts

Set dynamic values before a request fires:

```javascript
// Generate a timestamp
pm.variables.set("timestamp", new Date().toISOString());

// Chain auth — grab token from environment
const token = pm.environment.get("auth_token");
if (!token) {
  pm.sendRequest({
    url: pm.environment.get("base_url") + "/auth/login",
    method: "POST",
    header: { "Content-Type": "application/json" },
    body: {
      mode: "raw",
      raw: JSON.stringify({
        email: pm.environment.get("test_email"),
        password: pm.environment.get("test_password")
      })
    }
  }, function (err, res) {
    // pm.sendRequest is async: set variables inside the callback,
    // and bail out if the login request itself failed.
    if (err) {
      console.error(err);
      return;
    }
    pm.environment.set("auth_token", res.json().token);
  });
}
```
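When the token is a JWT, the pre-request script can also check whether the cached token has expired before deciding to log in again, rather than logging in only when the variable is empty. The helper below is plain JavaScript (an assumption: it relies on `atob`, which is available both in the Postman sandbox and in modern Node) and assumes a standard unencrypted JWT with an `exp` claim:

```javascript
// Return true when a JWT's exp claim is in the past (or can't be read).
// Malformed or missing tokens are treated as expired, forcing a re-login.
function isExpired(token) {
  try {
    // The payload is the second dot-separated segment, base64url-encoded.
    const payloadB64 = token.split(".")[1].replace(/-/g, "+").replace(/_/g, "/");
    const payload = JSON.parse(atob(payloadB64));
    // exp is seconds since epoch; Date.now() is milliseconds.
    return payload.exp * 1000 < Date.now();
  } catch (e) {
    return true;
  }
}

// In a pre-request script you would then guard the login call:
// if (isExpired(pm.environment.get("auth_token"))) { /* pm.sendRequest(...) */ }
```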
### Test scripts

Write assertions in the Tests tab:

```javascript
pm.test("Status code is 200", function () {
  pm.response.to.have.status(200);
});

pm.test("Response has required fields", function () {
  const json = pm.response.json();
  pm.expect(json).to.have.property("id");
  pm.expect(json).to.have.property("email");
  pm.expect(json.email).to.be.a("string");
});

pm.test("Response time is under 500ms", function () {
  pm.expect(pm.response.responseTime).to.be.below(500);
});

// Store value for next request in the collection
pm.test("Save user ID for chaining", function () {
  const json = pm.response.json();
  pm.collectionVariables.set("created_user_id", json.id);
});
```
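For the schema-shape check recommended below, a small reusable helper keeps assertions consistent across requests. This sketch is plain JavaScript, so it works in the Tests tab as well as in Node; the helper name is an assumption. (Recent Postman versions also support full JSON Schema validation via `pm.response.to.have.jsonSchema(...)`.)

```javascript
// Check that an object has each expected field with the expected typeof.
// Returns a list of problems; an empty list means the shape matches.
function shapeErrors(obj, expected) {
  const errors = [];
  for (const [field, type] of Object.entries(expected)) {
    if (!(field in obj)) {
      errors.push(`missing field: ${field}`);
    } else if (typeof obj[field] !== type) {
      errors.push(`${field}: expected ${type}, got ${typeof obj[field]}`);
    }
  }
  return errors;
}

// In a Postman test:
// pm.test("User shape", () => {
//   pm.expect(shapeErrors(pm.response.json(), { id: "string", email: "string" }))
//     .to.deep.equal([]);
// });
```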
### Running with Newman

```bash
# Basic run
newman run collection.json -e dev-environment.json

# With reporters and iteration data
newman run collection.json \
  -e dev-environment.json \
  -d test-data.csv \
  --iteration-count 5 \
  --reporters cli,htmlextra \
  --reporter-htmlextra-export reports/result.html

# Stop the run on the first test failure (useful for CI)
newman run collection.json -e dev-environment.json --bail
```
### CI/CD integration (GitHub Actions)

```yaml
- name: Run API tests
  run: |
    npx newman run tests/collection.json \
      -e tests/ci-environment.json \
      --reporters cli,junit \
      --reporter-junit-export results.xml
```
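A complete job typically also checks out the repo, installs Node, and publishes the JUnit report even when tests fail. A sketch under assumed paths and action versions (adjust both to your repository):

```yaml
jobs:
  api-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Run API tests
        run: |
          npx newman run tests/collection.json \
            -e tests/ci-environment.json \
            --reporters cli,junit \
            --reporter-junit-export results.xml
      - name: Upload test report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: newman-results
          path: results.xml
```

The `if: always()` condition ensures the report is uploaded even when the Newman step exits nonzero.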
### Data-driven testing

Use a CSV or JSON data file to iterate over test cases:

```csv
email,password,expected_status
valid@test.com,correct123,200
invalid@test.com,wrong,401
,missing,400
```

Reference in tests:

```javascript
pm.test("Returns expected status", function () {
  pm.response.to.have.status(parseInt(pm.iterationData.get("expected_status")));
});
```
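A malformed data row can fail confusingly in the middle of a run, so it can help to sanity-check the CSV before invoking Newman. A minimal sketch (the `validateCsv` helper is illustrative, not part of Newman):

```javascript
// Sanity-check iteration-data CSV text before a Newman run.
// Returns a list of problems; an empty list means the file looks usable.
function validateCsv(text, requiredColumns) {
  const [headerLine, ...rows] = text.trim().split(/\r?\n/);
  const header = headerLine.split(",");
  const missing = requiredColumns.filter((c) => !header.includes(c));
  if (missing.length) return [`missing columns: ${missing.join(", ")}`];
  return rows.flatMap((row, i) =>
    row.split(",").length === header.length
      ? []
      : [`line ${i + 2}: wrong number of fields`]
  );
}

// A blank email like ",missing,400" is a valid empty field; only rows
// with the wrong number of commas are reported.
const sample = "email,password,expected_status\nvalid@test.com,correct123,200\n,missing,400";
console.log(validateCsv(sample, ["email", "password", "expected_status"])); // []
```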
## Best Practices

- Use collection variables for values shared within a collection, environment variables for stage-specific values.
- Never hard-code secrets in collections; use environment variables and inject them in CI.
- Write at minimum a status code check and a schema shape check for every request.
- Use folder-level pre-request scripts for shared auth logic instead of duplicating across requests.
- Export collections to version control and review changes in PRs.
- Name requests descriptively: `POST Create User - valid payload` rather than just `POST /users`.
## Common Pitfalls

- Forgetting to select the correct environment before running, leading to empty variable references.
- Using `pm.globals` when `pm.collectionVariables` or `pm.environment` would be more appropriate and less leak-prone.
- Not handling async pre-request scripts properly — `pm.sendRequest` is async and the test may run before it completes if you set variables outside the callback.
- Assuming request order in the collection runner is the same as folder order — use `postman.setNextRequest()` to control flow explicitly.
- Exporting collections with sensitive data still embedded in example responses.
## Anti-Patterns

- **Over-engineering for hypothetical scale.** Building for millions of users when you have hundreds adds complexity without value. Solve today's problems first.
- **Ignoring the existing ecosystem.** Reinventing functionality that mature libraries already provide well wastes time and introduces unnecessary risk.
- **Premature abstraction.** Creating elaborate frameworks and utilities before you have enough concrete cases to know what the abstraction should look like produces the wrong abstraction.
- **Neglecting error handling at boundaries.** Internal code can trust its inputs, but system boundaries (user input, APIs, file I/O) require defensive validation.
- **Skipping documentation for obvious code.** What is obvious to you today will not be obvious to your colleague next month or to you next year.
## Related Skills

- **API Mocking**: API mocking with MSW (Mock Service Worker) and Prism for development and testing
- **Bruno**: Bruno API client for git-friendly, offline-first API testing with the Bru markup language
- **Contract Testing**: Pact contract testing for consumer-driven API contracts between microservices
- **HTTPie**: HTTPie CLI for human-friendly API testing, scripting, and debugging from the terminal
- **Load Testing**: k6 load testing for API performance, stress testing, and threshold-based CI checks
- **Supertest**: Supertest for Node.js HTTP assertion testing with Express, Koa, and Fastify