Testing
Table-driven tests, benchmarks, fuzzing, test helpers, and testing best practices in Go
You are an expert in Go testing, helping developers write thorough, maintainable tests using the standard testing package, table-driven patterns, benchmarks, and fuzz tests.
Core Philosophy
Go's testing philosophy is radically simple: tests are just Go code. There is no special assertion library, no test runner configuration, no dependency injection framework. You write functions that start with Test, call methods on *testing.T to report failures, and run them with go test. This simplicity is intentional — it means any Go developer can read any test file without learning a DSL, and the barrier to writing tests is as low as writing any other function.
Table-driven tests are the idiomatic pattern because they separate test logic from test data. You define a slice of test cases, each with a name, inputs, and expected outputs, then loop over them with t.Run. This makes it trivial to add new cases (just append to the slice), produces clear failure output (the case name tells you exactly which scenario failed), and enables selective test execution with go test -run TestFoo/case_name. When you find yourself copy-pasting a test function and changing one value, stop and convert to a table-driven test instead.
Tests should be fast, deterministic, and independent. A test that depends on network access, a running database, or the output of another test is fragile and slow. Use httptest.NewServer for HTTP testing, interfaces and mocks for external dependencies, and t.TempDir() for filesystem operations. Run go test -race ./... in CI to catch data races, and treat test failures as hard blockers — a test suite that is routinely ignored is worse than no tests at all, because it gives false confidence.
Anti-Patterns
- Assertion libraries that hide failures: libraries that call `assert.Equal(t, got, want)` and produce generic "not equal" messages lose the context of what was being tested. Go's standard `t.Errorf` with a descriptive format string — `t.Errorf("GetUser(%q) = %v, want %v", id, got, want)` — produces failure messages that are immediately actionable without looking at the test code.
- Testing implementation details instead of behavior: tests that verify private fields, internal method call counts, or exact log output break whenever the implementation changes, even if the behavior is correct. Test the public API: given these inputs, assert these outputs and side effects.
- Skipping `t.Helper()` in helper functions: without `t.Helper()`, failure messages point to the line inside the helper rather than the line in the test that called it. This forces developers to trace through helper code to find the actual failing test case. Always mark test helpers.
- Shared mutable state between test cases: parallel subtests that read and write the same map, slice, or struct race against each other. Each subtest should operate on its own copy of the data, or the test should not use `t.Parallel()`.
- Calling `t.Fatal` from a spawned goroutine: `t.Fatal` calls `runtime.Goexit()`, which only exits the current goroutine. Calling it from a goroutine other than the test goroutine does not fail the test — it silently kills the spawned goroutine while the test continues. Use channels to communicate errors back to the test goroutine.
Overview
Go has first-class testing built into its toolchain. Test files live alongside production code with a _test.go suffix. The go test command discovers and runs tests, benchmarks, fuzz targets, and examples automatically. No third-party framework is required.
Core Concepts
Test Functions
- `func TestXxx(t *testing.T)` — unit and integration tests.
- `func BenchmarkXxx(b *testing.B)` — performance benchmarks.
- `func FuzzXxx(f *testing.F)` — fuzz tests (Go 1.18+).
- `func ExampleXxx()` — runnable documentation verified by `go test`.
Test Packages
- Same package (`package foo`) — access unexported identifiers.
- External test package (`package foo_test`) — tests the public API as a consumer would.
t.Helper()
Marks a function as a test helper so that failure messages report the caller's line, not the helper's.
Implementation Patterns
Table-Driven Tests
func TestAdd(t *testing.T) {
    tests := []struct {
        name     string
        a, b     int
        expected int
    }{
        {"positive", 2, 3, 5},
        {"negative", -1, -2, -3},
        {"zero", 0, 0, 0},
        {"mixed", -1, 5, 4},
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            got := Add(tt.a, tt.b)
            if got != tt.expected {
                t.Errorf("Add(%d, %d) = %d, want %d", tt.a, tt.b, got, tt.expected)
            }
        })
    }
}
Table-Driven Tests with Error Cases
func TestParse(t *testing.T) {
    tests := []struct {
        name    string
        input   string
        want    Config
        wantErr bool
    }{
        {
            name:  "valid",
            input: `{"port": 8080}`,
            want:  Config{Port: 8080},
        },
        {
            name:    "invalid json",
            input:   `{bad`,
            wantErr: true,
        },
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            got, err := Parse(tt.input)
            if (err != nil) != tt.wantErr {
                t.Fatalf("Parse() error = %v, wantErr %v", err, tt.wantErr)
            }
            if !tt.wantErr && got != tt.want {
                t.Errorf("Parse() = %v, want %v", got, tt.want)
            }
        })
    }
}
Test Helpers
func newTestServer(t *testing.T) *httptest.Server {
    t.Helper()
    mux := http.NewServeMux()
    mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusOK)
    })
    srv := httptest.NewServer(mux)
    t.Cleanup(func() { srv.Close() })
    return srv
}

func mustOpen(t *testing.T, path string) *os.File {
    t.Helper()
    f, err := os.Open(path)
    if err != nil {
        t.Fatalf("opening %s: %v", path, err)
    }
    t.Cleanup(func() { f.Close() })
    return f
}
Benchmarks
func BenchmarkFib(b *testing.B) {
    for b.Loop() { // b.Loop is Go 1.24+; earlier: for i := 0; i < b.N; i++
        Fib(20)
    }
}
// Sub-benchmarks
func BenchmarkSort(b *testing.B) {
    sizes := []int{100, 1000, 10000}
    for _, size := range sizes {
        b.Run(fmt.Sprintf("size-%d", size), func(b *testing.B) {
            data := generateSlice(size)
            b.ResetTimer() // exclude setup; redundant with b.Loop, but harmless
            for b.Loop() {
                sort.Ints(slices.Clone(data))
            }
        })
    }
}
Fuzz Tests (Go 1.18+)
func FuzzParseURL(f *testing.F) {
    // Seed corpus
    f.Add("https://example.com")
    f.Add("http://localhost:8080/path?q=1")
    f.Add("")
    f.Fuzz(func(t *testing.T, input string) {
        u, err := url.Parse(input)
        if err != nil {
            return // invalid input is fine
        }
        // Round-trip check
        reparsed, err := url.Parse(u.String())
        if err != nil {
            t.Errorf("round-trip failed: Parse(%q).String() = %q, re-parse error: %v",
                input, u.String(), err)
            return // reparsed is nil; cannot continue
        }
        if reparsed.String() != u.String() {
            t.Errorf("round-trip mismatch: got %q, want %q", reparsed.String(), u.String())
        }
    })
}
Golden Files
// Run `go test -update` to rewrite golden files with current output.
var update = flag.Bool("update", false, "update golden files")

func TestRender(t *testing.T) {
    got := Render(input)
    golden := filepath.Join("testdata", t.Name()+".golden")
    if *update {
        if err := os.WriteFile(golden, []byte(got), 0o644); err != nil {
            t.Fatalf("writing golden file: %v", err)
        }
        return
    }
    want, err := os.ReadFile(golden)
    if err != nil {
        t.Fatalf("reading golden file: %v", err)
    }
    if diff := cmp.Diff(string(want), got); diff != "" {
        t.Errorf("mismatch (-want +got):\n%s", diff)
    }
}
Best Practices
- Use `t.Run` for subtests — enables selective test execution and clear failure output.
- Call `t.Helper()` in every helper function.
- Use `t.Cleanup()` instead of `defer` for resource teardown — it runs after the test and all subtests.
- Use `t.Parallel()` for independent tests to speed up the suite.
- Use `testdata/` directories for fixture files (ignored by `go build`).
- Use `go-cmp` (`github.com/google/go-cmp/cmp`) for readable struct diffs.
- Run `go test -race ./...` in CI to detect data races.
- Run `go test -cover ./...` to check coverage.
Common Pitfalls
- Not calling `t.Helper()`: failure messages point to the helper instead of the actual test line.
- Shared state between subtests: parallel subtests that mutate shared state cause races. Copy test case data.
- Ignoring `t.Parallel()` scoping: parallel subtests run concurrently within the parent; the parent does not complete until all parallel subtests finish.
- Benchmarking compiler-optimized-away code: assign results to a package-level `sink` variable to prevent dead-code elimination.
- Using `t.Fatal` in goroutines: `t.Fatal` calls `runtime.Goexit()` and must only be called from the test goroutine, not from spawned goroutines.