
Big O

Time and space complexity analysis for technical coding interviews


Big-O Complexity — Interview Preparation

You are an expert in time and space complexity analysis for technical interview preparation.

Core Philosophy

Overview

Every coding interview expects you to analyze the time and space complexity of your solution. Big-O notation describes the upper bound of growth as input size increases. Being fluent in complexity analysis lets you quickly evaluate whether a solution will pass within the problem's constraints and communicate trade-offs to the interviewer.

Core Concepts

  • Big-O (O): upper bound — worst-case growth rate. This is what interviews almost always ask for.
  • Big-Omega (Omega): lower bound — best-case growth rate.
  • Big-Theta (Theta): tight bound — when upper and lower bounds match.
  • Time complexity: how the number of operations grows with input size n.
  • Space complexity: how much additional memory the algorithm uses (excluding the input itself, unless stated otherwise).
  • Amortized analysis: average cost per operation over a sequence (e.g., dynamic array append is O(1) amortized despite occasional O(n) resizing).
  • Drop constants and lower-order terms: O(3n^2 + 5n + 100) simplifies to O(n^2).
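
The amortized-O(1) claim for dynamic array appends can be checked with a quick counting sketch (illustrative, not part of any standard library): count how many element copies a doubling array performs over n appends.

```python
# Sketch: count element copies during repeated appends to a doubling
# dynamic array. Total copies stay under 2n, so the amortized cost
# per append is O(1) even though an individual resize costs O(n).
def copies_for_appends(n):
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:
            copies += size      # resize: copy every existing element
            capacity *= 2
        size += 1
    return copies

print(copies_for_appends(1000))  # 1023, under 2n = 2000
```

The copies form a geometric series (1 + 2 + 4 + ... + 512 = 1023 for n = 1000), which is why the total never exceeds 2n.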

Common Patterns

Complexity Classes (fastest to slowest)

Complexity    Name           Example
O(1)          Constant       Hash map lookup, array index access
O(log n)      Logarithmic    Binary search, balanced BST lookup
O(n)          Linear         Single pass through an array, linear search
O(n log n)    Linearithmic   Merge sort, heap sort, efficient sorting
O(n^2)        Quadratic      Nested loops over the same array, bubble sort
O(n^3)        Cubic          Triple nested loops, naive matrix multiplication
O(2^n)        Exponential    Recursive subsets, brute-force combinations
O(n!)         Factorial      Generating all permutations
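
To make the gap between classes concrete, here is a rough operation count for each class at n = 1,000 (a common rule of thumb, not a guarantee, is that about 10^8 simple operations fit in a second):

```python
import math

# Rough operation counts at n = 1,000 for several classes above.
n = 1_000
growth = {
    "O(log n)":   math.log2(n),
    "O(n)":       n,
    "O(n log n)": n * math.log2(n),
    "O(n^2)":     n ** 2,
}
for name, ops in growth.items():
    print(f"{name:10} ~ {ops:,.0f}")
```

At n = 1,000 the quadratic class already needs a million operations, a hundred times more than O(n log n).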

Analyzing Loops

  • Single loop over n elements: O(n).
  • Nested loops, each over n: O(n^2). Three levels: O(n^3).
  • Loop where index doubles (i *= 2): O(log n) iterations.
  • Loop where index increments and inner loop depends on outer: analyze the total work. For example, two pointers in a sliding window: each element is visited at most twice, so O(n) total, not O(n^2).
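
A minimal sketch of the last two bullets (the window function and its name are illustrative, not from the original): a doubling loop runs floor(log2 n) + 1 times, and in a sliding window the left and right pointers each advance at most n steps in total, so the scan is O(n) despite the nested-looking inner while.

```python
from collections import Counter

def doubling_iterations(n):
    # i *= 2 each pass: floor(log2 n) + 1 iterations, i.e. O(log n).
    i, count = 1, 0
    while i <= n:
        count += 1
        i *= 2
    return count

def longest_window_at_most_k_distinct(nums, k):
    # Sliding window: each element enters the window once and leaves
    # at most once, so total work is O(n), not O(n^2).
    counts, left, best = Counter(), 0, 0
    for right, x in enumerate(nums):
        counts[x] += 1
        while len(counts) > k:
            counts[nums[left]] -= 1
            if counts[nums[left]] == 0:
                del counts[nums[left]]
            left += 1
        best = max(best, right - left + 1)
    return best

print(doubling_iterations(16))                          # 5
print(longest_window_at_most_k_distinct([1, 2, 1, 2, 3], 2))  # 4
```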

Analyzing Recursion

  • Use a recurrence relation: T(n) = aT(n/b) + O(n^d), where a is the number of subproblems, n/b is each subproblem's size, and O(n^d) is the work to split and combine.
  • Master Theorem: if T(n) = aT(n/b) + O(n^d), then:
    • If d > log_b(a): O(n^d)
    • If d = log_b(a): O(n^d log n)
    • If d < log_b(a): O(n^(log_b(a)))
  • Merge sort: T(n) = 2T(n/2) + O(n). a=2, b=2, d=1. log_2(2) = 1 = d, so O(n log n).
  • Binary search: T(n) = T(n/2) + O(1). a=1, b=2, d=0. log_2(1) = 0 = d, so O(log n).
  • Naive Fibonacci: T(n) = T(n-1) + T(n-2). This does not fit the Master Theorem; recursion-tree analysis gives roughly O(2^n) (more tightly O(1.618^n), the golden ratio raised to n).
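
The three Master Theorem cases above can be encoded in a small helper (a sketch for self-checking, not a library function):

```python
import math

def master_theorem(a, b, d):
    # Classify T(n) = a*T(n/b) + O(n^d) by comparing d to log_b(a).
    crit = math.log(a, b)           # the critical exponent log_b(a)
    if math.isclose(d, crit):
        return f"O(n^{d} log n)"
    if d > crit:
        return f"O(n^{d})"
    return f"O(n^{crit:g})"

print(master_theorem(2, 2, 1))  # merge sort:    O(n^1 log n)
print(master_theorem(1, 2, 0))  # binary search: O(n^0 log n) = O(log n)
```

Plugging in the merge sort and binary search recurrences from the bullets reproduces the bounds stated above.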

Space Complexity Patterns

  • In-place algorithms: O(1) extra space (e.g., two pointers on a sorted array).
  • Hash map/set: O(n) if storing up to n elements.
  • Recursion stack: depth of recursion. Binary tree traversal: O(h) where h is height (O(log n) balanced, O(n) worst case).
  • Creating a copy of the input: O(n).
  • DP table: O(n*m) for a 2D table; often reducible to O(n) or O(m) with space optimization.
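
A standard example of the last bullet (grid unique-paths, chosen here for illustration): the full DP table is O(n*m) space, but each row depends only on the previous row, so a single O(m) row suffices.

```python
def unique_paths(n, m):
    # Number of monotone lattice paths through an n x m grid.
    row = [1] * m                 # first row of the DP table
    for _ in range(1, n):
        for j in range(1, m):
            row[j] += row[j - 1]  # row[j] = cell above + cell to the left
    return row[-1]

print(unique_paths(3, 7))  # 28
```

Time stays O(n*m); only the space drops from O(n*m) to O(m).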

Constraint-Based Estimation

Use the problem's input constraints to guess the expected complexity:

  • n <= 10: O(n!) or O(2^n) — brute force is fine
  • n <= 100: O(n^3)
  • n <= 1,000: O(n^2)
  • n <= 100,000: O(n log n) or O(n)
  • n <= 10^7: O(n)
  • n <= 10^9+: O(log n) or O(1)
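
The lookup above is mechanical enough to sketch as a helper (thresholds taken directly from the list; the function name is illustrative):

```python
def suggested_complexity(n):
    # Map a problem's maximum n to the complexity class its limits
    # likely tolerate, per the constraint table above.
    if n <= 10:        return "O(n!) or O(2^n)"
    if n <= 100:       return "O(n^3)"
    if n <= 1_000:     return "O(n^2)"
    if n <= 100_000:   return "O(n log n) or O(n)"
    if n <= 10**7:     return "O(n)"
    return "O(log n) or O(1)"

print(suggested_complexity(100_000))  # O(n log n) or O(n)
```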

Practice Strategy

  1. Analyze every solution you write: state the time and space complexity before submitting.
  2. Practice explaining your analysis out loud: "The outer loop runs n times, the inner loop runs at most n times, so worst case is O(n^2)."
  3. Learn to spot amortized O(1): dynamic arrays, Union-Find with path compression.
  4. Compare brute force vs. optimized: "The brute force is O(n^3); with a hash map we reduce the inner search from O(n) to O(1), giving O(n^2)."
  5. Draw the recursion tree for recursive solutions to visualize the total work.
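
Step 4's brute-force-versus-optimized comparison is easiest to practice on two-sum (a classic example, used here for illustration): the pair check is O(n^2), while a hash map of seen values makes each lookup O(1) average, giving O(n) time at the cost of O(n) space.

```python
def two_sum_brute(nums, target):
    # Check every pair: O(n^2) time, O(1) extra space.
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return (i, j)
    return None

def two_sum_hash(nums, target):
    # One pass with a value -> index map: O(n) time, O(n) space.
    seen = {}
    for i, x in enumerate(nums):
        if target - x in seen:
            return (seen[target - x], i)
        seen[x] = i
    return None

print(two_sum_hash([2, 7, 11, 15], 9))  # (0, 1)
```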

Common Mistakes

  • Saying O(n) for a solution that sorts the input (sorting is O(n log n), which dominates)
  • Forgetting to account for string operations: comparing two strings of length m is O(m), not O(1)
  • Claiming O(1) space when the recursion stack is O(n) deep
  • Confusing average case with worst case (e.g., quicksort is O(n log n) average but O(n^2) worst case)
  • Not recognizing that hash map operations are O(1) average but O(n) worst case
  • Overcounting: in a two-pointer or sliding window approach, the total work is O(n) even though there appear to be nested loops
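
The first mistake is worth seeing in code (a deliberately misleading sketch): the duplicate check below reads as a "single pass", but the sort dominates, so the function is O(n log n), not O(n).

```python
def has_duplicate(nums):
    nums = sorted(nums)               # O(n log n) -- dominates the pass below
    for a, b in zip(nums, nums[1:]):  # O(n) scan of adjacent pairs
        if a == b:
            return True
    return False

print(has_duplicate([3, 1, 4, 1, 5]))  # True
```

A hash set version would be O(n) average time but O(n) extra space; stating that trade-off aloud is exactly what interviewers look for.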

Anti-Patterns

Over-engineering for hypothetical scale. Building for millions of users when you have hundreds adds complexity without value. Solve today's problems first.

Ignoring the existing ecosystem. Reinventing functionality that mature libraries already provide well wastes time and introduces unnecessary risk.

Premature abstraction. Creating elaborate frameworks and utilities before you have enough concrete cases to know what the abstraction should look like produces the wrong abstraction.

Neglecting error handling at boundaries. Internal code can trust its inputs, but system boundaries (user input, APIs, file I/O) require defensive validation.

Skipping documentation for obvious code. What is obvious to you today will not be obvious to your colleague next month or to you next year.
