
Threat Modeling in Design Reviews

Run a threat modeling session as part of the design review for any new system or significant feature.


Most security bugs are designed in, not implemented in. The architecture admits the attack; the implementation is faithful. Threat modeling is the discipline of finding the architectural admission before it ships.

A threat modeling session is a 60–90 minute structured conversation, held during design review, before implementation begins. The output is a list of identified threats, each with a mitigation status. The session does not block the design; it informs it.

When to Threat Model

Not every change needs threat modeling. Hold a session when:

  • A new system is being designed.
  • A significant feature touches authentication, authorization, payment, user data, or external integrations.
  • The system handles new types of data (PII, payment, health, child data).
  • The system is exposed to a new audience (was internal, now public).
  • The system is subject to a new compliance regime (SOC 2, PCI, HIPAA, GDPR).

For routine changes — bug fixes, refactors, internal feature additions — threat modeling is not needed. The standard code review and SAST policies cover them.
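The triggers above can be sketched as a simple checklist. This is an illustrative sketch, not a prescribed tool; the `Change` fields and the area names are assumptions chosen for the example.

```python
from dataclasses import dataclass, field

# Areas whose changes trigger a session (names are illustrative).
SENSITIVE_AREAS = {"auth", "authz", "payments", "user-data", "external-integration"}

@dataclass
class Change:
    """A hypothetical description of a proposed change."""
    new_system: bool = False
    touches: set = field(default_factory=set)          # e.g. {"auth", "payments"}
    new_data_types: set = field(default_factory=set)   # e.g. {"PII", "health"}
    newly_public: bool = False
    new_compliance: set = field(default_factory=set)   # e.g. {"PCI", "HIPAA"}

def needs_threat_model(change: Change) -> bool:
    """True if any of the five triggers from the text applies."""
    return (
        change.new_system
        or bool(change.touches & SENSITIVE_AREAS)
        or bool(change.new_data_types)
        or change.newly_public
        or bool(change.new_compliance)
    )
```

A bug fix maps to `Change()` with every field at its default, so `needs_threat_model` returns `False` and the change falls through to standard code review.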

The STRIDE Framework

STRIDE is the most widely used framework for structured threat identification. It names six categories of threat:

  • Spoofing — pretending to be someone else. Stolen credentials, fake identities, impersonation.
  • Tampering — modifying data or code without authorization.
  • Repudiation — performing an action without leaving evidence.
  • Information disclosure — exposing data to unauthorized parties.
  • Denial of service — disrupting service availability.
  • Elevation of privilege — gaining permissions beyond what was granted.

For each component or data flow in the design, walk through STRIDE. "Could this component be spoofed? Could this data be tampered with? Could this action be repudiated?" Most STRIDE questions produce no threats; the few that do are the value of the session.
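The mechanical part of the walk-through can be sketched as question generation: six prompts per element, asked by the session leader. The element names here are hypothetical.

```python
# The six STRIDE categories, in the order the text lists them.
STRIDE = [
    "spoofing",
    "tampering",
    "repudiation",
    "information disclosure",
    "denial of service",
    "elevation of privilege",
]

def stride_questions(element: str) -> list:
    """One prompt per STRIDE category for a component or data flow."""
    return [f"Could '{element}' be affected by {category}?" for category in STRIDE]

# Walk every element through all six questions; most answers are "no",
# and the few "yes" answers become recorded threats.
elements = ["API gateway", "payment DB"]
questions = [q for element in elements for q in stride_questions(element)]
```

The point of generating all the questions, rather than only the "interesting" ones, is that the coverage is exhaustive by construction; the session decides which answers matter.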

The Process

A typical threat modeling session has five phases:

1. Diagram the System

Draw the system's data flow diagram. Components, trust boundaries, data stores, external entities. Use a whiteboard or a collaborative diagram tool. The diagram is part of the artifact.

The discipline is to make trust boundaries explicit. Where does data cross from one trust zone to another? Untrusted-to-trusted is the most interesting boundary; that's where authentication, authorization, and validation live.
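Making trust boundaries explicit can be modeled as data: assign each component a trust zone and flag every flow that crosses from a lower zone to a higher one. The components, zones, and flows below are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """One edge in the data flow diagram (names are illustrative)."""
    source: str
    dest: str
    data: str

# Trust zone per component; a lower number means less trusted.
ZONES = {"browser": 0, "api": 1, "payment-service": 2, "db": 2}

def untrusted_to_trusted(flows):
    """Flows crossing from a lower to a higher trust zone -- the boundaries
    where authentication, authorization, and validation must live."""
    return [f for f in flows if ZONES[f.source] < ZONES[f.dest]]

flows = [
    Flow("browser", "api", "login form"),
    Flow("api", "payment-service", "charge request"),
    Flow("db", "payment-service", "card token"),
]
crossings = untrusted_to_trusted(flows)
```

Note that the `db` to `payment-service` flow does not appear in `crossings`: both sit in the same zone, so no boundary is crossed.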

2. Identify Assets

What are the assets the system protects? User data, payment data, internal credentials, system availability, intellectual property. Each asset has a value (sensitivity, regulatory weight, business criticality).

Most threat-model misses come from not naming the asset. The team designs a feature, evaluates its threats, and forgets that this feature is the new protector of the customer's payment information. Naming assets up front anchors the rest of the analysis.

3. Walk Through STRIDE

For each component and data flow, walk through STRIDE. The session leader asks the questions; the room responds. Threats that come up are written down. False alarms are written down too — sometimes a threat is identified, considered, and dismissed; the dismissal is recorded so the question doesn't come up again later.

4. Rank Threats

Each threat has a severity, derived from:

  • Impact — what happens if the threat is realized? Data breach? Service outage? Compliance violation?
  • Likelihood — how plausible is the attack? Easy and common, or theoretical and rare?

A simple two-axis matrix produces severity buckets: critical, high, medium, low. Most threats end up medium or low. Critical and high are the ones the design must address.
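One plausible shape for the two-axis matrix, with the axis labels and bucket assignments as assumptions (teams calibrate these for themselves):

```python
# Axis positions for the two inputs.
IMPACT = {"low": 0, "medium": 1, "high": 2}
LIKELIHOOD = {"rare": 0, "plausible": 1, "common": 2}

# Severity bucket indexed by [impact][likelihood].
MATRIX = [
    ["low",    "low",    "medium"],    # low impact
    ["low",    "medium", "high"],      # medium impact
    ["medium", "high",   "critical"],  # high impact
]

def severity(impact: str, likelihood: str) -> str:
    """Map an (impact, likelihood) pair to a severity bucket."""
    return MATRIX[IMPACT[impact]][LIKELIHOOD[likelihood]]
```

The matrix reflects the text's observation: most cells are medium or low, and only the high-impact, high-likelihood corner is critical.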

5. Plan Mitigations

For each significant threat, decide on a mitigation:

  • Reduce — change the design to reduce the threat (move data behind a stricter trust boundary; add an additional validation; encrypt at rest).
  • Transfer — accept the threat but route the risk elsewhere (a vendor's responsibility under their contract; insurance).
  • Accept — the threat exists, the mitigation cost exceeds the risk, and the team explicitly accepts.
  • Avoid — change the design so the threat doesn't apply (don't store the data; don't expose the endpoint).

Each decision is recorded with rationale. The artifact lives in the design doc.
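A recorded decision can be as small as one structured row per threat. This is a sketch of what such a record might hold; the example threat and field values are invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Mitigation(Enum):
    REDUCE = "reduce"      # change the design to reduce the threat
    TRANSFER = "transfer"  # route the risk elsewhere (vendor, insurance)
    ACCEPT = "accept"      # explicitly accept; cost exceeds risk
    AVOID = "avoid"        # change the design so the threat doesn't apply

@dataclass
class ThreatDecision:
    """One row of the threat list: what was found, what was decided, and why."""
    threat: str
    stride: str                   # one of S, T, R, I, D, E
    severity: str                 # from the ranking step
    mitigation: Mitigation
    rationale: str                # recorded so the decision survives the meeting
    owner: Optional[str] = None   # required while the threat stays open

decision = ThreatDecision(
    threat="Replay of signed webhook requests",
    stride="S",
    severity="high",
    mitigation=Mitigation.REDUCE,
    rationale="Add a nonce and a short timestamp window to webhook verification.",
    owner="payments-team",
)
```

The `rationale` field is the important one: it is what lets a later reviewer see why "accept" was chosen without re-running the meeting.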

The Artifact

The threat model produces a written artifact:

  • The system diagram (data flow, trust boundaries).
  • Asset list with sensitivity levels.
  • Threat list, organized by component, with STRIDE category, severity, and mitigation.
  • Open items: threats not fully mitigated, with owner and target date.
  • Sign-off: who reviewed and approved the model.

The artifact lives next to the design doc. It's referenced when the system is implemented; the implementer knows what threats the design accounted for. It's referenced in code review; the reviewer can check that mitigations were implemented. It's referenced in operations; the on-call engineer knows what attacks are most likely.

The Attack Tree

For high-stakes systems, complement STRIDE with attack trees. An attack tree is a goal-directed decomposition: pick an attacker goal (steal customer payment data; disrupt service for a region; impersonate a user) and decompose into the steps an attacker would take.

Attack trees catch threats that STRIDE misses. STRIDE is component-by-component; attack trees are end-to-end. An attacker doesn't care that your data flow diagram has six components; they care about the goal and the path to it.

For each attack tree, identify which steps your defenses cover. The uncovered steps are where the design needs hardening.
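The coverage check at the end can be sketched as a walk over the tree: mark each step with whether a defense covers it, then list what's left. The goal, steps, and coverage flags below are hypothetical.

```python
# An attack tree as a goal plus the steps an attacker needs to reach it.
TREE = {
    "goal": "steal customer payment data",
    "steps": [
        {"name": "obtain a valid session",  "covered": True},   # MFA, short sessions
        {"name": "reach the payment service", "covered": True}, # network segmentation
        {"name": "read card tokens in bulk", "covered": False}, # no rate limit yet
    ],
}

def uncovered(tree) -> list:
    """Steps with no defense -- where the design needs hardening."""
    return [step["name"] for step in tree["steps"] if not step["covered"]]
```

The end-to-end nature of the tree is what makes this useful: two covered steps out of three still leave a viable path if the uncovered step is the one that matters.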

The Limits of the Session

A 90-minute threat modeling session does not produce a complete threat model. It produces the team's best-effort at structured threat identification, given the time available. There will be threats not identified. There will be threats identified but not fully mitigated.

This is okay. The session is a forcing function for explicit thinking, not a guarantee. The artifact says what was considered; subsequent reviews (penetration tests, ongoing monitoring, incident response) catch what the session missed.

When to Re-Model

The threat model is a living document. Re-do it when:

  • The system architecture changes significantly.
  • New data types are introduced.
  • The audience changes (private to public, internal to external).
  • A vulnerability of the same class is found in production (the threat model missed it; the model needs updating).
  • Annually, as a calibration exercise.

Don't re-do it for every small change. The friction of re-modeling on every change drives teams to skip it entirely.

Anti-Patterns

Threat modeling as a one-time event. The design changes; the threats change; the model is stale. Re-model when significant changes happen.

STRIDE without diagram. The team walks through STRIDE on a verbal description. Components and trust boundaries are missed. Draw the diagram.

No severity ranking. Every threat is treated equally. Critical and low both get the same scrutiny. Rank.

No mitigation owner. Threats are identified but no one is assigned to mitigate them. Each open threat has an owner and a date.

Security team alone. The threat modeling session is run by security in isolation. The engineers who will build the system aren't there. The model is theoretical. Run with the building team.

Threat model in a separate doc no one reads. Keep it in or next to the design doc. Reference it in code review.
