
Core Web Vitals

Optimize Largest Contentful Paint, First Input Delay, and Cumulative Layout Shift for better user experience and search rankings.

Core Web Vitals — Performance Optimization

You are an expert in Core Web Vitals for optimizing application performance.

Overview

Core Web Vitals are a set of three metrics defined by Google that measure real-world user experience: Largest Contentful Paint (LCP), Interaction to Next Paint (INP), and Cumulative Layout Shift (CLS). INP replaced First Input Delay (FID) as the responsiveness metric in March 2024, though FID thresholds still appear in older tooling. These metrics directly influence search rankings and user satisfaction.

Core Concepts

Largest Contentful Paint (LCP)

  • Measures loading performance — the time until the largest visible content element renders.
  • Good: <= 2.5 seconds | Needs Improvement: <= 4.0s | Poor: > 4.0s
  • Common culprits: unoptimized hero images, render-blocking resources, slow server response times.
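The raw entry behind this metric can be watched directly with a PerformanceObserver. The sketch below is a minimal illustration (observeLCP is a hypothetical helper name, not a library API), guarded so it no-ops where the largest-contentful-paint entry type is unavailable:

```javascript
// Report LCP candidates as the browser discovers larger elements.
// Returns null in environments without this entry type (e.g. Node).
function observeLCP(report) {
  if (typeof PerformanceObserver === 'undefined' ||
      !PerformanceObserver.supportedEntryTypes.includes('largest-contentful-paint')) {
    return null;
  }
  const observer = new PerformanceObserver(list => {
    const entries = list.getEntries();
    const latest = entries[entries.length - 1]; // current largest candidate
    report({ value: latest.startTime, element: latest.element });
  });
  // buffered: true replays entries that fired before observe() was called
  observer.observe({ type: 'largest-contentful-paint', buffered: true });
  return observer;
}
```

In production the web-vitals library handles edge cases this sketch ignores, such as backgrounded tabs and back/forward-cache restores.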

Interaction to Next Paint (INP) / First Input Delay (FID)

  • Measures interactivity — the delay between user input and the browser's response.
  • Good (FID): <= 100ms | Good (INP): <= 200ms
  • Common culprits: long JavaScript tasks, heavy main-thread work, large DOM size.

Cumulative Layout Shift (CLS)

  • Measures visual stability — unexpected layout shifts during the page lifecycle.
  • Good: <= 0.1 | Needs Improvement: <= 0.25 | Poor: > 0.25
  • Common culprits: images without dimensions, dynamically injected content, web fonts causing FOIT/FOUT.
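The shifts feeding this score surface as layout-shift entries; summing those not caused by recent input approximates the raw value (the real metric takes the worst 5-second session window). A simplified sketch, with a hypothetical observeLayoutShifts helper:

```javascript
// Accumulate layout-shift values, excluding shifts within 500ms of
// user input (those do not count toward CLS). Simplified: a running
// total rather than the session-window calculation the metric uses.
function observeLayoutShifts(onShift) {
  if (typeof PerformanceObserver === 'undefined' ||
      !PerformanceObserver.supportedEntryTypes.includes('layout-shift')) {
    return null;
  }
  let total = 0;
  const observer = new PerformanceObserver(list => {
    for (const entry of list.getEntries()) {
      if (!entry.hadRecentInput) {
        total += entry.value;
        onShift(total, entry.sources); // sources identify the shifted elements
      }
    }
  });
  observer.observe({ type: 'layout-shift', buffered: true });
  return observer;
}
```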

Implementation Patterns

Improving LCP — Preload Critical Resources

Before:

<!-- Image discovered late by the browser -->
<img src="/hero.webp" alt="Hero" />

After:

<head>
  <link rel="preload" as="image" href="/hero.webp" fetchpriority="high" />
</head>
<body>
  <img src="/hero.webp" alt="Hero" fetchpriority="high" />
</body>

Improving INP — Break Up Long Tasks

Before:

button.addEventListener('click', () => {
  processLargeDataset(data);   // blocks main thread for 400ms
  updateUI();
});

After:

button.addEventListener('click', async () => {
  updateUI();  // visual feedback first
  await yieldToMain();
  processLargeDataset(data);
});

function yieldToMain() {
  return new Promise(resolve => setTimeout(resolve, 0));
}
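Where available, Chromium's scheduler.yield() resumes the continuation with higher priority than a setTimeout fallback. A feature-detected variant of the helper above (a sketch; behavior outside Chromium falls back to a plain macrotask):

```javascript
// Prefer scheduler.yield() (Chromium 129+); elsewhere, queue a
// macrotask via setTimeout so pending input can be handled first.
function yieldToMain() {
  if (typeof scheduler !== 'undefined' && typeof scheduler.yield === 'function') {
    return scheduler.yield();
  }
  return new Promise(resolve => setTimeout(resolve, 0));
}
```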

Improving CLS — Reserve Space for Dynamic Content

Before:

<img src="/photo.jpg" alt="Photo" />

After:

<img src="/photo.jpg" alt="Photo" width="800" height="600" />

Modern browsers derive the image's intrinsic aspect ratio from the width and height attributes, so space is reserved before the image loads. (CSS like aspect-ratio: attr(width) / attr(height) is not supported; the attributes alone do the job.) Keep the image responsive with:

img {
  width: 100%;
  height: auto;
}

Measurement & Monitoring

  • Lab tools: Lighthouse, Chrome DevTools Performance panel, WebPageTest.
  • Field data: Chrome User Experience Report (CrUX), web-vitals JS library, Google Search Console.
  • Continuous monitoring: Integrate the web-vitals library to report metrics to your analytics endpoint:

import { onLCP, onINP, onCLS } from 'web-vitals';

function sendToAnalytics(metric) {
  navigator.sendBeacon('/analytics', JSON.stringify(metric));
}

onLCP(sendToAnalytics);
onINP(sendToAnalytics);
onCLS(sendToAnalytics);
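Beaconing each metric individually can lose data when the page unloads. A common pattern, recommended in the web-vitals documentation, is to queue metrics and flush them in a single beacon when the page is hidden; the sketch below assumes a placeholder /analytics endpoint:

```javascript
// Batch metrics and send them in one beacon when the tab is hidden.
const queue = new Set();

function addToQueue(metric) {
  queue.add(metric);
}

function flushQueue() {
  if (queue.size === 0) return;
  const body = JSON.stringify([...queue]);
  // sendBeacon survives page unload; guarded for non-browser environments.
  if (typeof navigator !== 'undefined' && typeof navigator.sendBeacon === 'function') {
    navigator.sendBeacon('/analytics', body);
  }
  queue.clear();
}

if (typeof document !== 'undefined') {
  document.addEventListener('visibilitychange', () => {
    if (document.visibilityState === 'hidden') flushQueue();
  });
}
```

Wire it up by passing addToQueue instead of sendToAnalytics: onLCP(addToQueue); onINP(addToQueue); onCLS(addToQueue).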

Best Practices

  • Prioritize field data (real users) over lab data when making optimization decisions.
  • Set performance budgets for LCP, INP, and CLS and enforce them in CI/CD pipelines.
  • Use fetchpriority="high" on the LCP element and loading="lazy" on below-the-fold images.

Common Pitfalls

  • Optimizing only for lab scores while ignoring real-user metrics from CrUX, which reflect actual device and network conditions.
  • Injecting ads, banners, or cookie consent bars without reserving layout space, causing large CLS regressions.

Anti-Patterns

Over-engineering for hypothetical scale. Building for millions of users when you have hundreds adds complexity without value. Solve today's problems first.

Ignoring the existing ecosystem. Reinventing functionality that mature libraries already provide well wastes time and introduces unnecessary risk.

Premature abstraction. Creating elaborate frameworks and utilities before you have enough concrete cases to know what the abstraction should look like produces the wrong abstraction.

Neglecting error handling at boundaries. Internal code can trust its inputs, but system boundaries (user input, APIs, file I/O) require defensive validation.

Skipping documentation for obvious code. What is obvious to you today will not be obvious to your colleague next month or to you next year.
