Web Performance Tactics Cheatsheet

Quick-reference cheatsheet covering business impact, Core Web Vitals thresholds, three types of performance data, optimization tactics organized by metric, and local development setup.

Most of the material here comes from Todd Gardner’s Web Performance v2 course on Frontend Masters.

Numbers and metrics are a convenient way to track progress, but the real goal is a great user experience as perceived by humans; metrics are just the closest proxy we can measure.


Why Performance Matters

Performance isn’t a technical nice-to-have. It has direct, measurable impact on business outcomes and search ranking.

User behavior:

SEO and Google ranking:

Google’s search ranking incorporates Core Web Vitals through its page experience signals. A site that fails CWV thresholds can be penalized in search results. This makes performance a ranking factor, not just a UX factor. For competitive keywords, being fast can be the tiebreaker.

Weber’s Law (the 20% Rule):

To be perceived as faster than a competitor, you need to be roughly 20% faster. Users don’t notice small differences. This applies to any perceivable change — not just load time.
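As a rough rule of thumb, the 20% heuristic can be encoded as a simple check (the function name and threshold default are illustrative, not from the course):

```javascript
// Weber's Law sketch: a change is perceivable when the relative
// improvement crosses roughly 20%.
function isPerceivablyFaster(oldMs, newMs, threshold = 0.2) {
  return (oldMs - newMs) / oldMs >= threshold;
}

console.log(isPerceivablyFaster(2000, 1700)); // 15% faster: likely unnoticed
console.log(isPerceivablyFaster(2000, 1500)); // 25% faster: perceivable
```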

Business metrics:

Business metrics can be extracted from web analytics tools. They help you understand the user journey within a product: engagement, retention, and feature adoption, measured through session duration, bounce/entry/exit rates, and user environment data such as device type, screen resolution, and location.

This is a complex topic that deserves a dedicated post. At a high level, though, these metrics provide value when you analyze correlations between Core Web Vitals and key business outcomes.

An important note: correlation does not imply causation. However, when interpreted carefully, correlations with Core Web Vitals can reveal meaningful patterns and provide useful directional insights.


Three Principles

Three rules from Todd Gardner’s course:

  1. First things first. Fix the easiest thing on the worst metric using real user data. Don’t optimize what isn’t broken.
  2. Last things never. You can’t fix everything. Every optimization costs time and resources — sometimes “fast enough” is the right target.
  3. Do fewer things. The fastest DOM node is the one you never create.

Core Web Vitals

Google uses p75 (75th percentile) to determine scores. A site “passes” if 75% of visits fall within the “good” range. Not the average — the percentile.

Percentiles, Not Averages

If you’re not familiar with percentiles — they solve the problem that averages hide. If 90% of users load in 1 second and 10% load in 20 seconds, the average says 2.9 seconds. That describes nobody’s actual experience.

p75 means 75% of your users are faster than this value and 25% are slower. p95 catches your worst-off users — often on slow networks or old hardware.
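The example above can be checked with a small nearest-rank percentile function (a sketch; real RUM tools use more refined estimators over much larger samples):

```javascript
// Nearest-rank percentile: the value below which roughly p% of samples fall.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[idx];
}

// 90 users load in 1 second, 10 users load in 20 seconds.
const samples = [...Array(90).fill(1), ...Array(10).fill(20)];
const avg = samples.reduce((a, b) => a + b, 0) / samples.length;

console.log(avg);                     // 2.9 — describes nobody
console.log(percentile(samples, 75)); // 1  — most users are fine
console.log(percentile(samples, 95)); // 20 — the slow tail
```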

Thresholds

| Metric | Good | Needs Improvement | Poor |
| --- | --- | --- | --- |
| TTFB (Time to First Byte) | ≤ 800ms | ≤ 1800ms | > 1800ms |
| FCP (First Contentful Paint) | ≤ 1.8s | ≤ 3.0s | > 3.0s |
| LCP (Largest Contentful Paint) | ≤ 2.5s | ≤ 4.0s | > 4.0s |
| INP (Interaction to Next Paint) | ≤ 200ms | ≤ 500ms | > 500ms |
| CLS (Cumulative Layout Shift) | ≤ 0.1 | ≤ 0.25 | > 0.25 |
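The thresholds translate directly into a small rating helper (a sketch; the values mirror the table, with time metrics expressed in milliseconds):

```javascript
// Core Web Vitals thresholds. Units: ms for TTFB/FCP/LCP/INP, unitless for CLS.
const THRESHOLDS = {
  TTFB: { good: 800,  poor: 1800 },
  FCP:  { good: 1800, poor: 3000 },
  LCP:  { good: 2500, poor: 4000 },
  INP:  { good: 200,  poor: 500 },
  CLS:  { good: 0.1,  poor: 0.25 },
};

// Rate a single measurement against its thresholds.
function rate(metric, value) {
  const t = THRESHOLDS[metric];
  if (value <= t.good) return 'good';
  if (value <= t.poor) return 'needs improvement';
  return 'poor';
}

console.log(rate('LCP', 2400)); // 'good'
console.log(rate('INP', 350));  // 'needs improvement'
console.log(rate('CLS', 0.3));  // 'poor'
```

Remember that Google applies this rating to the p75 value, not to individual visits.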

What Each Metric Captures

Browser Caveat

Initially these metrics were available only in Chrome. Since late 2025, however, the latest versions of Firefox and Safari also support the LCP and INP metrics. You can check current support on caniuse.com.

Field data is shaped by each browser’s audience: the platform (mobile vs. desktop), the geographic distribution (mobile Safari data skews toward regions where Apple devices are common), and whether the browser reports CWV metrics at all.
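These metrics are exposed through the Performance API. A minimal sketch of reading LCP candidates, guarded so it is a no-op where the entry type isn’t supported:

```javascript
// Format a DOMHighResTimeStamp for logging (illustrative helper).
function formatMs(value) {
  return `${Math.round(value)}ms`;
}

// Browser-only: each new LCP candidate fires an entry; the last entry
// before the first user interaction is the final LCP.
if (
  typeof PerformanceObserver !== 'undefined' &&
  PerformanceObserver.supportedEntryTypes.includes('largest-contentful-paint')
) {
  new PerformanceObserver((list) => {
    const entries = list.getEntries();
    const latest = entries[entries.length - 1];
    console.log('LCP candidate:', formatMs(latest.startTime));
  }).observe({ type: 'largest-contentful-paint', buffered: true });
}
```

In practice, Google’s `web-vitals` library handles the edge cases (tab visibility, back/forward cache) for you.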


Three Types of Performance Data

The strategy: field data first, lab data for diagnostics.

| Aspect | RUM (Real User Monitoring) | Lab Data | Synthetic Test (Lighthouse, WebPageTest) |
| --- | --- | --- | --- |
| Source | Real users, real devices | Controlled environment (your machine) | Automated script in simulated environment |
| Environment | Unpredictable: network, hardware, geography | Semi-controlled (local) | Fully controlled (throttling, emulation) |
| Reproducible? | No — varies per user | Partially | Yes — same conditions every run |
| Metrics | Core Web Vitals from real users (CrUX, RUM tools) | Lighthouse on dev machine, DevTools | Lighthouse CI, WebPageTest, PageSpeed Insights |
| Best for | Understanding real-world UX | Debugging issues locally | Benchmarking, regression tests |
| Limitations | Harder to isolate issues | Can differ from real-world | Not representative of actual user experience |

When to use each:


Optimization Tactics by Metric

Improve TTFB (Server & Network)
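One standard TTFB lever is HTTP caching, so repeat requests can be served from the browser cache or a CDN edge without hitting the origin. A minimal sketch (the helper name, max-age value, and stub response are illustrative, not from the course):

```javascript
// Set caching headers so responses can be reused by the browser and CDN.
function setCacheHeaders(res, maxAgeSeconds = 3600) {
  // Cache publicly for an hour; serve stale while revalidating in the background.
  res.setHeader(
    'Cache-Control',
    `public, max-age=${maxAgeSeconds}, stale-while-revalidate=60`
  );
  return res;
}

// Stub response object so the sketch runs outside a real server.
const res = {
  headers: {},
  setHeader(name, value) { this.headers[name] = value; },
};
setCacheHeaders(res);
console.log(res.headers['Cache-Control']);
```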

Improve FCP (Rendering)

Improve LCP

LCP depends on TTFB and FCP — improving those improves LCP automatically.

Improve INP (Interactivity)
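A common INP tactic is breaking long tasks into chunks and yielding to the main thread between them, so input events can be handled promptly. A sketch (helper names are illustrative, not from the course):

```javascript
// Pure helper: split work items into fixed-size batches.
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Process batches, yielding to the event loop between them so a pending
// click or keypress doesn't wait for the whole workload to finish.
async function processInChunks(items, handle, size = 50) {
  for (const batch of chunk(items, size)) {
    batch.forEach(handle);
    await new Promise((resolve) => setTimeout(resolve, 0)); // yield to input
  }
}
```

In browsers that support it, `scheduler.yield()` is a more precise way to yield than `setTimeout(0)`.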

Improve CLS (Layout Stability)
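CLS is, in essence, a sum of layout-shift scores that excludes shifts occurring within 500ms of user input. A simplified sketch that ignores CLS’s session-windowing rules (helper name is illustrative):

```javascript
// Sum layout-shift scores, skipping shifts caused by recent user input
// (those don't count toward CLS).
function cumulativeShift(entries) {
  return entries
    .filter((entry) => !entry.hadRecentInput)
    .reduce((sum, entry) => sum + entry.value, 0);
}

// Browser-only: observe layout shifts as they happen. Guarded so the
// sketch is a no-op where the entry type isn't supported (e.g. Node).
if (
  typeof PerformanceObserver !== 'undefined' &&
  PerformanceObserver.supportedEntryTypes.includes('layout-shift')
) {
  let cls = 0;
  new PerformanceObserver((list) => {
    cls += cumulativeShift(list.getEntries());
    console.log('CLS so far:', cls.toFixed(3));
  }).observe({ type: 'layout-shift', buffered: true });
}
```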


Perception

When you can’t get faster, feel faster.


Local Development Setup

Synthetic tests on your dev machine will always be faster than production. The gap is consistent: typically 33–50% faster locally. Use that ratio to derive local targets.

If you have RUM data: calculate target_local = production_target / 1.5. For a 2.5s LCP production target, aim for < 1.7s locally.

If you don’t have RUM data: use Google’s CWV thresholds as targets directly and treat local results as optimistic estimates.
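The conversion is a one-liner (the 1.5 divisor is the ratio described above; pass 1.0 when using Google’s thresholds directly):

```javascript
// Derive a local (dev-machine) target from a production target,
// assuming local runs are ~1.5x faster than production. Values in ms.
function localTarget(productionTargetMs, speedupFactor = 1.5) {
  return productionTargetMs / speedupFactor;
}

console.log(localTarget(2500)); // ≈1667ms: aim under ~1.7s locally
```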

Realistic Local Testing

  1. Open DevTools in a separate window (don’t shrink the viewport) and use Incognito mode to avoid overhead from extensions.
  2. Enable Responsive Mode — use a realistic device (e.g., 1366×768 at 1× density for a “small laptop”).
  3. Set Network throttling to a realistic profile.
  4. Set CPU throttling to 4× slowdown in the Performance panel.
  5. Run Lighthouse with the desktop preset.
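Steps 3–5 can also be scripted. A sketch of an equivalent Lighthouse CLI run (flag names follow the Lighthouse CLI; verify them against your installed version, and the URL and output path are placeholders):

```shell
# Desktop preset with a 4x CPU slowdown, roughly matching the DevTools setup.
npx lighthouse https://example.com \
  --preset=desktop \
  --throttling.cpuSlowdownMultiplier=4 \
  --output=html --output-path=./report.html
```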

Tools


References