Web Performance Beyond Core Web Vitals: Metrics That Predict Churn
Core Web Vitals matter, but they don't tell the full story. We examine the metrics that actually correlate with user retention and revenue impact.
Core Web Vitals get the headlines. But if you're optimizing only for LCP, INP, and CLS, you're missing the metrics that actually predict whether users stick around—or leave for your competitor.
We've spent the last two years analyzing performance data across dozens of client sites, and the pattern is clear: the metrics that move the retention needle aren't always the ones Google ranks. Some are older measurements we've forgotten about. Others are behaviors we've never instrumented.
The Retention Metrics Google Doesn't Measure
Core Web Vitals correlate with user experience, but they don't correlate perfectly with churn. Here's why: a site can pass all three thresholds and still frustrate users. A page might load in 2.5 seconds (good LCP) but then become unresponsive for three seconds while JavaScript initializes. A form might have zero layout shift but require four submissions because of network flakiness.
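The scenario above can be made concrete with a small sketch. The `SessionMetrics` shape, the `blockedAfterLoadMs` field, and the frustration cutoffs are our own illustration (only the CWV thresholds are Google's published "good" values), assuming you already collect these per-session numbers:

```typescript
// Illustrative session shape — not a standard API
interface SessionMetrics {
  lcpMs: number;              // Largest Contentful Paint
  inpMs: number;              // Interaction to Next Paint
  cls: number;                // Cumulative Layout Shift
  blockedAfterLoadMs: number; // main-thread blocking after load (hypothetical field)
  formRetries: number;        // submissions needed to complete a form (hypothetical field)
}

// Google's published "good" thresholds for the three Core Web Vitals
const passesCoreWebVitals = (m: SessionMetrics): boolean =>
  m.lcpMs <= 2500 && m.inpMs <= 200 && m.cls <= 0.1;

// A session can pass CWV yet still read as frustrating (cutoffs are illustrative)
const isFrustrating = (m: SessionMetrics): boolean =>
  m.blockedAfterLoadMs > 3000 || m.formRetries > 1;

const session: SessionMetrics = {
  lcpMs: 2400, inpMs: 180, cls: 0.05,
  blockedAfterLoadMs: 3200, formRetries: 4,
};
console.log(passesCoreWebVitals(session), isFrustrating(session)); // → true true
```

The point is that the two predicates are independent: a green Lighthouse score says nothing about the second one.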
Time to Interactive (TTI) Still Matters
TTI—the moment when a page becomes fully interactive—never made the Core Web Vitals cut and has since been dropped from Lighthouse's performance score entirely, but our data shows it's one of the strongest predictors of whether users complete critical actions.
```typescript
// Measure when the page is truly usable: the Event Timing API reports
// the first user input as a 'first-input' entry
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.entryType === 'first-input') {
      const tti = entry.startTime; // when the first interaction occurred
      analytics.track('time_to_interactive', {
        duration: tti,
        sessionId: getCurrentSessionId()
      });
    }
  }
});
// 'buffered' replays entries that fired before the observer was created
observer.observe({ type: 'first-input', buffered: true });
```
We recommend tracking the gap between your LCP and when users can actually interact with key page elements. Sites where this gap exceeds 1.5 seconds see measurably higher bounce rates on conversion pages.
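A minimal sketch of that gap check. In the field, `lcpMs` would come from a `largest-contentful-paint` performance entry and `firstInteractableMs` from something like a `first-input` entry; both parameter names and the helper itself are our own, with the 1.5-second cutoff taken from the paragraph above:

```typescript
// Flag sessions where the gap between LCP and the first moment the user
// could actually interact exceeds the bounce-risk threshold
const BOUNCE_RISK_GAP_MS = 1500;

function lcpToInteractiveGap(lcpMs: number, firstInteractableMs: number) {
  // Clamp to zero: an interaction can land before LCP on fast pages
  const gapMs = Math.max(0, firstInteractableMs - lcpMs);
  return { gapMs, bounceRisk: gapMs > BOUNCE_RISK_GAP_MS };
}

console.log(lcpToInteractiveGap(2000, 3800)); // → { gapMs: 1800, bounceRisk: true }
```

Segmenting this flag by page type is what surfaces the conversion-page effect: the same gap that's tolerable on an article page is not tolerable on a checkout form.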
Error Rate and Network Resilience
CWV doesn't measure failures—only the happy path. But users don't always have happy paths.
JavaScript Error Frequency
Pages that throw JavaScript errors in the first 10 seconds of load lose users at twice the rate of error-free pages, regardless of other performance metrics. This is especially true on mobile.
```typescript
window.addEventListener('error', (event) => {
  analytics.track('js_error', {
    message: event.message,
    filename: event.filename,
    lineno: event.lineno,
    timestamp: performance.now(),
    // Track whether this error prevents interaction
    blocking: isBlockingError(event)
  });
});

window.addEventListener('unhandledrejection', (event) => {
  analytics.track('unhandled_promise_rejection', {
    reason: String(event.reason),
    timestamp: performance.now()
  });
});
```
Network Request Failure Rate
Track the percentage of API calls that fail or time out. A site with 98% request success looks fine in aggregate—until you segment by user cohort and find that 8% of users experience repeated failures.
```python
def track_api_health(endpoint, duration_ms, status_code, user_id):
    """Log API performance by user to detect cohort-level issues."""
    is_error = status_code >= 400
    is_timeout = duration_ms > 8000

    metrics.gauge('api_request_duration', duration_ms, tags=[
        f'endpoint:{endpoint}',
        f'status:{status_code}',
        f'user_segment:{get_user_segment(user_id)}',
    ])

    if is_error or is_timeout:
        metrics.increment('api_failure_count', tags=[
            f'endpoint:{endpoint}',
            f'user_id:{user_id}',
        ])
```
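The cohort effect described above is easy to miss in averages, so here is a sketch of the segmentation step itself. The `RequestLog` shape and the 20% per-user failure cutoff are our own assumptions:

```typescript
// Aggregate request outcomes per user to find the cohort experiencing
// repeated failures, even when the global success rate looks healthy
interface RequestLog {
  userId: string;
  ok: boolean; // false for HTTP >= 400 or timeout
}

// Returns the fraction of users whose personal failure rate exceeds the cutoff
function failingUserShare(logs: RequestLog[], failureRateCutoff = 0.2): number {
  const byUser = new Map<string, { total: number; failed: number }>();
  for (const { userId, ok } of logs) {
    const s = byUser.get(userId) ?? { total: 0, failed: 0 };
    s.total += 1;
    if (!ok) s.failed += 1;
    byUser.set(userId, s);
  }
  let failingUsers = 0;
  for (const s of byUser.values()) {
    if (s.failed / s.total > failureRateCutoff) failingUsers += 1;
  }
  return byUser.size === 0 ? 0 : failingUsers / byUser.size;
}
```

With a 97% aggregate success rate, this function can still report that 10% of users are stuck in a failure loop—which is exactly the number the aggregate hides.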
Session Stability and Repeated Frustration
Here's what surprised us most: users don't churn because of one bad session. They churn because of patterns.
A user who experiences slow performance once might return. A user who experiences it consistently in the same feature? They're gone. Track the variance in performance metrics within individual user sessions, and across sessions for the same user.
```sql
-- Compare performance variance across user cohorts
-- Users with high performance variance show 3x higher churn
select
  user_id,
  avg(lcp_ms) as avg_lcp,
  stddev(lcp_ms) as lcp_stddev,
  count(*) as sessions
from performance_events
where user_id in (select id from users where churned = true)
group by user_id
having stddev(lcp_ms) > 1000
order by lcp_stddev desc;
```
What to Do Monday Morning
Start measuring TTI, JavaScript errors, API failure rates, and performance variance within user sessions. These metrics won't show up in your Lighthouse report. But they'll show up in your churn numbers.
At LavaPi, we've found that teams who instrument these metrics alongside Core Web Vitals catch performance regressions that would otherwise slip into production. They also identify which performance problems actually matter to their users—and which are noise.
Core Web Vitals are a floor, not a ceiling. The metrics that predict churn are often the ones you'll only find by looking at your actual user data.
LavaPi Team
Digital Engineering Company