Edge Caching Personalized Content in Next.js
Serve personalized content at the edge without sacrificing cache hit rates. Learn practical strategies for Next.js apps that balance speed with relevance.
Personalization and caching usually live on opposite ends of a spectrum. Cache aggressively, and everyone sees the same content. Personalize everything, and your edge nodes become useless. But in 2024, you don't have to choose—you can do both.
Next.js 13+ with App Router and Vercel's edge infrastructure (or self-hosted solutions) gives you the tools to serve personalized content from edge locations while maintaining strong cache hit rates. The key is understanding when to cache and when to compute.
Understanding Edge Cache Segments
Edge caching in Next.js works through response headers and route configuration. The critical header is `Cache-Control`:

```typescript
// app/dashboard/page.tsx
export const revalidate = 3600; // ISR: revalidate every hour

export default function Dashboard() {
  return <div>User-specific content</div>;
}
```
But static revalidation breaks personalization—you'll cache a response for one user and serve it to another. Instead, use dynamic segments with request headers:
```typescript
// app/api/content/route.ts
import { NextRequest, NextResponse } from 'next/server';

export async function GET(request: NextRequest) {
  const userId = request.headers.get('x-user-id');
  const region = request.geo?.country || 'US';

  const content = await fetchPersonalizedContent(userId, region);

  const response = NextResponse.json(content);
  // Cache by user + region, not globally
  response.headers.set(
    'Cache-Control',
    'private, max-age=300' // 5 minutes, per-user
  );
  return response;
}
```
The `private` directive keeps the response out of shared caches, so it's stored per-user in the browser rather than served to other users from the edge.
Segment-Based Caching
Cache Keys Beyond URLs
Your cache key should include more than just the URL. Use headers or cookies to segment requests:
```typescript
// middleware.ts
import { NextRequest, NextResponse } from 'next/server';

export function middleware(request: NextRequest) {
  const response = NextResponse.next();

  // Add cache key segments
  const tier = request.cookies.get('user-tier')?.value || 'free';
  const theme = request.cookies.get('theme')?.value || 'light';

  response.headers.set('x-cache-segment', `${tier}:${theme}`);
  return response;
}
```
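If your CDN honors the standard `Vary` header on custom header names (support differs by provider), a downstream handler can ask shared caches to store one variant per segment value. A minimal sketch using the Web-standard `Request`/`Response` (which Next's `NextRequest`/`NextResponse` extend); the handler name and the `free:light` fallback are illustrative:

```typescript
// Sketch: vary the cached response on the middleware's segment header.
function segmentedResponse(request: Request): Response {
  const segment = request.headers.get('x-cache-segment') ?? 'free:light';
  const [tier, theme] = segment.split(':');

  return new Response(JSON.stringify({ tier, theme }), {
    headers: {
      'Content-Type': 'application/json',
      'Cache-Control': 'public, s-maxage=300',
      // Ask shared caches to store one variant per segment value
      'Vary': 'x-cache-segment',
    },
  });
}
```

The same response can then be cached publicly because the segment, not the individual user, is the cache dimension.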
On your edge infrastructure, use these headers when constructing cache keys. Vercel also exposes per-request geolocation headers, which pair well with regional segmentation:
```typescript
// app/layout.tsx
import { headers } from 'next/headers';

export default function Layout({ children }: { children: React.ReactNode }) {
  // Vercel sets this header on requests served through its edge network
  const country = headers().get('x-vercel-ip-country') ?? 'US';

  return (
    <html lang={getLanguageForCountry(country)}>
      <body>{children}</body>
    </html>
  );
}
```
Stale-While-Revalidate Pattern
Let edges serve stale content briefly while revalidating in the background:
```typescript
// app/feed/page.tsx
export const revalidate = 60; // Revalidate every 60 seconds

export default async function Feed() {
  const posts = await getPosts();
  return <FeedContent posts={posts} />;
}
```

Note that `generateMetadata` returns page metadata, not response headers, so it can't set `Cache-Control`. For pages, the `revalidate` export is what drives edge caching; in a route handler, set the header explicitly: `'Cache-Control': 'public, s-maxage=60, stale-while-revalidate=120'`.
This lets edges serve cached content for up to 2 minutes after expiry while quietly fetching fresh data. Users see something instantly; the cache refreshes without blocking.
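That header can be composed explicitly on an API response. A minimal sketch using the Web-standard `Response` (which `NextResponse` extends); the helper name `buildSwrHeader` and the `feedResponse` wrapper are illustrative:

```typescript
// Hypothetical helper: compose a stale-while-revalidate Cache-Control value.
function buildSwrHeader(sMaxAge: number, staleWhileRevalidate: number): string {
  return `public, s-maxage=${sMaxAge}, stale-while-revalidate=${staleWhileRevalidate}`;
}

// Sketch of a feed response: fresh for 60s, then servable stale
// for a further 120s while the edge revalidates in the background.
function feedResponse(posts: unknown[]): Response {
  return new Response(JSON.stringify(posts), {
    headers: {
      'Content-Type': 'application/json',
      'Cache-Control': buildSwrHeader(60, 120),
    },
  });
}
```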
Practical Implementation
At LavaPi, we've seen real wins combining these approaches:
- Static shells, dynamic islands: Cache the layout/navigation globally. Personalize components via client-side fetches or streaming.
- Session-aware segments: Use user sessions as cache keys when appropriate.
- Regional + tier-based caching: Segment by geo and subscription level simultaneously.
```typescript
// Example: Multi-segment cache key
const cacheKey = `${region}:${userTier}:${contentType}:v${version}`;
```
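Expanding that one-liner, a small helper keeps key construction consistent across handlers. A sketch; the segment names and the `version` default are illustrative assumptions:

```typescript
// Hypothetical helper: build a normalized multi-segment cache key.
interface CacheKeyParts {
  region: string;
  userTier: string;
  contentType: string;
  version?: number; // bump to invalidate all keys at once
}

function buildCacheKey({ region, userTier, contentType, version = 1 }: CacheKeyParts): string {
  // Lowercase segments so "US" and "us" hit the same cache entry
  const segments = [region, userTier, contentType].map((s) => s.toLowerCase());
  return `${segments.join(':')}:v${version}`;
}
```

Centralizing this also gives you one place to bump `version` when a deploy changes the shape of cached content.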
The Bottom Line
Edge caching and personalization aren't contradictory; they're complementary when you treat the cache key as more than just a URL. Use headers, cookies, and request metadata to segment your cache, and apply appropriate `Cache-Control` directives. The result: users get personalized content served from the edge, not from your origin, at latencies measured in milliseconds.
LavaPi Team
Digital Engineering Company