Next.js Caching Strategies: ISR, Revalidation, and What Actually Works
TL;DR
Caching is the difference between a Next.js site that loads in 200ms and one that loads in 2 seconds. I manage caching on every project I ship, and the strategy changes depending on the content type. Blog posts get ISR with a 3600-second revalidation window. CMS-driven pages use revalidateTag triggered by Sanity webhooks, so content updates appear within seconds without rebuilding the entire site. Dynamic dashboards skip the cache entirely. In this article, I break down every caching mechanism Next.js gives you in 2026, show you the exact patterns I use on iamuvin.com and client projects, walk through the Sanity webhook setup I rely on daily, and expose the caching bugs that silently break production sites. If your pages feel slower than they should, your caching strategy is probably wrong.
How Next.js Caching Works in 2026
Next.js has gone through several iterations of its caching model, and the 2026 version is the most explicit one yet. Gone are the days of aggressive default caching that surprised developers. The current model puts you in control, but that means you need to actually understand what each layer does.
There are four caching layers that matter in production:
- Request Memoization. When you call the same `fetch` with the same URL and options multiple times during a single server render, Next.js deduplicates those requests. You get one network call, not five. This happens automatically; you do not need to configure it.
- Data Cache. This is the persistent cache that stores `fetch` responses across requests. When you set `revalidate: 3600` on a fetch call, the response is cached for one hour in the Data Cache. This survives deployments on Vercel and persists until explicitly invalidated.
- Full Route Cache. For statically rendered routes, Next.js caches the entire HTML and RSC payload at build time. This is what ISR operates on. When a request comes in, Next.js serves the cached HTML instantly and revalidates in the background if the time window has passed.
- Router Cache. On the client side, Next.js caches visited route segments in the browser. This makes back-navigation instant but can also serve stale content if you are not careful about invalidation.
The critical thing to understand is that these layers interact. A stale Data Cache means a stale Full Route Cache, which means stale HTML served to users. Invalidating one layer without the others leads to the subtle bugs I cover later in this article.
Request comes in
|
v
Router Cache (client) -- hit? --> serve cached RSC payload
|
v (miss)
Full Route Cache (server) -- hit? --> serve cached HTML
|
v (miss or stale)
Data Cache -- hit? --> use cached fetch responses
|
v (miss or stale)
Origin fetch -- hit the actual API/database

Every caching decision you make maps to one or more of these layers. The trick is knowing which layer to target for your specific use case.
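The lookup order can be sketched as a toy model -- a few Maps standing in for the real layers. This is my own illustration of the short-circuiting behavior, not Next.js internals:

```typescript
// Toy model of the caching waterfall: a hit at any layer
// short-circuits the layers below it; a full miss hits the origin
// and populates the server-side layers on the way back up.
type Layer = 'router' | 'route' | 'data' | 'origin';

interface Layers {
  router: Map<string, string>; // client-side Router Cache
  route: Map<string, string>;  // Full Route Cache
  data: Map<string, string>;   // Data Cache
}

function lookup(
  key: string,
  caches: Layers,
  origin: (key: string) => string
): { value: string; servedFrom: Layer } {
  for (const layer of ['router', 'route', 'data'] as const) {
    const hit = caches[layer].get(key);
    if (hit !== undefined) return { value: hit, servedFrom: layer };
  }
  const value = origin(key); // miss everywhere: fetch from the API/database
  caches.data.set(key, value);
  caches.route.set(key, value);
  return { value, servedFrom: 'origin' };
}
```

Note what happens if you evict only the Full Route Cache entry: the next request falls through to the Data Cache, which is exactly how a stale lower layer keeps feeding stale HTML upward.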
Static vs Dynamic -- The Decision
Before configuring any cache, you need to decide whether a route should be static or dynamic. This is the single most impactful architectural decision in a Next.js application.
Static routes are rendered at build time and cached as HTML. They are served from the CDN edge with zero server computation. This is ideal for content that changes infrequently.
Dynamic routes are rendered on every request. They hit your server (or serverless function) each time, which means higher latency but always-fresh data.
Here is my decision framework:
Content changes less than once per hour? --> Static + ISR
Content changes on a known trigger (CMS)? --> Static + On-Demand Revalidation
Content is user-specific? --> Dynamic
Content depends on request headers/cookies? --> Dynamic
Content changes every few seconds? --> Dynamic + Client Fetch

In practice, most pages on most sites should be static. Even on iamuvin.com, where I publish articles through Sanity CMS, the pages are statically generated and revalidated on demand. The only dynamic routes are the API endpoints and anything behind authentication.
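For what it is worth, the framework fits in a few lines of TypeScript. Everything here (the `RouteProfile` shape, the helper name) is my own invention -- a planning aid, not a Next.js API:

```typescript
type Rendering =
  | 'static-isr'            // static + time-based revalidation
  | 'static-on-demand'      // static + webhook-triggered revalidation
  | 'dynamic'               // rendered on every request
  | 'dynamic-client-fetch'; // dynamic shell + client-side polling

interface RouteProfile {
  changesEverySeconds: boolean; // near-real-time data?
  userSpecific: boolean;        // depends on the logged-in user?
  readsRequestState: boolean;   // needs headers or cookies?
  cmsTriggered: boolean;        // updates arrive via a known CMS event?
}

// Encodes the decision table above, most restrictive condition first.
function chooseRendering(p: RouteProfile): Rendering {
  if (p.changesEverySeconds) return 'dynamic-client-fetch';
  if (p.userSpecific || p.readsRequestState) return 'dynamic';
  if (p.cmsTriggered) return 'static-on-demand';
  return 'static-isr'; // default: static with a time-based safety net
}
```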
You force a route to be dynamic with the dynamic export:
// app/dashboard/page.tsx
export const dynamic = 'force-dynamic';

And you make a route static with ISR by exporting revalidate:
// app/blog/[slug]/page.tsx
export const revalidate = 3600; // revalidate every hour

The mistake I see constantly: developers defaulting everything to dynamic because they are afraid of stale content. This throws away the biggest performance advantage Next.js gives you. Start static, add dynamism only where you need it.
ISR and revalidate
Incremental Static Regeneration is the workhorse of my caching strategy. It gives you the performance of static pages with the freshness of server-rendered ones.
When you set revalidate: 3600 on a page, here is what happens:
- The first request after a build serves the statically generated page.
- All subsequent requests within 3600 seconds serve the same cached page. Zero server work.
- After 3600 seconds, the next request still serves the stale cached page (so the user gets an instant response), but triggers a background regeneration.
- Once the regeneration completes, subsequent requests get the fresh page.
This is the "stale-while-revalidate" pattern, and it is brilliant because users never wait for a page to render.
// app/blog/[slug]/page.tsx
import { notFound } from 'next/navigation';
import { getArticle, getAllArticleSlugs } from '@/lib/sanity';
export const revalidate = 3600;
export default async function ArticlePage({
params,
}: {
params: Promise<{ slug: string }>;
}) {
const { slug } = await params;
const article = await getArticle(slug);
if (!article) {
notFound();
}
return (
<article>
<h1>{article.title}</h1>
<div>{article.content}</div>
</article>
);
}
export async function generateStaticParams() {
const articles = await getAllArticleSlugs();
return articles.map((slug) => ({ slug }));
}

The revalidate value is in seconds, and choosing the right number matters:
Blog posts: 3600 (1 hour) -- content rarely changes
Product pages: 1800 (30 min) -- prices may update
Landing pages: 86400 (24 hours) -- almost never changes
Documentation: 3600 (1 hour) -- updated periodically
Homepage: 600 (10 min) -- features latest content

You can also set revalidate at the fetch level instead of the page level:
const data = await fetch('https://api.example.com/products', {
next: { revalidate: 1800 },
});

When you set it at both the page and fetch level, the shortest revalidation window wins. If your page has `revalidate: 3600` but one fetch has `revalidate: 60`, the page revalidates every 60 seconds because one of its data sources has a shorter window.
My rule: set revalidate at the page level as the default, and only override at the fetch level when a specific data source needs a different cadence.
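The four-step lifecycle above is easy to verify with a toy stale-while-revalidate implementation. This is a sketch of the semantics, not Next.js source (and the "background" regeneration here runs inline for simplicity):

```typescript
interface Entry<T> {
  value: T;
  generatedAt: number; // epoch ms
}

// Serve whatever is cached immediately; if the entry is older than
// revalidateSeconds, refresh the cache so the NEXT request gets fresh data.
function serveWithSWR<T>(
  cache: Map<string, Entry<T>>,
  key: string,
  revalidateSeconds: number,
  now: number,
  regenerate: () => T
): { value: T; stale: boolean } {
  const entry = cache.get(key);
  if (!entry) {
    // First request after a build (or an evicted entry): render and cache.
    const value = regenerate();
    cache.set(key, { value, generatedAt: now });
    return { value, stale: false };
  }
  const isStale = now - entry.generatedAt > revalidateSeconds * 1000;
  if (isStale) {
    // Serve the stale value instantly, refresh in the "background".
    const fresh = regenerate();
    cache.set(key, { value: fresh, generatedAt: now });
    return { value: entry.value, stale: true };
  }
  return { value: entry.value, stale: false };
}
```

The key property: once an entry exists, the caller never waits for regeneration -- a stale hit returns the old value and freshens the cache for the next request.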
On-Demand Revalidation with Tags
Time-based revalidation has an obvious weakness: you wait for the timer to expire. If a client publishes a blog post in their CMS and the revalidation window is one hour, they have to wait up to an hour to see their changes live. That is not acceptable for most projects.
On-demand revalidation solves this. Instead of waiting for a timer, you explicitly tell Next.js to invalidate specific cached data when it changes.
The mechanism is revalidateTag. You tag your fetch requests, and then call revalidateTag('articles') when article data changes:
// lib/sanity.ts
const SANITY_API_URL = process.env.SANITY_API_URL;
export async function getArticle(slug: string) {
const data = await fetch(
`${SANITY_API_URL}/articles?slug=${slug}`,
{
next: {
tags: ['articles', `article-${slug}`],
revalidate: 3600,
},
}
);
return data.json();
}
export async function getArticleList() {
const data = await fetch(
`${SANITY_API_URL}/articles`,
{
next: {
tags: ['articles'],
revalidate: 3600,
},
}
);
return data.json();
}

Now when an article is updated, you invalidate the specific article and the list:
// app/api/revalidate/route.ts
import { revalidateTag } from 'next/cache';
import { NextRequest, NextResponse } from 'next/server';
export async function POST(request: NextRequest) {
const body = await request.json();
const { tag } = body;
if (!tag) {
return NextResponse.json(
{ message: 'Missing tag parameter' },
{ status: 400 }
);
}
revalidateTag(tag);
return NextResponse.json({ revalidated: true, tag });
}

The power of tags is granularity. Instead of revalidating an entire page (which re-fetches all data for that page), you revalidate only the data that changed:
revalidateTag('articles') --> invalidates all article fetches
revalidateTag('article-my-slug') --> invalidates only that specific article
revalidateTag('navigation') --> invalidates header/footer data
revalidateTag('settings') --> invalidates site-wide settingsI tag aggressively. Every fetch gets at least one tag. Most get two: a broad category tag and a specific item tag. This gives me the flexibility to invalidate surgically or broadly depending on what changed.
Webhook-Based Revalidation with Sanity
This is the pattern I use on iamuvin.com and every Sanity-powered project I build. When a content editor publishes a document in Sanity Studio, a webhook fires, hits my Next.js API route, and invalidates the relevant cached data. The site updates within seconds, with no rebuild.
Step 1: Create the Webhook in Sanity
In your Sanity project dashboard, create a webhook that fires on document publish:
URL: https://iamuvin.com/api/revalidate
HTTP Method: POST
Trigger: Create, Update, Delete
Filter: _type in ["article", "project", "page"]
Projection: { _type, slug }
Secret: your-webhook-secret

The projection is important. You only send the data you need for invalidation, not the entire document.
Step 2: Build the API Route
// app/api/revalidate/route.ts
import { revalidateTag } from 'next/cache';
import { parseBody } from 'next-sanity/webhook';
import { NextRequest, NextResponse } from 'next/server';
const SANITY_WEBHOOK_SECRET = process.env.SANITY_WEBHOOK_SECRET;
export async function POST(request: NextRequest) {
try {
if (!SANITY_WEBHOOK_SECRET) {
return NextResponse.json(
{ message: 'Missing webhook secret' },
{ status: 500 }
);
}
const { isValidSignature, body } = await parseBody<{
_type: string;
slug?: { current: string };
}>(request, SANITY_WEBHOOK_SECRET);
if (!isValidSignature) {
return NextResponse.json(
{ message: 'Invalid signature' },
{ status: 401 }
);
}
if (!body?._type) {
return NextResponse.json(
{ message: 'Missing document type' },
{ status: 400 }
);
}
const tagsToRevalidate: string[] = [];
switch (body._type) {
case 'article':
tagsToRevalidate.push('articles');
if (body.slug?.current) {
tagsToRevalidate.push(`article-${body.slug.current}`);
}
break;
case 'project':
tagsToRevalidate.push('projects');
if (body.slug?.current) {
tagsToRevalidate.push(`project-${body.slug.current}`);
}
break;
case 'page':
tagsToRevalidate.push('pages');
if (body.slug?.current) {
tagsToRevalidate.push(`page-${body.slug.current}`);
}
break;
default:
tagsToRevalidate.push(body._type);
}
for (const tag of tagsToRevalidate) {
revalidateTag(tag);
}
return NextResponse.json({
revalidated: true,
tags: tagsToRevalidate,
});
} catch (error) {
return NextResponse.json(
{ message: 'Error revalidating' },
{ status: 500 }
);
}
}

Step 3: Tag Your Data Fetches
// lib/sanity.ts
import { sanityClient } from './sanity-client';
export async function getArticles() {
return sanityClient.fetch(
`*[_type == "article"] | order(publishedAt desc) {
_id, title, slug, publishedAt, excerpt, coverImage
}`,
{},
{ next: { tags: ['articles'], revalidate: 3600 } }
);
}
export async function getArticleBySlug(slug: string) {
return sanityClient.fetch(
`*[_type == "article" && slug.current == $slug][0] {
_id, title, slug, publishedAt, body, coverImage, author
}`,
{ slug },
{ next: { tags: ['articles', `article-${slug}`], revalidate: 3600 } }
);
}

The beauty of this setup: the ISR revalidation window acts as a safety net. If the webhook fails for any reason, the page still refreshes within an hour. But when the webhook fires successfully, the content updates in under five seconds.
I have been running this exact pattern on iamuvin.com for over a year. Sanity publishes happen dozens of times a week, and I have never had a stale page last more than a few seconds. The webhook reliability on Vercel is excellent.
fetch Cache Options
The fetch API in Next.js is extended with caching options that control how individual requests are cached. Understanding these options is essential because they override page-level settings.
// No explicit option: behavior depends on your Next.js version and route
// config (older versions cached by default; newer ones do not)
const data = await fetch('https://api.example.com/config');
// Revalidate every 30 minutes
const products = await fetch('https://api.example.com/products', {
next: { revalidate: 1800 },
});
// Never cache (always fresh)
const user = await fetch('https://api.example.com/user', {
cache: 'no-store',
});
// Tag for on-demand revalidation
const posts = await fetch('https://api.example.com/posts', {
next: { tags: ['posts'], revalidate: 3600 },
});

The important nuances:
`cache: 'no-store'` makes the entire route dynamic. If even one fetch in a page uses no-store, the page cannot be statically generated. This catches people off guard. You add one uncached fetch for analytics tracking and suddenly your entire blog page is server-rendered on every request.
`cache: 'force-cache'` is the old default. In older versions of Next.js, all fetches were cached by default. In the current version, the default depends on your route configuration. If you want to explicitly opt into caching, use force-cache.
Tags and revalidate can coexist. When a fetch has both tags: ['posts'] and revalidate: 3600, the cache is invalidated either when revalidateTag('posts') is called OR when the 3600-second window expires, whichever comes first. This is the belt-and-suspenders approach I use everywhere.
Headers and cookies make a fetch dynamic. If you read headers() or cookies() before a fetch, Next.js marks the route as dynamic regardless of your cache settings. Move header reads to where they are actually needed.
// BAD: this makes the entire page dynamic
import { cookies } from 'next/headers';
export default async function Page() {
const cookieStore = await cookies(); // forces dynamic
const theme = cookieStore.get('theme');
  const res = await fetch('https://api.example.com/articles', {
    next: { revalidate: 3600 },
  }); // this revalidate is ignored because the page is dynamic
  const articles = await res.json();
  return <ArticleList articles={articles} theme={theme} />;
}
// GOOD: isolate dynamic data to a client component
export default async function Page() {
  const res = await fetch('https://api.example.com/articles', {
    next: { revalidate: 3600 },
  }); // this works because the page is static
  const articles = await res.json();
return (
<>
<ArticleList articles={articles} />
<ThemeProvider /> {/* reads cookies on the client */}
</>
);
}

unstable_cache for Database Queries
Not everything goes through fetch. If you query a database directly with Prisma, Drizzle, or raw SQL, the fetch cache does not apply. For these cases, Next.js provides unstable_cache (the name is misleading -- it is stable enough for production, the API just has not been finalized).
import { unstable_cache } from 'next/cache';
import { prisma } from '@/lib/prisma';
const getCachedProducts = unstable_cache(
async (categoryId: string) => {
return prisma.product.findMany({
where: { categoryId, status: 'active' },
include: { images: true, category: true },
orderBy: { createdAt: 'desc' },
});
},
['products'],
{
tags: ['products'],
revalidate: 1800,
}
);
export default async function ProductsPage({
params,
}: {
params: Promise<{ categoryId: string }>;
}) {
const { categoryId } = await params;
const products = await getCachedProducts(categoryId);
return <ProductGrid products={products} />;
}

The three arguments to unstable_cache:
- The function to cache. This is your actual data-fetching logic.
- Cache key parts. An array of strings that form the cache key. Combined with the function arguments, this determines cache uniqueness.
- Options. Tags for on-demand invalidation and revalidate for time-based invalidation.
Critical gotcha: the function arguments are serialized and become part of the cache key. If you pass an object with a Date instance, the serialization may not work as expected. Stick to strings and numbers for cache key arguments.
// BAD: object argument may serialize inconsistently
const getCachedData = unstable_cache(
async (filters: { from: Date; to: Date }) => { /* ... */ },
['analytics'],
{ revalidate: 300 }
);
// GOOD: primitive arguments serialize predictably
const getCachedData = unstable_cache(
async (fromTimestamp: number, toTimestamp: number) => { /* ... */ },
['analytics'],
{ revalidate: 300 }
);

I use unstable_cache for every Prisma query that does not need real-time data. Product catalogs, user profiles viewed by others, settings pages -- all cached with tags for surgical invalidation.
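To see why primitives are safer, compare how the two argument styles serialize. A quick illustration with plain `JSON.stringify` (the real key derivation differs in detail, but the hazard is the same):

```typescript
// Dates serialize to ISO strings, and undefined properties vanish
// entirely -- two "different" filter objects can yield the same key.
const objectKey = JSON.stringify({ from: new Date('2026-01-01'), note: undefined });
// objectKey === '{"from":"2026-01-01T00:00:00.000Z"}' -- note disappeared

// Primitive arguments are unambiguous and stable across calls:
const primitiveKey = ['analytics', String(1767225600000), String(1767312000000)].join(':');
// primitiveKey === 'analytics:1767225600000:1767312000000'
```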
Caching with Redis/Upstash
For applications that need caching beyond what Next.js provides natively -- rate limiting, session data, computed results, or cross-instance cache sharing -- I use Upstash Redis. It is serverless, works perfectly with Vercel, and the free tier is generous enough for most projects.
// lib/cache.ts
import { Redis } from '@upstash/redis';
const redis = Redis.fromEnv();
interface CacheOptions {
ttl?: number; // seconds
tags?: string[];
}
export async function getCached<T>(
key: string,
fetcher: () => Promise<T>,
options: CacheOptions = {}
): Promise<T> {
const { ttl = 3600 } = options;
const cached = await redis.get<T>(key);
if (cached !== null) {
return cached;
}
const fresh = await fetcher();
await redis.set(key, fresh, { ex: ttl }); // the Upstash client serializes to JSON itself; pre-stringifying double-encodes
if (options.tags) {
for (const tag of options.tags) {
await redis.sadd(`tag:${tag}`, key);
}
}
return fresh;
}
export async function invalidateTag(tag: string): Promise<void> {
const keys = await redis.smembers(`tag:${tag}`);
if (keys.length > 0) {
await redis.del(...keys);
await redis.del(`tag:${tag}`);
}
}

Usage:
// In a Server Component or API route
const products = await getCached(
`products:${categoryId}`,
() => fetchProductsFromAPI(categoryId),
{ ttl: 1800, tags: ['products'] }
);

When to use Redis over Next.js built-in caching:
- Shared state across deployments. Next.js Data Cache on Vercel is per-deployment. Redis persists across all deployments.
- Rate limiting. Counting API calls per user per time window.
- Expensive computations. Cache the result of a computation that takes seconds, not milliseconds.
- Cross-service caching. When multiple services need to share cached data.
When to stick with built-in caching:
- Simple page data. ISR and `revalidateTag` handle most CMS-driven sites.
- Fetch deduplication. Request memoization is automatic and free.
- Static generation. The Full Route Cache is the fastest possible serving mechanism.
I use both on most projects. Next.js caching for page-level data, Redis for application-level state and cross-cutting concerns.
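The tag bookkeeping in the Redis helper above is easy to verify against an in-memory stand-in. Here is the same get-or-fetch-and-index logic with plain Maps instead of Redis (a sketch to show the semantics, not production code):

```typescript
// In-memory equivalent of the Redis helper: a value store plus a
// reverse index from tag -> set of keys, used for invalidation.
const store = new Map<string, unknown>();
const tagIndex = new Map<string, Set<string>>();

async function getCachedLocal<T>(
  key: string,
  fetcher: () => Promise<T>,
  tags: string[] = []
): Promise<T> {
  if (store.has(key)) return store.get(key) as T; // cache hit
  const fresh = await fetcher();
  store.set(key, fresh);
  for (const tag of tags) {
    if (!tagIndex.has(tag)) tagIndex.set(tag, new Set());
    tagIndex.get(tag)!.add(key); // index this key under the tag
  }
  return fresh;
}

function invalidateTagLocal(tag: string): void {
  for (const key of tagIndex.get(tag) ?? []) store.delete(key);
  tagIndex.delete(tag);
}
```

The semantics to check: a second call with the same key never re-runs the fetcher, and invalidating a tag forces the next call to fetch fresh data.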
Common Caching Bugs
I have debugged caching issues on more projects than I can count. These are the bugs that come up repeatedly, and they are almost always silent -- your site works, it is just slower or staler than it should be.
1. Accidental Dynamic Routes
// This single line makes your entire page dynamic
import { headers } from 'next/headers';
export default async function Page() {
const headersList = await headers();
// Even if you only read one header, the page is now dynamic
}

Fix: move header/cookie reads to Client Components or isolated Server Components that do not affect the main page cache.
2. Forgetting generateStaticParams
If you have a dynamic route like app/blog/[slug]/page.tsx but do not export generateStaticParams, those pages are not generated at build time. They are generated on first request and cached, but that first visitor gets a cold start.
// Always include this for content-driven dynamic routes
export async function generateStaticParams() {
const slugs = await getAllSlugs();
return slugs.map((slug) => ({ slug }));
}

3. Stale Router Cache
The client-side Router Cache can serve stale content after you revalidate on the server. A user navigates to a page, you revalidate the data via webhook, but when they navigate back, they see the old version from the Router Cache.
Fix: use router.refresh() in response to real-time events, or set the staleTimes configuration in next.config.js:
// next.config.js
module.exports = {
experimental: {
staleTimes: {
dynamic: 0,
static: 180,
},
},
};

4. Webhook Signature Verification Failures
Your webhook stops working and you do not notice for days because you forgot to update the secret after rotating it, or because the request body parsing changed.
Fix: log every webhook attempt, including failures. Set up monitoring that alerts you when revalidation stops happening.
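A sketch of what that monitoring can look like -- the in-memory array stands in for your real log sink, and all the names here are invented:

```typescript
interface WebhookAttempt {
  at: number;     // epoch ms
  ok: boolean;    // signature valid and revalidation ran?
  detail: string; // what happened, e.g. tags revalidated or error text
}

// Ring-buffer-style log of recent webhook attempts, checked by monitoring.
const attempts: WebhookAttempt[] = [];

function recordAttempt(ok: boolean, detail: string, now = Date.now()): void {
  attempts.push({ at: now, ok, detail });
  if (attempts.length > 100) attempts.shift(); // keep only the last 100
}

// Alert condition: no successful revalidation within the last `windowMs`.
function revalidationLooksBroken(now: number, windowMs: number): boolean {
  return !attempts.some((a) => a.ok && now - a.at <= windowMs);
}
```

Call `recordAttempt` on every code path of the webhook route, including signature failures -- a silent 401 is exactly the bug this catches.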
5. Cache Key Collisions with unstable_cache
If your cache key parts are not specific enough, different data can share the same cache entry. Function arguments are appended to the cache key automatically, so passing a category as an argument is safe. The real collision risk is closed-over variables: they are not part of the key, so wrapped closures with identical source share one entry:

// BAD: categoryId is captured in the closure, not passed as an argument,
// so every category shares the same cache entry
function makeProductGetter(categoryId: string) {
  return unstable_cache(
    async () => fetchProducts(categoryId),
    ['products'],
    { revalidate: 3600 }
  );
}

// GOOD: pass categoryId as a function argument (or add it to the key parts)
const getProducts = unstable_cache(
  async (categoryId: string) => fetchProducts(categoryId),
  ['products'],
  { revalidate: 3600 }
);

6. Deploying Without Revalidating
When you deploy a new version with changed data-fetching logic, the old cached data from the previous deployment might still be served. On Vercel, the Data Cache persists across deployments.
Fix: call revalidateTag or revalidatePath in your deployment pipeline, or use an API route that flushes relevant cache entries post-deploy.
My Caching Strategy Per Project Type
After shipping dozens of production Next.js applications, I have settled on these caching strategies by project type. This is not theoretical -- these are the exact configurations running in production right now.
Marketing/Landing Pages
Rendering: Static (build time)
Revalidation: ISR with revalidate: 86400 (24 hours)
CMS Integration: Sanity webhook -> revalidateTag
Cache Layer: Full Route Cache + CDN
Result: Sub-100ms TTFB globally

Blog/Content Sites (iamuvin.com)
Rendering: Static with generateStaticParams
Revalidation: ISR with revalidate: 3600 + Sanity webhook
Tags: ['articles', 'article-{slug}', 'navigation']
Cache Layer: Full Route Cache + Data Cache
Result: Instant page loads, content updates in <5 seconds

E-Commerce
Rendering: Static for catalog, dynamic for cart/checkout
Revalidation: ISR with revalidate: 1800 + webhook on price changes
Tags: ['products', 'product-{id}', 'categories']
Cache Layer: Full Route Cache + Redis for inventory
Result: Fast browsing, real-time stock/price accuracy

SaaS Dashboard
Rendering: Dynamic (user-specific data)
Revalidation: No ISR -- data is always fresh
Tags: None needed (no static cache)
Cache Layer: Redis for expensive queries, client-side React Query
Result: Personalized data, no stale state

Documentation
Rendering: Static with generateStaticParams
Revalidation: ISR with revalidate: 3600 + GitHub webhook on merge
Tags: ['docs', 'doc-{slug}']
Cache Layer: Full Route Cache + CDN
Result: Fast reads, updates on every merge to main

The pattern is clear: default to static with ISR, add on-demand revalidation via webhooks for CMS content, and only go fully dynamic when the data is user-specific. Every other combination leaves performance on the table.
Key Takeaways
- Start static, not dynamic. The Full Route Cache is the fastest serving mechanism available. Only opt out when you have user-specific or real-time data requirements.
- Use ISR as a safety net. Even when you have on-demand revalidation via webhooks, set a `revalidate` value so pages self-heal if a webhook fails.
- Tag everything. Every `fetch` and `unstable_cache` call should have tags. The cost of tagging is zero, and the benefit of surgical invalidation is enormous.
- Sanity webhooks are the real-time bridge. The combination of ISR plus Sanity webhook-triggered `revalidateTag` gives you sub-5-second content updates with the performance of a static site.
- Watch for accidental dynamic routes. A single `headers()` or `cookies()` call makes an entire route dynamic. Audit your pages regularly.
- Redis is for application state, not page data. Use the built-in Next.js caching for page rendering and Redis for rate limiting, sessions, and cross-service caching.
- Monitor your cache. Log webhook calls, track revalidation events, and set up alerts for failures. Silent cache staleness is the worst kind of performance bug.
Caching is not an optimization you add later. It is an architectural decision you make on day one. Get it right and your Next.js application will be fast without trying. Get it wrong and no amount of code optimization will save you.
If you are building a production application and need help designing your caching architecture, get in touch. I have shipped these patterns on dozens of sites and can help you avoid the months of trial and error I went through to arrive at them.
*Uvin Vindula is a Web3 and AI engineer based between Sri Lanka and the UK, building production-grade applications at iamuvin.com. Follow the journey on X @IAMUVIN.*