Performance & Optimization
Core Web Vitals for Next.js: How I Hit 95+ on Every Project
TL;DR
Hitting a 95+ Lighthouse score on Next.js is not about sprinkling lazy loads everywhere and hoping for the best. It requires a systematic approach: Server Components by default, aggressive bundle splitting, font preloading with size-adjust, responsive images with next/image, and measuring everything in the field — not just in lab conditions. I have shipped these patterns across multiple production sites, including FreshMart, a UK grocery platform where I brought the mobile Lighthouse score from 54 to 97. This article breaks down the exact techniques I use, with real before/after numbers and the code patterns behind them.
What Core Web Vitals Actually Measure in 2026
If you are building with Next.js and care about Core Web Vitals performance, you need to understand what Google actually measures — and what they changed.
As of March 2024, Google replaced First Input Delay (FID) with Interaction to Next Paint (INP) as a Core Web Vital. This was not a minor tweak. FID only measured the delay of the *first* interaction. INP measures the responsiveness of *every* interaction throughout the entire page lifecycle and reports the worst one (technically the 98th percentile). Sites that passed FID easily started failing INP overnight.
Here are the three metrics that determine your page experience ranking signal:
| Metric | What It Measures | Good | Needs Work | Poor |
|---|---|---|---|---|
| LCP (Largest Contentful Paint) | How fast the main content loads | < 2.5s | 2.5s - 4.0s | > 4.0s |
| INP (Interaction to Next Paint) | How responsive the page feels | < 200ms | 200ms - 500ms | > 500ms |
| CLS (Cumulative Layout Shift) | How stable the layout is | < 0.1 | 0.1 - 0.25 | > 0.25 |
Google says "good" is under 2.5s for LCP and under 200ms for INP. I do not aim for "good." My targets are non-negotiable:
- LCP < 1.5s — One full second under the threshold.
- INP < 100ms — Half the "good" limit. Users should never perceive lag.
- CLS < 0.1 — Zero visible layout shifts. Period.
These are not aspirational. These are the numbers I hit on FreshMart, on client projects I deliver through my services, and on every Next.js site I ship.
LCP — How I Get It Under 1.5 Seconds
LCP is almost always your hero image, your hero heading, or your largest above-the-fold text block. The fix is not complicated, but most developers get it wrong because they optimize the wrong thing.
Step 1: Identify the LCP Element
Before optimizing anything, I open Chrome DevTools, go to the Performance panel, and record a page load. The LCP element is highlighted in the timeline. On FreshMart, it was the hero product image on the homepage and the product thumbnail on category pages.
Most people assume LCP is about image compression. It is not. LCP is a chain of four sub-parts, and you need to fix all of them:
- Time to First Byte (TTFB) — Server response time.
- Resource load delay — Time between TTFB and when the browser starts loading the LCP resource.
- Resource load duration — How long the LCP resource takes to download.
- Element render delay — Time between resource loaded and element painted.
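One way to see which of the four sub-parts dominates on your own pages is the attribution build of the `web-vitals` library (`npm install web-vitals`), which breaks LCP down into exactly these components. A sketch to drop into any Client Component, assuming the current (v4) attribution field names:

```ts
import { onLCP } from 'web-vitals/attribution'

// Logs a breakdown of the LCP chain once the metric is final
onLCP(({ value, attribution }) => {
  console.table({
    'LCP total (ms)': Math.round(value),
    'TTFB (ms)': Math.round(attribution.timeToFirstByte),
    'Resource load delay (ms)': Math.round(attribution.resourceLoadDelay),
    'Resource load duration (ms)': Math.round(attribution.resourceLoadDuration),
    'Element render delay (ms)': Math.round(attribution.elementRenderDelay),
  })
})
```

If resource load delay is the biggest slice, the fixes in Step 2 below are where your time goes; if TTFB dominates, skip ahead to Step 3.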
Step 2: Kill the Resource Load Delay
The biggest LCP killer in Next.js apps is that the browser does not know about the hero image until JavaScript executes. This is the fix:
```tsx
// app/page.tsx — Server Component (default)
import Image from 'next/image'

export default function HomePage() {
  return (
    <section className="relative h-[600px]">
      <Image
        src="/images/hero-produce.webp"
        alt="Fresh organic groceries delivered to your door"
        fill
        priority // This is the key — adds preload link to <head>
        sizes="100vw"
        quality={85}
        className="object-cover"
      />
      <h1 className="relative z-10 text-5xl font-bold text-white">
        Fresh groceries, delivered in 2 hours
      </h1>
    </section>
  )
}
```

The `priority` prop on `next/image` generates a `<link rel="preload">` tag in the HTML `<head>`. Without it, the browser discovers the image only after parsing the component tree. On FreshMart, adding `priority` to the hero image alone cut LCP by 800ms.
Step 3: Optimize TTFB with Static Generation
For pages where the content does not change every request, I use static generation or ISR:
```ts
// app/categories/[slug]/page.tsx
export async function generateStaticParams() {
  const categories = await getCategories()
  return categories.map((cat) => ({ slug: cat.slug }))
}

export const revalidate = 3600 // Regenerate every hour
```

FreshMart LCP numbers (mobile, 4G throttled):
| Change | Before | After |
|---|---|---|
| Added priority to hero image | 3.8s | 3.0s |
| Switched hero to WebP (was PNG) | 3.0s | 2.4s |
| Static generation + edge CDN | 2.4s | 1.6s |
| Preconnect to image CDN | 1.6s | 1.3s |
Final mobile LCP: 1.3 seconds. That is 65% faster than where we started.
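The preconnect row in that table comes down to two lines in the root layout: open the connection to the image CDN before the browser discovers the hero image request. A sketch, with a placeholder CDN hostname (use whatever domain actually serves your images):

```tsx
// app/layout.tsx — warm up the image CDN connection early.
// "images.example-cdn.com" is illustrative, not FreshMart's real host.
export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <head>
        <link rel="preconnect" href="https://images.example-cdn.com" />
        <link rel="dns-prefetch" href="https://images.example-cdn.com" />
      </head>
      <body>{children}</body>
    </html>
  )
}
```

DNS lookup, TCP handshake, and TLS negotiation then happen in parallel with HTML parsing instead of blocking the image request.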
INP — The New FID Replacement
INP is where most Next.js apps silently fail. You can have a perfect Lighthouse lab score and still get flagged for poor INP in the field, because Lighthouse synthetic tests do not interact with your page the way real users do.
INP measures the time from a user interaction (click, tap, keypress) to the next visual update. Every interaction is measured. The reported value is the worst interaction (98th percentile for pages with 50+ interactions).
The Main Thread is the Enemy
Every time you run JavaScript on the main thread during an interaction, you are adding to INP. Here is what I audit on every project:
1. Move heavy computation off the main thread:
```ts
// Before: Filtering 2,000 products on the main thread
function handleSearch(query: string) {
  const results = products.filter((p) =>
    p.name.toLowerCase().includes(query.toLowerCase())
  )
  setResults(results)
}

// After: Debounce + Web Worker for heavy filtering
const worker = new Worker(
  new URL('../workers/search.worker.ts', import.meta.url)
)

function handleSearch(query: string) {
  worker.postMessage({ query, products })
}

worker.onmessage = (e: MessageEvent<Product[]>) => {
  setResults(e.data)
}
```

2. Break up long tasks with `startTransition`:
```ts
import { startTransition } from 'react'

function handleCategoryChange(categoryId: string) {
  // Urgent: update the selected state immediately
  setSelectedCategory(categoryId)
  // Non-urgent: filter and re-render the product grid
  startTransition(() => {
    const filtered = filterProductsByCategory(categoryId)
    setDisplayedProducts(filtered)
  })
}
```

3. Avoid layout thrashing in event handlers:
```ts
// Bad: forces synchronous layout recalculation
function handleClick(e: React.MouseEvent) {
  const height = element.offsetHeight // Forces layout
  element.style.height = `${height + 100}px` // Triggers layout again
}

// Good: batch reads and writes
function handleClick(e: React.MouseEvent) {
  requestAnimationFrame(() => {
    const height = element.offsetHeight
    element.style.height = `${height + 100}px`
  })
}
```

FreshMart INP numbers (field data, Chrome UX Report):
| Change | Before (p75) | After (p75) |
|---|---|---|
| Baseline (hydration-heavy) | 380ms | — |
| Moved to Server Components | — | 210ms |
| startTransition on filters | 210ms | 120ms |
| Web Worker for search | 120ms | 68ms |
Final field INP at p75: 68ms. Well under my 100ms target.
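To find the main-thread work worth attacking, I would start with a long-task observer rather than guessing. This is a minimal sketch using the standard `PerformanceObserver` API: paste it into the browser console (or a client-side effect) and interact with the page.

```ts
// Surface long tasks: uninterrupted main-thread work over 50ms,
// exactly the kind of block that inflates INP.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.warn(
      `Long task: ${Math.round(entry.duration)}ms at ${Math.round(entry.startTime)}ms`
    )
  }
})
observer.observe({ type: 'longtask', buffered: true })
```

Any task that logs here during an interaction is a direct contributor to that interaction's INP.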
CLS — Zero Layout Shifts
CLS is the most preventable metric and the most annoying when it fails. Every time a user is about to tap a button and the layout shifts because a font loaded or an ad appeared, you lose trust.
The Three CLS Killers
1. Images without dimensions:
```tsx
// Bad: no dimensions = layout shift when image loads
<img src="/product.jpg" alt="Product" />

// Good: next/image requires dimensions or uses fill
<Image
  src="/product.jpg"
  alt="Product"
  width={400}
  height={300}
  className="rounded-lg"
/>
```

2. Fonts causing FOUT (Flash of Unstyled Text):
I handle this with next/font and the size-adjust property (more on this in the Font Loading section below).
3. Dynamic content injected above the viewport:
```tsx
// Bad: banner appears after load, pushes everything down
{isLoaded && <PromoBanner />}

// Good: reserve space with min-height
<div className="min-h-[48px]">
  {isLoaded ? <PromoBanner /> : null}
</div>
```

FreshMart CLS numbers:
| Page | Before | After |
|---|---|---|
| Homepage | 0.24 | 0.01 |
| Category listing | 0.18 | 0.03 |
| Product detail | 0.09 | 0.0 |
| Cart | 0.15 | 0.02 |
Every page under 0.05. The homepage went from "poor" to essentially zero shift.
Bundle Size Optimization
Bundle size directly impacts LCP (more JavaScript = slower parse and execute) and INP (larger bundles = more main thread work during interactions). Here is my process.
Step 1: Measure with Bundle Analyzer
```bash
npm install @next/bundle-analyzer
```

```ts
// next.config.ts
import type { NextConfig } from 'next'
import bundleAnalyzer from '@next/bundle-analyzer'

const withBundleAnalyzer = bundleAnalyzer({
  enabled: process.env.ANALYZE === 'true',
})

const nextConfig: NextConfig = {}

export default withBundleAnalyzer(nextConfig)
```

Run `ANALYZE=true next build` and you get a treemap visualization showing every module and its gzipped size. On FreshMart, the first analysis revealed three problems:
- Moment.js was imported for a single date formatting call — 67KB gzipped. Replaced with `date-fns/format` — 2.1KB.
- Lodash was fully imported instead of cherry-picked — 25KB gzipped. Switched to `lodash-es/debounce` and `lodash-es/throttle` — 1.8KB total.
- A rich text editor was loaded on every page because it was in the layout — 142KB gzipped. Moved to a dynamic import on the page that needed it.
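The `lodash-es` swap is the low-effort fix. When debounce is the only utility you need, a hand-rolled version removes the dependency entirely. A minimal trailing-edge sketch (the `wait` default is my choice here, not a value from the FreshMart codebase):

```typescript
// debounce.ts — trailing-edge debounce: the wrapped function runs only
// after `wait` ms have passed without another call.
export function debounce<T extends (...args: any[]) => void>(
  fn: T,
  wait = 200
): (...args: Parameters<T>) => void {
  let timer: ReturnType<typeof setTimeout> | undefined
  return (...args: Parameters<T>) => {
    if (timer !== undefined) clearTimeout(timer) // cancel the pending call
    timer = setTimeout(() => fn(...args), wait)  // reschedule with latest args
  }
}
```

Wired in front of the search handler from the INP section, it ensures only the last keystroke in a burst reaches the worker.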
Step 2: Dynamic Imports for Below-the-Fold
```tsx
import dynamic from 'next/dynamic'

const ReviewSection = dynamic(() => import('@/components/ReviewSection'), {
  loading: () => <ReviewSkeleton />,
})

const RecommendationGrid = dynamic(
  () => import('@/components/RecommendationGrid'),
  { ssr: false } // Client-only component, no SSR overhead
)
```

Step 3: Tree Shaking Verification
Not all libraries tree-shake correctly. I verify by checking the bundle analyzer output after import changes. If a library does not support ESM exports, I either find an alternative or use a targeted import path.
FreshMart bundle size results:
| Metric | Before | After | Reduction |
|---|---|---|---|
| First Load JS | 387KB | 142KB | 63% |
| Shared chunks | 198KB | 89KB | 55% |
| Homepage route | 67KB | 23KB | 66% |
| Product page route | 54KB | 18KB | 67% |
The 63% reduction in First Load JS was the single biggest contributor to the LCP improvement.
Image Optimization with next/image
Images are typically 50-70% of a page's total weight. Getting this wrong undoes every other optimization. Here is what I enforce on every project.
Responsive Sizes with srcSet
```tsx
<Image
  src="/images/hero.jpg"
  alt="Hero banner"
  width={1920}
  height={1080}
  sizes="(max-width: 768px) 100vw, (max-width: 1200px) 75vw, 60vw"
  priority
  quality={80}
/>
```

The `sizes` attribute tells the browser which image size to download at each viewport width. Without it, mobile users download the full 1920px image. With it, they get a 768px version that is 70% smaller.
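To reason about what the browser will actually fetch, it helps to model the selection: the browser multiplies the matched `sizes` width by the device pixel ratio, then picks the smallest `srcset` candidate that covers it. A simplified model of that choice against Next.js's default `deviceSizes` (real browsers may also factor in cache state and network hints):

```typescript
// Simplified model of srcset candidate selection.
const deviceSizes = [640, 750, 828, 1080, 1200, 1920]

function pickSource(cssWidthPx: number, dpr: number): number {
  const target = cssWidthPx * dpr
  // Smallest generated size that still covers the target,
  // else the largest one available
  return deviceSizes.find((w) => w >= target) ?? deviceSizes[deviceSizes.length - 1]
}

// A 375px-wide phone at 2x DPR needs 750 physical pixels
console.log(pickSource(375, 2)) // 750
// A huge target is capped at the largest generated size
console.log(pickSource(1920, 2)) // 1920
```

This is why a wrong `sizes` value is expensive: declare `100vw` for a thumbnail rendered at 200px and every 2x phone fetches the 640w candidate instead of a 400px-class one.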
Format Strategy
Next.js serves WebP by default when the browser supports it. For even better compression, I configure AVIF as the preferred format:
```ts
// next.config.ts
import type { NextConfig } from 'next'

const nextConfig: NextConfig = {
  images: {
    formats: ['image/avif', 'image/webp'],
    deviceSizes: [640, 750, 828, 1080, 1200, 1920],
    imageSizes: [16, 32, 48, 64, 96, 128, 256],
    minimumCacheTTL: 60 * 60 * 24 * 30, // 30 days
  },
}

export default nextConfig
```

Image size comparison (FreshMart hero, 1200px wide):
| Format | Size | Savings vs PNG |
|---|---|---|
| PNG | 842KB | — |
| WebP | 198KB | 76% |
| AVIF | 134KB | 84% |
Blur Placeholder for Perceived Performance
```tsx
import Image from 'next/image'
import heroImage from '@/public/images/hero.jpg'

<Image
  src={heroImage}
  alt="Hero"
  placeholder="blur" // Automatically generates blurDataURL at build time
  priority
/>
```

Static imports enable the blur placeholder automatically. For dynamic images from a CMS, I generate `blurDataURL` server-side using `plaiceholder`:
```ts
import { getPlaiceholder } from 'plaiceholder'

async function getBlurDataURL(src: string): Promise<string> {
  const buffer = await fetch(src).then((res) => res.arrayBuffer())
  const { base64 } = await getPlaiceholder(Buffer.from(buffer))
  return base64
}
```

Font Loading Strategy
Fonts are the most underestimated CLS and LCP killer. A bad font loading strategy can add 300-500ms to LCP and cause visible layout shifts on every page.
The next/font Approach
```tsx
// app/layout.tsx
import { Plus_Jakarta_Sans, Inter, JetBrains_Mono } from 'next/font/google'

const jakarta = Plus_Jakarta_Sans({
  subsets: ['latin'],
  variable: '--font-jakarta',
  display: 'swap',
  preload: true,
})

const inter = Inter({
  subsets: ['latin'],
  variable: '--font-inter',
  display: 'swap',
  preload: true,
})

const jetbrains = JetBrains_Mono({
  subsets: ['latin'],
  variable: '--font-jetbrains',
  display: 'swap',
  preload: true,
})

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html className={`${jakarta.variable} ${inter.variable} ${jetbrains.variable}`}>
      <body className="font-sans">{children}</body>
    </html>
  )
}
```

Why `next/font` instead of a `<link>` tag to Google Fonts?
- Self-hosted — Font files are served from your domain, eliminating the DNS lookup and connection to `fonts.googleapis.com`. Saves 100-200ms.
- Automatic size-adjust — `next/font` calculates `size-adjust`, `ascent-override`, and `descent-override` CSS properties so the fallback font matches the web font's dimensions. This eliminates CLS from font swap.
- Preloaded — The font file gets a `<link rel="preload">` in the HTML head.
FreshMart font loading impact:
| Metric | Google Fonts CDN | next/font self-hosted |
|---|---|---|
| Font load time | 340ms | 80ms |
| CLS from font swap | 0.08 | 0.0 |
| LCP impact | +220ms | +0ms |
Server Components for Performance
React Server Components are the single most impactful Next.js performance feature since static generation. The principle is simple: if a component does not need interactivity, render it on the server and send zero JavaScript to the client.
The Decision Framework
```
Does the component use useState, useEffect, onClick, onChange,
or any browser API?

  YES → 'use client' at the top of the file
  NO  → Keep it as a Server Component (the default)
```

On FreshMart, I audited every component and found that 73% of them did not need client-side JavaScript. Product cards, category headers, footer, navigation links, breadcrumbs, SEO metadata — all Server Components.
The Composition Pattern
When a page is mostly static but has one interactive element, do not make the whole page a Client Component. Compose:
```tsx
// app/products/[id]/page.tsx — Server Component
import { getProduct } from '@/lib/products'
import { ProductGallery } from '@/components/ProductGallery'
import { AddToCartButton } from '@/components/AddToCartButton' // 'use client'
import { ProductReviews } from '@/components/ProductReviews'

export default async function ProductPage({
  params,
}: {
  params: Promise<{ id: string }>
}) {
  const { id } = await params
  const product = await getProduct(id)

  return (
    <main>
      {/* Server-rendered — zero JS sent */}
      <h1 className="text-3xl font-bold">{product.name}</h1>
      <p className="text-lg text-muted">{product.description}</p>

      {/* Client Component — only this sends JS */}
      <AddToCartButton productId={product.id} price={product.price} />

      {/* Server-rendered gallery with optimized images */}
      <ProductGallery images={product.images} />

      {/* Server-rendered reviews */}
      <ProductReviews productId={product.id} />
    </main>
  )
}
```

FreshMart JS reduction from Server Components:
| Page | Client JS Before | Client JS After | Reduction |
|---|---|---|---|
| Homepage | 245KB | 67KB | 73% |
| Product page | 189KB | 52KB | 72% |
| Category listing | 156KB | 41KB | 74% |
| Cart | 134KB | 98KB | 27% (most is interactive) |
The cart page had the smallest reduction because most of its UI is interactive — quantity selectors, remove buttons, promo code input. That is expected and fine.
My Performance Testing Workflow
I do not wait until the end of a project to test performance. It is baked into every PR.
Local Development
- `next build && next start` — Never test performance on `next dev`. The dev server has no optimizations and gives misleading numbers.
- Lighthouse CI in the terminal:
```bash
npx lighthouse http://localhost:3000 \
  --output=json \
  --output-path=./lighthouse-report.json \
  --chrome-flags="--headless" \
  --throttling.cpuSlowdownMultiplier=4
```

I use 4x CPU slowdown to simulate a mid-range phone. If it scores 90+ with 4x throttle, it will score 95+ on real hardware.
CI/CD Pipeline
```yaml
# .github/workflows/lighthouse.yml
name: Lighthouse CI
on: [pull_request]
jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run build
      - run: npm start &
      - name: Run Lighthouse
        uses: treosh/lighthouse-ci-action@v12
        with:
          urls: |
            http://localhost:3000
            http://localhost:3000/categories/fresh-produce
            http://localhost:3000/products/organic-bananas
          budgetPath: ./lighthouse-budget.json
          uploadArtifacts: true
```

The budget file, `lighthouse-budget.json`:

```json
[
  {
    "path": "/*",
    "timings": [
      { "metric": "largest-contentful-paint", "budget": 1500 },
      { "metric": "interactive", "budget": 3500 },
      { "metric": "cumulative-layout-shift", "budget": 0.1 }
    ],
    "resourceSizes": [
      { "resourceType": "script", "budget": 150 },
      { "resourceType": "total", "budget": 500 }
    ]
  }
]
```
]If any budget is exceeded, the PR fails. No exceptions.
Field Data Monitoring
Lab tests are necessary but not sufficient. Real users have different devices, networks, and interaction patterns. I use two sources for field data:
- Chrome UX Report (CrUX) — Google's real-user dataset. Available via PageSpeed Insights API or BigQuery.
- Vercel Speed Insights — If deployed on Vercel, this gives real-user Web Vitals with zero configuration.
I check CrUX data weekly. If any metric regresses, I investigate the specific deployment that caused it.
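Next.js also exposes the same field metrics in your own code through the `useReportWebVitals` hook, so you can ship them to any analytics backend instead of relying on a dashboard. A sketch, where `/api/vitals` is a hypothetical route handler you would implement yourself:

```tsx
'use client'
// app/_components/web-vitals.tsx — mount once in the root layout
import { useReportWebVitals } from 'next/web-vitals'

export function WebVitals() {
  useReportWebVitals((metric) => {
    const body = JSON.stringify({
      name: metric.name,     // 'LCP' | 'INP' | 'CLS' | ...
      value: metric.value,
      rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
      id: metric.id,
    })
    // sendBeacon survives page unload, unlike a plain fetch
    navigator.sendBeacon('/api/vitals', body)
  })
  return null
}
```

This gives you per-deployment field data immediately, rather than waiting the ~28 days CrUX needs to reflect a change.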
Real Numbers from Production Sites
Here are the actual numbers from projects I have shipped. The Lighthouse score is a lab run; the LCP, INP, and CLS rows are real-user field data from CrUX, not cherry-picked lab conditions.
FreshMart (UK Grocery Platform)
| Metric | Launch Day | After Optimization | Target |
|---|---|---|---|
| Lighthouse Performance | 54 | 97 | 95+ |
| LCP (p75, mobile) | 4.1s | 1.3s | < 1.5s |
| INP (p75, mobile) | 380ms | 68ms | < 100ms |
| CLS (p75, mobile) | 0.24 | 0.01 | < 0.1 |
| First Load JS | 387KB | 142KB | < 150KB |
| Total page weight | 2.8MB | 680KB | < 1MB |
The 54-to-97 jump was not one magic fix. It was the compound effect of every technique in this article applied systematically.
Performance Optimization Checklist
This is the exact checklist I run on every Next.js project before launch:
- [ ] Hero image has `priority` prop
- [ ] All images use `next/image` with explicit dimensions
- [ ] `sizes` attribute set correctly for responsive images
- [ ] AVIF configured as preferred image format
- [ ] Fonts loaded via `next/font` with `display: swap`
- [ ] No external font CDN requests
- [ ] Server Components used for all non-interactive UI
- [ ] Dynamic imports for below-the-fold components
- [ ] Bundle analyzer run — no unexpected large dependencies
- [ ] No full library imports (lodash, moment, etc.)
- [ ] `startTransition` used for non-urgent state updates
- [ ] Lighthouse CI configured in CI/CD pipeline
- [ ] Performance budgets set and enforced
- [ ] CrUX data baseline captured
- [ ] Mobile tested with 4x CPU throttle
Key Takeaways
- LCP is a four-part chain. Optimizing image compression alone is not enough. You need to fix TTFB, resource load delay, load duration, and render delay.
- INP replaced FID in March 2024. If you are still only testing first-click responsiveness, you are missing the metric Google actually uses.
- Server Components are the biggest performance win in modern Next.js. On FreshMart, they reduced client-side JavaScript by 72% across the site.
- Bundle analysis is not optional. Every project I have audited has at least one library that could be replaced or dynamically imported, saving 30-100KB.
- Font loading is a silent killer. `next/font` with self-hosting eliminates both the CLS from font swap and the LCP delay from external CDN requests.
- Lab tests are necessary but not sufficient. Field data from CrUX is what Google uses for ranking. Set up monitoring and check it weekly.
- Performance budgets in CI prevent regression. If it is not automated, it will eventually break.
These are not theoretical recommendations. They are the techniques behind a 54-to-97 Lighthouse score improvement on a production grocery platform serving real users. If you want this level of performance engineering on your project, check out my services or take a look at the FreshMart case study for the full breakdown.
*Uvin Vindula is a Web3 and AI engineer based between Sri Lanka and the UK. He builds production-grade web applications with non-negotiable performance standards through iamuvin.com. Every project ships with 95+ Lighthouse scores, sub-1.5s LCP, and zero compromises. Follow his work at @IAMUVIN.*