
Performance & Optimization

Core Web Vitals for Next.js: How I Hit 95+ on Every Project

Uvin Vindula · September 16, 2024 · 11 min read

TL;DR

Hitting a 95+ Lighthouse score on Next.js is not about sprinkling lazy loads everywhere and hoping for the best. It requires a systematic approach: Server Components by default, aggressive bundle splitting, font preloading with size-adjust, responsive images with next/image, and measuring everything in the field — not just in lab conditions. I have shipped these patterns across multiple production sites, including FreshMart, a UK grocery platform where I brought the mobile Lighthouse score from 54 to 97. This article breaks down the exact techniques I use, with real before/after numbers and the code patterns behind them.


What Core Web Vitals Actually Measure in 2024

If you are building with Next.js and care about Core Web Vitals, you need to understand what Google actually measures — and what they changed.

As of March 2024, Google replaced First Input Delay (FID) with Interaction to Next Paint (INP) as a Core Web Vital. This was not a minor tweak. FID only measured the delay of the *first* interaction. INP measures the responsiveness of *every* interaction throughout the entire page lifecycle and reports the worst one (technically the 98th percentile). Sites that passed FID easily started failing INP overnight.

Here are the three metrics that determine your page experience ranking signal:

| Metric | What It Measures | Good | Needs Work | Poor |
| --- | --- | --- | --- | --- |
| LCP (Largest Contentful Paint) | How fast the main content loads | < 2.5s | 2.5s - 4.0s | > 4.0s |
| INP (Interaction to Next Paint) | How responsive the page feels | < 200ms | 200ms - 500ms | > 500ms |
| CLS (Cumulative Layout Shift) | How stable the layout is | < 0.1 | 0.1 - 0.25 | > 0.25 |

Google says "good" is under 2.5s for LCP and under 200ms for INP. I do not aim for "good." My targets are non-negotiable:

  • LCP < 1.5s — One full second under the threshold.
  • INP < 100ms — Half the "good" limit. Users should never perceive lag.
  • CLS < 0.1 — Zero visible layout shifts. Period.

These are not aspirational. These are the numbers I hit on FreshMart, on client projects I deliver through my services, and on every Next.js site I ship.


LCP — How I Get It Under 1.5 Seconds

LCP is almost always your hero image, your hero heading, or your largest above-the-fold text block. The fix is not complicated, but most developers get it wrong because they optimize the wrong thing.

Step 1: Identify the LCP Element

Before optimizing anything, I open Chrome DevTools, go to the Performance panel, and record a page load. The LCP element is highlighted in the timeline. On FreshMart, it was the hero product image on the homepage and the product thumbnail on category pages.

Most people assume LCP is about image compression. It is not. LCP is a chain of four sub-parts, and you need to fix all of them:

  1. Time to First Byte (TTFB) — Server response time.
  2. Resource load delay — Time between TTFB and when the browser starts loading the LCP resource.
  3. Resource load duration — How long the LCP resource takes to download.
  4. Element render delay — Time between resource loaded and element painted.
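These four sub-parts are what the web-vitals library's attribution build reports for LCP, so you can triage from field data instead of guessing. Here is a small illustrative helper — my own sketch, not FreshMart code; the field names follow the web-vitals v4 attribution shape — that picks the phase to attack first:

```typescript
// Sub-part durations (ms) for one LCP sample, mirroring the web-vitals v4
// attribution fields (an assumption about your telemetry shape).
interface LcpBreakdown {
  timeToFirstByte: number
  resourceLoadDelay: number
  resourceLoadDuration: number
  elementRenderDelay: number
}

// Return the sub-part contributing the most time, i.e. the one to fix first.
function dominantLcpPhase(b: LcpBreakdown): keyof LcpBreakdown {
  const entries = Object.entries(b) as [keyof LcpBreakdown, number][]
  entries.sort((a, z) => z[1] - a[1]) // largest duration first
  return entries[0][0]
}
```

If resourceLoadDelay dominates, better compression will not save you — preloading will.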

Step 2: Kill the Resource Load Delay

The biggest LCP killer in Next.js apps is that the browser does not know about the hero image until JavaScript executes. This is the fix:

tsx
// app/page.tsx — Server Component (default)
import Image from 'next/image'

export default function HomePage() {
  return (
    <section className="relative h-[600px]">
      <Image
        src="/images/hero-produce.webp"
        alt="Fresh organic groceries delivered to your door"
        fill
        priority // This is the key — adds preload link to <head>
        sizes="100vw"
        quality={85}
        className="object-cover"
      />
      <h1 className="relative z-10 text-5xl font-bold text-white">
        Fresh groceries, delivered in 2 hours
      </h1>
    </section>
  )
}

The priority prop on next/image generates a <link rel="preload"> tag in the HTML <head>. Without it, the browser discovers the image only after parsing the component tree. On FreshMart, adding priority to the hero image alone cut LCP by 800ms.

Step 3: Optimize TTFB with Static Generation

For pages where the content does not change every request, I use static generation or ISR:

tsx
// app/categories/[slug]/page.tsx
export async function generateStaticParams() {
  const categories = await getCategories()
  return categories.map((cat) => ({ slug: cat.slug }))
}

export const revalidate = 3600 // Regenerate every hour

FreshMart LCP numbers (mobile, 4G throttled):

| Change | Before | After |
| --- | --- | --- |
| Added priority to hero image | 3.8s | 3.0s |
| Switched hero to WebP (was PNG) | 3.0s | 2.4s |
| Static generation + edge CDN | 2.4s | 1.6s |
| Preconnect to image CDN | 1.6s | 1.3s |

Final mobile LCP: 1.3 seconds. That is 65% faster than where we started.
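The preconnect row deserves its own snippet, since it is the only change in the table not shown elsewhere. One way to emit the hint in the App Router is React's preconnect API from react-dom (available in the React versions bundled with recent Next.js). This is a sketch, and cdn.example.com is a placeholder hostname, not FreshMart's real CDN:

```tsx
// app/layout.tsx (sketch)
import { preconnect } from 'react-dom'

export default function RootLayout({ children }: { children: React.ReactNode }) {
  // Open DNS + TCP + TLS to the image CDN before the first <Image> request.
  preconnect('https://cdn.example.com')
  return (
    <html lang="en">
      <body>{children}</body>
    </html>
  )
}
```

A plain link rel="preconnect" tag in the document head achieves the same thing if you are on an older React version.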


INP — The New FID Replacement

INP is where most Next.js apps silently fail. You can have a perfect Lighthouse lab score and still get flagged for poor INP in the field, because Lighthouse synthetic tests do not interact with your page the way real users do.

INP measures the time from a user interaction (click, tap, keypress) to the next visual update. Every interaction is measured. The reported value is the worst interaction (98th percentile for pages with 50+ interactions).
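To make the percentile rule concrete: INP reports the single worst interaction, except that one outlier is discarded for every 50 interactions on the page. A simplified model of that selection — my sketch, not Chrome's implementation — looks like this:

```typescript
// durations: interaction latencies (ms) observed over the page's lifetime.
// INP reports the worst one, skipping one outlier per 50 interactions.
function estimateInp(durations: number[]): number {
  if (durations.length === 0) return 0
  const sorted = [...durations].sort((a, b) => b - a) // worst first
  const skip = Math.min(Math.floor(durations.length / 50), sorted.length - 1)
  return sorted[skip]
}
```

This is why one janky handler can sink your score: on a typical page with fewer than 50 interactions, nothing is discarded and the single worst interaction is the reported INP.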

The Main Thread is the Enemy

Every time you run JavaScript on the main thread during an interaction, you are adding to INP. Here is what I audit on every project:

1. Move heavy computation off the main thread:

tsx
// Before: Filtering 2,000 products on the main thread
function handleSearch(query: string) {
  const results = products.filter((p) =>
    p.name.toLowerCase().includes(query.toLowerCase())
  )
  setResults(results)
}

// After: Debounce + Web Worker for heavy filtering
const worker = new Worker(
  new URL('../workers/search.worker.ts', import.meta.url)
)

function handleSearch(query: string) {
  worker.postMessage({ query, products })
}

worker.onmessage = (e: MessageEvent<Product[]>) => {
  setResults(e.data)
}
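The comment above says "Debounce + Web Worker", but the debounce half is not shown. A minimal generic debounce — my sketch; wire it to worker.postMessage however your component is structured:

```typescript
// Collapse rapid calls into one: fn runs only after `ms` of quiet time.
function debounce<A extends unknown[]>(fn: (...args: A) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined
  return (...args: A) => {
    clearTimeout(timer) // cancel the pending call, if any
    timer = setTimeout(() => fn(...args), ms)
  }
}

// Usage sketch: post to the search worker at most once per 150ms typing pause.
// const debouncedSearch = debounce(
//   (query: string) => worker.postMessage({ query, products }),
//   150
// )
```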

2. Break up long tasks with `startTransition`:

tsx
import { startTransition } from 'react'

function handleCategoryChange(categoryId: string) {
  // Urgent: update the selected state immediately
  setSelectedCategory(categoryId)

  // Non-urgent: filter and re-render the product grid
  startTransition(() => {
    const filtered = filterProductsByCategory(categoryId)
    setDisplayedProducts(filtered)
  })
}
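startTransition keeps React renders responsive, but a long non-React loop inside the transition still blocks the main thread. The complementary pattern is to yield back to the event loop between chunks of work. This is a generic sketch (the chunk size of 100 is arbitrary):

```typescript
// Map over a large array without holding the main thread for long:
// process `chunkSize` items, then yield so pending input can be handled.
async function processInChunks<T, R>(
  items: T[],
  fn: (item: T) => R,
  chunkSize = 100
): Promise<R[]> {
  const out: R[] = []
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) out.push(fn(item))
    // Yield to the event loop between chunks (use scheduler.yield() where supported).
    await new Promise((resolve) => setTimeout(resolve, 0))
  }
  return out
}
```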

3. Avoid layout thrashing in event handlers:

tsx
// Bad: forces synchronous layout recalculation
function handleClick(e: React.MouseEvent) {
  const height = element.offsetHeight // Forces layout
  element.style.height = `${height + 100}px` // Triggers layout again
}

// Good: batch reads and writes
function handleClick(e: React.MouseEvent) {
  requestAnimationFrame(() => {
    const height = element.offsetHeight
    element.style.height = `${height + 100}px`
  })
}

FreshMart INP numbers (field data, Chrome UX Report):

| Change | Before (p75) | After (p75) |
| --- | --- | --- |
| Baseline (hydration-heavy) | 380ms | – |
| Moved to Server Components | 380ms | 210ms |
| startTransition on filters | 210ms | 120ms |
| Web Worker for search | 120ms | 68ms |

Final field INP at p75: 68ms. Well under my 100ms target.


CLS — Zero Layout Shifts

CLS is the most preventable metric and the most annoying when it fails. Every time a user is about to tap a button and the layout shifts because a font loaded or an ad appeared, you lose trust.
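It helps to know how the number is computed. Individual layout shifts are grouped into session windows — a window lasts at most 5 seconds and closes after a 1-second gap — and CLS is the worst window's summed score. A simplified model (my sketch, not Chrome's implementation):

```typescript
interface Shift { startTime: number; value: number } // one layout-shift entry

function cumulativeLayoutShift(shifts: Shift[]): number {
  let best = 0          // worst session window seen so far
  let windowScore = 0   // running score of the current window
  let windowStart = 0
  let prevTime = -Infinity
  for (const s of shifts) {
    // Start a new window if >1s since the last shift or the window exceeds 5s.
    if (s.startTime - prevTime > 1000 || s.startTime - windowStart > 5000) {
      windowScore = 0
      windowStart = s.startTime
    }
    windowScore += s.value
    prevTime = s.startTime
    best = Math.max(best, windowScore)
  }
  return best
}
```

The practical consequence: several small shifts close together are scored as one big one, which is why a cascade of late-loading elements is so damaging.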

The Three CLS Killers

1. Images without dimensions:

tsx
// Bad: no dimensions = layout shift when image loads
<img src="/product.jpg" alt="Product" />

// Good: next/image requires dimensions or uses fill
<Image
  src="/product.jpg"
  alt="Product"
  width={400}
  height={300}
  className="rounded-lg"
/>

2. Fonts causing FOUT (Flash of Unstyled Text):

I handle this with next/font and the size-adjust property (more on this in the Font Loading section below).

3. Dynamic content injected above the viewport:

tsx
// Bad: banner appears after load, pushes everything down
{isLoaded && <PromoBanner />}

// Good: reserve space with min-height
<div className="min-h-[48px]">
  {isLoaded ? <PromoBanner /> : null}
</div>

FreshMart CLS numbers:

| Page | Before | After |
| --- | --- | --- |
| Homepage | 0.24 | 0.01 |
| Category listing | 0.18 | 0.03 |
| Product detail | 0.09 | 0.0 |
| Cart | 0.15 | 0.02 |

Every page under 0.05. The homepage went from "poor" to essentially zero shift.


Bundle Size Optimization

Bundle size directly impacts LCP (more JavaScript = slower parse and execute) and INP (larger bundles = more main thread work during interactions). Here is my process.

Step 1: Measure with Bundle Analyzer

bash
npm install @next/bundle-analyzer
ts
// next.config.ts
import type { NextConfig } from 'next'
import bundleAnalyzer from '@next/bundle-analyzer'

const withBundleAnalyzer = bundleAnalyzer({
  enabled: process.env.ANALYZE === 'true',
})

const nextConfig: NextConfig = {
  // ...your existing config
}

export default withBundleAnalyzer(nextConfig)

Run ANALYZE=true next build and you get a treemap visualization showing every module and its gzipped size. On FreshMart, the first analysis revealed three problems:

  1. Moment.js was imported for a single date formatting call — 67KB gzipped. Replaced with date-fns/format — 2.1KB.
  2. Lodash was fully imported instead of cherry-picked — 25KB gzipped. Switched to lodash-es/debounce and lodash-es/throttle — 1.8KB total.
  3. A rich text editor was loaded on every page because it was in the layout — 142KB gzipped. Moved to dynamic import on the page that needed it.

Step 2: Dynamic Imports for Below-the-Fold

tsx
import dynamic from 'next/dynamic'

const ReviewSection = dynamic(() => import('@/components/ReviewSection'), {
  loading: () => <ReviewSkeleton />,
})

const RecommendationGrid = dynamic(
  () => import('@/components/RecommendationGrid'),
  // Client-only component, no SSR overhead. Note: in the App Router,
  // { ssr: false } is only allowed inside Client Components, so this
  // call belongs in a 'use client' file.
  { ssr: false }
)

Step 3: Tree Shaking Verification

Not all libraries tree-shake correctly. I verify by checking the bundle analyzer output after import changes. If a library does not support ESM exports, I either find an alternative or use a targeted import path.

FreshMart bundle size results:

| Metric | Before | After | Reduction |
| --- | --- | --- | --- |
| First Load JS | 387KB | 142KB | 63% |
| Shared chunks | 198KB | 89KB | 55% |
| Homepage route | 67KB | 23KB | 66% |
| Product page route | 54KB | 18KB | 67% |

The 63% reduction in First Load JS was the single biggest contributor to the LCP improvement.


Image Optimization with next/image

Images are typically 50-70% of a page's total weight. Getting this wrong undoes every other optimization. Here is what I enforce on every project.

Responsive Sizes with srcSet

tsx
<Image
  src="/images/hero.jpg"
  alt="Hero banner"
  width={1920}
  height={1080}
  sizes="(max-width: 768px) 100vw, (max-width: 1200px) 75vw, 60vw"
  priority
  quality={80}
/>

The sizes attribute tells the browser which image size to download at each viewport width. Without it, mobile users download the full 1920px image. With it, they get a 768px version that is 70% smaller.

Format Strategy

Next.js serves WebP by default when the browser supports it. For even better compression, I configure AVIF as the preferred format:

ts
// next.config.ts
const nextConfig: NextConfig = {
  images: {
    formats: ['image/avif', 'image/webp'],
    deviceSizes: [640, 750, 828, 1080, 1200, 1920],
    imageSizes: [16, 32, 48, 64, 96, 128, 256],
    minimumCacheTTL: 60 * 60 * 24 * 30, // 30 days
  },
}

Image size comparison (FreshMart hero, 1200px wide):

| Format | Size | Savings vs PNG |
| --- | --- | --- |
| PNG | 842KB | – |
| WebP | 198KB | 76% |
| AVIF | 134KB | 84% |

Blur Placeholder for Perceived Performance

tsx
import heroImage from '@/public/images/hero.jpg'

<Image
  src={heroImage}
  alt="Hero"
  placeholder="blur" // Automatically generates blurDataURL at build time
  priority
/>

Static imports enable the blur placeholder automatically. For dynamic images from a CMS, I generate blurDataURL server-side using plaiceholder:

tsx
import { getPlaiceholder } from 'plaiceholder'

async function getBlurDataURL(src: string): Promise<string> {
  const buffer = await fetch(src).then((res) => res.arrayBuffer())
  const { base64 } = await getPlaiceholder(Buffer.from(buffer))
  return base64
}

Font Loading Strategy

Fonts are the most underestimated CLS and LCP killer. A bad font loading strategy can add 300-500ms to LCP and cause visible layout shifts on every page.

The next/font Approach

tsx
// app/layout.tsx
import { Plus_Jakarta_Sans, Inter, JetBrains_Mono } from 'next/font/google'

const jakarta = Plus_Jakarta_Sans({
  subsets: ['latin'],
  variable: '--font-jakarta',
  display: 'swap',
  preload: true,
})

const inter = Inter({
  subsets: ['latin'],
  variable: '--font-inter',
  display: 'swap',
  preload: true,
})

const jetbrains = JetBrains_Mono({
  subsets: ['latin'],
  variable: '--font-jetbrains',
  display: 'swap',
  preload: true,
})

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html className={`${jakarta.variable} ${inter.variable} ${jetbrains.variable}`}>
      <body className="font-sans">{children}</body>
    </html>
  )
}

Why next/font instead of a <link> tag to Google Fonts?

  1. Self-hosted — Font files are served from your domain, eliminating DNS lookup and connection to fonts.googleapis.com. Saves 100-200ms.
  2. Automatic size-adjust — next/font calculates size-adjust, ascent-override, and descent-override CSS properties so the fallback font matches the web font's dimensions. This eliminates CLS from font swap.
  3. Preloaded — The font file gets a <link rel="preload"> in the HTML head.

FreshMart font loading impact:

| Metric | Google Fonts CDN | next/font self-hosted |
| --- | --- | --- |
| Font load time | 340ms | 80ms |
| CLS from font swap | 0.08 | 0.0 |
| LCP impact | +220ms | +0ms |

Server Components for Performance

React Server Components are the single most impactful Next.js performance feature since static generation. The principle is simple: if a component does not need interactivity, render it on the server and send zero JavaScript to the client.

The Decision Framework

Does the component use useState, useEffect, onClick, onChange,
or any browser API?
  YES → 'use client' at the top of the file
  NO  → Keep it as a Server Component (the default)

On FreshMart, I audited every component and found that 73% of them did not need client-side JavaScript. Product cards, category headers, footer, navigation links, breadcrumbs, SEO metadata — all Server Components.

The Composition Pattern

When a page is mostly static but has one interactive element, do not make the whole page a Client Component. Compose:

tsx
// app/products/[id]/page.tsx — Server Component
import { getProduct } from '@/lib/products'
import { ProductGallery } from '@/components/ProductGallery'
import { AddToCartButton } from '@/components/AddToCartButton' // 'use client'
import { ProductReviews } from '@/components/ProductReviews'

export default async function ProductPage({
  params,
}: {
  params: Promise<{ id: string }>
}) {
  const { id } = await params
  const product = await getProduct(id)

  return (
    <main>
      {/* Server-rendered — zero JS sent */}
      <h1 className="text-3xl font-bold">{product.name}</h1>
      <p className="text-lg text-muted">{product.description}</p>

      {/* Client Component — only this sends JS */}
      <AddToCartButton productId={product.id} price={product.price} />

      {/* Server-rendered gallery with optimized images */}
      <ProductGallery images={product.images} />

      {/* Server-rendered reviews */}
      <ProductReviews productId={product.id} />
    </main>
  )
}

FreshMart JS reduction from Server Components:

| Page | Client JS Before | Client JS After | Reduction |
| --- | --- | --- | --- |
| Homepage | 245KB | 67KB | 73% |
| Product page | 189KB | 52KB | 72% |
| Category listing | 156KB | 41KB | 74% |
| Cart | 134KB | 98KB | 27% (most is interactive) |

The cart page had the smallest reduction because most of its UI is interactive — quantity selectors, remove buttons, promo code input. That is expected and fine.


My Performance Testing Workflow

I do not wait until the end of a project to test performance. It is baked into every PR.

Local Development

  1. `next build && next start` — Never test performance on next dev. The dev server has no optimizations and gives misleading numbers.
  2. Lighthouse CI in the terminal:
bash
npx lighthouse http://localhost:3000 \
  --output=json \
  --output-path=./lighthouse-report.json \
  --chrome-flags="--headless" \
  --throttling.cpuSlowdownMultiplier=4

I use 4x CPU slowdown to simulate a mid-range phone. If it scores 90+ with 4x throttle, it will score 95+ on real hardware.

CI/CD Pipeline

yaml
# .github/workflows/lighthouse.yml
name: Lighthouse CI
on: [pull_request]

jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run build
      - run: npm start &
      - name: Run Lighthouse
        uses: treosh/lighthouse-ci-action@v12
        with:
          urls: |
            http://localhost:3000
            http://localhost:3000/categories/fresh-produce
            http://localhost:3000/products/organic-bananas
          budgetPath: ./lighthouse-budget.json
          uploadArtifacts: true
json
// lighthouse-budget.json
[
  {
    "path": "/*",
    "timings": [
      { "metric": "largest-contentful-paint", "budget": 1500 },
      { "metric": "interactive", "budget": 3500 },
      { "metric": "cumulative-layout-shift", "budget": 0.1 }
    ],
    "resourceSizes": [
      { "resourceType": "script", "budget": 150 },
      { "resourceType": "total", "budget": 500 }
    ]
  }
]

If any budget is exceeded, the PR fails. No exceptions.

Field Data Monitoring

Lab tests are necessary but not sufficient. Real users have different devices, networks, and interaction patterns. I use two sources for field data:

  1. Chrome UX Report (CrUX) — Google's real-user dataset. Available via PageSpeed Insights API or BigQuery.
  2. Vercel Speed Insights — If deployed on Vercel, this gives real-user Web Vitals with zero configuration.

I check CrUX data weekly. If any metric regresses, I investigate the specific deployment that caused it.
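The CrUX API (the queryRecord endpoint at chromeuxreport.googleapis.com) nests each metric's percentiles, with p75 as the headline value; millisecond metrics come back as numbers and CLS as a string. A small helper for pulling targets out of a response — the interface below is a simplified slice of the real payload:

```typescript
// Simplified slice of the CrUX queryRecord response shape.
interface CruxMetric { percentiles: { p75: number | string } }
interface CruxRecord { metrics: Record<string, CruxMetric> }

// Read a metric's p75, normalizing the string-encoded CLS value to a number.
function p75(record: CruxRecord, metric: string): number {
  const m = record.metrics[metric]
  if (!m) throw new Error(`metric ${metric} not in CrUX record`)
  return Number(m.percentiles.p75)
}

// Usage sketch (after POSTing {origin, formFactor} to records:queryRecord):
// const lcp = p75(record, 'largest_contentful_paint') // ms at p75
```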


Real Numbers from Production Sites

Here are the actual Lighthouse scores and field metrics from projects I have shipped. These are not cherry-picked lab runs — they are field data from CrUX.

FreshMart (UK Grocery Platform)

| Metric | Launch Day | After Optimization | Target |
| --- | --- | --- | --- |
| Lighthouse Performance | 54 | 97 | 95+ |
| LCP (p75, mobile) | 4.1s | 1.3s | < 1.5s |
| INP (p75, mobile) | 380ms | 68ms | < 100ms |
| CLS (p75, mobile) | 0.24 | 0.01 | < 0.1 |
| First Load JS | 387KB | 142KB | < 150KB |
| Total page weight | 2.8MB | 680KB | < 1MB |

The 54-to-97 jump was not one magic fix. It was the compound effect of every technique in this article applied systematically.

Performance Optimization Checklist

This is the exact checklist I run on every Next.js project before launch:

  • [ ] Hero image has priority prop
  • [ ] All images use next/image with explicit dimensions
  • [ ] sizes attribute set correctly for responsive images
  • [ ] AVIF configured as preferred image format
  • [ ] Fonts loaded via next/font with display: swap
  • [ ] No external font CDN requests
  • [ ] Server Components used for all non-interactive UI
  • [ ] Dynamic imports for below-the-fold components
  • [ ] Bundle analyzer run — no unexpected large dependencies
  • [ ] No full library imports (lodash, moment, etc.)
  • [ ] startTransition used for non-urgent state updates
  • [ ] Lighthouse CI configured in CI/CD pipeline
  • [ ] Performance budgets set and enforced
  • [ ] CrUX data baseline captured
  • [ ] Mobile tested with 4x CPU throttle

Key Takeaways

  1. LCP is a four-part chain. Optimizing image compression alone is not enough. You need to fix TTFB, resource load delay, load duration, and render delay.
  2. INP replaced FID in March 2024. If you are still only testing first-click responsiveness, you are missing the metric Google actually uses.
  3. Server Components are the biggest performance win in modern Next.js. On FreshMart, they reduced client-side JavaScript by 72% across the site.
  4. Bundle analysis is not optional. Every project I have audited has at least one library that could be replaced or dynamically imported, saving 30-100KB.
  5. Font loading is a silent killer. next/font with self-hosting eliminates both the CLS from font swap and the LCP delay from external CDN requests.
  6. Lab tests are necessary but not sufficient. Field data from CrUX is what Google uses for ranking. Set up monitoring and check it weekly.
  7. Performance budgets in CI prevent regression. If it is not automated, it will eventually break.

These are not theoretical recommendations. They are the techniques behind a 54-to-97 Lighthouse score improvement on a production grocery platform serving real users. If you want this level of performance engineering on your project, check out my services or take a look at the FreshMart case study for the full breakdown.


*Uvin Vindula is a Web3 and AI engineer based between Sri Lanka and the UK. He builds production-grade web applications with non-negotiable performance standards through iamuvin.com. Every project ships with 95+ Lighthouse scores, sub-1.5s LCP, and zero compromises. Follow his work at @IAMUVIN.*

Uvin Vindula

Web3 and AI engineer based in Sri Lanka and the UK. Author of The Rise of Bitcoin. Director of Blockchain and Software Solutions at Terra Labz. Founder of uvin.lk — Sri Lanka's Bitcoin education platform with 10,000+ learners.