DevOps & Deployment
Docker for Next.js: Multi-Stage Builds and Production Deployment
Last updated: April 14, 2026
TL;DR
Not everything belongs on Vercel. When I'm building for enterprise clients with on-premise requirements, self-hosted infrastructure, or strict data residency rules, Docker is the answer. This guide walks through exactly how I containerize Next.js apps using multi-stage builds with standalone output mode to get production images under 150MB. I cover the Dockerfile I actually use in production, environment variable handling (the part most guides get wrong), Docker Compose for local development that mirrors production, health checks, layer caching for fast CI builds, and every mistake I've made along the way. If you're shipping Next.js in containers, this is the guide I wish I had when I started.
When to Docker a Next.js App
I deploy most of my projects to Vercel. It's the path of least resistance for Next.js, and I've written about why I use it for production. But there are situations where Vercel isn't an option, and Docker becomes the right tool.
Enterprise clients with on-premise requirements. I've worked with companies that can't send their data to a third-party cloud. Their compliance team won't approve it. Their security policy requires everything to run on their own infrastructure. Docker gives me a portable artifact I can hand off to their ops team.
Self-hosted infrastructure. Some projects run on VPS instances, dedicated servers, or private Kubernetes clusters. A Docker image is the universal deployment unit. Build once, run anywhere — that promise actually delivers.
Data residency and sovereignty. When a client in the EU needs their application and data to stay within specific geographic boundaries, I can't always guarantee that with a managed platform. Docker on a specific cloud region gives me that control.
Multi-service architectures. When the Next.js app is one piece of a larger system — sitting alongside a Python ML service, a Redis cache, and a PostgreSQL database — Docker Compose ties everything together in a way that Vercel can't.
Cost at scale. Vercel Pro costs $20/month per team member, and compute charges add up with high traffic. A $10/month VPS running Docker can handle surprising amounts of traffic for straightforward applications.
Here's my rule: if the project can run on Vercel without friction, it goes to Vercel. If any of the above constraints exist, I reach for Docker. There's no ego in the decision — it's about picking the right tool for the deployment context.
Multi-Stage Dockerfile
The key to a production-ready Next.js Docker image is multi-stage builds. Without them, your image carries all of node_modules, the entire source code, and every devDependency — easily 1GB+. With multi-stage builds, the final image contains only what's needed to run the app.
Here's the concept. A multi-stage Dockerfile uses multiple FROM statements, each creating a separate build stage. You copy only the artifacts you need from one stage to the next. The final stage is what becomes your image.
For Next.js, I use three stages:
- Dependencies — Install all node modules (including devDependencies for the build).
- Builder — Build the Next.js application.
- Runner — Copy only the standalone output and static files into a minimal base image.
# Stage 1: Install dependencies
FROM node:20-alpine AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --ignore-scripts
# Stage 2: Build the application
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
# Stage 3: Production runner
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:1001 /app/.next/standalone ./
COPY --from=builder --chown=nextjs:1001 /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"
CMD ["node", "server.js"]

The deps stage uses npm ci for reproducible installs. The builder stage compiles everything. The runner stage starts fresh with node:20-alpine — a ~180MB base image — and copies only the standalone server, public assets, and static files. The final image typically lands between 100-150MB. Compare that to a naive single-stage build that easily hits 1-2GB.
I use --ignore-scripts on npm ci to avoid running postinstall scripts during the dependency stage. If a dependency needs a postinstall (like prisma generate), I run it explicitly in the builder stage where I have full control.
Standalone Output Mode
The multi-stage Dockerfile above relies on Next.js standalone output mode. Without it, you'd need to copy the entire node_modules folder into the runner stage, and you'd be back to a massive image.
Standalone mode tells Next.js to trace your application's imports and bundle only the Node.js modules your app actually uses. Instead of a 500MB node_modules, you get a self-contained server.js file with a minimal node_modules folder — usually under 50MB.
Enable it in next.config.ts:
import type { NextConfig } from 'next';
const nextConfig: NextConfig = {
  output: 'standalone',
};

export default nextConfig;

When you run npm run build with this setting, Next.js creates a .next/standalone directory containing:
- `server.js` — The production server entry point.
- `node_modules/` — Only the packages your app imports at runtime.
- `package.json` — A minimal manifest.
Two things standalone mode does NOT include that you must copy manually:
- `public/` directory — Your static assets (favicons, images, robots.txt).
- `.next/static/` directory — Client-side JavaScript bundles, CSS, and media.
That's why the Dockerfile has these two explicit COPY lines in the runner stage. Miss either one and your app will serve pages with no styles, no client-side JavaScript, or missing static assets.
One gotcha: if you use next/image with the default loader, the image optimization API still works in standalone mode. But if you use a custom loader or external image domains, make sure remotePatterns is configured in next.config.ts — the standalone server respects the same config.
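If you do serve external images, the configuration is a sketch like this, where `cdn.example.com` is a placeholder for whatever host actually serves your assets:

```ts
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  output: 'standalone',
  images: {
    // Placeholder host — replace with the domains your app actually loads from
    remotePatterns: [
      {
        protocol: 'https',
        hostname: 'cdn.example.com',
        pathname: '/**',
      },
    ],
  },
};

export default nextConfig;
```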
Environment Variables in Docker
This is where most Next.js Docker guides fall apart. Next.js has two categories of environment variables, and Docker handles them differently.
Build-time variables (`NEXT_PUBLIC_*`): These are inlined into the JavaScript bundle during npm run build. They're baked into the client-side code. You must provide them at build time using ARG and ENV in the Dockerfile, or pass them with --build-arg.
Runtime variables (server-only): These are read by process.env at request time. They don't need to exist during build. You provide them when running the container via -e flags or env_file in Docker Compose.
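To make "inlined at build time" concrete, here's a toy sketch of the substitution. This is an illustration of the idea only, not Next.js's actual compiler:

```typescript
// Toy illustration: "inlining" replaces the env lookup with a literal
// string at build time, the way Next.js does for NEXT_PUBLIC_* variables.
// NOT Next.js's real implementation — just a sketch of the concept.
function inlinePublicEnv(source: string, env: Record<string, string>): string {
  return source.replace(
    /process\.env\.(NEXT_PUBLIC_[A-Z0-9_]+)/g,
    (match: string, name: string) =>
      name in env ? JSON.stringify(env[name]) : match,
  );
}

const clientSource = `fetch(process.env.NEXT_PUBLIC_API_URL + "/users");`;
const bundled = inlinePublicEnv(clientSource, {
  NEXT_PUBLIC_API_URL: 'https://api.example.com',
});

console.log(bundled);
// → fetch("https://api.example.com" + "/users");
// The literal URL is now baked into the bundle; changing the env var at
// container runtime cannot affect this string.
```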
Here's how I handle both:
# In the builder stage
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Build-time variables (inlined into client bundle)
ARG NEXT_PUBLIC_API_URL
ARG NEXT_PUBLIC_SITE_URL
ENV NEXT_PUBLIC_API_URL=$NEXT_PUBLIC_API_URL
ENV NEXT_PUBLIC_SITE_URL=$NEXT_PUBLIC_SITE_URL
RUN npm run build

# Build with public env vars
docker build \
--build-arg NEXT_PUBLIC_API_URL=https://api.example.com \
--build-arg NEXT_PUBLIC_SITE_URL=https://example.com \
-t myapp .
# Run with server-only env vars
docker run \
-e DATABASE_URL="postgresql://..." \
-e JWT_SECRET="..." \
-e STRIPE_SECRET_KEY="sk_live_..." \
-p 3000:3000 \
myapp

The critical rule: never put secrets in NEXT_PUBLIC_* variables. They end up in the client bundle and are visible to anyone who opens DevTools. Database URLs, API keys, and tokens go in server-only environment variables provided at runtime.
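A pattern that helps here (a minimal sketch, not tied to any library): check required runtime variables when the server boots, so a misconfigured `docker run` fails loudly instead of serving 500s later. The variable names below are the examples used in this guide:

```typescript
// Minimal sketch: report which required server-only variables are missing.
function missingEnv(
  required: string[],
  env: Record<string, string | undefined>,
): string[] {
  return required.filter((name) => !env[name]);
}

// Example: a container started with DATABASE_URL but without JWT_SECRET
const missing = missingEnv(['DATABASE_URL', 'JWT_SECRET'], {
  DATABASE_URL: 'postgresql://dev:devpass@db:5432/myapp',
});

if (missing.length > 0) {
  // In a real app, call missingEnv with process.env at startup and exit(1)
  // so the failure shows up in `docker logs` immediately.
  console.error(`Missing required env vars: ${missing.join(', ')}`);
}
```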
I keep a .env.example in every project that documents every variable the app needs, separated into build-time and runtime sections. This saves hours of debugging when someone else deploys the container and wonders why the API calls fail.
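The shape of that file, with placeholder values; the variable names here are the ones used in this guide, and yours will differ:

```bash
# .env.example — every variable the app needs, with placeholder values

# ── Build-time (inlined into the client bundle; pass via --build-arg) ──
NEXT_PUBLIC_API_URL=https://api.example.com
NEXT_PUBLIC_SITE_URL=https://example.com

# ── Runtime (server-only; pass via -e flags or env_file) ──
DATABASE_URL=postgresql://user:password@host:5432/dbname
JWT_SECRET=replace-me
STRIPE_SECRET_KEY=sk_live_xxx
```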
Docker Compose for Development
Running docker build and docker run with a dozen flags gets old fast. Docker Compose gives me a declarative configuration for the entire development stack.
Here's the docker-compose.yml I use for local development:
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.dev
      args:
        NEXT_PUBLIC_API_URL: http://localhost:3000/api
        NEXT_PUBLIC_SITE_URL: http://localhost:3000
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - /app/node_modules
      - /app/.next
    env_file:
      - .env.local
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: devpass
      POSTGRES_DB: myapp
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U dev -d myapp"]
      interval: 5s
      timeout: 5s
      retries: 5
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  pgdata:

The volumes section is key for development. Mounting the project directory (.:/app) enables hot reload — you edit files on your machine, and the container picks up changes immediately. The /app/node_modules and /app/.next exclusions prevent your host's node_modules from overwriting the container's installed dependencies.
I use a separate Dockerfile.dev for development that skips multi-stage optimization and just runs npm run dev:
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
EXPOSE 3000
CMD ["npm", "run", "dev"]

The `depends_on` with `condition: service_healthy` ensures the database and Redis are ready before the app starts. Without health checks, the app would try to connect to a database that's still initializing and crash on the first request.
Production Docker Setup
For production, I use a separate docker-compose.prod.yml that references the multi-stage Dockerfile and adds production concerns:
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        NEXT_PUBLIC_API_URL: ${NEXT_PUBLIC_API_URL}
        NEXT_PUBLIC_SITE_URL: ${NEXT_PUBLIC_SITE_URL}
    ports:
      - "3000:3000"
    env_file:
      - .env.production
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "1.0"
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://localhost:3000/api/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

Key differences from development:
- No volume mounts. The production image is self-contained. No file system dependencies.
- Resource limits. Memory and CPU constraints prevent a single container from consuming all host resources.
- Restart policy. `unless-stopped` means the container restarts automatically after crashes or host reboots, unless you explicitly stop it.
- Log rotation. Without `max-size` and `max-file`, Docker logs grow unbounded and will eventually fill the disk. I've seen this kill production servers.
- Health check with start period. The `start_period` gives Next.js time to initialize before health checks begin failing.
Health Checks
Docker health checks are how orchestrators know if your container is actually serving requests, not just running a process. I create a dedicated health endpoint in every Next.js app:
// app/api/health/route.ts
import { NextResponse } from 'next/server';
export const dynamic = 'force-dynamic';
export async function GET() {
  const health = {
    status: 'healthy',
    timestamp: new Date().toISOString(),
    uptime: process.uptime(),
    version: process.env.APP_VERSION || 'unknown',
  };

  return NextResponse.json(health, { status: 200 });
}

This endpoint runs on every health check interval. Keep it lightweight — don't run database queries or external API calls in the health check unless you specifically want the container to restart when those dependencies go down. For basic liveness checks, just confirming the Node.js process can serve HTTP responses is enough.
If you do want a readiness check that validates database connectivity:
// app/api/health/route.ts — readiness variant
import { NextResponse } from 'next/server';
import { prisma } from '@/lib/prisma'; // your app's shared Prisma client instance

export async function GET() {
  try {
    // Cheap round trip to confirm the database connection is alive
    await prisma.$queryRaw`SELECT 1`;
    return NextResponse.json({ status: 'healthy', db: 'connected' });
  } catch {
    return NextResponse.json(
      { status: 'unhealthy', db: 'disconnected' },
      { status: 503 }
    );
  }
}

The 503 status code tells Docker (and load balancers) to stop sending traffic to this instance until it recovers.
Caching Layers for Fast Builds
Docker builds each instruction as a layer, and layers are cached. If a layer hasn't changed, Docker reuses it. The trick is ordering your Dockerfile instructions so that frequently-changing layers come last.
The slowest part of a Next.js build is npm ci — installing hundreds of packages. If you copy package.json and package-lock.json before copying the rest of your source code, Docker caches the dependency installation layer. As long as your dependencies haven't changed, rebuilds skip straight to copying source files and running the build.
# This layer is cached unless package.json or package-lock.json changes
COPY package.json package-lock.json ./
RUN npm ci --ignore-scripts
# This layer changes on every code change
COPY . .
RUN npm run build

For CI pipelines (GitHub Actions, GitLab CI), I also use BuildKit cache mounts:
# syntax=docker/dockerfile:1
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci --ignore-scripts

The --mount=type=cache instruction persists the npm cache between builds. On a CI server with persistent cache, this cuts dependency installation from 60 seconds to under 10 seconds.
In GitHub Actions, I pair this with Docker layer caching:
- name: Build and push
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: ghcr.io/iamuvin/myapp:latest
    cache-from: type=gha
    cache-to: type=gha,mode=max

The type=gha cache backend stores layers in GitHub's cache. Subsequent builds pull cached layers instead of rebuilding from scratch. My CI builds went from 8 minutes to under 2 minutes with this setup.
Common Docker Mistakes with Next.js
I've made all of these. Learn from my pain.
Forgetting standalone mode. Without output: 'standalone' in next.config.ts, the .next/standalone directory doesn't exist. Your Dockerfile fails with a confusing "COPY failed: file not found" error. Always set this before building.
Copying `.next` from your host. If you've run npm run build locally, you have a .next directory. Docker copies it into the container and uses the stale build instead of building fresh. Add .next to your .dockerignore:
.next
node_modules
.git
.env*.local

Missing `.dockerignore` entirely. Without it, Docker sends your entire project directory — including node_modules, .git, and local env files — as build context. This slows down every build and can leak secrets into the image. Always create a .dockerignore.
Using `node:20` instead of `node:20-alpine`. The standard Node.js image is ~1GB. Alpine is ~180MB. Unless you need glibc-specific packages (rare for Next.js), always use Alpine.
Hardcoding `NEXT_PUBLIC_*` in the Dockerfile. I've seen Dockerfiles with ENV NEXT_PUBLIC_API_URL=https://api.example.com in the runner stage. This doesn't work because NEXT_PUBLIC_* variables are inlined at build time. Setting them at runtime has no effect on the client bundle. They must be ARG values in the builder stage.
Running as root. The default Docker user is root. If the container is compromised, the attacker has root access. Always create a non-root user and switch to it before CMD. The Dockerfile above uses nextjs with UID 1001.
No `.env.example`. When someone else deploys your container three months later, they won't know which environment variables are required. Document every variable, whether it's build-time or runtime, and what format it expects.
Ignoring container size. A 2GB image takes forever to pull, especially on CI runners that start fresh. Multi-stage builds and Alpine get you under 150MB. Check your image size with docker images and treat anything over 200MB as a smell.
My Production Dockerfile
Here's the complete Dockerfile I use for production Next.js deployments. This is the real thing — the same file I hand to enterprise clients for their self-hosted infrastructure.
# syntax=docker/dockerfile:1
# ── Stage 1: Dependencies ────────────────────────────
FROM node:20-alpine AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm \
npm ci --ignore-scripts
# ── Stage 2: Build ───────────────────────────────────
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Build-time environment variables
ARG NEXT_PUBLIC_API_URL
ARG NEXT_PUBLIC_SITE_URL
ARG NEXT_PUBLIC_GA_ID
ENV NEXT_PUBLIC_API_URL=$NEXT_PUBLIC_API_URL
ENV NEXT_PUBLIC_SITE_URL=$NEXT_PUBLIC_SITE_URL
ENV NEXT_PUBLIC_GA_ID=$NEXT_PUBLIC_GA_ID
# Generate the Prisma client if a schema exists. The mkdir -p ensures the
# .prisma and @prisma directories exist either way, so the runner-stage
# COPY instructions below can't fail on non-Prisma projects.
RUN mkdir -p node_modules/.prisma node_modules/@prisma && \
    if [ -f "prisma/schema.prisma" ]; then \
    npx prisma generate; \
    fi
ENV NEXT_TELEMETRY_DISABLED=1
RUN npm run build
# ── Stage 3: Production Runner ──────────────────────
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
# Security: non-root user
RUN addgroup --system --gid 1001 nodejs && \
    adduser --system --uid 1001 nextjs
# Copy only what's needed
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
# Prisma engine binaries (empty directories when Prisma isn't used).
# Note: COPY is not a shell instruction, so tricks like `2>/dev/null || true`
# don't work here — the directories must exist in the builder stage.
COPY --from=builder --chown=nextjs:nodejs /app/node_modules/.prisma ./node_modules/.prisma
COPY --from=builder --chown=nextjs:nodejs /app/node_modules/@prisma ./node_modules/@prisma
USER nextjs
EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \
CMD wget -qO- http://localhost:3000/api/health || exit 1
CMD ["node", "server.js"]

And the .dockerignore that goes with it:
.next
node_modules
.git
.gitignore
.env*.local
.env.development
*.md
LICENSE
.vscode
.idea
coverage
.turbo

Build and run:
# Build the production image
docker build \
--build-arg NEXT_PUBLIC_API_URL=https://api.example.com \
--build-arg NEXT_PUBLIC_SITE_URL=https://example.com \
-t myapp:latest .
# Run with server-side environment variables
docker run -d \
--name myapp \
-p 3000:3000 \
--env-file .env.production \
--restart unless-stopped \
myapp:latest
# Verify it's running
docker logs myapp
curl http://localhost:3000/api/health

This Dockerfile handles the Prisma edge case (conditionally generating the client and copying engine binaries), disables Next.js telemetry in both build and runtime, runs as a non-root user, and includes a built-in health check. The final image consistently lands between 120-150MB depending on the application's dependency graph.
Key Takeaways
- Use multi-stage builds. Three stages (deps, builder, runner) keep your production image under 150MB instead of 1GB+.
- Enable standalone output. Set `output: 'standalone'` in `next.config.ts`. Without it, multi-stage builds don't work effectively.
- Separate build-time and runtime environment variables. `NEXT_PUBLIC_*` must be provided as `ARG` during build. Server-only variables go in at runtime.
- Always use Alpine base images. `node:20-alpine` is 5x smaller than `node:20` with no practical downsides for Next.js.
- Create a `.dockerignore`. Exclude `node_modules`, `.next`, `.git`, and env files to speed up builds and avoid leaking secrets.
- Run as non-root. Create a dedicated user. It's one extra line and a significant security improvement.
- Add health checks. A simple `/api/health` endpoint lets Docker, Kubernetes, and load balancers know your app is alive.
- Cache dependency layers. Copy `package.json` before source code. Use BuildKit cache mounts in CI for even faster builds.
- Use Docker Compose for development. One `docker compose up` should start your entire stack with hot reload working.
- Keep a `.env.example`. Future you (and your team) will thank you.
Docker isn't always the answer for Next.js. When Vercel works, use Vercel. But when the project demands self-hosted infrastructure, data residency compliance, or multi-service orchestration, a well-built Docker setup is production-grade and portable. These are the patterns I use for every client project that can't go on Vercel — and they haven't let me down.
*Written by Uvin Vindula — Web3 and AI engineer building from Sri Lanka and the UK. I help teams ship production software with Next.js, React, and modern infrastructure. Explore my services or reach out at contact@uvin.lk.*