Deploying Next.js Projects with Docker
Containerizing a Next.js application seems straightforward at first glance. But after our team went through several iterations of Dockerizing our Next.js app, we discovered numerous pitfalls that weren't immediately obvious. Here's what we learned from building an optimized Docker setup for our production Next.js application.
The Initial Prototype: Development Mode in Production
Our first attempt at Dockerizing our Next.js application was naive but seemed to work. We created a simple Dockerfile:
FROM node:16
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "run", "dev"]This Dockerfile worked fine for local development, but when we deployed it to production, we noticed several issues:
- The container was using significantly more CPU than expected
- Memory usage was higher than necessary
- Start-up times were slow
- File watching was enabled, which is unnecessary in production
The root cause? We were running Next.js in development mode (npm run dev) rather than production mode. This meant:
- Hot reloading was active and watching for file changes
- No code optimization was happening
- Development-specific code was being executed
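A quick way to catch this, sketched here with an assumed container name of nextjs-app, is to check what command the container is actually running:

# Show the command the container was started with
docker inspect --format '{{.Config.Cmd}}' nextjs-app

# List the container's processes from the host; seeing "next dev" means development mode
docker top nextjs-app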
Build vs. Start: Understanding Next.js Commands
After realizing our mistake, we updated our Dockerfile to use the proper production commands:
FROM node:16
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]This approach was better because:
- npm run build creates an optimized production build
- npm start runs the app in production mode without development features
When we deployed this version, performance improved significantly. CPU usage dropped by about 40%, and memory usage was more stable. However, we still had issues to solve.
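We compared resource usage before and after the change with docker stats (the container name here is illustrative):

# Snapshot of CPU and memory usage for the running container
docker stats nextjs-app --no-stream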
Problem: Bloated Images with Development Dependencies
Our Docker image was unnecessarily large because we were including all dependencies, even those only needed for development. The size was approaching 2GB, which:
- Increased deployment times
- Used more storage in our registry
- Made container start-up slower
Looking at the image layers with docker history, we discovered that node_modules was responsible for most of the bloat. Many packages were only needed for linting, testing, or type checking—operations we don't perform in production.
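For reference, the inspection boils down to two commands (the image tag is assumed):

# Show each layer and its size; the npm install layer dominated in our case
docker history nextjs-app:latest

# Compare overall image sizes across tags
docker image ls nextjs-app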
Solution: Production Dependencies Only
Our first attempt to solve this was to install only production dependencies:
FROM node:16
WORKDIR /app
COPY package*.json ./
RUN npm install --only=production
COPY . .
RUN npm run build # This failed!
EXPOSE 3000
CMD ["npm", "start"]But this approach failed because some development dependencies were actually needed for the build process. Next.js requires various development dependencies to compile the application, even though they're not needed to run it.
The Multi-Stage Build Approach
The solution was to use a multi-stage Docker build:
# Build Stage
FROM node:16 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# Production Stage
FROM node:16-slim
WORKDIR /app
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/package.json ./package.json
# Install only production dependencies
RUN npm install --only=production
EXPOSE 3000
CMD ["npm", "start"]This approach:
- Uses a full Node.js image for building
- Copies only the necessary files to the production image
- Installs only production dependencies in the final image
The result was a much smaller image (around 400MB, down from 2GB), faster deployments, and quicker container start-up times.
Problem: Environment Variables and Build Time
Next.js handles environment variables in a specific way, and we ran into issues with our Docker setup. Environment variables used during build time (like API endpoints that are embedded in the client-side JavaScript) weren't being properly set.
In Next.js:
- Environment variables are processed at build time
- NEXT_PUBLIC_* variables are embedded in the client-side JavaScript
- Other environment variables are only available on the server side
Our problem was that the environment variables weren't available during the build stage in our multi-stage Dockerfile.
Solution: ARG vs ENV for Build-Time Variables
We had to distinguish between build-time and runtime environment variables and handle them appropriately in our Dockerfile:
# Build Stage
FROM node:16 AS builder
WORKDIR /app
# Define build arguments
ARG NEXT_PUBLIC_API_URL
ARG NEXT_PUBLIC_GA_ID
# Set environment variables for the build
ENV NEXT_PUBLIC_API_URL=${NEXT_PUBLIC_API_URL}
ENV NEXT_PUBLIC_GA_ID=${NEXT_PUBLIC_GA_ID}
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# Production Stage
FROM node:16-slim
WORKDIR /app
# Runtime environment variables (server-side only)
ENV NODE_ENV=production
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/package.json ./package.json
RUN npm install --only=production
EXPOSE 3000
CMD ["npm", "start"]When building the container, we would pass the build arguments:
docker build --build-arg NEXT_PUBLIC_API_URL=https://api.example.com --build-arg NEXT_PUBLIC_GA_ID=UA-123456-7 -t nextjs-app:latest .

This approach properly handled the environment variables at build time, ensuring they were correctly embedded in the client-side JavaScript.
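One quick sanity check, assuming the tag and API URL from the command above: run a throwaway container and grep the generated client bundles for the embedded value.

# List client-side bundle files that contain the embedded API URL
docker run --rm nextjs-app:latest sh -c 'grep -rl "https://api.example.com" .next/static'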
Problem: Next.js Caching Behavior
Another issue we encountered was with Next.js's build cache. When we built the Docker image multiple times with different environment variables, the cached build didn't always reflect the changes.
This happened because:
- Docker's layer caching meant that if package.json and the source files hadn't changed, the build step would be cached
- Environment variable changes alone didn't trigger a rebuild of the layer
Solution: Cache Busting for Build Arguments
To solve this, we implemented a cache-busting strategy when environment variables changed:
# Build Stage
FROM node:16 AS builder
WORKDIR /app
# Define build arguments with default empty values
ARG NEXT_PUBLIC_API_URL=""
ARG NEXT_PUBLIC_GA_ID=""
# Add a build-time variable for cache busting
ARG CACHEBUST=1
# Set environment variables for the build
ENV NEXT_PUBLIC_API_URL=${NEXT_PUBLIC_API_URL}
ENV NEXT_PUBLIC_GA_ID=${NEXT_PUBLIC_GA_ID}
COPY package*.json ./
RUN npm install
COPY . .
# Use the CACHEBUST arg here to invalidate cache when needed
RUN echo "CACHEBUST=${CACHEBUST}" > .env.build && npm run build
# Rest of the Dockerfile remains the same
...

When building, we could increment the CACHEBUST value to force a rebuild:
docker build --build-arg NEXT_PUBLIC_API_URL=https://api.example.com --build-arg NEXT_PUBLIC_GA_ID=UA-123456-7 --build-arg CACHEBUST=$(date +%s) -t nextjs-app:latest .

Problem: NGINX Integration
For production deployments, we wanted to use NGINX as a reverse proxy in front of our Next.js application to handle:
- SSL termination
- Static asset caching
- Compression
- HTTP/2 support
But we ran into routing issues between NGINX and Next.js, particularly with dynamic routes and API endpoints.
Solution: Proper NGINX Configuration
We created a custom NGINX configuration tailored for Next.js:
# nginx.conf
server {
listen 80;
server_name _;
gzip on;
gzip_proxied any;
gzip_comp_level 4;
gzip_types text/css application/javascript image/svg+xml;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
# Static assets
location /_next/static/ {
alias /app/.next/static/;
expires 365d;
access_log off;
}
location /static/ {
alias /app/public/static/;
expires 365d;
access_log off;
}
# Next.js application
location / {
proxy_pass http://localhost:3000;
}
}

We then updated our Dockerfile to include NGINX:
# Build Stage
FROM node:16 AS builder
# ... build stage remains the same ...
# Production Stage
FROM nginx:alpine
# Install Node.js in the NGINX image
RUN apk add --update nodejs npm
WORKDIR /app
# Copy NGINX configuration
COPY nginx.conf /etc/nginx/conf.d/default.conf
# Copy Next.js build from builder stage
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/package.json ./package.json
# Install production dependencies
RUN npm install --only=production
# Copy startup script
COPY ./start.sh /start.sh
RUN chmod +x /start.sh
EXPOSE 80
CMD ["/start.sh"]Finally, we created a startup script to run both NGINX and Next.js:
#!/bin/sh
# start.sh
# Start Next.js
cd /app
npm start &
# Start NGINX
nginx -g "daemon off;"This approach allowed us to benefit from NGINX's features while still running our Next.js application.
Advanced Strategy: Using Node Alpine for Smaller Images
We further optimized our Docker image size by using Node Alpine as the base image:
# Build Stage
FROM node:16-alpine AS builder
WORKDIR /app
# Install dependencies for node-gyp
RUN apk add --no-cache python3 make g++
# ... rest of the build stage ...
# Production Stage
FROM node:16-alpine
WORKDIR /app
# ... rest of the production stage ...

This reduced our final image size to around 200MB, less than half the previous size.
Problem: Slow Builds and npm Install
Our Docker builds were still taking longer than we wanted, particularly because npm install had to download all dependencies for every build.
Solution: Docker BuildKit and Cache Mounting
We took advantage of Docker BuildKit's cache mounting features to speed up our builds:
# syntax=docker/dockerfile:1.4
FROM node:16-alpine AS builder
WORKDIR /app
# Copy package files
COPY package.json package-lock.json ./
# Use cache mount for node_modules
RUN --mount=type=cache,target=/root/.npm npm ci
# ... rest of the Dockerfile ...

To enable Docker BuildKit, we set the environment variable before building:
DOCKER_BUILDKIT=1 docker build -t nextjs-app:latest .

This significantly reduced our build times, especially for subsequent builds.
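The useful property of the cache mount is that it survives layer invalidation: even when the npm ci layer has to be rebuilt (for example after a dependency bump), most packages are read from the persistent /root/.npm cache rather than downloaded again. An illustrative sequence:

# First build populates the shared npm cache at /root/.npm
DOCKER_BUILDKIT=1 docker build -t nextjs-app:latest .

# After changing package.json, the npm ci layer reruns,
# but packages resolve mostly from the cache mount instead of the registry
DOCKER_BUILDKIT=1 docker build -t nextjs-app:latest .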
Final Production-Ready Dockerfile
After addressing all these challenges, our final production-ready Dockerfile looked like this:
# syntax=docker/dockerfile:1.4
# Build Stage
FROM node:16-alpine AS builder
WORKDIR /app
# Build args
ARG NEXT_PUBLIC_API_URL
ARG NEXT_PUBLIC_GA_ID
ARG CACHEBUST=1
# Environment variables for the build
ENV NEXT_PUBLIC_API_URL=${NEXT_PUBLIC_API_URL}
ENV NEXT_PUBLIC_GA_ID=${NEXT_PUBLIC_GA_ID}
ENV NODE_ENV=production
# Install dependencies for potential node-gyp builds
RUN apk add --no-cache python3 make g++
# Install dependencies with caching (devDependencies are still required for the build;
# with NODE_ENV=production set above, npm ci would skip them unless told otherwise)
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm npm ci --include=dev
# Copy application code
COPY . .
# Build the application
RUN echo "CACHEBUST=${CACHEBUST}" > .env.build && npm run build
# Production Stage with NGINX
FROM nginx:alpine
# Install Node.js
RUN apk add --update nodejs npm
WORKDIR /app
# NGINX config
COPY nginx.conf /etc/nginx/conf.d/default.conf
# Copy only necessary files from builder
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/package.json ./package.json
# Install production dependencies only
RUN npm install --only=production
# Non-root user for security
RUN adduser -D nextjs && \
    chown -R nextjs:nextjs /app && \
    chown -R nextjs:nextjs /var/cache/nginx && \
    chown -R nextjs:nextjs /var/log/nginx && \
    chown -R nextjs:nextjs /etc/nginx/conf.d && \
    touch /var/run/nginx.pid && \
    chown -R nextjs:nextjs /var/run/nginx.pid
USER nextjs
# Startup script
COPY --chown=nextjs:nextjs ./start.sh /start.sh
RUN chmod +x /start.sh
EXPOSE 80
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 CMD wget --quiet --tries=1 --spider http://localhost:80/ || exit 1
CMD ["/start.sh"]Results and Lessons Learned
After several iterations, our final Docker setup for Next.js achieved:
- 90% reduction in image size (from 2GB to ~200MB)
- 50% faster builds using BuildKit caching
- Proper handling of environment variables for both build and runtime
- Integration with NGINX for improved performance
- Enhanced security with non-root users
- Health checks for better container orchestration
Key lessons we learned:
- Understand the Next.js build process: Next.js has distinct build and runtime phases that must be handled correctly in Docker.
- Multi-stage builds are essential: They significantly reduce image size by separating build and runtime environments.
- Environment variables require special attention: Next.js processes them at build time, which needs to be considered in Docker.
- Caching strategies matter: Proper caching can dramatically speed up builds, but cache invalidation must be handled correctly.
- Security considerations shouldn't be an afterthought: Running containers as non-root users is an important security practice.
Conclusion
Dockerizing a Next.js application for production requires careful consideration of the Next.js build process, environment variable handling, and image optimization. The approach outlined in this article has served our team well, resulting in smaller, faster, and more secure Docker images for our Next.js applications.
For teams looking to deploy Next.js applications in containers, I recommend starting with a multi-stage build approach and then optimizing for your specific requirements. Take time to understand Next.js's unique characteristics, particularly around environment variables and build caching, as these are areas where many teams encounter challenges.