Next.js and MongoDB Development Environment with Docker Compose
Setting up a consistent development environment for our team became increasingly important as we grew. With developers using different operating systems and local MongoDB installations, we frequently encountered the dreaded "works on my machine" problem. Docker Compose seemed like the perfect solution, but our first attempts led to unexpected issues with container dependencies, data persistence, and network connectivity. Here's how we solved these problems to create a robust, cross-platform development environment.
The Initial Approach: Basic Docker Compose File
Our first attempt was to create a simple Docker Compose setup with two services: our Next.js application and MongoDB:
```yaml
# docker-compose.yml
version: '3'
services:
  nextjs:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      - MONGODB_URI=mongodb://mongodb:27017/myapp
    depends_on:
      - mongodb
  mongodb:
    image: mongo:4.4
    ports:
      - "27017:27017"
    volumes:
      - mongodb_data:/data/db
volumes:
  mongodb_data:
```

And a simple development Dockerfile:
```dockerfile
# Dockerfile.dev
FROM node:16-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
```

While this setup seemed straightforward, we quickly ran into several issues.
Problem #1: Connection Failures with MongoDB
The first issue we encountered was that our Next.js application couldn't connect to MongoDB when starting up. The logs showed connection errors like this:
```
nextjs | MongoNetworkError: failed to connect to server [mongodb:27017] on first connect [Error: connect ECONNREFUSED 172.18.0.2:27017
nextjs |     at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16) {
nextjs |   name: 'MongoNetworkError'
nextjs | }]
```

After investigating, we discovered that the issue was timing-related. The MongoDB container was starting at the same time as our Next.js application, but our Next.js app was trying to connect to MongoDB immediately. Even though we had specified `depends_on` in the Docker Compose file, this only ensured that the MongoDB container started before the Next.js container; it didn't guarantee that the MongoDB service inside the container was ready to accept connections.
Solution #1: Implementing Health Checks and Connection Retry Logic
We tackled this problem in two ways:
- Added a healthcheck to the MongoDB service in Docker Compose
- Implemented connection retry logic in our Next.js application
Here's the updated Docker Compose file:
```yaml
# Updated docker-compose.yml
version: '3.8'  # depends_on conditions require a Compose version implementing the Compose Specification
services:
  nextjs:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      - MONGODB_URI=mongodb://mongodb:27017/myapp
      - NODE_ENV=development
    depends_on:
      mongodb:
        condition: service_healthy  # Wait until MongoDB is healthy
  mongodb:
    image: mongo:4.4
    ports:
      - "27017:27017"
    volumes:
      - mongodb_data:/data/db
    healthcheck:  # Define a healthcheck for MongoDB
      test: ["CMD", "mongo", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 5s
volumes:
  mongodb_data:
```

We also added connection retry logic to our MongoDB connection code:
```javascript
// lib/mongodb.js
import { MongoClient } from 'mongodb';

const MONGODB_URI = process.env.MONGODB_URI;
const MONGODB_DB = process.env.MONGODB_DB || 'myapp';

let cached = global.mongo;
if (!cached) {
  cached = global.mongo = { conn: null, promise: null };
}

export async function connectToDatabase() {
  if (cached.conn) {
    return cached.conn;
  }
  if (!cached.promise) {
    const opts = {
      useNewUrlParser: true,
      useUnifiedTopology: true,
    };

    // Connection retry logic with exponential backoff
    const maxRetries = 5;
    let backoff = 1000; // Start with a 1s backoff

    cached.promise = (async () => {
      for (let attempt = 1; attempt <= maxRetries; attempt++) {
        try {
          const client = new MongoClient(MONGODB_URI, opts);
          await client.connect();
          const db = client.db(MONGODB_DB);
          cached.conn = { client, db };
          return cached.conn;
        } catch (e) {
          console.log(`MongoDB connection attempt ${attempt} failed: ${e.message}`);
          if (attempt === maxRetries) {
            throw e;
          }
          console.log(`Retrying in ${backoff}ms...`);
          await new Promise((r) => setTimeout(r, backoff));
          backoff *= 2; // Exponential backoff
        }
      }
    })();
  }
  return cached.promise;
}
```

This combination ensured that our Next.js application would wait for MongoDB to be fully operational before attempting to connect, and would retry the connection with increasing delays if the initial connection failed.
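The backoff pattern itself is easy to verify in isolation. Here is a minimal, generic sketch of the same idea; the `withRetry` helper and its demo operation are illustrative, not part of our actual codebase:

```javascript
// Generic retry helper: run an async operation, doubling the wait after each failure.
async function withRetry(operation, { maxRetries = 5, backoff = 1000 } = {}) {
  let lastError;
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await operation();
    } catch (e) {
      lastError = e;
      if (attempt === maxRetries) break;
      await new Promise((resolve) => setTimeout(resolve, backoff));
      backoff *= 2; // exponential backoff: 1s, 2s, 4s, ...
    }
  }
  throw lastError;
}

// Demo: an operation that fails twice before succeeding.
let calls = 0;
withRetry(
  async () => {
    calls += 1;
    if (calls < 3) throw new Error(`attempt ${calls} failed`);
    return 'connected';
  },
  { maxRetries: 5, backoff: 10 }
).then((result) => console.log(`${result} after ${calls} attempts`)); // logs "connected after 3 attempts"
```

Extracting the loop into a helper like this also makes the policy (attempt count, initial delay) easy to tune per caller.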
Problem #2: File Permission Issues with Volumes
Our second issue emerged when we mounted the local project directory as a volume in the Next.js container. When the container ran npm commands that modified files (like installing new dependencies), the created files ended up owned by the root user in the container, which mapped to different permissions on the host machine. This led to permission errors when developers tried to modify these files outside the container.
Solution #2: Non-Root User and Proper Volume Permissions
We updated our Dockerfile.dev to create and use a non-root user with the same UID/GID as the host user:
```dockerfile
# Updated Dockerfile.dev
FROM node:16-alpine

# Create app directory
WORKDIR /app

# Add a non-root user
RUN addgroup --system --gid 1001 nodejs && \
    adduser --system --uid 1001 nextjs && \
    chown -R nextjs:nodejs /app

# Install dependencies first (for better layer caching)
COPY --chown=nextjs:nodejs package*.json ./

# Switch to the non-root user before installing, so node_modules isn't root-owned
USER nextjs
RUN npm ci

# The rest of the source code will be mounted as a volume
CMD ["npm", "run", "dev"]
```

And updated our Docker Compose file to run the service as that non-root user:
```yaml
# docker-compose.yml with user handling
version: '3.8'
services:
  nextjs:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      - MONGODB_URI=mongodb://mongodb:27017/myapp
      - NODE_ENV=development
    depends_on:
      mongodb:
        condition: service_healthy
    # Run as the non-root user we created in Dockerfile.dev
    user: "nextjs:nodejs"
  # ... rest of the file remains the same ...
```

This solved our permission issues by ensuring that files created inside the container were owned by UID 1001 rather than root, which matched the UID of the primary user on most of our developers' machines.
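On machines where the developer's UID/GID isn't 1001, a variant of this approach passes the host IDs in as build arguments instead of hard-coding them. A sketch, assuming the developer exports `UID` and `GID` in their shell (or puts them in a `.env` file) and that `Dockerfile.dev` declares matching `ARG` lines:

```yaml
# docker-compose.yml (sketch) -- assumes Dockerfile.dev declares
# ARG UID=1001 / ARG GID=1001 and uses ${UID}/${GID} in its
# addgroup/adduser commands
services:
  nextjs:
    build:
      context: .
      dockerfile: Dockerfile.dev
      args:
        UID: ${UID:-1001}  # falls back to 1001 if not set
        GID: ${GID:-1001}
```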
Problem #3: Data Persistence in MongoDB
While our Docker Compose file included a volume for MongoDB data persistence, we found that the data was still occasionally lost. This happened when:
- Developers ran `docker-compose down -v`, which removed all volumes
- The container was recreated with a different volume path
- Disk-space cleanup commands (such as `docker system prune --volumes`) removed volumes that appeared unused
Solution #3: Named Volumes and Data Initialization
We improved our setup with named volumes and a data initialization script:
```yaml
# docker-compose.yml with enhanced MongoDB persistence
version: '3.8'
services:
  # ... nextjs service remains the same ...
  mongodb:
    image: mongo:4.4
    ports:
      - "27017:27017"
    volumes:
      - mongodb_data:/data/db
      - ./mongo-init:/docker-entrypoint-initdb.d
    environment:
      - MONGO_INITDB_DATABASE=myapp
    healthcheck:
      test: ["CMD", "mongo", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 5s
volumes:
  mongodb_data:
    name: myapp_mongodb_data  # Named volume for easier identification
```

We also created a MongoDB initialization script to ensure our database was properly set up with default data:
```javascript
// mongo-init/00-init.js
db = db.getSiblingDB('myapp');

// Create collections
db.createCollection('users');
db.createCollection('products');

// Insert some initial data if collections are empty
if (db.users.countDocuments() === 0) {
  db.users.insertMany([
    { name: 'Admin User', email: '[email protected]', role: 'admin', createdAt: new Date() },
    { name: 'Test User', email: '[email protected]', role: 'user', createdAt: new Date() }
  ]);
  print('Inserted default users');
}

if (db.products.countDocuments() === 0) {
  db.products.insertMany([
    { name: 'Product 1', price: 19.99, description: 'This is a test product', createdAt: new Date() },
    { name: 'Product 2', price: 29.99, description: 'This is another test product', createdAt: new Date() }
  ]);
  print('Inserted default products');
}

// Create indexes
db.users.createIndex({ email: 1 }, { unique: true });
db.products.createIndex({ name: 1 });

print('Database initialization completed');
```

This approach ensured that our database structure and some initial data would be automatically set up when the MongoDB container was created, providing a consistent starting point for all developers. One caveat worth knowing: the official mongo image only runs scripts in `/docker-entrypoint-initdb.d` when the container starts with an empty data directory, so re-seeding requires removing the named volume first.
Problem #4: Next.js Hot Reloading in Docker
Another issue we encountered was that Next.js hot reloading wasn't working properly inside the Docker container. Changes to files on the host wouldn't trigger reloads in the container.
Solution #4: Docker Volume Polling and Next.js Config
We updated our Next.js configuration and Docker setup to support hot reloading:
```javascript
// next.config.js
module.exports = {
  reactStrictMode: true,
  // Enable polling in Docker environment
  webpackDevMiddleware: (config) => {
    if (process.env.NODE_ENV === 'development') {
      config.watchOptions = {
        poll: 800, // Check for changes every 800ms
        aggregateTimeout: 300, // Delay before rebuilding
      };
    }
    return config;
  },
};
```

And added chokidar polling to our Docker Compose environment:
```yaml
# docker-compose.yml with enhanced hot reloading
version: '3.8'
services:
  nextjs:
    # ... other config ...
    environment:
      - MONGODB_URI=mongodb://mongodb:27017/myapp
      - NODE_ENV=development
      - CHOKIDAR_USEPOLLING=true  # Enable file watching with polling
      - WATCHPACK_POLLING=true    # Enable polling for Next.js 12+
    # ... rest of config ...
  # ... mongodb service remains the same ...
```

These changes ensured that file changes on the host were properly detected inside the Docker container, enabling the hot reloading feature of Next.js.
Problem #5: Environment Variables and Secrets
Managing environment variables and secrets across the team became challenging. We needed a solution that would:
- Keep sensitive values out of source control
- Provide consistent configuration for all developers
- Work seamlessly with our Docker Compose setup
Solution #5: .env Files and Environment-Specific Overrides
We implemented a structured approach to environment variables:
```shell
# .env.example (checked into source control)

# Database
MONGODB_URI=mongodb://mongodb:27017/myapp
MONGODB_DB=myapp

# Next.js
NEXT_PUBLIC_API_URL=http://localhost:3000/api

# Feature flags
NEXT_PUBLIC_ENABLE_FEATURE_X=false
```

```shell
# .env.local (gitignored, for local overrides)
# Each developer creates this file locally based on .env.example
```

We updated our Docker Compose file to use these .env files:
```yaml
# docker-compose.yml with .env file support
version: '3.8'
services:
  nextjs:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - /app/node_modules
    env_file:
      - .env.example
      - .env.local  # Override with local settings
    environment:
      - NODE_ENV=development
    # ... rest of config ...
  # ... mongodb service remains the same ...
```

This approach gave us a good balance between standardization and flexibility, while keeping sensitive values out of source control. One caveat: Compose refuses to start if a listed env_file is missing, so each developer needs to create `.env.local` (even an empty one) before their first `docker-compose up`.
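To catch a misconfigured environment early rather than at the first database call, the application can also fail fast at startup when a required variable is absent. A small sketch of that idea; the `assertEnv` helper is hypothetical, not from our codebase:

```javascript
// Hypothetical startup check: throw when required variables are missing.
function assertEnv(names, env = process.env) {
  const missing = names.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
}

// Demo against a plain object standing in for process.env:
let message = '';
try {
  assertEnv(['MONGODB_URI', 'MONGODB_DB'], { MONGODB_URI: 'mongodb://mongodb:27017/myapp' });
} catch (e) {
  message = e.message;
}
console.log(message); // Missing required environment variables: MONGODB_DB
```

Calling a check like this once at boot turns a vague runtime connection error into an immediate, named configuration error.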
Problem #6: Development Tools and Container Bloat
As our development needs grew, we started adding more services to our Docker Compose setup: a MongoDB admin UI, Redis for caching, and more. This led to container bloat and slow startup times.
Solution #6: Multiple Compose Files and Profiles
We reorganized our Docker Compose setup into multiple files with different profiles:
```yaml
# docker-compose.yml (base configuration)
version: '3.8'
services:
  nextjs:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - /app/node_modules
    env_file:
      - .env.example
      - .env.local
    environment:
      - NODE_ENV=development
      - CHOKIDAR_USEPOLLING=true
    depends_on:
      mongodb:
        condition: service_healthy
    user: "nextjs:nodejs"
    profiles: ["app", "dev", "full"]
  mongodb:
    image: mongo:4.4
    ports:
      - "27017:27017"
    volumes:
      - mongodb_data:/data/db
      - ./mongo-init:/docker-entrypoint-initdb.d
    environment:
      - MONGO_INITDB_DATABASE=myapp
    healthcheck:
      test: ["CMD", "mongo", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 5s
    profiles: ["app", "db", "dev", "full"]
volumes:
  mongodb_data:
    name: myapp_mongodb_data
```

```yaml
# docker-compose.tools.yml (development tools)
version: '3.8'
services:
  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - "8081:8081"
    environment:
      - ME_CONFIG_MONGODB_SERVER=mongodb
      - ME_CONFIG_MONGODB_PORT=27017
    depends_on:
      - mongodb
    profiles: ["tools", "full"]
  redis:
    image: redis:6-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    profiles: ["cache", "full"]
  redis-commander:
    image: rediscommander/redis-commander
    ports:
      - "8082:8081"
    environment:
      - REDIS_HOSTS=local:redis:6379
    depends_on:
      - redis
    profiles: ["tools", "full"]
volumes:
  redis_data:
    name: myapp_redis_data
```

Now developers could choose which services to start based on their needs:
```shell
# Just the app and MongoDB
docker-compose --profile app up

# Full development environment with all tools
docker-compose -f docker-compose.yml -f docker-compose.tools.yml --profile full up

# Just the database and admin tools
docker-compose -f docker-compose.yml -f docker-compose.tools.yml --profile tools --profile db up
```

This flexibility made our development environment much more efficient, allowing developers to start only the services they needed for their current task.
Integrating with IDE and Debugging
To complete our development setup, we added support for debugging with VS Code:
```jsonc
// .vscode/launch.json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Docker: Attach to Node",
      "type": "node",
      "request": "attach",
      "port": 9229,
      "address": "localhost",
      "localRoot": "${workspaceFolder}",
      "remoteRoot": "/app",
      "protocol": "inspector",
      "restart": true
    }
  ]
}
```

And updated our Docker Compose file to support the debugger:
```yaml
# docker-compose.yml with debugging support
version: '3.8'
services:
  nextjs:
    # ... other config ...
    ports:
      - "3000:3000"
      - "9229:9229"  # Expose debug port
    environment:
      # ... other env vars ...
      - NODE_OPTIONS=--inspect=0.0.0.0:9229  # Enable the inspector
    # ... rest of config ...
  # ... other services ...
```

We updated our package.json to include a debug script:
```jsonc
// package.json scripts section
"scripts": {
  "dev": "next dev",
  "dev:debug": "NODE_OPTIONS='--inspect=0.0.0.0:9229' next dev",
  "build": "next build",
  "start": "next start",
  "lint": "next lint"
}
```

Final Docker Compose Setup
Our final Docker Compose development environment evolved into a robust, flexible system that:
- Ensured proper startup order with health checks
- Handled file permissions correctly with non-root users
- Provided consistent data persistence with named volumes and initialization scripts
- Supported hot reloading for rapid development
- Managed environment variables securely
- Offered flexible service configurations via profiles
- Integrated with debugging tools for troubleshooting
Differences from Production
It's worth noting that our development Docker Compose setup differed from our production deployment in a few ways:
- Development mounted source code as volumes; production used built assets
- Development ran Next.js in dev mode; production used built and optimized code
- Development included debugging tools and hot reloading; production focused on performance
- Development used local MongoDB; production used a managed MongoDB service
Understanding these differences helped us avoid the "it works in development" problem that can occur when development and production environments differ significantly.
Conclusion
Creating a reliable Docker Compose setup for Next.js and MongoDB development required solving several non-obvious challenges. By addressing container startup order, file permissions, data persistence, hot reloading, and environment configuration, we built a development environment that worked consistently across our team.
The result was well worth the effort—our onboarding time for new developers decreased from days to hours, and the "works on my machine" problems virtually disappeared. The flexibility of our multi-profile setup also meant that developers could tailor their environment to their specific needs, improving efficiency and satisfaction.