Setting Up a Local Dev Environment with Docker Compose
“It works on my machine” stopped being an acceptable excuse years ago, but plenty of developers still run into environment inconsistency issues. Your local PostgreSQL is version 15, production is 16. Your colleague has Node 20, you have Node 22. Redis works on macOS but someone on the team is running Windows.
Docker Compose solves this by defining your entire development environment in a single YAML file. Everyone on the team runs the same services, the same versions, with the same configuration. Here’s how to set it up practically, without turning your dev environment into a Kubernetes cosplay.
What Docker Compose Actually Does
Docker Compose lets you define and run multiple Docker containers together. Instead of running docker run commands for each service (database, cache, app server, etc.), you describe everything in a docker-compose.yml file and bring it all up with docker compose up.
Each service runs in its own container, isolated but networked together. Your app container can talk to the database container by hostname. Ports are mapped to your local machine so you can access services from your browser or tools.
The key benefit for development: the environment is reproducible. Clone the repo, run docker compose up, and everything works. No installing PostgreSQL locally. No version conflicts. No “did you configure Redis?” conversations.
A Practical Example
Let’s set up a full-stack dev environment for a Node.js app with PostgreSQL and Redis. Create a docker-compose.yml in your project root:
services:
  app:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      DATABASE_URL: postgres://devuser:devpass@db:5432/devdb
      REDIS_URL: redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started

  db:
    image: postgres:16-alpine
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: devuser
      POSTGRES_PASSWORD: devpass
      POSTGRES_DB: devdb
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U devuser -d devdb"]
      interval: 5s
      timeout: 5s
      retries: 5

  cache:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  pgdata:
Let me break down the important parts.
The App Service
app:
  build: .
  volumes:
    - .:/app
    - /app/node_modules
build: . tells Compose to build the image from the Dockerfile in the current directory. The volumes section is crucial for development—.:/app mounts your local source code into the container, so changes you make locally are immediately reflected inside the container. The /app/node_modules line prevents your local node_modules from overwriting the container’s node_modules (which might be built for a different platform).
You’ll need a Dockerfile:
FROM node:22-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
The Node.js Alpine image is small and fast. Copying package*.json and running npm install before copying the rest of the code lets Docker cache that layer: npm install only re-runs when package.json or package-lock.json change, not on every source edit.
The Database Service
db:
  image: postgres:16-alpine
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U devuser -d devdb"]
    interval: 5s
    timeout: 5s
    retries: 5
The healthcheck is important. Without it, your app container might start before PostgreSQL is ready to accept connections, causing startup errors. The depends_on with condition: service_healthy in the app service ensures proper startup order.
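The healthcheck only gates startup order, though; it does nothing for connections dropped later, so it's still worth retrying the initial connect in application code. A minimal sketch, where connect is a hypothetical stand-in for your database client's connect call:

```javascript
// Retry an async connect function with a fixed delay between attempts.
// `connect` is a placeholder, e.g. () => pgClient.connect().
async function connectWithRetry(connect, { retries = 5, delayMs = 1000 } = {}) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await connect();
    } catch (err) {
      // Give up only after the final attempt.
      if (attempt === retries) throw err;
      console.log(`Connect attempt ${attempt} failed, retrying...`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

With both the healthcheck and a retry loop, neither a slow PostgreSQL startup nor a brief restart takes your app down.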
The pgdata named volume persists database data between container restarts. Without it, your data disappears every time you docker compose down.
Environment Variables
environment:
  DATABASE_URL: postgres://devuser:devpass@db:5432/devdb
  REDIS_URL: redis://cache:6379
Notice the hostnames db and cache—these match the service names in your Compose file. Docker Compose creates a network where services can find each other by name. Your app code connects to db:5432 instead of localhost:5432 when running inside Docker.
For local development outside Docker (e.g., running just the databases in Docker but the app locally), the port mappings let you connect through localhost:
# From inside Docker
DATABASE_URL=postgres://devuser:devpass@db:5432/devdb
# From your local machine
DATABASE_URL=postgres://devuser:devpass@localhost:5432/devdb
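Because both connection strings are ordinary URLs, application code doesn't need to care which one it gets; Node's built-in URL class can pull the pieces apart either way. A small sketch using the example credentials from the Compose file:

```javascript
// Parse a postgres:// connection string into its components.
function parseDatabaseUrl(databaseUrl) {
  const url = new URL(databaseUrl);
  return {
    user: url.username,
    password: url.password,
    host: url.hostname,              // "db" inside Docker, "localhost" outside
    port: Number(url.port),
    database: url.pathname.slice(1), // drop the leading "/"
  };
}

console.log(parseDatabaseUrl("postgres://devuser:devpass@db:5432/devdb").host);
// "db"
```

The same code runs unchanged inside and outside Docker; only the DATABASE_URL value differs.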
Useful Commands
Here’s what you’ll use daily:
# Start everything
docker compose up
# Start in background
docker compose up -d
# View logs
docker compose logs -f app
# Stop everything
docker compose down
# Stop and remove volumes (reset database)
docker compose down -v
# Rebuild after Dockerfile changes
docker compose up --build
# Run a one-off command
docker compose exec app npm test
# Open a shell in a running container
docker compose exec app sh
The exec commands are particularly useful. Need to run migrations? docker compose exec app npm run migrate. Need to check the database directly? docker compose exec db psql -U devuser devdb.
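A small quality-of-life improvement is wrapping the commands you run most often in package.json scripts, so nobody has to remember the exact exec syntax. The script names here are just suggestions:

```json
{
  "scripts": {
    "dev": "docker compose up",
    "dev:down": "docker compose down",
    "db:migrate": "docker compose exec app npm run migrate",
    "db:shell": "docker compose exec db psql -U devuser devdb"
  }
}
```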
Hot Reloading
For the development experience to be good, hot reloading needs to work. Since we mounted the source code as a volume, file changes are immediately visible inside the container. If your dev server supports hot reloading (most do—Vite, Next.js, nodemon), it’ll work.
One gotcha: some file watchers don’t work well with Docker volumes on macOS or Windows because of how filesystem events propagate between the host and container. If hot reloading doesn’t trigger, try adding this to your dev server configuration:
// vite.config.js
export default {
  server: {
    watch: {
      usePolling: true
    }
  }
}
Polling is slower than native file watching but works reliably across all Docker setups.
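If you'd rather not touch the app's config, watchers built on chokidar (which include nodemon and many Node tools) can also be switched to polling with an environment variable, which you can set in the Compose file instead; whether your specific watcher honors it depends on the tool:

```yaml
services:
  app:
    environment:
      CHOKIDAR_USEPOLLING: "true"
```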
Don’t Dockerize Everything
A common mistake is putting absolutely everything in Docker, including your code editor, git, and development tools. That’s overkill for most projects. The practical approach:
Dockerize: Databases, caches, message queues, other services your app depends on. These are the things that cause “works on my machine” problems.
Don’t Dockerize (usually): Your application code during active development, if running it locally is straightforward. Many developers prefer running Node/Python/Go directly on their machine for the fastest feedback loop, while databases and services run in Docker.
The hybrid approach—services in Docker, app running locally—is often the best developer experience. You get reproducible service versions without the overhead of running your IDE’s file watcher through Docker volume mounts.
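In practice, the hybrid setup is just the example Compose file with the app service removed; the services keep their published ports so the locally running app can reach them on localhost. A sketch, reusing the example's credentials:

```yaml
# docker-compose.yml — services only; run the app locally with `npm run dev`
services:
  db:
    image: postgres:16-alpine
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: devuser
      POSTGRES_PASSWORD: devpass
      POSTGRES_DB: devdb
    volumes:
      - pgdata:/var/lib/postgresql/data

  cache:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  pgdata:
```

With this layout, DATABASE_URL on your machine points at localhost:5432, and hot reloading works natively with no volume mounts involved.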
The .dockerignore File
Don’t forget a .dockerignore:
node_modules
.git
.env
*.md
.vscode
.idea
This prevents unnecessary files from being copied into your Docker image during builds. The node_modules exclusion is especially important—without it, your local node_modules (potentially built for a different OS) gets copied into the image and causes platform compatibility issues.
When Things Go Wrong
Port conflicts: If port 5432 is already in use (maybe you have local PostgreSQL running), change the host port: "5433:5432". The container still uses 5432 internally.
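If different teammates need different host ports, one option is Compose's variable interpolation with a default, so the committed file stays unchanged (DB_PORT is a name invented for this example):

```yaml
services:
  db:
    ports:
      - "${DB_PORT:-5432}:5432"
```

Anyone with a conflict runs DB_PORT=5433 docker compose up, or sets DB_PORT=5433 in a .env file next to the Compose file; everyone else gets the default.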
Volume permissions: On Linux, files created inside containers might be owned by root. Add user: "1000:1000" to your service to match your host user ID.
Slow on macOS: Docker Desktop’s file system performance on macOS isn’t great. If your app feels sluggish, try the hybrid approach—run the app locally, services in Docker.
Docker Compose isn’t magic. It’s a straightforward tool that eliminates a specific category of frustrating problems. Set it up once, commit the docker-compose.yml to your repo, and never have the “it works on my machine” conversation again.