How to Dockerize the MERN Stack for Development


We can containerize the entire MERN stack (MongoDB, Express, React, Node.js) with Docker and even add support for hot reloading.

Here’s what we want to accomplish today:

  • Dockerize our React client
  • Dockerize our Express API server
  • Find a Mongo image from Docker Hub
  • Write our docker-compose.yml
  • Enable hot reloading

Dockerizing React and Express

Current Project Structure

Suppose this is our project structure.

Inside client, we have our React application.

Inside server, we have our Express API server.

πŸ“‚ project
 ┣ πŸ“‚ client
    ┣ πŸ“‚ src
    β”— πŸ“œ package.json
 β”— πŸ“‚ server
    ┣ πŸ“‚ src
    β”— πŸ“œ package.json

New Project Structure

First, we need to add a Dockerfile in every directory that will be built into a custom image (hint: client and server).

We can also add a .dockerignore in each of those directories, which will speed up the Docker build process.

Lastly, we need a docker-compose.yml in our root directory.

This is our new project structure.

πŸ“‚ project
 ┣ πŸ“œ docker-compose.yml
 β”— πŸ“‚ client
    ┣ πŸ“œ .dockerignore
    ┣ πŸ“œ Dockerfile
    ┣ πŸ“‚ src
    β”— πŸ“œ package.json
 β”— πŸ“‚ server
    ┣ πŸ“œ .dockerignore
    ┣ πŸ“œ Dockerfile
    ┣ πŸ“‚ src
    β”— πŸ“œ package.json

Dockerfile

This is what our Dockerfile will look like for the client.

# Pull Docker Hub base image
FROM node:14.16.0-alpine3.10
# Set working directory
WORKDIR /usr/app
# Install app dependencies
COPY package*.json ./
RUN npm install -qy
# Copy app to container
COPY . .
# Run the "dev" script in package.json
CMD ["npm", "run", "dev"]

You’ll notice that I like to explicitly differentiate between my development and production build.

In my package.json, I have a dev script that simply runs React’s default start script.

"scripts": {
  "dev": "react-scripts start"
},

This is what our Dockerfile will look like for the server.

# Pull Docker Hub base image
FROM node:14.16.0-alpine3.10
# Set working directory
WORKDIR /usr/app
# Install app dependencies
COPY package*.json ./
RUN npm install -qyg nodemon@2.0.7
RUN npm install -qy
# Copy app to container
COPY . .
# Run the "dev" script in package.json
CMD ["npm", "run", "dev"]

The only difference in server/Dockerfile is the installation of nodemon, which we need for hot reloading. Alternatively, nodemon can be included in your devDependencies instead, as shown in the snippet below.

"scripts": {
  "dev": "nodemon -L src/index.js"
},

The -L (legacy watch) flag forces nodemon to use Chokidar polling, which is needed for file changes in server to be picked up reliably from inside the container.
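
If you would rather not install nodemon globally in the image, it can instead live in the server's devDependencies (the version pin below simply mirrors the one used in the Dockerfile), and npm run dev will pick it up from node_modules/.bin:

"devDependencies": {
  "nodemon": "^2.0.7"
}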

create-react-app also watches files through Chokidar, so there is no flag to add on the client side. However, file watching inside a container is not always reliable (particularly on Windows hosts), so we will set the CHOKIDAR_USEPOLLING environment variable in our docker-compose.yml to make sure polling is turned on.

.dockerignore

In our Dockerfile, there is a COPY . . command that copies our entire directory into the container.

However, we don’t want to copy everything into our container.

First, we don’t want our local node_modules inside the container, since installed packages can be OS-specific. Instead, a fresh npm install runs while the image is being built.

We also don’t need our Docker-related files inside the container, so we list them in .dockerignore.

Our .dockerignore can be identical for both client and server.

node_modules
.dockerignore
Dockerfile

Writing our Docker Compose

docker-compose.yml

Our docker-compose.yml defines three services: client, server, and db.

version: "3"
services:

  client:
    build: 
      context: ./client
      dockerfile: Dockerfile
    ports:
      - 3000:3000
    networks:
      - mern-network
    volumes:
      - ./client/src:/usr/app/src
      - ./client/public:/usr/app/public
    depends_on:
      - server
    environment:
      - REACT_APP_SERVER=http://localhost:5000
      - CHOKIDAR_USEPOLLING=true
    command: npm run dev
    stdin_open: true
    tty: true
  
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    ports:
      - 5000:5000
    networks:
      - mern-network
    volumes:
      - ./server/src:/usr/app/src
    depends_on:
      - db
    environment:
      - MONGO_URL=mongodb://db:27017
      - CLIENT=http://localhost:3000
    command: nodemon -L src/index.js

  db:
    image: mongo:3.6.19-xenial
    ports:
      - 27017:27017
    networks:
      - mern-network
    volumes:
      - mongo-data:/data/db

networks:
  mern-network:
    driver: bridge

volumes:
  mongo-data:
    driver: local

  • stdin_open and tty keep the React development server from exiting after startup (without them, the client container exits immediately)
  • depends_on will ensure that the containers start in the correct order (i.e. Express should start after MongoDB)
  • networks provides isolation from other containers that may be running on the same host
  • volumes enables persistence of DB data between container restarts and provides bind mounts to allow for hot reloading
  • CHOKIDAR_USEPOLLING enables hot reloading for the React app inside the container

Starting Up Containers

We can now start all three containers with a single docker-compose command.

docker-compose up
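
If you change a Dockerfile or package.json, rebuild the images with the --build flag; when you’re done, docker-compose down stops and removes the containers and network (both are standard Docker Compose commands):

# Rebuild images before starting
docker-compose up --build

# Stop and remove the containers and the network
docker-compose down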

Communication Between Containers

In our browser, we can access the client React app at http://localhost:3000.

Inside client, we can reach our Express API server at http://localhost:5000. (The React code runs in the browser, so requests go through the port published on the host rather than through the Docker network.)

fetch('http://localhost:5000/api');
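
Since we passed REACT_APP_SERVER into the client container, we can read it instead of hard-coding the URL. A small sketch (the /api endpoint is just an example route):

// create-react-app exposes env vars prefixed with REACT_APP_
const API_URL = process.env.REACT_APP_SERVER || 'http://localhost:5000';

fetch(`${API_URL}/api`)
  .then((res) => res.json())
  .then((data) => console.log(data));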

Inside server, we can connect to our MongoDB instance by using the service name db as the host name in the connection URL (instead of localhost or an IP address). This matches the MONGO_URL we set in docker-compose.yml.

const mongoose = require('mongoose');
// Replace <db> with your database name
const url = 'mongodb://db:27017/<db>';
mongoose.connect(url, { useNewUrlParser: true, useUnifiedTopology: true });
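
Putting it together, here is a rough sketch of what server/src/index.js might look like when it consumes the MONGO_URL and CLIENT variables from docker-compose.yml. The /api route and the cors package are illustrative assumptions, not part of the original setup:

const express = require('express');
const cors = require('cors');
const mongoose = require('mongoose');

const app = express();

// Only accept requests from the React dev server (CLIENT is set in docker-compose.yml)
app.use(cors({ origin: process.env.CLIENT }));

// MONGO_URL points at the db service from docker-compose.yml; replace <db> with your database name
mongoose.connect(`${process.env.MONGO_URL}/<db>`, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
});

// Example endpoint hit by the client's fetch call above
app.get('/api', (req, res) => {
  res.json({ message: 'Hello from Express' });
});

// Port 5000 matches the mapping in docker-compose.yml
app.listen(5000, () => console.log('API server listening on port 5000'));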