Running Multiple Apps on a single port using Docker
May 28, 2021 · Docker · 618 words

Photo by Martijn Baudoin on Unsplash

Intro

At any given moment, I am usually working on several applications: one for a code review, another spinning up related services to power a front-end. The problem is that many of these applications run on different ports (or sometimes compete for the same one), and it's difficult to keep track of them all. For instance, this is how the apps and ports are mapped:

| Application    | Port | Address               |
| -------------- | ---- | --------------------- |
| frontend-app-1 | 3000 | http://localhost:3000 |
| frontend-app-2 | 3002 | http://localhost:3002 |
| backend-app-1  | 5000 | http://localhost:5000 |
| backend-app-2  | 5001 | http://localhost:5001 |
| backend-app-3  | 5002 | http://localhost:5002 |

To solve this problem and streamline my workflow, I created a docker-compose file that uses nginx as a reverse proxy for all the applications running locally. Now the addresses look like this:

| Application    | Port | Address                  |
| -------------- | ---- | ------------------------ |
| frontend-app-1 | 3000 | http://local.dev/fa1     |
| frontend-app-2 | 3002 | http://local.dev/fa2     |
| backend-app-1  | 5000 | http://local.dev/api/ba1 |
| backend-app-2  | 5001 | http://local.dev/api/ba2 |
| backend-app-3  | 5002 | http://local.dev/api/ba3 |

💬 In reality, ba1 would be an actual service name, like audit-log.

ℹ️ Want to know how to map http://localhost to http://local.dev? Check out Use local.dev instead of localhost
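As a quick reference, the simplest way to make a name like local.dev resolve to the local machine is a hosts-file entry (a sketch for Unix-like systems; the hostname is whatever you choose):

```
# /etc/hosts
127.0.0.1   local.dev
```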

Using URLs like these is much cleaner, and emulates an actual address, even though everything is on my local machine.

The Reverse Proxy

A proxy routes information through a third party before it reaches its destination. To put it in simple terms, a proxy adds a layer of masking: you simply access a URL like local.dev, and the reverse proxy takes care of where each request goes.

The first step is to configure nginx as a reverse proxy in the docker-compose.yml file.

version: '3'

networks:
  vpcbr:
    driver: bridge
    ipam:
      config:
        - subnet: 10.6.0.0/16
          gateway: 10.6.0.1

services:
  nginx:
    image: nginx:latest
    container_name: local_nginx
    volumes:
      - ./nginx/reverse_proxy.conf:/etc/nginx/nginx.conf
    ports:
      - 8080:8080
    networks:
      vpcbr:

I keep all the configuration in a file called reverse_proxy.conf and use volumes to mount it into the Docker container and keep it in sync with the file on the host machine.

events {}

http {
  server {
    listen 8080;

    location / {
      proxy_pass http://10.6.1.1:3000;
    }
    location /fa1 {
      proxy_pass http://10.6.1.1:3000/;
    }
    location /fa2 {
      proxy_pass http://10.6.1.2:3002/;
    }
    # ... one location block per application ...
  }
}

There isn't much to this part. The configuration tells nginx to listen on port 8080 for any incoming request and, based on the path, route it to the respective service. Starting it is like starting any other container, with the docker compose up command.
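One detail worth calling out: the trailing slash in proxy_pass changes how nginx rewrites the path. A sketch (IPs as in the config above):

```nginx
# With a trailing slash, nginx replaces the matched prefix:
# a request for /fa1/users is forwarded upstream as /users.
location /fa1 {
  proxy_pass http://10.6.1.1:3000/;
}

# Without the trailing slash, the full path is passed through:
# /fa2/users is forwarded as /fa2/users, so the app itself
# would have to serve routes under /fa2.
location /fa2 {
  proxy_pass http://10.6.1.2:3002;
}
```

With the trailing slash, the apps don't need to know anything about the /fa1 or /fa2 prefixes.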

If you notice, proxy_pass points to IPs like 10.6.1.1. That's because in the docker-compose.yml file I've defined a network bridge so that each service can define its static IP. You can learn more about Networking in Compose from the docs.
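Static IPs are one approach; since every container joins the same Compose network, Docker's embedded DNS can also resolve service names, so an upstream could be referenced by name instead (a sketch; note that nginx resolves these names once at startup, so the services must already exist when nginx starts — hence the static-IP approach above):

```nginx
location /fa1 {
  proxy_pass http://frontend-app-1:3000/;
}
```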

Setting up applications

Now that the reverse proxy is configured, let's add the service definitions for the applications. Since most of the applications I work with run in a Node.js environment, I like to keep a common configuration which the other services extend:

# ./docker/node.service.yml
services:
  nodejs:
    image: node:15.14-alpine
    working_dir: /usr/app
    entrypoint: ['/bin/sh', '-c']
    command:
      - |
        npm install
        npm run dev

This file, node.service.yml has configurations like the base image for Node, working directory, and common commands to run when starting up the container.

The actual service definition in the main docker-compose.yml file then extends node.service.yml:

frontend-app-1:
  extends:
    file: ./docker/node.service.yml
    service: nodejs
  container_name: frontend-app-1
  volumes:
    # Mount the folder from host to the container.
    # This will enable HMR for development
    - ./frontend-app-1:/usr/app # path is illustrative
  depends_on:
    - 'nginx'
  networks:
    vpcbr:
      ipv4_address: 10.6.1.1

The depends_on configuration tells Docker to wait for the nginx service to be up before starting any of the 'dependent' services. The volumes configuration lets me mount the folder from the host machine; it helps when I change something in the code and need HMR to show instant updates.
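Worth noting: a plain depends_on list only waits for the container to start, not for the process inside to be ready. If ordering really matters, Compose also supports condition-based depends_on paired with a healthcheck — a hedged sketch, assuming curl is available in the nginx image:

```yaml
services:
  nginx:
    # ...
    healthcheck:
      test: ['CMD-SHELL', 'curl -sf http://localhost:8080/ || exit 1']
      interval: 5s
      retries: 5

  frontend-app-1:
    # ...
    depends_on:
      nginx:
        condition: service_healthy
```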

And that's it. I just run docker compose up -d (obviously I have an alias for this), and all the applications start up in the background, ready to be accessed using a simple URL. No managing multiple terminals, no re-checking ports every 10 minutes to launch an app. I can even specify service names when running the docker compose command, so I don't have to run all the services. 💃🏼
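The alias itself is trivial. As an illustration (the helper name is my assumption, not from the post), here's a small wrapper that starts only the services you pass, or everything by default:

```shell
# compose_up: hypothetical helper around `docker compose up -d`.
# It echoes the command instead of executing it, so this sketch is
# safe to run anywhere; drop the `echo` for real use.
compose_up() {
  echo docker compose up -d "$@"
}

compose_up frontend-app-1 nginx
# prints "docker compose up -d frontend-app-1 nginx"
```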

For some, this might seem like overkill, but when one works on multiple services and front-end applications, being able to launch things using URL paths instead of remembering ports is a time-saver.

Let me know in the comments below: what's your approach when developing multiple applications running on different ports?
