
Load Balancing Docker Compose Replicas Using Nginx

Sebastian Scheibe

Introduction

This is the second article in a series on how to maximize CPU utilization for single-threaded languages like Node.js and Python. In the previous article, I explored using Docker Swarm, which provides built-in load balancing. You can find that article here.

In this article, however, we’ll stick with Docker Compose and look at how to effectively handle load balancing without switching to Docker Swarm. The solution? Leveraging Nginx as a load balancer to manage multiple replicas of your application, compensating for Docker Compose’s lack of native load balancing support.

Picture this: You’ve deployed your Node.js or Python app with Docker Compose, and while these languages are single-threaded by nature, your server has much more CPU power available than a single instance can handle. To fully utilize this power, running multiple replicas of your app becomes essential, especially when handling high traffic or CPU-intensive tasks.

At this point, you might have tried increasing the deploy.replicas count in Docker Compose, only to encounter an error like this:

Attaching to docker-replica-test-node-1, docker-replica-test-node-2
docker-replica-test-node-1  | Server is running at http://localhost:8080/
Error response from daemon: driver failed programming external connectivity on endpoint docker-replica-test-node-2: 
Bind for 0.0.0.0:8080 failed: port is already allocated

In this article, I’ll guide you through how to overcome this challenge by introducing Nginx as the solution for load balancing across your app replicas, ensuring optimal CPU usage without leaving Docker Compose behind.

Problem

The issue occurs because each container is trying to bind to the same port (8080), but Docker Compose cannot allocate the same port to multiple containers. Docker Compose lacks built-in load balancing for service ports, which leads to a conflict when replicas attempt to use the same port.

Here’s an example of a docker-compose.yml file that causes the issue:

version: '3.7'

services:
  node:
    image: node:22
    working_dir: /usr/src/app
    volumes:
      - ./index.js:/usr/src/app/index.js
    ports:
      - "8080:8080"
    command: node index.js
    deploy:
      replicas: 2

Solution: Add Nginx

To solve this issue, we add Nginx to the docker-compose.yml. Nginx becomes the single public entry point and load-balances incoming requests across the replicas, which lets us run multiple instances of the container behind one published port.

Add Nginx to the docker-compose.yml

Append this Nginx service block at the bottom of the services section:

  nginx:
    image: nginx:latest
    depends_on: 
      - node
    ports:
      - "8080:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro

Your config should now look like this (note that both services still publish host port 8080, so the conflict remains until the next step):

version: '3.7'

services:
  node:
    image: node:22
    working_dir: /usr/src/app
    volumes:
      - ./index.js:/usr/src/app/index.js
    ports:
      - "8080:8080"
    command: node index.js
    deploy:
      replicas: 2
  nginx:
    image: nginx:latest
    depends_on: 
      - node
    ports:
      - "8080:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro

Remove the public ports from your application service

After removing the ports, it should look like this:

version: '3.7'

services:
  node:
    image: node:22
    working_dir: /usr/src/app
    volumes:
      - ./index.js:/usr/src/app/index.js
    command: node index.js
    deploy:
      replicas: 2
  nginx:
    image: nginx:latest
    depends_on: 
      - node
    ports:
      # The port 8080 will be the port your application is available on the outside
      - "8080:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro

Create an nginx.conf to route correctly

Create an nginx.conf file with the following configuration. Adjust the ports if necessary.

events { }

http {
    # Backend upstream; the hostname "node" matches the service name in docker-compose.yml,
    # and Docker's internal DNS resolves it to the IPs of all replicas
    upstream node_backend {
        # Make sure this is the correct port your Node.js service is listening on
        server node:8080;
    }

    server {
        listen 80;
        # Adjust client_max_body_size as needed; Nginx's default of 1M is often too low,
        # compared to hitting the Node.js service directly
        client_max_body_size 20M;

        location / {
            proxy_pass http://node_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
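Nginx distributes requests across the resolved replica addresses round-robin by default. If your requests vary a lot in duration, you can optionally switch the strategy, for example to least_conn (a standard Nginx upstream directive, shown here as an optional variation):

events { }

http {
    upstream node_backend {
        # least_conn routes each request to the replica with the fewest active connections
        least_conn;
        server node:8080;
    }

    server {
        listen 80;
        client_max_body_size 20M;

        location / {
            proxy_pass http://node_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}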

Deploy

docker compose up
[+] Building 0.0s (0/0)                                                                                          docker:desktop-linux
[+] Running 4/3
 ✔ Network docker-replica-test_default    Created                                                                                0.0s
 ✔ Container docker-replica-test-node-1   Created                                                                                0.0s
 ✔ Container docker-replica-test-nginx-1  Created                                                                                0.0s
 ✔ Container docker-replica-test-node-2   Created                                                                                0.0s
Attaching to docker-replica-test-nginx-1, docker-replica-test-node-1, docker-replica-test-node-2
docker-replica-test-node-1   | Server is running at http://localhost:8080/
docker-replica-test-node-2   | Server is running at http://localhost:8080/
docker-replica-test-nginx-1  | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
docker-replica-test-nginx-1  | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
docker-replica-test-nginx-1  | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
docker-replica-test-nginx-1  | 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
docker-replica-test-nginx-1  | 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
docker-replica-test-nginx-1  | /docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
docker-replica-test-nginx-1  | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
docker-replica-test-nginx-1  | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
docker-replica-test-nginx-1  | /docker-entrypoint.sh: Configuration complete; ready for start up

Checking the running containers

docker ps
CONTAINER ID   IMAGE         COMMAND                  CREATED          STATUS          PORTS                 NAMES
94a20094f924   nginx:latest  "/docker-entrypoint.…"   10 seconds ago   Up 9 seconds    0.0.0.0:8080->80/tcp  docker-replica-test-nginx-1
91e4c63c76ba   node:22       "docker-entrypoint.s…"   10 seconds ago   Up 9 seconds                          docker-replica-test-node-1
6b04208d32ec   node:22       "docker-entrypoint.s…"   10 seconds ago   Up 9 seconds                          docker-replica-test-node-2

Testing

Let’s call the service a couple of times.

curl localhost:8080
Hello, World!

curl localhost:8080
Hello, World!

curl localhost:8080
Hello, World!

This is what you should see in the logs. Note at the bottom that the requests alternate between the two Node.js replicas.

Attaching to docker-replica-test-nginx-1, docker-replica-test-node-1, docker-replica-test-node-2
docker-replica-test-node-1   | Server is running at http://localhost:8080/
docker-replica-test-node-2   | Server is running at http://localhost:8080/
docker-replica-test-nginx-1  | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
docker-replica-test-nginx-1  | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
docker-replica-test-nginx-1  | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
docker-replica-test-nginx-1  | 10-listen-on-ipv6-by-default.sh: info: IPv6 listen already enabled
docker-replica-test-nginx-1  | /docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
docker-replica-test-nginx-1  | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
docker-replica-test-nginx-1  | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
docker-replica-test-nginx-1  | /docker-entrypoint.sh: Configuration complete; ready for start up
docker-replica-test-node-1   | Request received
docker-replica-test-nginx-1  | 192.168.65.1 - - [02/Oct/2024:04:35:56 +0000] "GET / HTTP/1.1" 200 24 "-" "curl/8.10.0"
docker-replica-test-node-2   | Request received
docker-replica-test-nginx-1  | 192.168.65.1 - - [02/Oct/2024:04:35:57 +0000] "GET / HTTP/1.1" 200 24 "-" "curl/8.10.0"
docker-replica-test-node-1   | Request received
docker-replica-test-nginx-1  | 192.168.65.1 - - [02/Oct/2024:04:35:58 +0000] "GET / HTTP/1.1" 200 24 "-" "curl/8.10.0"

Conclusion

To optimize CPU usage and handle higher traffic in Node.js and Python applications running on Docker Compose, leveraging Nginx for load balancing is a highly effective solution. While Docker Compose lacks native load balancing capabilities, integrating Nginx ensures that multiple replicas can handle incoming traffic efficiently, allowing you to make full use of your server’s resources.

This method is particularly valuable when working with single-threaded languages where multiple replicas are necessary to maximize CPU efficiency. By incorporating Nginx as a load balancer, you maintain the simplicity and familiarity of Docker Compose while achieving production-grade performance.

If you’re scaling applications and facing port conflicts with multiple replicas, this guide offers a practical way to overcome those challenges. Follow the steps, test your setup, and you’ll have a robust load-balanced system ready to handle significant workloads.

Resources

Docker Compose documentation

Nginx documentation

Load Balancing Docker Compose Replicas Using Docker Swarm

Credits

Article image by Bernd Dittrich