
How to use Nginx to Load Balance between multiple nodes

If you are a witness, have websites that need to handle a lot of requests, or want to provide failover protection should one of your servers fail, this is one way to do it. In my case I was only looking for a proxy, but realized that adding load balancing was also quite easy: it required no additional software, only minor changes to the Nginx proxy configuration.

Description of Configuration

The main Nginx web server configuration file is suitable for any settings that apply to all virtual servers defined in the sites-available directory. The default settings this file contains are generally fine; the only additions required for a load balancing proxy are the line below the Request rate limiting comment and the upstream block:

limit_req_zone $binary_remote_addr zone=ws:10m rate=1r/s;
# Hosts we will load balance between.
upstream server_pool {
    server localhost:port;
    server 002.xxx.yyy.zzz;
    server 003.xxx.yyy.zzz;
    server 004.xxx.yyy.zzz:port;
    server 005.xxx.yyy.zzz;
}
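
The limit_req_zone line only defines a shared zone named ws (10 MB of per-client state, limited to 1 request per second per client IP); it takes effect only once it is referenced with a limit_req directive inside a server or location block. A minimal sketch of how the zone might be applied is below; the burst value is my assumption, not part of the original configuration:

    location ~ ^/ws$ {
        # Apply the "ws" zone defined by limit_req_zone above; allow a
        # short burst of up to 5 queued requests before rejecting excess.
        limit_req zone=ws burst=5;
        proxy_pass http://server_pool;
    }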

You will note that the list of servers needn't be limited to the local host, nor must they all use the same port. The upstream block normalizes access through a single URL on the load balancer, on whatever port you wish to provide the public API websocket service. I decided to use port 80, as that port is likely to be open to support web traffic and thus not typically blocked by firewalls.

# Public API websocket
server {
    listen 80;
    server_name _;
    ...
    # Reject and terminate illegitimate requests with prejudice
    location / {
        return 444;
    }
    # Matches websocket URL address
    location ~ ^/ws$ {
        ...
    }
}


The server block defines the public API. No server name is specified here, so this becomes the default Nginx server listening on port 80. The next two location blocks contain the remainder of the load balancing proxy server, and together they ensure proper request filtering occurs: Nginx tries the regular-expression location first, so only websocket requests reach the proxy.

The first location block matches all requests except those handled by the second location block, which processes websocket requests. It uses Nginx's special status code 444, which causes any matching request to be terminated immediately, with no further processing and no response sent to the client.

The second location block matches URLs of the form ws://<server domain or IP>/ws. The URL must end with /ws, or the request will be rejected and terminated by the first location block.

Further down in the second location block you will see the reference (proxy_pass http://server_pool;) to the upstream server_pool list defined at the top of the file. Use any name you want, but make sure the names match. Also note the last line of the location block, which is required for load balancing.
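
The body of the websocket location block is elided above. As a sketch of what such a block typically contains, the directives below are the standard Nginx settings for proxying the websocket Upgrade handshake; they are my assumption, not taken from the original configuration:

    location ~ ^/ws$ {
        # Hand matching requests to the upstream pool defined earlier
        proxy_pass http://server_pool;

        # Standard directives to pass the websocket Upgrade handshake
        # through to the backend (HTTP/1.1 is required for Upgrade)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }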

The third and last file configures an individual websocket server. It acts as a proxy, forwarding websocket requests received on one port (such as port 80) to the port the witness_node listens on (named "rpc_endpoint" in the witness node's config.ini file). It also rejects requests that don't originate from the load balancer, through the inclusion of these two lines:

        # Allow requests only from the load balancer; IP only, no domain names!
        allow   www.xxx.yyy.zzz;
        deny    all;
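
Putting those two lines together with the proxy directives, a sketch of what this third file might look like is shown below. The load balancer address (www.xxx.yyy.zzz) and backend port are placeholders, and the websocket handshake headers are my assumption rather than the original file's contents:

    # Individual websocket server: forwards port 80 to the witness_node
    server {
        listen 80;
        server_name _;

        location ~ ^/ws$ {
            # Allow requests only from the load balancer; IP only, no domain names!
            allow   www.xxx.yyy.zzz;
            deny    all;

            # Forward to the local witness_node rpc_endpoint (placeholder port)
            proxy_pass http://localhost:port;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }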

That about covers a basic proxy and load balancer based on the efficient and versatile Nginx web server. Keep a lookout for my next article on encrypting your public API with SSL using free LetsEncrypt certificates.