How to configure nginx as a load balancer


By David Mytton,
CEO & Founder of Server Density.

Published on the 28th February, 2013.

Last year we deployed our own load balancers using Pound, but we are now transitioning to nginx because it is more actively developed and has a much richer feature set, including caching and new support for WebSockets.

nginx is the frontend load balancer for v2 of our server monitoring service, Server Density, which is about to go into beta testing. However, it is already in production for internal services like the metrics backend (which powers our graphing).

NGINX load balancing configuration

You need two modules, both built into the nginx core: Proxy, which forwards requests to another location, and Upstream, which defines the other location(s). They should be available by default.

Within your nginx.conf file you need to specify two blocks. The first of these is upstream, which defines the nodes within the load balanced cluster:

upstream web_rack {
    server 10.0.0.1:80;
    server 10.0.0.2:80;
    server 10.0.0.3:80;
}

Here you have three nodes, each with a web server listening on port 80. The group has been called web_rack. This is the destination for the proxy, and the upstream module deals with distributing each proxied request across the defined nodes. There are different options for how the distribution works, including giving some nodes higher priority and deciding what happens when a node is down.
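As a sketch of those options (the specific weights and the backup designation here are illustrative, not values from our setup — these are all standard upstream server parameters):

```nginx
upstream web_rack {
    # Receives roughly twice as many requests as the default weight=1 nodes
    server 10.0.0.1:80 weight=2;
    # Considered down for 30s after 3 failed connection attempts
    server 10.0.0.2:80 max_fails=3 fail_timeout=30s;
    # Only receives traffic when the primary servers are unavailable
    server 10.0.0.3:80 backup;
}
```

By default distribution is round-robin, adjusted by the weights.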

Next you tell the “vhost” about this upstream rack:

server {
    listen 80;
    server_name www.example.com;
    location / {
        proxy_pass http://web_rack;
    }
}

This creates the equivalent of an Apache vhost listening on www.example.com port 80, and all requests are proxied to web_rack, which distributes them across the three nodes we have configured.

The full nginx.conf file would look like this (nginx also requires an events block, even an empty one, for the configuration to load):

events {}

http {
    upstream web_rack {
        server 10.0.0.1:80;
        server 10.0.0.2:80;
        server 10.0.0.3:80;
    }

    server {
        listen 80;
        server_name www.example.com;
        location / {
            proxy_pass http://web_rack;
        }
    }
}

More advanced options

The docs for each module contain more examples of the options you can include, but some of the ones we make use of are:

nginx load balancer log formatting

It can be useful for debugging to dump a load of info into the request logs. We use:

log_format upstreamlog '[$time_local] $remote_addr - $remote_user - $server_name  to: $upstream_addr: $request upstream_response_time $upstream_response_time msec $msec request_time $request_time';

so we can see where things are coming from and where they’re going, plus how long the response took.

This goes in the http config block and you can find other variables in the docs.
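To actually apply the format, reference it by name in an access_log directive alongside the log_format (the log path here is illustrative):

```nginx
# In the http or server block; "upstreamlog" matches the log_format name above
access_log /var/log/nginx/lb-access.log upstreamlog;
```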

Conditional forwarding based on the HTTP Method

We distribute GET and POST requests separately in certain situations, e.g. our graphing is very write heavy, so we have dedicated, separate processes dealing with POSTs and GETs to avoid contention. nginx splits these up:

location / {
    if ($request_method = POST)
    {
        proxy_pass http://post_rack;
        break;
    }

    proxy_pass http://get_rack;
}

This goes in the server config block.
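Since if inside a location block can behave surprisingly in nginx, an alternative worth considering is a map in the http block (a sketch, not from our configuration; post_rack and get_rack are the upstream groups assumed above):

```nginx
# Maps the request method to an upstream group name;
# GET and everything else falls through to the default
map $request_method $backend_rack {
    default get_rack;
    POST    post_rack;
}

server {
    listen 80;
    location / {
        # With a variable, nginx resolves the upstream group at request time
        proxy_pass http://$backend_rack;
    }
}
```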

Proxy headers

Again, this is useful for debugging: we add headers to the proxied request so we can see where things are going and where they have come from, plus some timestamps for monitoring:

proxy_set_header        Host $host;
proxy_set_header        X-Real-IP $remote_addr;
proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header        X-Queue-Start "t=${msec}000";

This goes in the http config block.

Deploying nginx with Puppet

All the examples above are handwritten, but we actually use Puppet to configure everything, with our own fork adding some unsupported features like the conditional HTTP method routing, custom logging and SSL improvements (not mentioned above since they're not load balancer specific, but things like better cipher choices and HSTS headers).

Read more about our usage of deploying nginx with Puppet or check out our tutorial content on monitoring NGINX.

