Why we use NGINX

Adopting NGINX

By Pedro Pessoa, Operations Engineer at Server Density.
Published on 11 February 2016.

Wait, another article about NGINX?

You bet. Every single Server Density request goes through NGINX. NGINX is such an important part of our infrastructure that a single post couldn’t possibly do it justice. Want to know why we use NGINX? Keep reading.

Why is NGINX so important?

Because it’s part of the application routing fabric. Routing, of course, is a highly critical function because it enables load balancing. Load balancing is a key enabler of highly available systems. Running those systems requires having “one more of everything”: one more server, one more datacenter, one more zone, region, provider, et cetera. You just can’t have redundant systems without a load balancer to route requests between redundant units.
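To make this concrete, here’s a minimal sketch of what such a routing layer looks like in NGINX. The hostnames and ports are placeholders, not our actual topology:

```nginx
# Pool of redundant backends; names and ports are illustrative.
upstream app_servers {
    server app1.example.com:8000;
    server app2.example.com:8000;
    server app3.example.com:8000 backup;  # only used when the others are unavailable
}

server {
    listen 80;

    location / {
        proxy_pass http://app_servers;  # NGINX spreads requests across the pool
    }
}
```

With a pool like this, losing any one backend doesn’t take the service down, which is exactly the “one more of everything” redundancy described above.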

But why choose NGINX and not something else, say Pound?

We like Pound. It is easy to deploy and manage. There is nothing wrong with it. In fact it’s a great option, as long as your load balancing setup doesn’t have any special requirements.

Special requirements?

Well, in our case we wanted to handle WebSocket requests, and we needed support for the faster SPDY and HTTP/2 protocols. We also wanted to use the Tornado web framework. So, after spending some time with Pound, we eventually settled on NGINX open source.

NGINX is an event-driven web server with built-in proxying and load balancing features, including native support for the FastCGI and uWSGI protocols. This allows us to run Python WSGI apps in fast application servers such as uWSGI.
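As an illustration, proxying to a uWSGI application server takes only a couple of directives. The socket path below is a hypothetical example:

```nginx
server {
    listen 80;

    location / {
        include uwsgi_params;            # standard uWSGI parameter set shipped with NGINX
        uwsgi_pass unix:/run/app.sock;   # hypothetical socket exposed by the uWSGI server
    }
}
```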

NGINX also supports third party modules for full TCP/IP socket handling. This allows us to pick and mix between our asynchronous WebSocket Python apps and our upstream Node.js proxy.
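WebSocket proxying hinges on forwarding the HTTP Upgrade handshake. A minimal sketch, with a placeholder backend address:

```nginx
location /ws/ {
    proxy_pass http://127.0.0.1:8888;        # hypothetical async WebSocket backend
    proxy_http_version 1.1;                  # the Upgrade mechanism requires HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;  # pass the client's Upgrade header through
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 3600s;                # keep long-lived connections from timing out
}
```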

What’s more, NGINX is fully deployable and configurable through Puppet.

So, how do you deploy NGINX? Do you use Puppet?

Indeed, we do. In fact, we’ve been using Puppet to manage our infrastructure for several years now. We started by building manifests to match the setup of our old hosting environment at Terremark, so we could migrate to SoftLayer.

In setting up Puppet, we needed to choose between writing our own manifest or reaching out to the community. As advocates of reusing existing components (versus building your own), a visit to the Forge was inevitable.


We run Puppet Enterprise using our own manifests (the Puppet master pulls them from our GitHub repo). We also make intensive use of Puppet Console and Live Management to trigger transient changes such as failover switches.

Before we continue, a brief note on Puppet Live Management.

Puppet Live Management: It’s Deprecated

Puppet Live Management allows admins to change configurations on the fly, without changing any code, and to propagate those changes to a subset of servers. It is a cool feature.

Alas, last year Puppet Labs deprecated this cool feature. Live Management no longer shows up in the Enterprise console. Thankfully, the feature is still there, but we had to use a configuration flag to unearth and activate it again (here is how).


Our NGINX Module

Initially, we used the Puppet Labs NGINX module. Unnerved by the lack of maintenance updates, though, we decided to fork James Fryman’s module (the Puppet Labs module was itself a fork of James Fryman’s, anyway).

We started out by using a fork and adding extra bits of functionality as we went along. In time, as James Fryman’s module continued to evolve, we started using the (unforked) module directly.
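For flavour, defining a proxying vhost with that module looks roughly like this. The resource and parameter names follow the module’s public types from around that period, and the hostnames are invented:

```puppet
class { 'nginx': }

# Upstream pool and vhost; member addresses are placeholders.
nginx::resource::upstream { 'app_backend':
  members => ['app1.internal:8000', 'app2.internal:8000'],
}

nginx::resource::vhost { 'app.example.com':
  proxy => 'http://app_backend',
}
```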

Why we use NGINX with multiple load balancers

We used to run one big load balancer for everything, i.e. a single load balancing server handling all services. In time, we realised this was not optimal: if requests for one particular service piled up, the congestion would often spill over and affect the other services too.

So we decided to go with several smaller load balancer servers, ideally one for each service.

All our load balancers have the same underlying class in common. Depending on the specific service they route, each load balancer class will have its own corresponding service configuration.
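In Puppet terms, that layering could be sketched like this. All class, resource, and host names here are illustrative, not our real manifests:

```puppet
# Shared base: every load balancer gets the same NGINX install and baseline config.
class loadbalancer::base {
  class { 'nginx': }
}

# Thin per-service class: only the routing for that one service.
class loadbalancer::api {
  include loadbalancer::base

  nginx::resource::upstream { 'api_backend':
    members => ['api1.internal:8000', 'api2.internal:8000'],
  }

  nginx::resource::vhost { 'api.example.com':
    proxy => 'http://api_backend',
  }
}
```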

Trigger transient changes using the console

One of the best aspects of the Live Management approach is that it helps overcome the lack of control inherent in the NGINX open source interface (NGINX Plus offers more options).


In order for NGINX configuration file changes to take effect, NGINX needs to reload. For example, to remove one node from the load balancer rotation, we would edit the corresponding configuration file and then trigger an NGINX reload.
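For example, taking a node out of rotation amounts to marking it `down` in the upstream block and reloading. The names are placeholders:

```nginx
upstream app_backend {
    server app1.example.com:8000;
    server app2.example.com:8000 down;  # temporarily removed from rotation
}
```

After editing, `nginx -s reload` (or a service reload) applies the change gracefully.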

Using the Puppet Console, we can change any node or group parameters and then trigger the node to run; Puppet will then reload NGINX. What’s cool is that in-flight connections are not terminated by the reload.


NGINX is well suited to a microservice architecture composed of small, independent processes. It handles WebSockets, supports SPDY, and works well with the Tornado web framework. It also speaks the FastCGI and uWSGI protocols natively.

Over the years, we’ve tried to fine-tune how we use Puppet to deploy NGINX. As part of that, we use Puppet Console and Live Management features quite extensively.

Is NGINX part of your infrastructure? Let us know your use case and suggestions in the comments below.

Everything we know about NGINX: the eBook

We’ve worked with NGINX for more than 7 years, so we thought it was time we wrote a book about it. It’s packed with insights, tricks, and pretty much everything we’ve learned from using NGINX.

Fill in the form below to get your free copy!
