High availability with Pound load balancer
Pedro Pessoa, Operations Engineer at Server Density.
Published on the 21st August, 2012.
Following our migration to SoftLayer and their release of global IPs, we started to implement our multi-datacenter plan by replacing dedicated (static IP) load balancers with Pound and a global IP. Pound itself is very easy to deploy and manage. There’s a package for Ubuntu 12.04, the distribution we are using for the load balancer servers, allowing us to have it running in no time out of the box, particularly if what you’re load balancing doesn’t have any special requirements.
Unfortunately, this was not the case for our Server Density monitoring web app, which requires a somewhat longer cookie header. The Pound Ubuntu package is compiled with the default MAXBUF = 4096, but we needed about twice that to allow our header through. This is a limit we didn’t discover until testing Pound, because our hardware load balancers didn’t impose it, but it highlights something to fix in the next version of Server Density. We don’t particularly like recompiling distribution packages, mostly because we diverge from general usage, and our changes will eventually come under suspicion if some problem arises with that particular package.
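The change itself is a one-line bump of a compile-time constant. In the Pound sources it looks roughly like the following; the exact header file and surrounding context vary between Pound versions, so treat this as a sketch rather than an exact patch:

```diff
--- a/pound.h
+++ b/pound.h
-#define MAXBUF      4096
+#define MAXBUF      8192
```

Because MAXBUF is fixed at compile time, there is no way to raise it from pound.cfg; the package has to be rebuilt.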
Presented with no other option that wouldn’t break existing customer connections (the cookie is sent before we can truncate it), we decided to start a PPA for our modified Pound package. This carries two advantages we appreciate: it’s shared with the world, and we can make use of Launchpad’s build capabilities.
Pound Load Balancer Configuration
Besides the previous application-specific change, our Pound configuration is quite simple and is managed from Puppet Enterprise – hence the Ruby template syntax below. From the defaults, we changed:
- Use of the dynamic rescaling code. Pound will periodically try to modify the back-end priorities in order to equalize the response times from the various back-ends. Although our back-end servers are all exactly the same, they are deployed as virtualised instances on a “public” cloud at SoftLayer, so each can independently suffer a performance impact from its host.
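Enabling the rescaling code is a single directive in pound.cfg, set globally (it can also be overridden per Service):

```
# Enable dynamic rescaling of back-end priorities (default is 0)
DynScale 1
```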
- A “redirect service”. Anything not *serverdensity.com* is immediately redirected to http://www.serverdensity.com and doesn’t even hit the back-end servers.
Service "Direct Access" HeadDeny "Host: .*serverdensity.com.*" Redirect "http://www.serverdensity.com" End
- Each back-end relies on a co-located mongos router process to reach our MongoDB data cluster. We use the HAport configuration option to make sure the back-end is taken out of rotation when there is a problem with the database but the webserver is still responding on port 80.
HAport <%= hAportMongo %>
- Finally, because we have a cluster of load balancers, we needed to be able to trace which load balancer handled the request. For this we add an extra header.
AddHeader "X-Load-Balancer: <%= hostname %>"
Redundancy and automated failover
The SoftLayer Global IP (GIP) is the key to giving us failover capability with minimal lost connections to our service. By deploying two load balancers per data center, both addressable by a single GIP, we can effectively route traffic to either load balancer.
We deploy the load balancers in an active-standby configuration. While the active is targeted by the GIP and receives all traffic, the standby load balancer monitors the active’s health. If the active load balancer stops responding (ICMP, HTTP or HTTPS), the GIP is automatically re-routed to the standby load balancer using the SoftLayer API. The situation is then alerted through PagerDuty, also using their API, so the on-call engineer can respond. There’s no automatic recovery attempt, to avoid flapping and to allow investigation of the event.
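The standby’s monitoring loop is simple to sketch. The outline below is a minimal, hypothetical version: `reroute_gip` and `alert_oncall` are stubs standing in for the SoftLayer and PagerDuty API calls (the names are ours, not theirs), and the health check shown is plain HTTP rather than the full ICMP/HTTP/HTTPS set described above.

```python
import time
import urllib.request

CHECK_INTERVAL = 5      # seconds between health checks
FAILURE_THRESHOLD = 3   # consecutive failures before failing over

def check_http(url, timeout=2):
    """Return True if the active load balancer answers HTTP."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except Exception:
        return False

def should_failover(results, threshold=FAILURE_THRESHOLD):
    """Fail over only after `threshold` consecutive failed checks."""
    return len(results) >= threshold and not any(results[-threshold:])

def reroute_gip():
    """Stub: re-route the Global IP to this standby via the SoftLayer API."""

def alert_oncall():
    """Stub: open a PagerDuty incident via their API."""

def monitor(active_url):
    results = []
    while True:
        results.append(check_http(active_url))
        if should_failover(results):
            reroute_gip()
            alert_oncall()
            break  # no automatic recovery, to avoid flapping
        time.sleep(CHECK_INTERVAL)
```

Requiring several consecutive failures before re-routing avoids failing over on a single dropped check, and stopping the loop after failover mirrors the no-automatic-recovery policy.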