
Author Archives: David Mytton

About David Mytton

David Mytton is the founder of Server Density. He has been programming in PHP and Python for over 10 years, regularly speaks about MongoDB (including running the London MongoDB User Group), co-founded the Open Rights Group and can often be found cycling in London or drinking tea in Japan. Follow him on Twitter and Google+.
  1. A guide to handling incidents, downtime and outages

    Outages and downtime are inevitable. Designing your systems to handle failure is a key part of modern infrastructure architecture and makes it possible to survive most problems. However, there will be incidents you didn’t think about, software bugs you didn’t catch and other events that result in downtime for your service.

    Microsoft, Amazon and Google spend billions of dollars every quarter and even they still have outages. How much do you spend?

    Some companies seem to have problems constantly and suffer unnecessarily as a result. Regular outages ultimately become unacceptable, but if you adopt a few key principles and design your systems properly, customers will forgive you for the few times you do have service incidents.

    Step 1: Planning

    If critical alerts result in panic and chaos then you deserve to suffer from the incident! There are a number of things you can do in advance to ensure that when something does go wrong, everyone on your team knows what they should be doing.

    • Put in place the right documentation. This should be easily accessible, searchable and up to date. We use Google Docs for this.
    • Use proper config management, be it Puppet, Chef, Ansible, SaltStack or another system, to be able to make mass changes to your infrastructure in a controlled manner. It also helps your team understand novel issues because the code that defines the setup is easily accessible.

    Unexpected failures

    Be aware of your whole system. Unexpected failures can come from unusual places. Are you hosted on AWS? What happens if they suffer an outage and Slack or Hipchat, which you rely on for internal communication, is affected too? Are you hosted on Google Cloud? What happens if your GMail is unavailable during a Google Cloud outage? Are you using a data center within the city you live in? What happens if there’s a weather event and the phone service is knocked out?

    Step 2: Be ready to handle the alerts

    Some people hate being on call, others love it! Either way, you need a system to handle on call rotations, escalating issues to other members of the team, planning for reachability and allowing people to go off-call after incidents. We use PagerDuty on a weekly rotation through the team and consider things like who is available, internet connectivity, illness, holidays and looping in product engineering so issues waking people up can be resolved quickly.

    pagerduty-on-call-calendar

    More and more outages are caused by software bugs getting into production, and it’s never just a single thing that goes wrong – a cascade of problems culminates in downtime – so you need rotations amongst different teams, such as frontend engineering, not just ops.

    Step 3: Deal with it, using checklists

    Have a defined process in place ready to run through whenever the alerts go off. Using a checklist removes unnecessary thinking so you can focus on the real problem, and ensures key actions are taken and not forgotten. Have a channel for communication both internally and externally – there’s nothing worse than being the customer of a service that is down and having no idea whether they’re working on it or not.

    Google Docs Incident Handling

    Step 4: Write up a detailed postmortem

    This is the opportunity to win back trust. If you follow the steps above and provide accurate, useful information during the outage so people know what is going on, this is the chance to write it up, explain what happened, what went wrong and crucially, what you are going to do to prevent it from happening again. Outages highlight unknown system flaws and it’s important to tell your users that the hole no longer exists, or is in the process of being closed.

    Interested in learning more?

    We’re hosting a live Q&A webinar on 11th November 2014 at 18:30 BST. We’ll be discussing things to consider when handling incidents, on-call rotations and outage status page communications. Join us for free!

  2. How to monitor Apache

    Apache is perhaps the best known and most widely deployed web server, having originally been released back in 1995, and it still powers a huge number of websites (although it is losing ground to NGINX). As an important part of the classic LAMP stack, it is a critical component in your web serving architecture – and if you’re not monitoring Apache already, you should be.

    Enabling Apache monitoring with mod_status

    Most of the tools for monitoring Apache require the use of the mod_status module. This is included by default but needs to be enabled. You will need to specify an endpoint in your Apache config:

    <Location /server-status>
      SetHandler server-status
      Order Deny,Allow
      Deny from all
      Allow from 127.0.0.1
    </Location>

    This will make the status page available at http://localhost/server-status on your server. We have a full guide to configuring this. Be sure to enable the ExtendedStatus directive to get full access to all the stats.
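
    With ExtendedStatus enabled, the same data is also available in a machine-readable form at /server-status?auto, which is what monitoring agents typically parse. As a minimal sketch (not the Server Density agent itself), assuming the status page is reachable locally, you could pull out a few key values in Python like this:

    import urllib.request

    # Fetch the machine-readable status page (requires ExtendedStatus On)
    raw = urllib.request.urlopen("http://localhost/server-status?auto").read().decode()

    stats = {}
    for line in raw.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            stats[key.strip()] = value.strip()

    # A few of the metrics discussed below
    print("Requests/sec:", stats.get("ReqPerSec"))
    print("Busy workers:", stats.get("BusyWorkers"))
    print("Idle workers:", stats.get("IdleWorkers"))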

    Monitoring Apache from the command line

    Once you have enabled the status page and verified it is working as described above, you can make use of command line tools to monitor the traffic on your server in real time. This is useful for debugging issues and examining traffic as it happens.

    The apache-top tool is a popular way of achieving this. It is often available as a system package, e.g. apt-get install apachetop, but can also be downloaded from source, as it is only a simple Python script.

    Apache monitoring and alerting – Apache stats

    Using apache-top is useful for real time debugging and examining what is happening on your server right now, but it is less useful if you want to collect statistics over a period of time. This is where a monitoring product such as Server Density will come in. Our monitoring agent supports parsing the Apache server status output and can give you statistics on requests per second and idle/busy workers.

    Apache has several process models but the most common is to have worker processes running idle waiting to service requests. As more requests come in then more workers will be launched to handle them, up to a configured maximum. At that point the requests will be queued and your visitors will experience delays. This means it’s important not just to monitor the raw requests per second but also how many idle workers you have.

    A good way to approach configuring Apache alerts is to understand what kind of baseline traffic your application experiences and set alerts around this e.g. alert if the stats are significantly higher (indicating a sudden traffic spike) and if the values are suddenly significantly lower (indicating a problem preventing traffic somewhere). You could also benchmark your server to find out at what traffic level things start to slow down and the server becomes too overloaded – this will then act as a good upper limit which you can trigger alerts at too.
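
    As an illustration of that approach, here is a small sketch of a baseline check on the requests-per-second figure; the baseline value and tolerance below are placeholders you would derive from your own traffic history, not recommendations:

    # Hypothetical thresholds: derive these from your own traffic history
    BASELINE_REQ_PER_SEC = 150.0   # typical traffic level
    TOLERANCE = 0.5                # alert if we deviate by more than 50%

    def check_baseline(current_req_per_sec):
        """Return an alert message if traffic deviates too far from the baseline."""
        if current_req_per_sec > BASELINE_REQ_PER_SEC * (1 + TOLERANCE):
            return "ALERT: traffic spike - %.1f req/sec" % current_req_per_sec
        if current_req_per_sec < BASELINE_REQ_PER_SEC * (1 - TOLERANCE):
            return "ALERT: traffic drop - %.1f req/sec" % current_req_per_sec
        return None

    print(check_baseline(40.0))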

    Apache monitoring and alerting – server stats

    Monitoring Apache stats like requests per second and worker status is useful to keep an eye on Apache itself, but its performance will also be affected by how overloaded the server is. Ideally you will be running Apache on its own dedicated instance so you don’t need to worry about contention with other applications.

    Web servers are generally limited by CPU and so your hardware spec should offer the web server as many CPUs and/or cores as possible. As you get more traffic then you will likely see the CPU usage increase, especially as Apache workers take up more CPU time and are distributed across the available CPUs and cores.

    CPU % usage itself is not necessarily a useful metric to alert on because the values tend to be per CPU or per core and you may have many cores. It’s more useful to set up monitoring on average CPU utilisation across all CPUs or cores. Using a tool such as Server Density, you can visualise this and configure alerts so you can be notified when the CPU is overloaded – our guide to understanding these metrics and configuring CPU alerts will help.

    On Linux this average across all CPUs is abstracted out to another system metric called load average. It is a decimal number rather than a percentage and allows you to understand load from the perspective of the operating system i.e. how long processes are waiting for access to the CPU. The recommended threshold for load average therefore depends on how many CPUs and cores you have – our guide to load average will help you understand this further.
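
    To make that concrete, here is a small sketch of the kind of check this implies, comparing the 5-minute load average against the number of cores; treating roughly 1.0 per core as the threshold is a common rule of thumb rather than a universal rule:

    import os

    # getloadavg() returns the 1, 5 and 15 minute load averages
    load_1, load_5, load_15 = os.getloadavg()
    cores = os.cpu_count() or 1

    # Rule of thumb: sustained load above ~1.0 per core means processes are queueing
    if load_5 > cores:
        print("ALERT: 5 minute load average %.2f exceeds %d cores" % (load_5, cores))
    else:
        print("OK: load average %.2f across %d cores" % (load_5, cores))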

    Monitoring the remote status of Apache

    All of the above metrics monitor the internal status of Apache and the servers it is running on, but it is also important to monitor the experience your users are getting. This is achieved by using external status and response time tools – you want to know whether your Apache instance is serving traffic from different locations around the world (wherever your customers are) and what response times your users are seeing. You will then know at what stage you need to add more capacity, either by increasing the capabilities of the Apache server or by adding more servers into a load balanced cluster.

    This is easy to do with a service like Server Density because of our in-built website monitoring. You can check the status of your public URLs and other endpoints from custom locations and get alerts when performance drops or there is an outage.

    This is particularly useful when you can build graphs to correlate the Apache and server metrics with remote response time, especially if you are benchmarking your servers and want to know when a certain load average starts to affect end user performance.

  3. What’s new in Server Density – Summer 2014

    We’ve been a bit quiet over the last few months but have still been working on improvements and new functionality to our server and website monitoring product, Server Density. This post summarises what we added over the summer and what’s coming up soon.

    Log search beta

    One of the first things you do when responding to an alert or tracking down performance problems is look at the server logs. Current log management products are expensive and complex to set up, so we’re pleased to announce the beta of our log search functionality.

    Log search uses the existing Server Density agent to tail your logs and make them searchable from within your account. There’s a new dedicated search view so you can search by device, or you can view the logs from individual device views. Later, logs will automatically be displayed as part of a new, upcoming alert incident view.

    If you’re interested in trying this out, please fill out this short form to get into the beta.

    Server Density log search

    Google Cloud integration

    We released our integration into Google Cloud and Google Compute Engine which allows you to manage your instances and get alerts on instance and disk state changes. You can also sign up for $500 in free Google Compute Engine credits.

    Google Cloud graphs

    Snapshots

    Click on any data point on your device graphs and then click the Snapshot link, and it will take you through to a view of what was happening on that server at that exact point in time. You can also click the Snapshot tab to go to the latest snapshot and then navigate backwards and forward through each time point.

    Server snapshot

    Linux agent 1.13.4

    A number of fixes have been released as part of the latest Linux agent release, including better handling of plugin exceptions and more standards compliance for init scripts. MongoDB over SSL is also now supported. See the release notes.

    Chef cookbook improvements

    There are a range of improvements to the official Chef cookbook which include better support for EC2 and Google auto scaling and support for managing plugins through Chef. This is available on the Chef Supermarket and has had almost 100,000 downloads in the last 10 days.

    Puppet module improvements

    The official Puppet module has also had improvements to make it work better with Google Cloud. It is also available on the Puppet Forge.

    App performance improvements

    A lot of work has been done behind the scenes to improve the performance of the product generally. This ranges from optimising requests and connections in the UI and upgrading the hardware powering the service to moving all our assets onto a CDN. We have a few more improvements still to release, but this all goes towards our goal of having response times as close to instantaneous as possible.

    Onboarding and help popups

    We retired our old app tour in favour of new in-app popup bubbles to help you learn more about functionality. Blank slates have been redesigned, and more improvements to help show off some of the great functionality are coming soon.

    How to monitor xyz

    We’re running a series of free webinars through Google Hangouts to cover how to monitor a range of different technologies. We started with MongoDB but our next upcoming hangout will be on how to monitor Nginx. Many more hangouts will be scheduled over the next few months and you can watch them back through our Youtube channel.

    Redesigned multi factor authentication setup

    The flow for setting up a new multi factor authentication token has been redesigned to make it clearer how to proceed through each step. We highly recommend enabling this for extra security – passwords are no longer enough!

    Enable MFA

    Improved cloud actions menu

    Actions taken within Server Density are separated from actions taken on the Cloud Provider level to ensure commands aren’t sent accidentally.

    Cloud actions

    Delete confirmations

    Previously it was too easy to trigger the delete actions, which could lead to accidentally deleting a device. We’ve improved the confirmation requirements for this.

    delete

    Auto refreshing graphs

    All graphs, on the device overview and on the dashboard, now auto refresh so you can keep the window open and see the data show up immediately.

    What’s coming next?

    We’ll be returning to our monthly post schedule for “What’s new” as we start releasing some of the things we’ve been working on over the last few months. This includes permissions and a range of new alerting functionality, starting with tag based alerts and group recipients. Lots of interesting new functionality to be announced before the end of the year!

  4. Automated Google Cloud and Google Compute Engine monitoring

    Today we’re releasing the Server Density integration into Google Compute Engine as an official Google Cloud Platform Technology Partner. Server Density works across all environments and platforms and is now fully integrated into Google’s cloud infrastructure products, including Compute Engine and Persistent Disks, to offer alerting, historical metrics and devops dashboards to Google customers.

    Google Cloud graphs

    Server Density customers can connect their Google Cloud accounts to automatically monitor and manage instances across Google data centers alongside existing environments and other cloud providers. Many customers will run systems across multiple providers in a hybrid setup, so Server Density is uniquely placed to help with that because even though we have specialist integration into Google, it works well anywhere – cloud, hybrid and on-prem.

    $500 credit for Google/Server Density customers

    Server Density normally starts at $10/m to monitor Linux, Windows, FreeBSD and Mac servers but Google Cloud customers can monitor up to 5 servers for free for life (worth over $500/year). Google is also offering Server Density customers $500 in credits to trial Google Cloud Platform. To find out more and sign up, head over to our website for details.

  5. How to monitor MongoDB

    Update: We hosted a live Hangout on Air with Paul Done from MongoDB discussing how to monitor MongoDB. We’ve made the slides and video available, which can be found embedded at the bottom of this blog post.

    We use MongoDB to power many different components of our server monitoring product, Server Density. This ranges from basic user profiles all the way to high throughput processing of over 30TB/month of time series data.

    All this means we keep a very close eye on how our MongoDB clusters are performing, with detailed monitoring of all aspects of the systems. This post will go into detail about the key metrics and how to monitor your MongoDB servers.

    MongoDB Server Density Dashboard

    Key MongoDB monitoring metrics

    There is a huge range of different things you should keep track of with your MongoDB clusters, but only a few that are critical. These are the monitoring metrics we have on our critical list:

    Oplog replication lag

    The replication built into MongoDB through replica sets has worked very well in our experience. However, by default writes only need to be accepted by the primary member and are replicated down to the secondaries asynchronously, i.e. MongoDB is eventually consistent by default. This means there is usually a short window where data might not be replicated should the primary fail.

    This is a known property, so for critical data, you can adjust the write concern to return only when data has reached a certain number of secondaries. For other writes, you need to know when secondaries start to fall behind because this can indicate problems such as network issues or insufficient hardware capacity.

    MongoDB write concern
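
    As a hedged illustration using a recent version of the PyMongo driver (the hostnames, database and collection names here are placeholders), a stricter write concern for critical data might look like this:

    from pymongo import MongoClient
    from pymongo.write_concern import WriteConcern

    client = MongoClient("mongodb://db1.example.com,db2.example.com/?replicaSet=rs0")

    # Require acknowledgement from a majority of replica set members, with a timeout
    payments = client.myapp.get_collection(
        "payments", write_concern=WriteConcern(w="majority", wtimeout=5000)
    )
    payments.insert_one({"amount": 100})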

    Replica secondaries can sometimes fall behind if you are moving a large number of chunks in a sharded cluster. As such, we only alert if the replicas fall behind for more than a certain period of time e.g. if they recover within 30min then we don’t alert.
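
    Measuring the lag itself comes down to the replSetGetStatus command, which is what monitoring plugins use under the hood. A minimal PyMongo sketch (the hostname is a placeholder):

    from pymongo import MongoClient

    client = MongoClient("mongodb://db1.example.com/?replicaSet=rs0")
    status = client.admin.command("replSetGetStatus")

    # optimeDate is the timestamp of the last operation applied by each member
    primary_optime = next(
        m["optimeDate"] for m in status["members"] if m["stateStr"] == "PRIMARY"
    )
    for member in status["members"]:
        if member["stateStr"] == "SECONDARY":
            lag = (primary_optime - member["optimeDate"]).total_seconds()
            print("%s is %.0f seconds behind the primary" % (member["name"], lag))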

    Replica state

    In normal operation, one member of the replica set will be primary and all the other members will be secondaries. This rarely changes and if there is a member election, we want to know why. Usually this happens within seconds and the condition resolves itself but we want to investigate the cause right away because there could have been a hardware or network failure.

    Flapping between states should not be a normal working condition and should only happen deliberately (e.g. for maintenance) or during a genuine incident such as a hardware failure.

    Lock % and disk i/o % utilization

    As of MongoDB 2.6, locking is at the database level, with work ongoing for document level locking in MongoDB 2.8. Writes take a database-wide lock, so if this happens too often then you will start seeing performance problems as other operations (including reads) get backed up in the queue.

    We’ve seen high effective lock % be a symptom of other issues within the database e.g. poorly configured indexes, no indexes, disk hardware failures and bad schema design. This means it’s important to know when the value is high for a long time, because it can cause the server to slow down (and become unresponsive, triggering a replica state change) or the oplog to start to lag behind.

    However, this alert can trigger too often, so you need to be careful. Set long delays, e.g. only alert if the lock remains above 75% for more than 30 minutes, and if you already have alerts on replica state and oplog lag, you can treat this as a non-critical alert.

    Related to this is how much work your disks are doing, i.e. disk i/o % utilization. A value approaching 100% indicates your disks are at capacity and you need to upgrade them, e.g. from spinning disk to SSD. If you are already using SSDs then you can provide more RAM, or you need to split the data into shards.

    MongoDB SSD performance benchmarks
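
    If you want to see where a figure like this comes from, here is a rough sketch of measuring i/o utilization on Linux by sampling /proc/diskstats, which is essentially what iostat’s %util column does; the device name is a placeholder:

    import time

    def io_ticks(device):
        # Field 13 of /proc/diskstats is the total time (ms) the device spent doing i/o
        with open("/proc/diskstats") as f:
            for line in f:
                parts = line.split()
                if parts[2] == device:
                    return int(parts[12])
        raise ValueError("device not found: %s" % device)

    device = "sda"      # adjust to the disk backing your MongoDB data files
    interval = 5.0      # seconds between samples

    before = io_ticks(device)
    time.sleep(interval)
    after = io_ticks(device)

    utilization = 100.0 * (after - before) / (interval * 1000)
    print("%s i/o utilization: %.1f%%" % (device, utilization))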

    Non-critical metrics to monitor MongoDB

    There are a range of other metrics you should keep track of on a regular basis. Even though they might be non-critical, investigating and dealing with them will help you avoid issues escalating into critical production problems.

    Memory usage and page faults

    Memory is probably the most important resource you can give MongoDB and so you want to make sure you always have enough! The rule of thumb is to always provide sufficient RAM for all of your indexes to fit in memory, and where possible, enough memory for all your data too.

    Resident memory is the key metric here – MongoDB provides some useful statistics to show what it is doing with your memory.

    Page faults are related to memory because a page fault happens when MongoDB has to go to disk to find the data rather than memory. More page faults indicate that there is insufficient memory, so you should consider increasing the available RAM.

    Connections

    Every connection to MongoDB has an overhead which contributes to the required memory for the system. The number of connections is initially limited by the Unix ulimit settings, but will ultimately be limited by server resources, particularly memory.

    High numbers of connections can also indicate problems elsewhere e.g. requests backing up due to high lock % or a problem with your application code opening too many connections.
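
    The memory, page fault and connection figures above all come from MongoDB’s serverStatus command. A minimal PyMongo sketch (the hostname is a placeholder; page fault counters are only reported on some platforms):

    from pymongo import MongoClient

    client = MongoClient("mongodb://db1.example.com")
    status = client.admin.command("serverStatus")

    print("Resident memory (MB):", status["mem"]["resident"])
    print("Page faults:", status.get("extra_info", {}).get("page_faults"))
    print("Current connections:", status["connections"]["current"])
    print("Available connections:", status["connections"]["available"])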

    Shard chunk distribution

    MongoDB will try and balance chunks equally around all your shards but this can start to lag behind if there are constraints on the system e.g. high lock % slowing down moveChunk operations. You should regularly keep an eye on how balanced the cluster is.

    We have released a free tool to help with this. It can be run standalone, programmatically or as part of a plugin for Server Density.
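
    If you want to check the balance yourself, the chunk counts live in the config database. A quick PyMongo sketch run against a mongos router (the hostname is a placeholder):

    from pymongo import MongoClient

    client = MongoClient("mongodb://mongos1.example.com")

    # Count chunks per shard from the cluster metadata
    pipeline = [{"$group": {"_id": "$shard", "chunks": {"$sum": 1}}}]
    for row in client.config.chunks.aggregate(pipeline):
        print("%s: %d chunks" % (row["_id"], row["chunks"]))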

    Tools to monitor MongoDB

    Now that you know what to keep an eye on, you need to know how to actually collect those monitoring statistics!

    Monitoring MongoDB in real time

    MongoDB includes a number of tools out of the box. These are all run against a live MongoDB server and report stats in real time:

    • mongostat – this shows key metrics like opcounts, lock %, memory usage and replica set status updating every second. It is useful for real time troubleshooting because you can see what is going on right now.
    • mongotop – whereas mongostat shows global server metrics, mongotop looks at the metrics on a collection level, specifically in relation to reads and writes. This helps to show where the most activity is.
    • rs.status() – this shows the status of the replica set from the viewpoint of the member you execute the command on. It’s useful to see the state of members and their oplog lag.
    • sh.status() – this shows the status of your sharded cluster, in particular the number of chunks per shard so you can see if things are balanced or not.

    MongoDB monitoring, graphs and alerts

    Although the above tools are useful for real time monitoring, you also need to keep track of statistics over time and get notified when metrics hit certain thresholds – some critical, some non-critical. This is where a monitoring tool such as Server Density comes in. We can collect all these statistics for you, allow you to configure alerts and dashboards and graph the data over time, all with minimal effort.

    MongoDB graphs

    If you already run your own on-premise monitoring using something like Nagios or Munin, there are a range of plugins for those systems too.

    MongoDB themselves provide free monitoring as part of the MongoDB Management Service. This collects all the above statistics with alerting and graphing, similar to Server Density but without all the other system, availability and application monitoring.

    Monitor MongoDB Slides

    Monitor MongoDB Video

  6. What’s in your on call playbook?

    Back in February we started centralising and revamping all our ops documentation. I played around with several different tools and ended up picking Google Docs to store all the various pieces of information about Server Density, our server monitoring application.

    We make use of Puppet to manage all of our infrastructure and this acts as much of the documentation – what is installed, configuration, management of servers, dealing with failover and deploys – but there is still a need for other written docs. The most important is the incident response guide, which is the step by step checklist our whole on-call team runs through when an alert gets triggered.

    iPhone Server Monitoring Alert

    Why do you need an incident response guide?

    As your team grows, you can’t just rely on one or two people knowing everything about how to deal with incidents in an ad-hoc manner. Systems will become more complex and you’ll want to distribute responsibilities around team members, so not everyone will have the same knowledge. During an incident, it’s important that the right things get done in the right order. There are several things to remember:

    • Log everything you do. This is important so that other responders can get up to speed and know what has been done, but is also important to review after the incident is resolved so you can make improvements as part of the postmortem.
    • Know how to communicate internally and with end-users. You want to make sure you are as efficient as possible as a team, but also keep your end-users up to date so they know what is happening.
    • Know how to contact other team members. If the first responder needs help, you need a quick way to raise other team members.

    All this is difficult to remember during the stress of an incident so what you need is an incident response guide. This is a short document that has clear steps that are always followed when an alert is triggered.

    Google Docs Incident Handling

    What should you have in your incident response guide?

    Our incident response guide contains 6 steps which I’ve detailed below, expanded upon to give some insight into the reasoning. In the actual document, they are very short because you don’t want to have complex instructions to follow!

    1. Log the incident in JIRA. We use JIRA for project management and so it makes sense to log all incidents there. We open the incident ticket as soon as the responder receives the alert and it contains the basic details from the alert. All further steps taken in diagnosing and fixing the problem are logged as comments. This allows us to refer to the incident by a unique ID, it allows other team members to track what is happening and it means we can link the incident to followup bug tasks or improvements as part of the postmortem.
    2. Acknowledge the alert in PagerDuty. We don’t acknowledge alerts until the incident is logged because we link the acknowledgment with the incident. This helps other team members know that the issue is being investigated rather than someone has accidentally acknowledged the alert and forgotten about it.
    3. Log into the Ops War Room in Hipchat. We use Hipchat for real time team communication and have a separate “war room” which is used only for discussing ongoing incidents. We use sterile cockpit rules to prevent noise and also pipe alerts into that room. This allows us to see what is happening, sorted by timestamp. Often we will switch to using a phone call (usually via Skype because Google Hangouts still uses far too much CPU!) if we need to discuss something or coordinate certain actions, because speaking is faster than typing. Even so, we will still log the details in the relevant JIRA incident ticket.
    4. Search the incident response Google Docs folder and check known issues. We have a list of known issues e.g. debug branches deployed or known problems waiting fixes which sometimes result in on-call alerts. Most of the time though it is something unusual and we have documentation on all possible alert types so you can easily search by error string and find the right document, and the steps for debugging. Where possible we try to avoid triggering on-call alerts to real people where a problem can be fixed using an automated script, so usually these steps are debug steps to help track down where the problem is.
    5. If the issue is affecting end-users, do a post to our status site. Due to the design of our systems, we very rarely have incidents which affect the use of our product. However, where there is a problem which causes customer impact, we post to our public status page. We try and provide as much detail as possible and post updates as soon as we know more, or at the very least every 30m even if there is nothing new to report. It seems counter-intuitive that publicising your problems would be a good thing, but customers generally respond well to frequent updates so they know when problems are happening. This is no excuse for problems happening too frequently but when they do happen, customers want to know.
    6. Escalate the issue if you can’t figure it out. If the responder can’t solve the issue then we prefer they bring in help sooner rather than prolong the outage. This is either by escalating the alert to the secondary on-call in PagerDuty or by calling other team members directly.

    Replying to customer emails

    Another note we have is regarding support tickets that come in reporting the issue. Inevitably some customers are not aware of your public status page and they’ll report any problems directly to you. We use Zendesk to set the first ticket as a “Problem” and direct the customer to our status page. Any further tickets can be set as “Incidents” of that “Problem” so when we solve the issue, we can do a mass reply to all linked tickets. Even though they can get the same info from the status page, it’s good practice to email customers too.

    What do you have in your playbook?

    Every company handles incidents differently. We’ve built this process up over years of experience, learning how others do things and understanding our own feelings when services we use have outages. You can do a lot to prevent outages but you can never eliminate them, so you need to spend just as much time planning the process for handling them. What do you have in your incident response processes? Leave a comment!

  7. Cloud location matters – latency, privacy, redundancy

    This article was originally published on GigaOm.

    Now that we’re seeing intense competition in the cloud infrastructure market, each of the vendors is looking for as many ways to differentiate itself as possible. Big wallets are required to build the infrastructure and picking the right locations to deploy that capital is becoming an important choice. Cloud vendors can be innovative on a product or technical level, but location is just as important — which geographies does your cloud vendor have data centers in and why does that matter?

    Why is location important?

    There are a number of reasons why a diverse range of locations is important:

    • Redundancy: Compared to the chances of a server failure, whole data center outages are rare — but they can happen. In the case of power outages, software bugs or extreme weather, it’s important to be able to distribute your workloads across multiple, independent facilities. This is not just to get redundancy across data centers but also across geographies so you can avoid local issues like bad weather or electrical faults. You need data centers close enough to minimize latency but far enough to be separated by geography.
    • Data protection: Different types of data have different locality requirements e.g. requiring personal data to remain within the EU.
    • User latency: Response times for the end user are very important in certain applications, so having data centers close to your users is important, and the ability to send traffic to different regions helps simplify this. CDNs can be used for some content but connectivity is often required to the source too.

    Deploying data centers around the world is not cheap, and this is the area where the big cloud providers have an advantage. It is not just a case of equipping and staffing data centers — much of the innovation is coming from how efficient those facilities are. Whether that means using the local geography to make data centers green, or building your own power systems, this all contributes to driving down prices, which can only truly be done at scale.

    How do the top providers perform?

    The different providers all have the concept of regions or data centers within a specific geography. Usually, these are split into multiple regions so you can get redundancy within the region, but this is not sufficient for true redundancy because the whole region could fail, or there could be a local event like a storm. Therefore, counting true geographies is important:

    Cloud provider locations

    Azure is in the lead with 12 regions followed by Softlayer (10), Amazon (8) and Rackspace (6). Google loses out, with only 3 regions.

    Where is the investment going?

    It’s somewhat surprising that Amazon has gone for so long with only a single region in Europe — although this may be about to change with evidence of a new region based in Germany. If you want redundancy then you really need at least 2 data centers nearby, otherwise latency will pose a problem. For example, replicating a production database between data centers will experience higher latency if you have to send data across the ocean (from the U.S. to Ireland, say). It’s much better to replicate between Ireland and Germany!

    AWS Map

    Softlayer is also pushing into other regions with the $1.2 billion investment it announced for new data centers in 2014. Recently it launched Hong Kong and London data centers, with more planned in North America (2), Europe (2), Brazil, UAE, India, China, Japan and Australia (2).

    Softlayer network map

    The major disappointment is Google. It’s spending a lot of money on infrastructure and actually has many more data centers worldwide than are part of Google Cloud – in the USA (6), Europe (3) and Asia (2) – which would place it second behind Microsoft. Of course, Google is a fairly new entrant into the cloud market and most of its demand is going to be from products like search and Gmail, where consumer requirements will dominate. Given the speed at which it’s launching new features, I expect this to change soon if it’s really serious about competing with the others.

    Google data center locations

    What about China?

    I have specifically excluded China from the figures above but it is still an interesting case. The problem is that while connectivity inside China is very good (in some regions), crossing the border can add significant latency and packet loss. Microsoft and Amazon both have regions within China, but they require a separate account and you usually have to be based in China to apply. Softlayer has announced a data center in Shanghai, so it will be interesting to see whether it can connect it to its global private network with good throughput. As for Google, it publicly left China 4 years ago so it may never launch a region there.

    It’s clear that location is going to be a competitive advantage, one where Microsoft currently holds first place but will lose it to Softlayer soon. Given the amount of money being invested, it will be interesting to see where cloud availability expands to next.

  8. How to monitor Nginx

    Update: We hosted a live Hangout on Air with Rick Nelson, Technical Solutions Architect at NGINX, in which we dug deeper into some of the issues discussed in this blog post. We’ve made the slides and video available, which can be found embedded at the bottom of this blog post.

    Nginx is a popular web server which is often used as a load balancer because of its performance. It is used extensively at Server Density to power our public facing UI and APIs, and also for its support for WebSockets. As such, monitoring Nginx is important because it is often the critical component between your users and your service.

    Monitor Nginx from the command line

    Monitoring Nginx in real time has advantages when you are trying to debug live activity or monitor what traffic is being handled in real time. These methods make use of the Nginx logging to parse and display activity as it happens.

    Enable Nginx access logging

    For monitoring the real time Nginx traffic, you first need to enable access logging by editing your Nginx config file and adding the access_log directive. As a basic example:

    
    server {
        access_log /var/log/nginx/access_log combined;
        ...
    }
    

    Then restart Nginx and tail the log as requests hit the server to see them in real time:

    
    tail -f /var/log/nginx/access_log
    

    Using ngxtop to parse the Nginx access log

    Whilst tailing the access log directly is useful for checking a small number of requests, it quickly becomes unusable if you have a lot of traffic. Instead, you can use a tool like ngxtop to parse the log file for you, displaying useful monitoring stats on the console.

    
    $ ngxtop
    running for 411 seconds, 64332 records processed: 156.60 req/sec
    
    Summary:
    |   count |   avg_bytes_sent |   2xx |   3xx |   4xx |   5xx |
    |---------+------------------+-------+-------+-------+-------|
    |   64332 |         2775.251 | 61262 |  2994 |    71 |     5 |
    
    Detailed:
    | request_path                             |   count |   avg_bytes_sent |   2xx |   3xx |   4xx |   5xx |
    |------------------------------------------+---------+------------------+-------+-------+-------+-------|
    | /abc/xyz/xxxx                            |   20946 |          434.693 | 20935 |     0 |    11 |     0 |
    | /xxxxx.json                              |    5633 |         1483.723 |  5633 |     0 |     0 |     0 |
    | /xxxxx/xxx/xxxxxxxxxxxxx                 |    3629 |         6835.499 |  3626 |     0 |     3 |     0 |
    | /xxxxx/xxx/xxxxxxxx                      |    3627 |        15971.885 |  3623 |     0 |     4 |     0 |
    | /xxxxx/xxx/xxxxxxx                       |    3624 |         7830.236 |  3621 |     0 |     3 |     0 |
    | /static/js/minified/utils.min.js         |    3031 |         1781.155 |  2104 |   927 |     0 |     0 |
    | /static/js/minified/xxxxxxx.min.v1.js    |    2889 |         2210.235 |  2068 |   821 |     0 |     0 |
    | /static/tracking/js/xxxxxxxx.js          |    2594 |         1325.681 |  1927 |   667 |     0 |     0 |
    | /xxxxx/xxx.html                          |    2521 |          573.597 |  2520 |     0 |     1 |     0 |
    | /xxxxx/xxxx.json                         |    1840 |          800.542 |  1839 |     0 |     1 |     0 |
    

    Nginx monitoring and alerting – Nginx stats

    The above tools are handy for manual monitoring but they don’t help if you want to automatically collect Nginx monitoring statistics and configure alerts on them. Nginx alerting is useful for ensuring your web server availability and performance remain high.

    The basic Nginx monitoring stats are provided by HttpStubStatusModule – metrics include requests per second and number of connections, along with stats for how requests are being handled.

    Server Density supports parsing the output of this module to automatically graph and trigger alerts on the values, so we have a guide to configuring HttpStubStatusModule too. Using this module you can keep an eye on the number of connections to your server, and the requests per second throughput. What values these “should” be will depend on your application and hardware.

    nginx monitoring alerts
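
    The stub status output is plain text and easy to parse yourself. As a minimal sketch in Python, assuming the status page is exposed at /nginx_status on the local server (use whatever location you configured):

    import re
    import urllib.request

    raw = urllib.request.urlopen("http://localhost/nginx_status").read().decode()

    # Typical stub_status output:
    #   Active connections: 291
    #   server accepts handled requests
    #    16630948 16630948 31070465
    #   Reading: 6 Writing: 179 Waiting: 106
    active = int(re.search(r"Active connections:\s+(\d+)", raw).group(1))
    accepts, handled, requests = map(int, raw.splitlines()[2].split())
    reading, writing, waiting = map(
        int, re.search(r"Reading: (\d+) Writing: (\d+) Waiting: (\d+)", raw).groups()
    )

    print("Active connections:", active)
    print("Total requests:", requests)
    print("Reading %d, writing %d, waiting %d" % (reading, writing, waiting))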

    A good way to approach configuring Nginx alerts is to understand what kind of baseline traffic your application experiences and set alerts around this e.g. alert if the stats are significantly higher (indicating a sudden traffic spike) and if the values are suddenly significantly lower (indicating a problem preventing traffic somewhere). You could also benchmark your server to find out at what traffic level things start to slow down and the server becomes too overloaded – this will then act as a good upper limit which you can trigger alerts at too.

    Nginx monitoring and alerting – server stats

    Monitoring Nginx stats like requests per second and number of connections is useful to keep an eye on Nginx itself, but its performance will also be affected by how overloaded the server is. Ideally you will be running Nginx on its own dedicated instance so you don’t need to worry about contention with other applications.

    Web servers are generally limited by CPU and so your hardware spec should offer the web server as many CPUs and/or cores as possible. As you get more traffic then you will likely see the CPU usage increase.

    CPU % usage itself is not necessarily a useful metric to alert on because the values tend to be per CPU or per core. It’s more useful to set up monitoring on average CPU utilisation across all CPUs or cores. Using a tool such as Server Density, you can visualise this and configure alerts so you can be notified when the CPU is overloaded – our guide to understanding these metrics and configuring CPU alerts will help.

    On Linux this average across all CPUs is abstracted out to another system metric called load average. It is a decimal number rather than a percentage and allows you to understand load from the perspective of the operating system i.e. how long processes are waiting for access to the CPU. The recommended threshold for load average therefore depends on how many CPUs and cores you have – our guide to load average will help you understand this further.

    Monitoring Nginx and load balancers with Nginx Plus

    If you purchase a commercial version of Nginx then you get access to more advanced monitoring (and other features) without having to recompile Nginx with the HttpStubStatusModule enabled.

    Nginx Plus includes monitoring stats for connections, requests, load balancer counts, upstream metrics, the status of different load balancer upstreams and a range of other metrics. A live example of what this looks like is provided by Nginx themselves. It also includes a JSON Nginx monitoring API which would be useful for pulling the data out into your own tools.

    Monitoring nginx in real time
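
    As a hedged sketch of consuming that JSON API in Python – the /status path and the field names are assumptions based on typical Nginx Plus example configurations, so check them against your own setup:

    import json
    import urllib.request

    raw = urllib.request.urlopen("http://localhost/status").read().decode()
    status = json.loads(raw)

    # Field names depend on your Nginx Plus version - inspect the JSON first
    print("Total requests:", status["requests"]["total"])
    print("Active connections:", status["connections"]["active"])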

    Monitoring the remote status of Nginx

    All of the above metrics monitor the internal status of Nginx and the servers it is running on, but it is also important to monitor the experience your users are getting. This is achieved by using external status and response time tools – you want to know whether your Nginx instance is serving traffic from different locations around the world (wherever your customers are) and what response times your users are seeing.

    This is easy to do with a service like Server Density because of our in-built website monitoring. You can check the status of your public URLs and other endpoints from custom locations and get alerts when performance drops or there is an outage.

    This is particularly useful when you can build graphs to correlate the Nginx and server metrics with remote response time, especially if you are benchmarking your servers and want to know when a certain load average starts to affect end user performance.

    Monitor Nginx Slides

    Monitor Nginx Video

  9. Sysadmin Sunday 189

  10. Migrating a high throughput app to a new environment with zero downtime

    We’re currently in the process of completing a project to migrate our server and website monitoring product, Server Density, from Softlayer to Google Compute Engine. We have over 100 servers powering the service, processing 30-40TB of incoming data each month and writing over 1 billion documents into MongoDB each day. This makes Server Density a non-trivial workload, and given that we monitor over 42,000 servers for our customers, we need to do this with zero downtime and minimal impact.

    This post is about how we planned and are executing the migration process.

    Server Density architecture

    Why we’re migrating to Google Cloud

    Over the last month I’ve given talks in the UK and US about the migration process and in the talk video at the bottom of this post I go into more detail about the reasons. I’ll also be writing up why we decided to move to Google Cloud in the near future. Please leave a comment if you have any particular questions!

    Steps to migration

    Given the requirements for zero downtime and minimal impact, and how complicated the migration process is, there are a number of steps we have gone through for the project.

    1. Planning
    2. Testing
    3. Replication
    4. Switch over

    One of the key principles for the entire project is to keep it as simple as possible – we run servers on Softlayer and we plan to migrate 1:1 to servers on Google Compute Engine. This means not using many of the Google Cloud features like load balancers and snapshots for backups for the initial move. We want to change as few things as possible and although we plan to use these features in the future, for the migration we will replicate our existing environment as closely as possible.

    Simplicity

    Step 1: Planning

    Migrating 100 servers and a large amount of traffic involves a lot of moving parts and so we need to plan the steps meticulously. Some key questions can be asked:

    • What is different? – the new environment will have things which differ from the old. Perhaps different OS images are available. Maybe the networking is configured in a different way. How are regions and zones defined? Many people will be used to the idea that Amazon zones are independent but a whole region can still fail, whereas Google specifically says deploying across zones is sufficient for redundancy – although of course it’s never possible to guarantee this 100%!
    • Are there any vendor specific APIs? – you may be using APIs or features specific to the current vendor that aren’t available or the same in the new environment. Scripts may need to be reimplemented.
    • What’s new to learn? – linked to the above questions, this is about building knowledge about how the new environment works and ensuring that the whole team is aware of how to do their normal tasks. Failover, support processes, access to resources, etc.
    • Costs – you’ll have to duplicate the existing environment and run the two simultaneously for a while, so costs could easily double. This will need to be closely planned to reduce duplication as much as possible. The costs of the new environment should be fully modelled against the existing environment as closely as possible, so you don’t get any surprises such as i/o or networking costs.

    Step 2: Testing

    This stage has taken the longest because we needed to realistically simulate production workloads on a very different environment. At Softlayer we have the luxury of custom spec dedicated servers with SSDs for our MongoDB deployments and we want to ensure we get at least the same level of performance on Google Compute Engine.

    I discovered some important information such as Google Compute Engine instance limits on both the disk volumes and VM itself, how to size the disks correctly and that RAID is unnecessary.

    It was also important to ensure that we were using the latest Linux kernel, and at least Linux 3.3 to get the best disk performance on the Google SSDs. More recently I have been testing a new Google disk product which gives even better MongoDB performance, which I expect will be announced into general availability soon!

    The common problems I found on GCE were to do with volume sizes being too small, because IOPS scale linearly with volume size and you can easily size a disk too small if you just go on the required disk space.

    Common Google Compute Engine problems
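
    A rough way to avoid that trap is to size for IOPS rather than space. The sketch below is illustrative only – the IOPS-per-GB rate is a placeholder, so check the current Google Compute Engine persistent disk documentation for the real figures:

    IOPS_PER_GB = 30          # placeholder rate - check the current GCE docs
    required_iops = 15000     # what your workload needs at peak
    required_space_gb = 200   # what you need for the data itself

    size_for_iops_gb = required_iops / float(IOPS_PER_GB)
    recommended_gb = max(required_space_gb, size_for_iops_gb)
    print("Provision at least %.0f GB" % recommended_gb)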

    Step 3: Replication

    This is where the real migration starts and it involves duplicating the entire environment onto the new provider. We use Puppet to do most of the work here because it defines our entire infrastructure and so it is very easy to duplicate servers elsewhere – all the configuration is defined centrally.

    MongoDB has very good replication features and so the way to replicate the data is simply to set up new secondaries in the new environment. Benchmarking the connectivity between Softlayer and Google’s US region shows we have over 250Mbps of throughput, which is about the same as we get between Softlayer’s WDC and SJC data centers, where we replicate for redundancy.

    Setting up replication with MongoDB requires either using SSL for all connections between replica sets and the clients, or setting up a VPN. SSL is only available in MongoDB with a custom compiled version (or the Enterprise version), and in MongoDB 2.4 it’s not that good – you can’t do a rolling upgrade, so the entire cluster must either use SSL everywhere or not at all. MongoDB 2.6 solves this, but we currently run 2.4 in production and, as a rule, we never upgrade to a new major MongoDB version straight away.

    This means we have to set up a VPN, which we can do using built in VPN support from both Softlayer and Google.

    VPN or SSL?

    Step 4: Switch over

    Since we are replicating our environment on Google Compute Engine, it is effectively just a secondary data center and so we can switch over just as if we were failing over from the primary data center.

    The process would involve freezing the Softlayer MongoDB nodes so they cannot become primary, issuing a step down so the Google nodes become primary and then switching the DNS so that traffic goes to Google first. There may be some latency for users if they go into the web servers in the old facility and data has to transit to the other provider across the VPN, but it should be transparent and importantly, cause no downtime.
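
    As a hedged sketch of what those replica set commands look like from PyMongo – the hostnames are placeholders, and the timings would come from our rehearsed checklist rather than these values:

    from pymongo import MongoClient
    from pymongo.errors import AutoReconnect

    # Freeze a Softlayer secondary so it cannot be elected primary for an hour
    softlayer_node = MongoClient("mongodb://sl-db1.example.com")
    softlayer_node.admin.command("replSetFreeze", 3600)

    # Ask the current primary to step down; the connection is dropped as part of
    # the step down, so the driver may raise an error we can safely ignore
    primary = MongoClient("mongodb://sl-db2.example.com")
    try:
        primary.admin.command("replSetStepDown", 120)
    except AutoReconnect:
        pass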

    Obviously, this is a simple explanation of how it will work. In practice we have a detailed checklist of exactly what steps will be taken and when. This is important to devise and rehearse in advance so we can be sure we don’t miss anything on the big day.

    After the switch, once we have verified all functionality is working, we can begin shutting down the old environment. This will consist of first shutting down the servers and then, several days later, cancelling them with Softlayer. This is so we can spot anything that might have been missed without actually destroying any servers!

    Switchover

    Current status

    We’re in the final stages of preparing the new environment at Google and so you can expect a followup post once the migration has been completed to explain how it went, what we learned and any issues we had to solve. If you’re a customer then we’ll be announcing the migration dates on our status page along with new IPs if you happen to have whitelisted our current ones.

    Talk video

    This post was first given as a talk at the Google Cloud Bay Area meetup in June 2014 and again at the July 2014 Scale Responsibly event in London. The video from the London talk is below: