MongoDB performance: SSDs vs spindle SAS drives

By David Mytton,
CEO & Founder of Server Density.

Published on the 6th August, 2012.

For the storage of the historical time series data for our server monitoring service, Server Density, we have a cluster of MongoDB servers running across 2 data centres with Softlayer. There are 8 dedicated servers split into 4 shards with 2 nodes per shard (one per data centre). All 8 are of identical specification: Intel Xeon-SandyBridge E3-1270-Quadcore, 16GB RAM and 2Gbps networking running Ubuntu 10.04 LTS and MongoDB 2.0.6.
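
For illustration only, here is a minimal sketch (in Python with pymongo, with hypothetical hostnames and not our actual provisioning scripts) of how a cluster shaped like this might be assembled: each shard is a replica set with one member in each data centre, registered with the cluster through a mongos.

```python
# Minimal sketch only: hostnames are hypothetical. Each shard is a replica
# set spanning both data centres, added to the cluster via a mongos.
from pymongo import MongoClient

mongos = MongoClient("mongos.example.internal", 27017)

shards = [
    "shard0/dc1-mongo0.example.internal:27017,dc2-mongo0.example.internal:27017",
    "shard1/dc1-mongo1.example.internal:27017,dc2-mongo1.example.internal:27017",
    "shard2/dc1-mongo2.example.internal:27017,dc2-mongo2.example.internal:27017",
    "shard3/dc1-mongo3.example.internal:27017,dc2-mongo3.example.internal:27017",
]

for shard in shards:
    # addShard registers a replica set (name/seed list) as a shard
    mongos.admin.command("addShard", shard)
```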

We initially deployed these machines using 100GB SSDs; specifically the Micron RealSSD P300 (MTFDDAC100SAL-1N1AA). However, with data volumes growing significantly, we needed to decide how to scale our storage: by adding new shards or by upgrading the disks on the existing machines. We wanted to see what kind of MongoDB performance the SSDs were actually giving us.

Larger SSDs are expensive, so we wanted to measure the performance impact of replacing the SSDs with spinning disks. That would give us more room on the (cheaper) vertical scaling option before we needed to add more machines/shards and scale horizontally (more expensive). We expected some performance hit, obviously, but wanted actual metrics to understand the tradeoffs and make an informed decision.

So, we reprovisioned our MongoDB secondaries with Seagate Cheetah 15k SAS drives in RAID0 (for speed, not redundancy) so we could test against real data; benchmarks against test/dummy data would not answer our questions. We chose these drives because they're the fastest Softlayer offers, and they have the same interface speed as the SSDs (6Gb/s). They start at 73.4GB but go up to 600GB, which would keep the vertical scaling option open. We also set up another test with the SSDs in RAID0 so we were comparing equivalent setups, both using an Adaptec RAID controller.

Note that RAID0 doesn't provide redundancy, so a disk failure would take the node offline; we achieve redundancy through the MongoDB replica set instead.
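
As a rough illustration of that redundancy (hostnames are hypothetical, and the arbiter is our assumption for breaking election ties rather than something described in this post), one shard's two-data-centre replica set might be initiated like this:

```python
# Sketch of a two-data-centre replica set for one shard. The arbiter is an
# assumption to give the set an odd number of voters; the exact election
# configuration used in production is not described here.
from pymongo import MongoClient

node = MongoClient("dc1-mongo0.example.internal", 27017)

config = {
    "_id": "shard0",
    "members": [
        {"_id": 0, "host": "dc1-mongo0.example.internal:27017"},
        {"_id": 1, "host": "dc2-mongo0.example.internal:27017"},
        {"_id": 2, "host": "dc1-arbiter.example.internal:27017", "arbiterOnly": True},
    ],
}

# replSetInitiate bootstraps the set; a disk failure then only takes that
# node offline while the other member keeps serving the data
node.admin.command("replSetInitiate", config)
```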

Our RAM configuration is such that we can hold both the indexes and the last 24 hours of data in memory. Most customers query the most recent data most often, so we're set up to return that fastest. Queries for time ranges older than the last 24 hours page to disk, so disk performance is important to keep the time users wait for their graphs to plot to a minimum.
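
As a rough sanity check (database and collection names here are hypothetical, and this is our illustration rather than the exact check we run), collection stats can be used to confirm that the current day's data plus its indexes fit within a node's RAM:

```python
# Rough illustration: compare the current day's data size plus index size
# against the 16GB of RAM in each node. Names are hypothetical; whether
# "indexes" covers all collections or just the current day's is our reading.
from pymongo import MongoClient

client = MongoClient("mongos.example.internal", 27017)
db = client["metrics"]  # hypothetical database name

stats = db.command("collStats", "metrics_20120806")  # one collection per day

working_set = stats["size"] + stats["totalIndexSize"]  # bytes
ram_bytes = 16 * 1024 ** 3  # 16GB per node

print("last 24h data + indexes: %.2f GB" % (working_set / float(1024 ** 3)))
print("fits in RAM: %s" % (working_set < ram_bytes))
```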

Since we wanted to test query speed when data has to be paged in from disk, and we split our data into a collection per day, we wrote some scripts to run queries against older collections to force page faults. The script reads every document out of the collection through a mongos (so the query hits all shards), around 300,000 documents in total. In all cases the single SSD was fastest, followed by the SSDs in RAID0, then the SAS disks.
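
A simplified sketch of that kind of script is below (hostname, database and collection names are hypothetical): it reads an older per-day collection in full through a mongos and times the iteration.

```python
# Simplified sketch of the page-fault test: iterate every document in an
# older per-day collection through a mongos and time how long it takes.
import time

from pymongo import MongoClient

client = MongoClient("mongos.example.internal", 27017)
collection = client["metrics"]["metrics_20120701"]  # an older per-day collection

start = time.time()
count = 0
for doc in collection.find():  # full read through mongos, so every shard is hit
    count += 1
elapsed = time.time() - start

print("read %d documents in %.2f seconds" % (count, elapsed))
```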

[Chart: SSD vs 15k SAS for MongoDB]

Lower is better: the times show how long it took to iterate through every document. With this data we can ask the question: are the cost savings gained from not using SSDs worth making users wait an extra ~6 seconds for their data to load? If we did make that cost saving, the data also tells us where we can get quick performance improvements in the future, just by spending some more money (the easiest way, but not necessarily an option depending on the stage of your company!).

Having read this far, why not subscribe to our RSS feed or follow us on Twitter?
