MongoDB connection overhead

By David Mytton,
CEO & Founder of Server Density.

Published on the 8th June, 2011.

After adding new web nodes for the launch of our plugin directory, we started seeing performance problems with our MongoDB database cluster. The symptom was increased response times in the main web app; using mongostat, we could see this was caused by queue spikes in MongoDB as queries backed up waiting to be executed.
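For anyone wanting to spot the same symptom, the backlog shows up in mongostat's qr|qw column (queued reads and writes). The invocation below is a generic sketch against a local mongod, not our exact command:

```shell
# Poll the mongod every 2 seconds; watch the qr|qw column –
# sustained non-zero values mean operations are queueing up
# waiting to be executed.
mongostat --host localhost:27017 2
```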

We spent several days optimising queries, and even tuning the servers mongod itself was running on (increasing RAM, removing non-essential services, disabling cron jobs), but we continued to see the spikes even with significantly reduced load and more than sufficient memory for the indexes plus data size.

Through discussions with 10gen, the company behind MongoDB, we narrowed down the problem to the number of connections to the main cluster. This had gone from ~1100 per mongod node to ~1500 (as a result of increased web nodes and traffic). It turns out that every connection has a fairly large overhead – 10MB on Linux – and this requires sufficient RAM to accommodate all connections. Even with the increased RAM and reduced load, connection overhead + data size + index size well exceeded the total available RAM.
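To make the arithmetic concrete, here is a rough back-of-the-envelope calculation. Only the 10MB-per-connection overhead and the ~1500 connection count come from above; everything else is a placeholder:

```python
# Rough RAM budget for connection overhead on one mongod node.
# 10MB per connection comes from the default Linux stack size (10240KB).
STACK_PER_CONN_MB = 10
connections = 1500

connection_overhead_mb = connections * STACK_PER_CONN_MB
print(connection_overhead_mb)               # 15000MB
print(round(connection_overhead_mb / 1024, 1))  # ~14.6GB, before any data or indexes
```

Nearly 15GB of RAM consumed by connections alone explains why data + indexes no longer fit in memory.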

So we tweaked our connection pooling and optimised how the web nodes use the mongos routers, reducing the number of connections to around 800. This helped significantly, but we were still concerned about the per-connection overhead of 10MB.
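The connection count is roughly the product of web nodes, worker processes per node, and pool size per process, so shrinking any one factor helps. A hypothetical illustration (the node, worker, and pool figures below are invented for the example, not ours):

```python
# Hypothetical figures – connections to the cluster scale multiplicatively.
web_nodes = 10
workers_per_node = 15
pool_size = 10          # connections each worker's pool keeps open

before = web_nodes * workers_per_node * pool_size   # 1500 connections
after = web_nodes * workers_per_node * (pool_size // 2)  # halving the pool: 750
print(before, after)
```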

This value is based on the Linux stack size, which defaults to 10240KB (10MB):

david@rs1a ~: ulimit -a
stack size (kbytes, -s) 10240

10gen suggested that they had done some testing on changing this value to 1024, reducing the overhead to just 1MB. However, this hasn't been as extensively tested as the default, so we decided to implement it on one of our shards first. The improvement was immediately noticeable and, after a period of testing, we deployed the change to all our servers.

On CentOS / Red Hat, this can be changed in the /etc/security/limits.conf file by adding the following 2 lines:

david hard stack 1024
david soft stack 1024

You should replace david with the name of the user MongoDB runs as. Log out, log back in and run ulimit -s to confirm the change has taken effect, then restart mongod. At low loads you may not see any effect, but as usage increases this significantly reduces the amount of RAM you need.
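A quick way to verify the limit, both for your shell session and for the running process (the pgrep guard simply makes the last step a no-op when mongod isn't running):

```shell
ulimit -s 1024      # lower the stack size for this shell session
ulimit -s           # prints 1024
# Confirm the restarted mongod actually inherited the new limit:
if pgrep -x mongod > /dev/null; then
    grep -i "stack" /proc/"$(pgrep -x mongod | head -n1)"/limits
fi
```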
