Why we moved away from “the cloud” to a “real” server

By David Mytton,
CEO & Founder of Server Density.

Published on the 8th September, 2009.

Up until the end of August, our server monitoring service, Server Density, was hosted in “the cloud”. We had no physical hardware and were using virtualised instances provided by Slicehost and The Rackspace Cloud. This allowed us to start very small and cheap and expand as the service grew – we were paying for what we used, something which is ideal for the first stages of a startup. Everything worked very well and we had few problems.


Back in Feb 2009 we started out with a 256MB Slicehost instance for only $20 per month. By the end of August we had a 2GB Slicehost instance and an 8GB Rackspace Cloud instance, costing $130 and $345 respectively. We also had a 256MB Rackspace Cloud instance, used as a VPN through which we accessed the servers, at $10 per month. The total expenditure was therefore $485 per month.
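
As a quick sanity check of those figures, the monthly spend sums as follows (a sketch in Python; amounts as quoted above):

```python
# Monthly cloud hosting costs as of August 2009 (USD, as quoted above)
costs = {
    "Slicehost 2GB (application)": 130,
    "Rackspace Cloud 8GB (database)": 345,
    "Rackspace Cloud 256MB (VPN)": 10,
}

total = sum(costs.values())
print(total)  # 485
```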


The Rackspace Cloud options only became available after we had been using Slicehost for several months and, given that they were half the price, we decided to expand our database server with them rather than Slicehost. The reason for the price difference is that Slicehost includes a data transfer allowance whereas The Rackspace Cloud charges per GB; since most of our transfer is incoming, that allowed us to save substantially.

Slicehost is also owned by Rackspace and the two run in the same data centre, which meant we were able to use the internal network to communicate between our Slicehost and Rackspace Cloud instances. There was no disruption to the service as we kept serving the application from Slicehost and the database from The Rackspace Cloud.

The disk space problem

The problem we were facing was disk space. Since we store a lot of historical monitoring data, we needed a large amount of storage. The only way we could increase our available storage was to increase the instance size. This was not scalable and so we set about looking at the alternatives.


From a technical interest point of view, as well as for scalability, we wanted to move to Amazon EC2. It would have been very cool to deploy on top of a virtualised environment where we could load balance, implement a failover system and make use of Elastic Block Storage for our data storage needs.

Although we needed little instance storage, our database needs to run 64-bit and so we would have had to go with the large EC2 instance. Our pricing calculations looked like this:

  • Large EC2 Instance: $292
  • Transfer In: $15 (150GB)
  • Transfer Out: $4 (20GB)
  • Elastic Block Storage: $90 (360GB inc backup)
  • Elastic Block Storage Requests: $30 (300 million, based on tests)

The total cost of which is $431, but we also wanted their support services, which are priced at $100, for a total of $531 USD per month. The number of Elastic Block Storage requests is a big variable that is hard to predict. We ran our database on an instance for 24 hours and estimated that we would be making at least 10 million I/O requests per day.
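
Those line items can be checked with a short Python sketch; the per-million I/O rate here is back-calculated from the $30 / 300 million request figures above rather than taken from a price list:

```python
# Estimated monthly Amazon EC2 costs (USD), using the figures above.
io_per_day = 10_000_000          # measured over a 24-hour test run
io_per_month = io_per_day * 30   # 300 million requests per month

line_items = {
    "Large EC2 instance": 292,
    "Transfer in (150GB)": 15,
    "Transfer out (20GB)": 4,
    "EBS storage (360GB inc. backup)": 90,
    # $0.10 per million requests, back-calculated from $30 / 300M above
    "EBS requests": io_per_month / 1_000_000 * 0.10,
}

subtotal = sum(line_items.values())   # 431
total = subtotal + 100                # plus support services = 531
print(subtotal, total)
```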

Hardware is cheap

There are also technical complexities in that you have to provision EBS storage before you use it, and if you need to resize then you have to take a snapshot and rebuild from that.

We would have had to build our own infrastructure management system to handle starting and failing over EC2 instances, backups, and provisioning additional storage. In order to have no downtime while storage was being re-provisioned, we would need a second instance to replicate the database on. Aside from the extra server costs, all of this would have taken development time away from improving the product itself.

We had the choice of working on infrastructure that makes no difference to the customer experience (but which would have been technically interesting and fun to develop) versus tangible progress on our product:

With six years of experience running my own software company I can tell you that nothing we have ever done at Fog Creek has increased our revenue more than releasing a new version with more features. Nothing. The flow to our bottom line from new versions with new features is absolutely undeniable.

Joel Spolsky

And that’s the hidden cost – development time – something that is particularly important for a startup. As Jeff Atwood says, “Hardware is Cheap, Programmers are Expensive”.

Given the situation, the move we made was to get a physical server with Rackspace. Although there was a cost increase compared to the existing solution, it was not much more expensive than the move to Amazon EC2 would have been, and there were no development costs. Since the new server is in the same data centre as our Slicehost and Rackspace Cloud instances, the move was very easy.

Uptime and security

As a server monitoring product, the service needs to be available. There is no such thing as 100% uptime but Rackspace is known for its reliable network and systems.

Their support was another factor in the decision making process. Rackspace know how our systems are set up and, like us, continuously monitor them so that if there is an issue, we can work together to fix the problem as quickly as possible. Their support is indeed “Fanatical” – they even swapped out an older Cisco firewall model for a newer one for free so we could access our servers through a secure VPN on our iPhones.

You can also get a great deal by negotiating with the sales team. Cloud pricing is fixed, but moving to a dedicated Rackspace server got us amazing value for money on the hardware we have. And now that we have an existing relationship, future upgrades can be negotiated.

Further, since we process customer card transactions through our servers (we collect details on our site but do not store them ourselves), we have to be PCI compliant, something that Amazon EC2 is not.

Not entirely clear skies


That said, we are making use of some “cloud” services. Our server has large internal storage but we also make use of the Rackspace Utility Network Attached Storage product. This allows us to scale disk space indefinitely and pay per GB we use. Unlike Amazon EBS, it really is per GB, not per provisioned storage. This has saved a lot of development time working out how to deploy our database across multiple disks or handle resizing existing volumes.

Frequent review

We are likely going to be a Rackspace customer for life. Their support is amazing and we know we can rely on them to deliver the service we would expect as customers. However, as we grow we will frequently review the situation to see if savings can be made without incurring unnecessary development time or sacrificing quality of service. Even if not for the primary hosting infrastructure, then for other aspects of our systems – for example, we are currently looking into how we can use The Rackspace Cloud to run off-site database replication for further redundancy in addition to our current backups.

“The cloud” has its uses, especially when starting up, but it is not always the best option.

  • Can you elaborate more on why increasing the instance size is not scalable?

    • If you’re near the limit of the server then it’s not a case of adding just disk capacity, you have to double the spec of the entire instance, which almost doubles the price. That’s a lot to pay if you’re just after disk space. The system we have now means we can scale just the disk space as we need.

  • You make a strong case for “real” hardware and it supports what we know about virtualized infrastructure: It’s not a direct replacement for real, physical machines. There are two specific use-cases which currently present a lot of value: ad-hoc or temporary infrastructure (dev and testing environments, prototypes, demos) and data-processing (analysis, reporting, etc…). Dynamic scaling is interesting too but it’s still an open problem.

    The most compelling thing that you highlighted is the increased development cost. The tool support for virtualized infrastructure is still very rudimentary so you end up having to do a lot of the heavy lifting yourself.

  • “we have to be PCI compliant, something that Amazon EC2 is not.”

    I never thought about issues such as that. Working in the cloud is fine up to a point, but in cases like that you have to have the extra security of real hardware.

    • joe schmoe

      PCI Compliance means not keeping the private data on your servers…No need for hardware to hold data it’s not supposed to hold ;)

      • That’s certainly one option, but it’s totally legitimate to store PCI data, and store it safely. You also have to meet PCI compliance on your apps/servers if you accept credit cards on the site (you don’t have to store them for PCI to apply).

      • You have to be compliant if you’re collecting the details through your site (on your server) even if you don’t store the data.

      • You can avoid the PCI compliance issue on AWS with something like http://www.braintreepaymentsolutions.com/

  • Very thoughtful article on funding the right fit for your needs as those needs developed over time. While not every offering makes sense to every use case, I’m glad we have been able to provide solutions along the way that make sense for your changing needs — this is our goal: to provide the best fit for customer needs. At each step we intended to include Fanatical Support no matter what.

    I’m sure we will read your story and discuss it internally. We’ll probably even reach out to you to learn how we can do better at meeting the changing needs of a company like yours over time. Feel free to contact us anytime to discuss these matters further. And, thank you for your continued trust as a customer.

    Robert J Taylor
    Sr Sys Engineer
    Customer Advocate
    Rackspace Hosting
    M: 210.548.5616

    • Thanks for your comment. We’d certainly be happy to discuss the points in the post. You should have my contact details through our customer account.

  • That “funding” was supposed to be “finding” — please excuse that and my other typos as I’m using a mobile device to respond away from the office :)

  • I can empathize! I wrote a similar post a couple weeks ago about my troubles scaling with Slicehost:


  • Interesting. I actually was wavering and thinking of going in the other direction, but I think after reading this I’ll stay where I am.

    I started right off the bat on a dedicated Rackspace server for one of my apps. This, in retrospect, was a mistake because (duh) I didn’t launch anything for at least 2 months after starting the server contract (and as you know Rackspace contracts aren’t cheap!). And even after the product launched, it took a while for revenue to grow to a point where costs were being covered – server costs being a big wedge of that.

    Things are good now though, but I still felt a bit burned from the first few months of low activity and paying a not insignificant amount per month despite no or low revenue. So for the next product I was thinking of launching on the cloud and then slowly moving all assets from the first product to the cloud too.

    This post helps me put things in perspective though – you are absolutely right that the cost in terms of time of faffing with cloud management is certainly put to better use actually improving the product.

    The goal for small web apps / web producers like me should be to get from $XXXX revenue per month to $XXXXX revenue per month through product development – not wasting resources trying to mitigate server costs by a few hundred dollars each month and creating more technical headaches that are ultimately irrelevant to end-users.

    Thanks for setting my mind straight!

  • max hodges

    why does your db need to be 64 bit?

  • Excellent post, Dave. We’re another loyal ‘Spacer, always having been on dedicated servers though. Moving over from a single Texas server last year to a London based scalable cluster has taken our hosting bill to near 50% of our wage bill but service reliability is soooooo key with what we do. We’re busy in the middle of a kernel update at the moment though – it’s our first time offline in just under a year, having fielded DDOS and nasty image scraper attacks on client websites too. Rackspace = recommended.
    See you next week :)

  • Hey David,

    Interesting post. One point in your post that seems a little misleading is saying that AWS is “Not PCI compliant”. If my understanding is correct, AWS is only not PCI _Level 1_ compliant. The vast majority of web sites out there don’t require that level of compliance (which requires on-site inspections yearly). Level 2, 3, and 4 don’t require on-site inspections. Here’s a link to a description of the level differences:

    So in theory, as long as you’re processing fewer than 6 _million_ VISA transactions a year (which probably covers 99.99% of web sites out there), you’re fine using AWS. I could be wrong, though :).


  • Joe

    Why didn’t you just do Rackspace Cloud Servers, and use the Cloud files service for cheap unlimited storage?

    • We did use Rackspace Cloud Servers, but the Cloud Files service, like Amazon S3, is designed for static files, not serving a database.

  • Andrew TIllman

    We have been using EC2 and my experience is that if you have fairly consistent computing requirements you really are better off with a “real” server.

    That being said, the type of online service that can really benefit from something like EC2 is one where demand spikes a lot: where there are certain times of the year when you are going to get a ton of demand, and other long periods when your demand is less than a tenth of your peak. Think of people that sell seasonal products, or something like the post office. In this situation EC2 gives you the real flexibility it seems to be designed for. You can quickly create more instances to handle your increased demand, and then get rid of them when your demand goes down. In this way you are not buying hardware to handle the biggest possible demand you get, and you can scale up and down as needed without having servers sitting idle a lot of the time.
