Saving $500k per month buying your own hardware: cloud vs co-location

By David Mytton,
CEO & Founder of Server Density.

Published on the 23rd November, 2016.

Editor’s note: This is an updated version of an article originally published on GigaOm on 07/12/2013.

A few weeks ago we compared cloud instances against dedicated servers. We also explored various scenarios where it can be significantly cheaper to use dedicated servers instead of cloud services.

But that’s not the end of it. Since you are still paying on a monthly basis, projecting the costs out over one to three years means you end up paying much more than it would have cost to purchase the hardware outright. This is where buying and co-locating your own hardware becomes a more attractive option.

Putting the numbers down: cloud vs co-location

Let’s consider the case of a high-throughput database hosted on suitable machines in each model: a cloud instance, a rented dedicated server and a purchased, co-located server. For dedicated instances, Amazon has a separate fee structure, and on Rackspace you effectively have to get their largest instance type.

So, calculating those costs out for our database instance on an annual basis would look like this:

Amazon EC2 c3.4xlarge dedicated heavy utilization reserved
Pricing for 1-year term
$4,785 upfront cost
$0.546 effective hourly cost
$2 per hour, per region additional cost
$4,785 + ($0.546 + $2.00) * 24 * 365 = $27,087.96

Rackspace OnMetal I/O
Pricing for 1-year term
$2.46575 hourly cost
$0.06849 additional hourly cost for managed infrastructure
Total Hourly Cost: $2.53424
$2.53424 * 24 * 365 = $22,199.94
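
If you want to rerun these numbers against your own rates, the arithmetic is simple enough to script. Here is a minimal sketch in Python using the figures above (the only inputs are the hourly rates and any upfront fee):

    HOURS_PER_YEAR = 24 * 365  # 8,760 hours

    def annual_cost(hourly_rate, upfront=0.0):
        """Cost of running one instance continuously for a year."""
        return upfront + hourly_rate * HOURS_PER_YEAR

    # Amazon EC2 c3.4xlarge: 1-year heavy utilization reserved,
    # plus the $2/hour per-region dedicated instance fee
    ec2 = annual_cost(0.546 + 2.00, upfront=4785)   # 27,087.96

    # Rackspace OnMetal I/O plus the managed infrastructure fee
    rackspace = annual_cost(2.46575 + 0.06849)      # 22,199.94

    # SoftLayer bare metal, billed monthly
    softlayer = 491 * 12                            # 5,892

    print(f"EC2: ${ec2:,.2f}  Rackspace: ${rackspace:,.2f}  SoftLayer: ${softlayer:,}")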

Softlayer

Given the annual cost of these instances, it makes sense to consider dedicated hardware, where you rent the resources and the provider is responsible for upkeep. Here at Server Density we use SoftLayer, now owned by IBM, and have dedicated hardware for our database nodes. IBM is becoming very competitive with Amazon and Rackspace, so let’s add a similarly spec’d dedicated server from SoftLayer at list prices. To match a similar spec we can choose the Monthly Bare Metal Dual Processor (Xeon E5-2620, 2.0GHz, 32GB RAM, 500GB storage). This comes to a monthly cost of $491, or $5,892/year.

Dedicated servers summary

Rackspace Cloud: $22,199.94
Amazon EC2: $27,087.96
SoftLayer Dedicated: $5,892

Let’s also assume purchase and colocation of a Dell PowerEdge R430 (two 8-core processors, 32GB RAM, 1TB SATA disk drive).

The R430 one-time list price is $3,774.45 – some 36% less than a single year of the SoftLayer server at $5,892. Of course there will be additional running costs, such as power and bandwidth, depending on where you choose to colocate your server. Power usage in particular is difficult to estimate, because you’d need to stress test the server to figure out the maximum draw and then run real workloads to see what your normal usage is.

Running our own hardware

We have experimented with running our own hardware in London. To draw some conclusions we used our own 1U Dell server, which has specs very similar to the R430 above. Under everyday usage the server draws close to 0.6A; stress tested with everything maxed out, it peaks at 1.2A.
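
To put rough numbers on that draw, here is a minimal sketch that converts a continuous current draw into an estimated annual electricity cost. The 230V supply and $0.15/kWh price are assumptions for the sake of the example rather than figures from our ISP, and many facilities bill on provisioned capacity rather than metered usage, so treat this as a lower bound:

    VOLTS = 230            # assumed single-phase supply voltage
    PRICE_PER_KWH = 0.15   # assumed electricity price in USD; varies by facility

    def annual_power_cost(amps, volts=VOLTS, price=PRICE_PER_KWH):
        """Estimate the yearly electricity cost of a continuous current draw."""
        kilowatts = amps * volts / 1000.0
        return kilowatts * 24 * 365 * price

    print(f"Typical draw (0.6A): ${annual_power_cost(0.6):,.2f}/year")   # ~$181
    print(f"Stress test (1.2A):  ${annual_power_cost(1.2):,.2f}/year")   # ~$363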

Hosting this with the ISP who supplies our office works out at $140/month or $1,680/year. This makes the total annual cost figures look as follows:

Rackspace Cloud: $22,199.94
Amazon EC2: $27,087.96
SoftLayer Dedicated: $5,892
Co-location: $5,454.45 in year 1, then $1,680/year

With Rackspace, Amazon and SoftLayer you’d have to pay the above price every year. With co-location, on the other hand, after the first year the annual cost drops to $1,680 because you already own the hardware. What’s more, the hardware can also be considered an asset yielding tax benefits.
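
Projecting the same figures over a typical three-year hardware lifetime makes the gap clearer. A quick sketch using only the numbers above (it ignores financing, hardware failures and price changes):

    YEARS = 3

    annual = {
        "Rackspace Cloud":     22199.94,
        "Amazon EC2":          27087.96,
        "SoftLayer Dedicated":  5892.00,
    }

    for name, cost in annual.items():
        print(f"{name:20s} ${cost * YEARS:>10,.2f} over {YEARS} years")

    # Co-location: buy the hardware once, then pay hosting only
    colo = 3774.45 + 1680.00 * YEARS
    print(f"{'Co-location':20s} ${colo:>10,.2f} over {YEARS} years")

Over three years the co-located server comes to $8,814.45, roughly half the SoftLayer dedicated cost and around a tenth of the EC2 bill.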

Large scale implementation

While we were still experimenting on a small scale, I spoke to Mark Schliemann, who back then was VP of Technical Operations at Moz.com. They had been running a hybrid environment and had recently moved the majority of it off AWS and into a colo facility with Nimbix. Still, they kept using AWS for processing batch jobs (the perfect use case for elastic cloud resources).

Moz worked on detailed cost comparisons to factor in the cost of the hardware leases (routers, switches, firewalls, load balancers, SAN/NAS storage and VPN), virtualization platforms, miscellaneous software, monitoring software/services, connectivity/bandwidth, vendor support, colo and even travel costs. Using this to calculate their per-server costs meant that on AWS they would spend $3,200/month vs. $668/month on their own hardware. Projected out over a year, their calculations came to $8,096 on their own hardware vs. $38,400 at AWS.

Optimizing utilization is much more difficult on the cloud because of the fixed instance sizes. Moz found they were much more efficient running their own systems virtualized because they could create the exact instance sizes they needed. Cloud providers often increase CPU allocation alongside memory whereas most use cases tend to rely on either one or the other. Running your own environment allows you to optimize this balance, and this was one of the key ways Moz improved their utilization metrics. This has helped them become more efficient with their spending.

Here is what Mark told me: “Right now we are able to demonstrate that our colo is about 1/5th the cost of Amazon, but with RAM upgrades to our servers to increase capacity we are confident we can drive this down to something closer to 1/7th the cost of Amazon.”

Co-location has its benefits, once you’re established

Co-location looks like a winner but there are some important caveats:

  • First and foremost, you need in-house expertise because you need to build and rack your own equipment and design the network. Networking hardware can be expensive, and if things go wrong your team needs to have the capacity and skills to resolve any problems. This could involve support contracts with vendors and/or training your own staff. However, it does not usually require hiring new people because the same team that deals with cloud architecture, redundancy, failover, APIs, programming, etc, can also work on the ops side of things running your own environment.
  • The data centers chosen have to be easily accessible 24/7 because you may need to visit at unusual times. This means having people on-call and available to travel, or paying the data center’s remote hands service high hourly fees to fix things.
  • You have to purchase the equipment upfront, which means a large capital outlay (although this can be mitigated by leasing).

So what does this mean for the cloud? On a pure cost basis, buying your own hardware and colocating is significantly cheaper. Many will say that the real cost is hidden in staffing requirements but that’s not the case because you still need a technical team to build your cloud infrastructure.

At a basic level, compute and storage are commodities. The way the cloud providers differentiate is with supporting services. Amazon has been able to iterate very quickly on innovative features, offering a range of supporting products like DNS, mail, queuing, databases, auto scaling and the like. Rackspace was slower to do this but has already started to offer similar features.

The flexibility of the cloud needs to be highlighted again too. Once you buy hardware you’re stuck with it for the long term; the point of the example above is that you have a known, steady workload.

Considering the hybrid model

Perhaps a hybrid model makes sense, then? I believe this is a good middle ground, and it is the model I saw Moz making good use of. You can service your known workloads with dedicated servers and then connect to the public cloud when you need extra flexibility. Data centers like Equinix offer Direct Connect services into the big cloud providers for this very reason, and SoftLayer offers its own public cloud to go alongside dedicated instances. Rackspace is placing bets in all camps with public cloud, traditional managed hosting, a hybrid of the two, and support services for OpenStack.

And when should you consider switching? Nnamdi Orakwue, Dell VP of Cloud until late 2015, said companies often start looking at alternatives when their monthly AWS bill hits $50,000, but is even that threshold too high?



  • Vincent Janelle

    Are you including the costs of having your own DC team(s), hardware upgrades (3 year cycles of depreciation), bandwidth, network admins, maintaining multiple points of presence, etc?

    • Yes. The Moz costs include all that.

      • Vincent Janelle

        Ah, because some of the software/hardware renewal costs I have exceed that amount, even at low volumes. Seems a bit low, for annual pricing :)

  • mzzs

    a 500GB SATA system drive and a 400GB SSD

    Stopped reading, no credibility. If you’re going to talk about on-prem vs. cloud, compare apples to apples. A server with a single (SATA) OS disk and a single data disk is amateur stuff.

    Edit: Yes, you can build application-level redundancy so that the hardware underneath doesn’t matter. Yes, there are use cases for no RAID like the QA example below. No, just because Google does something doesn’t mean that the average company should do it. Google saves millions on hardware, but they’ve spent millions in engineering effort to do so and use in-house filesystems, cluster tools, etc. You’re not Google.

    Azure, AWS, and others have availability at the VM level, so if the hardware underneath your instance fails it keeps running and you don’t even know (unless you’re in US-EAST-1, ;) ). A single colocated server doesn’t have the hardware independence that these cloud-based services provide. There’s value to that which isn’t calculated here.

    This article is talking about renting a single cloud instance vs. purchasing and colocating a single server. By having a single server with a single disk, you need to buy more than one of them to get some level of redundancy, and your application has to support it at the application layer. This throws off all of the calculations in the article. Seriously guys, it’s silly.

    • This article is a continuation from https://blog.serverdensity.com/cloud-pricing-vs-dedicated-pricing-cheaper/ where I was specifically comparing the cloud compute instances to dedicated. This spec is completely valid for a large number of workloads from database servers to tools servers. Not every long running workload requires storage level redundancy.

      Comparing the compute costs is simplified by looking at on-instance storage, which is a completely legitimate way to get good performance and run databases when you get redundancy from having multiple nodes, especially if you couple this with deploying across zones/regions. You assume multi-server redundancy is something you have to implement yourself in your application. The example workload here is a database, and databases are very good at dealing with replication and failover, which you’d need anyway. So it’s no extra work.

      You’re expecting cloud providers to have magical redundancy on the host level so if a disk fails then you can simply migrate to a new one with no impact. This is a big misconception with the cloud – that it handles redundancy and scaling magically. That’s simply not true and you have to consider host level failures, which happen frequently.

      Your criticism would be partially valid if that’s where I’d stopped, but I went into much more detail through the Moz example, where their costs do consider single server level redundancy as well as multi-server, multi-region, etc. The cost analysis with a single server is indicative and is a good, simple example of the cost differences, but is just the introduction to the article. If you’d continued reading you could’ve learned more about what Moz are doing and what the various tradeoffs were with colo.

      • masasuka

        as someone who works for a company that maintains an enterprise-level cloud environment for hundreds of thousands of customers, this statement: “You’re expecting cloud providers to have magical redundancy on the host level so if a disk fails then you can simply migrate to a new one with no impact. This is a big misconception with the cloud” is beyond wrong.

        A ‘proper’ cloud setup (Amazon has this, Google, Microsoft, etc., including the one we run) has redundancy, meaning that if a host drive dies, another host picks up the slack; this happens automatically and takes less than 10 seconds to kick over. Storage in the cloud is floating, meaning that any host can access it on any drive array. If a front-end host dies, the controller takes it out of the load-balanced group and your site’s load is handled by another host. If one of the controllers dies, then auto failover kicks in and the backup host takes over the switching job. The ONLY ways to take down a good cloud system are to flood it with network traffic (rather hard to do, but as Amazon has had this happen, it’s possible), knock out the power (if this happens, it doesn’t matter if you’re in the cloud, on a rented server, or in a colo, your server is down), or kill the network (again, as with power, you’re out a server regardless of type).

        On to the actual article. One of the things you pay for with a dedicated server, as opposed to a colocated server, is hardware and software ‘always on’ guarantees. If you have a colocated server and a drive dies, you have to go to a store, buy a new drive (hopefully they have them in stock), then head to the datacentre and replace the drive. If you have a dedicated server, you only have to let the provider know the best time to run the RAID rebuild (usually a time when the load on the server is low, and the disk I/O of the rebuild wouldn’t impact performance). If you are a non-technical company, then you also have to factor in the cost of a systems admin as well as a website developer for a colocated unit, whereas for dedicated servers the company you purchase from will provide OS management and ensure it stays up and running. Also, you didn’t mention whether you include OS licensing costs (Red Hat, Windows, Ubuntu Advantage, etc.) as well as additional services that may be offered for free (backups, external firewall, load balancing, DNS management, etc.).

        • You might be describing how your cloud works, but that’s not how EC2 instances work, and it’s not how Google Compute Engine instances work in Europe (the US ones do have live migration).

          You’re also confusing the compute instance’s local storage with SAN-based network storage. The comparison was specifically with local instance storage on the host itself, with no network communication, because that’s optimal for database workloads.

          All the points you mentioned about dedicated vs colo are mentioned in the article and are valid. You do have to be responsible for hardware and maintenance which is part of what you’re paying for from a dedicated provider – those replacement time guarantees, spare parts, etc.

          One point I did concede on the original article comments posted at GigaOm was that both Server Density and Moz are technical companies so already have tech teams. If you’re a non-tech company then it’s more difficult because you have to hire the team to run things (you’d have to do this anyway with the cloud). Dedicated is more appropriate here as you can pay for managed service so you don’t have to do anything.

          The costs for licenses and sysadmins were included in the Moz costs. Again, my single server example was supposed to compare the pure compute costs and there was an academic discussion of all the extra costs because they’re much more difficult to compare like for like. So that’s why I included the Moz analysis which did consider absolutely everything, vs AWS.

          • Misiek

            It’s sad to read that you are working on an “enterprise cloud”. I was working on two setups running on AWS, and EC2 instances were dying along with the hardware underneath them. There’s no magic, and the client wasn’t using any of the cheaper instances. They are like everyone else, using hardware without magic-powered functions: if RAM dies, it dies; when a disk fails, it fails. Using a single disk in your own colocated server is bad practice, but having RAID 1/5/6 is EXACTLY the same as using AWS (and of course cheaper).
            Load balancers, SAN storage or block storage like S3 are built on the same hardware as instances, and the only reason they seem to fail less often is redundancy, which you can build (still cheaper) on your own hardware. And you don’t have to buy a disk and head to the DC each time one fails: you can keep a few spare disks, and for a fixed price the DC’s remote hands will go to your server and replace parts.
            “Enterprise” guys always make me smile nowadays.

          • Mike Garcia

            Not sure where people get the idea that the host itself is magically redundant, I agree with you there, but there are certainly ways to build in “the magic”. Simply put:

            -Build your instance.
            -Create an AMI from that instance.
            -Set up an Auto-Scale Group behind a Load Balancer with the parameters of 1 instance needing to be active at all times.

            If your EC2 instance bombs out (which I’ve never had happen to me yet), the auto-scale group will automatically generate a new instance, re-attach the storage from the previous one (or create new storage from a snapshot), register itself with the Load Balancer, and you’re back up and running in 10-30 minutes, depending on what complexity you have built into the instance build options and whatnot.

            You can also set up health checks on the instance, so if something like memory in the underlying hardware suddenly dies and the instance starts performing terribly as a result, you can trigger an event to build a new instance in parallel, then drain user sessions from the poor performer until it has none active, then kill that instance.

            You can also run a redundant infrastructure in another Availability Zone or region, running on lower spec’d instances to save $$$, so if your main infrastructure dies for whatever reason, you can cut over to it, then leverage auto-scaling to build higher end instances on the fly, and drain sessions out of the lower performers and into the higher performers while you fix the issue at hand. This allows you to get your environment quickly working again, then replace your failover instances with production ready instances on the fly.

            All automated. Is it magic? Well, no. But is it effective? Yes.
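
            For concreteness, a minimal boto3 sketch of this pattern, assuming a pre-baked AMI and an existing classic ELB (the AMI ID and all names below are placeholders, not anything from this article):

                import boto3

                autoscaling = boto3.client("autoscaling", region_name="us-east-1")

                # Launch configuration built from the AMI you baked from your instance
                autoscaling.create_launch_configuration(
                    LaunchConfigurationName="web-lc",           # placeholder name
                    ImageId="ami-0123456789abcdef0",            # placeholder AMI
                    InstanceType="c3.4xlarge",
                )

                # Auto Scaling group that always keeps exactly one instance running
                # behind the load balancer, replacing it if it fails ELB health checks
                autoscaling.create_auto_scaling_group(
                    AutoScalingGroupName="web-asg",
                    LaunchConfigurationName="web-lc",
                    MinSize=1,
                    MaxSize=1,
                    DesiredCapacity=1,
                    AvailabilityZones=["us-east-1a", "us-east-1b"],
                    LoadBalancerNames=["web-elb"],              # placeholder ELB name
                    HealthCheckType="ELB",
                    HealthCheckGracePeriod=300,
                )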

            I’d be curious to see if all of David’s views have remained unchanged. This article was written in 2013, when AWS was still growing out of its infancy. Things have changed dramatically since then. For instance, things like “Optimizing utilization is much more difficult on the cloud” is a laughably incorrect statement in 2016.

          • I think this is still generally accurate in 2016, but only when looking at pure compute and networking. Cloud is usually more expensive when you compare like-for-like. The problem is this ignores the full portfolio of cloud products. The real benefits come from having access to things like managed databases, email delivery, DNS, queuing and data warehousing. The portfolios of AWS, Google and Azure have massively expanded and so the real cost saving is from using their other products where your developers don’t have to build their own.

            Unfortunately this is a lot more difficult to do cost comparisons for and so people will continue to compare based on per RAM, CPU or other similar hardware style metrics.

          • “The real benefits come from having access to things like managed databases, email delivery, DNS, queuing and data warehousing. ”

            That is what they like to promote, but in actuality these benefits are not that hard to build yourself with a few good systems engineers. DNS: set up 3 servers, or outsource to Dyn or DNS Made Easy. You can do that in a few days, or maybe a few hours if all you have are zone files and don’t need complex APIs (which you can do in-house cheaply anyway). Email: use SendGrid, MessageBus or Mandrill, or just set up Postfix and get certified by Return Path if you have a few months to warm up your IPs (you can use another provider in the meantime while getting your own ready). Managed databases: mLab, Compose or Pythian, or just have your systems engineers manage it and pay a support contract to Percona / Oracle / MongoDB if it is critical. One good systems engineer can manage hundreds of virtualized systems, optimize and perform operations on databases, and run other services, often on a part-time basis. Queuing: use a queue like Kafka or zeroqueue for zero cost on top of the servers; the support you need to manage them will be cheaper than AWS’s marked-up services. Data warehousing: use another provider. Use a CDN like Limelight or Akamai if the data needs to be public (much cheaper than Amazon), or B2, or just rent or lease servers with DAS.

            You can save a fortune over AWS if you are willing to shop around for other providers and can get a good systems engineer / manager to build and manage it. Sure, AWS can do all these things, but usually not better and almost never cheaper than the alternatives. They are easier to use than having to shop around and make these decisions piecemeal, but you can hire someone to make those decisions for you more cheaply than what you will spend on Amazon. The exceptions are when you absolutely need a great deal of flexibility over your infrastructure (in practice I rarely see product needs change so often as to justify the high compute prices at AWS), don’t have enough consistency in your needs to enter long-term vendor contracts, or can’t find or afford any good systems engineers to work for you.

            AWS is popular among execs because it is a one-stop shop with granular billing and a good reputation. AWS is popular with developers because everything is programmatically accessible without having to do much systems engineering. But in cases where cost savings are important (the very early “scrappy” stage, or the later stage where you are optimizing costs instead of just getting another round), I find AWS hard to justify unless your workloads are extremely dynamic or your data transfer needs are very low.

        • Dan

          That’s absolutely not how Amazon/Rackspace/Google works. When an Amazon server dies your instance is gone. There is no redundancy unless you design it into your application

  • cloudy mcjones

    If you are running an instance for a year you’d be silly to pay on-demand rates. Amazon’s reserved instance rates cut costs significantly (up to 70%), which definitely makes it a more cost-effective solution.

    You say that electricity is hard to quantify but it really isn’t because you provision and pay for the maximum you can pull. Those costs are fixed and can be easily quantified. Sure, this changes if you pay metered but you still pay for the outlet coming into your rack.

    The server, ultimately, isn’t the real discussion here. It is what is running on the server that is important. You run your business on the server. You run applications on the server. When you have an idea for something new, you want to quickly spin up that idea into an application and see how well it works. If it doesn’t work, you want to turn it off and cut your losses. If it does work then you need to, hopefully, quickly scale it. This doesn’t work in a hardware model. Also, these applications require a proper SDLC environment, so when you buy one server you usually buy two or three or four (dev, QA, stage, production). When your developers go home at night, your hardware keeps running. In the cloud, you turn the environment off and don’t pay for it.

    Some folks are also not looking to optimize for cost. Optimizing for agility is sometimes paramount. You can’t be very agile on hardware.

    Looking at it on a unit-by-unit basis is too simplistic and doesn’t really get to the root of why you’re buying that server or instance. You are buying these things because you have to run an application and solve a business problem. These business problems are not all the same and require different approaches. Simplifying it to say that when you spend $50,000 per month you should move to hardware is silly. Talk to Netflix, Airbnb, Pinterest, etc. and ask them why they’re still in the cloud when their spend is way more than $50,000 per month.

    • You’re correct that if you’re doing long running workloads then you’d use the reserved pricing, which is what my figures are from. See the first article: https://blog.serverdensity.com/cloud-pricing-vs-dedicated-pricing-cheaper/

      It’s easy to say how much you pay per unit of electricity, but it’s not easy to know how many units you’re using. And the units can vary between facilities: could be kWh, could be amps, etc. The difficulty is that you have to run your actual workload on the hardware. You don’t pay for the maximum you could ever draw; it’s some calculation derived from that. For example, Equinix charge 70% of what your maximum is. There are capacity reservation fees, utilisation fees and, in the UK at least, carbon offset charges.

      A unit by unit comparison is supposed to be simplistic, to show the base compute costs. That’s why I included the real world figures from Moz, too. They consider everything.

      You’re correct about the hardware flexibility – that’s a perfect use case for the cloud: a new project with unknown requirements, or things with truly flexible workloads, e.g. batch processing or handling traffic spikes.

  • Mark

    First off, I would disagree with your assessment that you need to use Amazon’s dedicated host option for a database to be able to utilize the IOPS from your SSDs. Amazon puts a lot of effort into preventing a noisy neighbor from affecting your performance. Dedicated hardware, in my experience, is more commonly used to support audit requirements.

    Secondly, your pricing is only using a 1-year reserve, when your article is discussing keeping the physical hardware for a long time. It seems like it would make more sense to do your calculations with a 3-year reserve.

    So if you go with a 3-year heavy reserve without dedicated hardware, you are looking at $4,185.99 a year ($5,804 upfront / 3 years + $0.257 hourly rate * 8,760 hours in a year).

    Note: this is not currently including AWS bandwidth charges, because those would vary based on load and architecture.

    When you priced out the Dell server, does that include support costs on the hardware? If so, how long is the support contract, and what does it cover?

    For the sake of this comparison let’s assume that the Dell server comes with top-tier support, with quick onsite response and replacement of failed hardware. This means over a 3-year period AWS will cost $2,761.96 more than hosting the physical hardware.

    So for an annual savings of $920.65 you can have a physical server that can never change size. If the hardware dies, you will have to wait for that hardware to be fixed before you can use it again, since this pricing does not allow for spare hardware. You have to proactively know how much hardware you will require, and it will take at least two weeks for Dell to deliver and for you to configure the hardware in your data center.

    You will also need networking equipment, which I don’t know whether that was included in your colocation price. You will need someone who knows how to configure the networking for your environment. The argument that a cloud admin could / would know how to configure the network and other components of a physical data center is a bit off the mark. They are two very different skill sets.

    At the end of the day, all of the additional work and time your employees will have to spend configuring and setting up the on-premises servers will more than likely cost you more than the $920.65 difference in price between AWS and on-premises.

    I’m not sure if Moz’s figures were also using dedicated hardware and 1-year reserves for AWS, but if they are, I don’t see that comparison as very valid either.

    We also have not discussed how you are backing up your database, and how much that storage is going to cost you.

    All in all, you seem to be taking a very narrow view of the pricing difference between on-premises and cloud pricing.

    • It’s true that the AWS pricing is for a 1-year term, and that’s because that’s what I was looking at in the first article, before I had the Moz figures: https://blog.serverdensity.com/cloud-pricing-vs-dedicated-pricing-cheaper/

      However, the Moz figures are based on 3 years for their hardware and for the AWS comparisons. They also include all the support costs, hardware, replacements, etc. So I think that addresses all your points.

  • It would be more interesting if you had more numbers on all the other infrastructure, e.g. networking equipment, and expertise required to run your own virtual compute infrastructure on top of dedicated hardware. You sweep that under the rug but networking in general is not easy and a misconfigured router here and there can be disastrous. You also need to know enough to make sure there are no security holes in how you are virtualizing the network and all the virtual machines that will run on it. Putting all that together no longer paints such a rosy picture.

    Docker and other lightweight virtualization technologies can go a long way towards mitigating some of these issues but there is still no way around getting the right people and expertise to manage all that hardware.

    • Networking is a lot more difficult to compare directly because AWS are at such a massive scale they need completely different networking infrastructure than if you were deploying your own setup in a colo facility. But numbers are useful to have and I’ll be posting the figures from the setup we choose as we continue the colo experiment at Server Density.

      That said, the Moz figures do include everything, especially networking and the team requirements. So it is considered, just not separated out into the component costs.

  • Chris Beck

    David, I can 100% agree with this. We moved from AWS to dedicated and saved a fortune in the process. This was cloud to dedicated, but colo would provide even further savings. We have 10TB of SSD across 20 servers, each with 72GB RAM, plus 4 application servers (all good specs), a 1 gigabit dedicated line out, a 48-port dedicated switch and an LB for ~$4,000/mo.

    • mag

      Interesting, can you recommend a good dedicated server provider with modern hardware and SSDs?

      • We are currently with Softlayer so I’d recommend them.

  • Martino Io

    I fully agree with the article. While there are many concepts explained in the wrong way, colocation is still the best option as long as you have the qualified technical skills to manage your own hardware and software. As long as you satisfy those requirements you will always save an incredible amount of money, as cloud providers usually have incredible margins (even if prices are continuously falling) and of course they don’t work for free.
    Hardware can be obtained from refurbished stock for incredibly low prices (slightly decreasing computing density), and implementing OSS software with support contracts from the developers will give you first-class support for a fair price.
    I manage such an infrastructure, currently 4 racks (1 full of storage, 3 of compute nodes), and the expenses we pay are peanuts compared to any “managed / cloud / PaaS whatever you want” contracts.
    70TB of storage over FC, 550 Xeon cores at 2.8GHz, 10TB of RAM and around 500 active VMs at the current time; average expenses for colo, including bandwidth and power, are 75K euros per year. I don’t even want to know how much it would be on hosted infra…

    • Which concepts aren’t correctly explained?

      • Martino Io

        For instance, the numbers in the comparison of the 1U server. If you want a fair comparison then compare apples with apples: have a fully working minimal deployment of OpenStack and compare against that, and you will definitely see more interesting numbers.
        It’s like comparing storage prices (per GB) between enterprise storage boxes from EMC/IBM and a USB HDD; both contain data, and of course if you were to use a USB HDD the price per GB would probably be a thousand times less, although they both achieve the same goal of storing the data.
        Then you do your maths assuming that the hardware will be leased. While this is a viable option today, there are businesses that either use second-hand HW or just buy what minimally accomplishes the task and add more resources later; this should shift the maths toward calculating (and comparing against) the TCO for a period of 3 to 5 years.

  • dtooke

    A hybrid model works best for us. The cloud is very convenient, but dedicated servers will save money in the long run. You have more control with dedicated and don’t have mysterious system reboots. I would like to see a cost comparison including support.

  • Most data centers have different resiliency than the web-scale architectures deployed at Google, Amazon, Facebook, and eBay. Start by investigating and documenting the resiliency of the enterprise, colocation, or cloud deployment. Make sure to align server, network, and storage system resiliency with the appropriate data center Tier rating.

  • Lucian Ilea

    If you live in Fantasy Land and believe that your hardware never breaks, then surely it is much cheaper to get your own hardware… it is even cheaper to pay 5 dollars per month, or 20 per year, for website hosting :)))
    For example, I have a 24-core server at IBM/SoftLayer for my 7 websites, including Atlantia.Online, and I pay 500 dollars per month. I have 32GB of RAM and a 960GB SSD, since SoftLayer was kind enough to double my initial memory and disk configuration for the same price, and I paid zero dollars for the first month :)
    On the other hand, I bought a 240GB Patriot QLC SSD for my daily needs and it broke after 2 months, having written 5,670GB (mostly Windows swap), or about 20 full disk writes… this is the medieval age, folks.
    So yeah, it is good for your consciousness to build your own server and see how it breaks in 2 weeks if you keep it on all the time and millions of people access it…

  • Reddy I. Bell

    Nice article, agreed, but as someone has said, the cloud is now groupthink, so you are preaching in the desert, I’m afraid. Nothing trumps running your own hardware, despite the gospel of fear from cloud vendors.

    I also agree that a good use of hardware-as-a-service (cloud, dedicated) is at the early stages / product launch, when your workload is unknown. As soon as I have a good idea of the workload (which more often than not can be known even before starting the project), I’d at least go dedicated.

    And regarding the lifespan of a server, quite frankly, I’d go as far as running a server for 10 to 20 years, as much as it can handle, by:
    1 – Having a single disaster recovery hot server in a totally different location (only for database servers)
    2 – Only making the things that have a high risk of failure redundant, like having several hard drives in the same machine.
    My philosophy is to work on my disaster recovery ability. Web server level issues should be fixed in a few minutes (by temporarily provisioning in the cloud), and data server issues in about one hour.
    One thing people tend to forget is that the less hardware you have, the more impact a failure will have on you; and the more hardware you have, the more likely you are to have a failure. That’s why I’m not concerned by Rackspace-class statistics telling us that 10% of their 50,000 hard drives died last year.
    The bottom line: people should monitor their servers’ health often and work on their disaster recovery ability by practicing at least monthly, rather than crossing their fingers and/or making the cloud provider richer.

  • Misiek

    This should be played before and after cloud presentations. There is no problem with the cloud when you have a lot of money and investors waiting with more money; then you just need to collect some money and you are ready for the “cloud”. Magic words like “cloud” and “big data” hit managers’ heads at every presentation, and then they force the use of cloud and big data in their companies. It’s just a waste of money. I’m writing this 2 years later and this is still valid.
