
During a recent project, I designed a benchmark suite that ran on 3 competing cloud providers instances. The results were quite astounding. Buying a cloud provided virtual machine isn’t like buying a physical server. Differences between X86 server specifications and manufacturers have narrowed down most of the variation in performance between physical server models, to the point where a Dell 2-socket server performs much like an HP 2-socket server, same with IBM and Sun/Oracle. I’m sure patriotic manufacturer employees might beg to differ, but most buyers consider the platforms interchangeable from a performance standpoint. The main differences these days are in power consumption, expandability, and hardware management capabilities.

The inherent similarity in the raw performance of physical x86 servers leads to the unfortunate assumption that one cloud provider’s 1 CPU/4 GB RAM instance will perform much like a 1 CPU/4 GB instance from another provider. This couldn’t be further from the truth. Cloud providers use vastly different hardware, hypervisors, networking gear, and storage equipment. They also have different policies for allocating resources to users, and much of this is opaque to their customers.

During a recent project, I designed a benchmark suite that ran on three competing cloud providers’ instances. The results were quite astounding. We saw drastic differences in CPU and storage performance, and the current market leaders weren’t necessarily the fastest! Some providers showed consistent performance; others were wildly variable. Providers also exhibited different scaling characteristics as you move up through their larger instance sizes: sometimes performance scales up smoothly, and sometimes it climbs in plateaued steps.

Whether you are building a private cloud, using a vendor’s public or private cloud, or already well down the path of cloud computing, I can’t stress enough the importance of running a benchmark to get a quantitative analysis of what you are getting for your money. You may be very surprised at the results.

There are at least four things you should be benchmarking (minimal sketches of the first three follow this list):

  • CPU/Memory — integer, floating-point, memory access & manipulation, and multi-threaded performance
  • Storage — latency, IOPS, and throughput
  • Network — latency and throughput, measured to Internet destinations, to other regions of the same provider, and between instances within the same region
  • Management — the speed of instance creation, destruction, changes, and snapshots, plus the failure rate of requests
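
To make the CPU/memory item concrete, here is a minimal sketch of the kind of micro-benchmark involved. Everything in it is an illustrative assumption rather than part of our actual suite: the trial count, the loop sizes, and the use of pure Python (a real suite would use a compiled tool such as sysbench to take interpreter overhead out of the picture).

```python
import time
import statistics

TRIALS = 5  # repeated trials expose run-to-run variance, not just the mean

def time_it(fn):
    """Run fn several times and return per-trial wall-clock seconds."""
    times = []
    for _ in range(TRIALS):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return times

def integer_work():
    # Tight integer loop: a crude proxy for integer throughput.
    total = 0
    for i in range(2_000_000):
        total += i * 3 // 7
    return total

def float_work():
    # Floating-point loop: multiply/divide heavy.
    x = 1.0001
    for _ in range(2_000_000):
        x = x * 1.0000001 / 1.0000001
    return x

def memory_work():
    # Allocate and traverse a large list: stresses memory access patterns.
    data = list(range(2_000_000))
    return sum(data[::16])  # strided read

for name, fn in [("integer", integer_work), ("float", float_work), ("memory", memory_work)]:
    times = time_it(fn)
    print(f"{name:8s} mean={statistics.mean(times):.3f}s "
          f"stdev={statistics.stdev(times):.3f}s")
```

The point is less the absolute numbers than running identical code on each provider and comparing both the means and the spread.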
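For storage, a similarly hedged sketch: it times small synchronous writes, where the fsync forces each write to stable storage, approximating the write latency a database would see. The file path, 4 KiB block size, and write count are assumptions for illustration; a real benchmark would use a purpose-built tool such as fio.

```python
import os
import time
import statistics

PATH = "latency_test.bin"    # hypothetical scratch file on the volume under test
BLOCK = b"\0" * 4096         # 4 KiB write, a common database page size
WRITES = 200                 # assumed iteration count

latencies = []
fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o600)
try:
    for _ in range(WRITES):
        start = time.perf_counter()
        os.write(fd, BLOCK)
        os.fsync(fd)  # force the write to stable storage before the timer stops
        latencies.append((time.perf_counter() - start) * 1000)  # milliseconds
finally:
    os.close(fd)
    os.remove(PATH)

latencies.sort()
print(f"median={statistics.median(latencies):.2f} ms  "
      f"p99={latencies[int(0.99 * len(latencies))]:.2f} ms  "
      f"IOPS~{1000 / statistics.mean(latencies):.0f}")
```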
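And for network latency, a sketch that times TCP connection establishment to a target host. The host, port, and sample count are placeholders; to match the list above, you would point it at Internet destinations, endpoints in other regions, and peer instances within the same region.

```python
import socket
import time
import statistics

HOST, PORT = "example.com", 443  # assumed target; substitute your own endpoints
SAMPLES = 10

latencies = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=5):
        pass  # connection established; close immediately
    latencies.append((time.perf_counter() - start) * 1000)  # milliseconds
    time.sleep(0.2)  # brief pause between samples

print(f"{HOST}: median={statistics.median(latencies):.1f} ms  "
      f"max={max(latencies):.1f} ms over {SAMPLES} connects")
```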

But that’s just the beginning; in our benchmark, we measured these components and also ran a number of real-world workload benchmarks that simulated an active website, an OLTP database, and a big data Hadoop cluster. We also varied the time of day and ran the benchmark components multiple times across several weeks to get a statistically valid set of data.
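
To give a flavor of that analysis, here is a small sketch of how repeated runs might be summarized. The per-run numbers are invented purely for illustration; the point is to compare providers on both the mean and the spread (coefficient of variation), since a provider that is fast on average but wildly variable may be the worse choice.

```python
import statistics

# Hypothetical per-run scores for one benchmark component, collected at
# different times of day across several weeks (units are arbitrary).
runs = {
    "provider_a": [102, 98, 101, 99, 100, 103],   # tight cluster: consistent
    "provider_b": [140, 70, 135, 65, 150, 60],    # fast on average but variable
}

for name, scores in runs.items():
    mean = statistics.mean(scores)
    stdev = statistics.stdev(scores)
    cov = stdev / mean  # coefficient of variation: spread relative to the mean
    print(f"{name}: mean={mean:.1f}  stdev={stdev:.1f}  CoV={cov:.2f}")
```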

If you’re interested in learning more about cloud benchmarking or want to discuss designing and running a benchmark of your own, please reach out.

To learn about Taos’ cloud benchmarking outcomes for Go Daddy, here’s our press release.