Public cloud is not cheaper!
There, I said it! Public cloud is not something that should be adopted purely for cost savings. Frankly, I don’t know a single CIO, CTO, or VP of Engineering or Operations who would disagree with me. If you are one of the many folks trying to justify public cloud as a cost-advantage lever to pull, please consider the many other opportunities that public cloud offers. If you are one of those who try to kill public cloud migrations (you know who you are) with a cost argument, you may be excused. The sooner we all stop pretending that public cloud is only a cost tool, the sooner we can begin to accept the many benefits of moving to a public cloud.
Let me see if I can help.
By the time you’ve calculated all the re-engineering, loss of capital depreciation, and operating costs of a public cloud migration, you are likely wondering why anyone would ever make a move to the public cloud. Let’s go through a few of the reasons companies often make their initial, or eventual, move to the public cloud.
One reason you may make a move to the public cloud is to save costs.
Wait?! What?!?! I just said that cost was not a lever! This sounds contradictory, so we’ll need to dive a bit deeper, because some companies genuinely can reduce costs by moving to the public cloud.
Early-stage startups (<12 mos) are perfect use cases for public cloud cost advantages. Not only is it incredibly simple to establish a footprint within a public cloud (e.g. all you need is a credit card and a few hundred dollars), but should the startup become wildly successful or completely fail, the time-to-market and time-to-shutdown are nearly immediate.
Another area where I’ve seen extensive cost savings from moving to the public cloud is where companies have significantly neglected their infrastructure. In these cases a technical refresh is a considerable expense, and moving to a public cloud can, in some cases, be less expensive than rolling your own update.
Slightly broader than a technical refresh: where IT has mismanaged its infrastructure, a public cloud migration will typically calculate out as cost-neutral or better.
As applications grow and require geographical distribution, there are two primary technical problems to solve: making the application work in an active/active configuration, and finding a place to host these highly distributed applications around the globe. Most public cloud offerings can solve the latter; it is the application owner’s decision to pursue the active/active component.
Anyone who has run one or more data centers knows the pains of doing so. There is physical infrastructure, cabling, power, cooling, etc., and one must maintain all of the above. While this is doable for some larger organizations, when multiplied across multiple regions and as many time zones, the daunting task becomes untenable. Furthermore, should a regional deployment strategy not work out for the business, the technical budget is left holding the proverbial bag.
Much like a faucet that can be turned on and off with ease, public cloud offers the same. Testing a market, be it domestic or international, is best done where the test can be performed without a significant time or effort commitment. Compared to building out a data center on your own, public cloud is incredibly easy on the timeline and your budget. If a test is successful, and the market is confirmed, time is still on your side should a move in-house be desired.
Hang in there, this is about to get political…
The laws of the world are changing rapidly. Given the security theater we are forced to perform within, security of data means everything. Whether it’s a credential exposure, a personal information leak, or a public defacement threat, restrictions on data usage differ in each country. Most commonly in developed countries, laws are being put into place to ensure their citizens’ data is kept within the confines of that country. Political borders exist!
Given these regional requirements for data storage, securing data within a country or continent cannot be done without a physical presence. As mentioned in the section above regarding geography, establishing a footprint globally (even if only to test the market) normally demands significant effort and cost, yet it can be solved quite swiftly using a public cloud provider (assumption: your public cloud provider is deployed within your regional requirements).
Ahhhh…the holy grail of IT expenditure quantification: how to get more out of your employees without increasing staff or compensation. One mechanism to increase productivity is to get the hell out of your most expensive people’s way!
When considering a workflow of years past, an idea going from concept to production took no less than months. Nowadays, with public cloud provisioning taking seconds to minutes, an idea can not only be imagined, it can be productionalized within the same day. Ideation is only valuable when it can be consumed. How powerful is this? Just ask the many software engineers who engage in hack-a-thons every couple of months, only to find their ideas come to life within weeks…now that’s exciting!
When engineers are given free rein over their deployments, and a few guardrails to make sure things don’t go wrong, true productivity can be realized. From scale to feature, engineers who are allowed to try out their new ideas unencumbered will soon realize the potential of public cloud.
It may be tempting to simply give every productivity-encouraged individual unbounded access to public cloud, but there are a few cautions along the way. Consider the cost implication of massive scale or unintended feature misuse. Consider the complexity of a solution that utilizes the many “cloud locking” services. Consider the productionalization and hand-off to an SRE team.
What does this productivity look like? Take a look at the automation solutions that each public cloud provider offers as part of its suite of services. Whether these are managed services such as database failover pairs established with a checkbox or a single API call, or more sophisticated application scale-out logic, the automation options are vast, and they come without the need for you to invent them.
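As a rough sketch, the scale-out logic a provider’s autoscaler implements on your behalf often boils down to a proportional, target-tracking rule like the one below. The function name, thresholds, and bounds are illustrative assumptions of mine, not any provider’s actual API:

```python
import math

def desired_capacity(current_instances, cpu_utilization,
                     target=0.60, min_instances=2, max_instances=20):
    """Naive target-tracking scale-out rule of the kind managed
    autoscalers implement for you: size the fleet so that average
    CPU utilization lands near the target, clamped to sane bounds."""
    if current_instances <= 0 or cpu_utilization <= 0:
        return min_instances
    # Proportional rule: more load relative to target -> more instances.
    raw = current_instances * cpu_utilization / target
    return max(min_instances, min(max_instances, math.ceil(raw)))
```

For example, a fleet of 4 instances running at 90% CPU against a 60% target would be grown to 6 instances; the point is that this decision loop is something you consume, not something you build.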
Many of the components commonly found in IT data centers and server rooms are now available within a public cloud. Whether network, storage or compute; public clouds offer a considerable amount of infrastructure as a service (IaaS) components which can replace an entire traditional data center. In more advanced clouds, Platform as a Service (PaaS) offerings provide even higher levels of consumable services.
Other non-infrastructure services also exist. Data storage, logic hosting and monitoring tools exist and provide quick integration into existing system designs. Why run a database when your public cloud provider can run one for you? Why run a server if you could be serverless? And why build your own system monitoring or log aggregation solution when a service can be consumed?
Take the example of establishing a database server with validated backup and a failover companion in a second data center. Previously this would take a DBA countless hours (or days) to fulfill. Now a public cloud can provide the same service, with the same resiliency, within minutes.
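To make that concrete, here is a minimal sketch of the request an application team might assemble for such a managed, failover-paired database. The parameter names mirror the general shape of AWS RDS’s `create_db_instance` call, but treat the specifics (identifiers, sizing, retention) as illustrative assumptions; check your provider’s actual API:

```python
def managed_db_request(instance_id, engine="postgres"):
    """Build the parameters for a provider-managed database with a
    standby in a second location and automated, validated backups.
    One API call stands in for hours (or days) of DBA work."""
    return {
        "DBInstanceIdentifier": instance_id,
        "Engine": engine,
        "MultiAZ": True,                # synchronous standby in a second zone
        "BackupRetentionPeriod": 7,     # automated daily backups, kept a week
        "DBInstanceClass": "db.m5.large",  # illustrative sizing only
    }
```

The interesting part is what is absent: no hardware procurement, no replication setup, no backup verification scripts — the provider owns all of it behind that one call.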
We can also consider the example of application metrics and logging. Previously a system administrator would take days to build out the hardware and software infrastructure; now a cloud provider can offer this within seconds, by simply provisioning access to the service. Extending the solution, providing that access and maintaining it across multiple locations around the world is a significant undertaking that a public cloud provider has already implemented.
How about a CMDB? Yup, that’s there in the public cloud as well. As a matter of fact, you can’t really run a public cloud successfully without knowing where everything is. This core functionality is part of a public cloud, not an afterthought or a reconciliation exercise. This catalog of infrastructure and services is available to all consumers of public cloud.
More advanced solutions exist that few companies are prepared to implement themselves. Many machine learning services provide environments which can be utilized given a diverse set of data inputs. These solutions range from image detection and analysis to threat detection. The more contributions to the learning models, the richer the solutions become.
Deploying to the cloud is not the same as deploying to a traditional data center. Servers are not resilient, latency is variable, and storage throughput is unpredictable. All of this may turn someone away from the public cloud, but consider what it takes to design for resiliency in the face of such adversity. What you end up with is a much more stable and reliable application.
Gone are the days of pride of uptime, now there is pride knowing that a resource will come online only for as long as necessary. Given these transient (and in many cases immutable) scenarios, the resiliency of each application must be moved up the stack and into the software itself.
Whether through retry logic, asynchronous processing or transactional consistency, the logic that was placed within the servers and databases in the past is now moved into the application itself. This is great progress and one that every software engineer should think about when building the next Facebook, Twitter or MySpace.
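A minimal sketch of what moving resiliency up the stack looks like in practice: the retry helper below (the names are my own, not a particular library’s) assumes that failures worth retrying raise a dedicated exception type, and backs off exponentially with jitter so many clients don’t retry in lockstep.

```python
import random
import time

class TransientError(Exception):
    """A failure worth retrying: a dropped connection, a throttled call."""

def with_retries(fn, attempts=5, base_delay=0.1):
    """Call fn, retrying transient failures with exponential backoff
    plus jitter; re-raise once the final attempt fails."""
    for attempt in range(attempts):
        try:
            return fn()
        except TransientError:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the failure
            # 2**attempt gives exponential growth; jitter spreads clients out.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
```

Wrapping a flaky dependency call in a helper like this is exactly the shift described above: the application, not the infrastructure, absorbs the variability.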
Like most things that cannot be seen with one’s own eyes, public cloud security is a very real and very scary proposition. And, yet, consider the impact of the many thousands of applications contributing to the security of the public cloud. Whether in the form of attracting attackers, or offering forensic evidence, contributing security experiences for group review will only improve the security of any solution. Nowhere is this more true than in the public cloud.
Coupled with the “group effect” is the vast security expertise of the dedicated staff. Public cloud providers routinely recruit, hire and retain some of the most skilled security-minded people in our industry. Compare this with the typical in-house approach of, in the best cases, one or two people dedicated to security. Much more likely, security is something a system or application administrator does as a portion of their job.
Public clouds present such an enormous attack surface that, in order to offer a secure platform, they must be hardened, monitored and improved continuously. While this level of effort and skill certainly can be applied to any technical environment, the cost of implementing the required rigor is typically only viable at scale. Couple this with the fact that some of the best security folks in the industry routinely seek out security challenges, and you have a focused group of experts at your disposal.
While it is hard to innovate under pressure, changing the game requires changing the players. In the cloud case, the game changes from a known and controlled environment to one that is, frankly, no longer within your control.
Public clouds tend to have lower infrastructure reliability. This is by design! They also tend to offer reliable services. This is also by design! These changes encourage, if not require, rethinking how applications are built and deployed, utilizing services over a roll-your-own approach.
Once you move to the public cloud you can no longer call a virtual infrastructure operator and request increased reliability, and you can no longer call a storage administrator to increase the performance of a data volume. And while SLAs exist in-house, they are rarely published within IT organizations. The question of where the infrastructure targets lie has led me into many a debate over where performance and reliability must be built. Is it in the application? Or is it in the infrastructure?
The short answer is that the stability of a platform and its application is shared. In the case of public cloud, SLAs are published and are not open for debate…this is by design. This understanding helps us hyper-focus on the application, because the infrastructure is no longer a moving target and no longer malleable. If greater performance or reliability is desired, the only option is to build it into the application. This truly is an innovation catalyst!
When comparing a private data center to a public cloud, you will find many comparisons simply cannot be made. Some will try, and cost is one comparison that is often attempted. What is lost in most comparisons are the opportunities I’ve outlined above.
I’ll say it again…public cloud is not likely to be cheaper. But should cost be used as a comparison, cost is only relative to the benefits that are provided. If I told you that a doubling of operating costs would increase your revenue by 4x or more, would that be a good decision? Probably.
Attempting to solve for common data center services, geographical distribution, dynamic scaling, and a collective security mindset can be done by almost anyone with an unlimited budget and unlimited time. The public cloud has many of these components available, and in many cases they are quite mature.
There will always be new reasons to migrate to the cloud, but I hope these current reasons encourage you to explore options beyond simply cost. Determining your reasons will likely have the largest impact on the overall success of a public cloud deployment. In the end, your decision is yours alone and cannot be overly generalized, but you must consider more than money.
As you progress, remember your data, your employees and your requirements for innovation. Focus on how these aspects will be supported, and hopefully enhanced, by a move to public cloud, making certain to bring your fellow colleagues and leaders along for the ride.
What are your reasons for moving to the cloud? Please share your ideas in the comments below.