Written By Ben Rockwood.
We are excited to share this guest blog from Ben Rockwood. Ben is a former Taoser and a huge advocate of the Taos community. You can currently find him at Chef, leading from the front as Director of IT & Operations.
I’ve always found it funny that it seemed to take years for people to become comfortable with the term “cloud”. When I was at Joyent, we created what would become one of the first public Infrastructure as a Service (IaaS) clouds, and I spent nearly three years giving very basic “What is the Cloud?” talks. Even Larry Ellison, CEO of Oracle, famously said: “Maybe I’m an idiot, but I have no idea what anyone is talking about. What is it? It’s complete gibberish. It’s insane. When is this idiocy going to stop?” Yet even though “cloud” confused and confounded people, once they understood what it was, they had no trouble grasping the notion of “hybrid cloud”.
For any company that began life before the era of cloud, this raises a big architectural problem: how do I connect the computing resources I’ve already invested in with these new resources in the cloud?
Hybrid cloud creates a host of problems. For as long as we’ve had servers, we’ve been trying to reduce the complexity of integrating a wide variety of components into a manageable system. In the 1990s the solution was to buy fully integrated systems from companies like IBM, HP, or Sun, complete with complex systems-management software that gave you a “single integrated view of your data center”. With the coming of virtualization, the solution was to virtualize everything and run it under a single comprehensive management layer (namely, VMware). When cloud came along, a race began to create an on-premises equivalent architecture that would deliver many of the cloud’s benefits inside your own organization, a battle ultimately won by OpenStack.
Despite all the changes, today we have more options available to us than ever before, which allows us to solve challenges in new and remarkable ways. To take advantage of this flexibility, we need to lean on three core technologies as much as possible to reduce complexity and increase speed: Configuration Management, Application Automation, and SD-WAN.
Configuration Management tools like Chef have been the cornerstone technology for taking advantage of the speed of the cloud. Now that compute resources can be obtained in seconds and are billed by the hour, we must be able to configure those resources and put them to use just as quickly. Moreover, by leveraging a tool like Chef, which abstracts the change you want to make from the implementation details of a specific operating system, we can reduce the complexity inherent in automating operating systems and distributions that each differ slightly. Once your configuration is codified in this way, you can quickly and reliably bring new resources online, in your data center or in the cloud, and have them providing value in minutes or seconds.
Additionally, as systems become more and more distributed, variability between nodes must be kept to a minimum. By codifying your complete system configuration into a tool like Chef, you can ensure that every system is identically configured.
Whether you’re creating infrastructure in your data center with VMware or OpenStack, or deploying on AWS, Google, or Azure, it’s essential that you have one single system for configuration management that is intelligent enough to do the right thing regardless of the underlying platform.
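As a minimal sketch of what that abstraction looks like in practice, a Chef recipe can use the node’s platform family to pick the right package and service names, so the same code works on Debian-family and Red Hat-family systems alike (the package and service names here are illustrative):

```ruby
# Illustrative Chef recipe: the package and service resources use the
# provider appropriate to the node (apt, yum, systemd, etc.), and we
# only vary the name where distributions genuinely differ.
apache_pkg = node['platform_family'] == 'rhel' ? 'httpd' : 'apache2'

package apache_pkg

service apache_pkg do
  action [:enable, :start]   # start now and on boot, on any platform
end
```

The same recipe then converges identically whether the node was provisioned by VMware, OpenStack, or a public cloud provider.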
Docker has given the world an exciting new way to improve the speed and reliability with which we deploy applications. In the past we were forced to leverage a variety of different technologies to ensure that an application and all its dependencies were properly deployed.
In the beginning we used a “Run Book”: a (hopefully) detailed list of instructions for installing your application and its various dependencies, configuring them all properly, tuning the environment, starting and stopping the application, and possibly even troubleshooting the inevitable problems. In reality, most Run Books were little more than collections of notes that were poorly written and even more poorly understood. Over time these instructions might be distilled into one or more install scripts, but that only reduced the number of deployment steps; refining those scripts to handle every possible problem and edge case took a great deal of time and care. Moreover, these procedures and scripts were typically written for one very specific deployment scenario, making transitions to new operating systems or distributions extremely difficult. Moving from Red Hat Linux on Dell servers in your data center to Amazon Linux in the cloud could be a daunting task.
Configuration Management worked so well for infrastructure that it was adopted as a solution for application automation as well. But as soon as you started deploying your application very frequently, or running multiple applications on the same host, you ran into the problem of ensuring that Cookbooks didn’t conflict with one another and that all parts of the system’s configuration worked well together. This required extensive testing at every level, both locally before committing a change (using tools like Test Kitchen) and in pre-production test environments.
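That local testing loop is typically driven by a small Test Kitchen config in the cookbook’s repository. A hypothetical minimal example (cookbook name and platform versions are illustrative) might look like:

```yaml
# .kitchen.yml — converge and verify a cookbook on two platforms
# before a change is ever committed.
driver:
  name: vagrant          # spin up throwaway local VMs

provisioner:
  name: chef_zero        # run Chef without a server

platforms:
  - name: ubuntu-20.04
  - name: centos-7

suites:
  - name: default
    run_list:
      - recipe[myapp::default]   # hypothetical cookbook name
```

Running `kitchen test` then converges the cookbook on each platform and runs its verification suite, catching cross-distribution conflicts before they reach a shared environment.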
Despite these problems, there is an elephant in the room that isn’t sufficiently appreciated: most applications today are “web” applications, whether a simple REST API or a single-page app, and to develop and test them effectively you need to run them on your laptop or workstation in a simple and efficient manner. Docker provided an ideal solution to this problem by allowing developers to run the services they needed for development and testing, such as NGINX or PostgreSQL, right on their laptops, and then to bundle their code together with the services they used in development into a new container. This explains why the number of people using Docker in development is significantly higher than the number running it “in production”.
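A typical laptop setup for this is a small Compose file that brings up the supporting services alongside the application. This sketch is purely illustrative (image tags, ports, and credentials are assumptions, not recommendations):

```yaml
# docker-compose.yml — run an app's dependencies locally for development.
services:
  web:
    image: nginx:1.21          # illustrative tag
    ports:
      - "8080:80"              # browse the app at localhost:8080
  db:
    image: postgres:13         # illustrative tag
    environment:
      POSTGRES_PASSWORD: devonly   # development-only credential
    ports:
      - "5432:5432"
```

A single `docker compose up` gives every developer the same NGINX and PostgreSQL versions they will later ship alongside their code.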
By leveraging Docker containers, infrastructure teams can now focus on building infrastructure ideally suited to running containers, leveraging Configuration Management and other tools to ensure consistency and reliability, while enjoying the benefits of simplified application deployment.
Chef has introduced a new product, Habitat, which enhances the capabilities of containers with embedded automation: health checks, the ability to interact with other containers via a gossip protocol, the ability to react to changes in other containers, and much more. When combined with Configuration Management at the host level, this allows for applications that can be packaged once and deployed anywhere: on your laptop, in your data center, or in the cloud!
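To give a feel for the packaging side, a Habitat package is described by a plan file, with lifecycle hooks (such as a health check) living alongside it. A minimal, purely illustrative sketch, with hypothetical package and origin names:

```shell
# habitat/plan.sh — a minimal, illustrative Habitat plan.
pkg_name=myapp          # hypothetical package name
pkg_origin=myorigin     # hypothetical origin
pkg_version="0.1.0"
pkg_deps=(core/node)    # runtime dependency pulled from the core origin

do_build() {
  return 0              # nothing to compile in this sketch
}

do_install() {
  # Copy the application source into the package's install prefix.
  cp -r "$PLAN_CONTEXT/.." "$pkg_prefix/app"
}
```

The Habitat supervisor then runs the package, executes its hooks, and gossips service state to its peers, which is where the health checks and reactions to other containers described above come from.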
Several years ago, “Software Defined Networking” (SDN) became all the rage. OpenFlow was going to change networking as we knew it, and every network device would have an easy-to-use API. While some of that did happen, the complexity of the solutions has led most of us either to depend on big vendors like Cisco, Juniper, or Big Switch more than we ever did before, or to stick with traditional CLI-based network interfaces (even though you can now drive the CLI via REST). Despite these setbacks, the adoption of these technologies by telcos has revolutionized the options available for WAN connectivity, affectionately called “SD-WAN”.
SD-WAN provides new and exciting options for enabling hybrid cloud without relying on slow and potentially unreliable site-to-site VPNs across the public internet. Many telcos have long been able to interconnect diverse sites by means of large, private MPLS networks, but the fiber connectivity into them was typically prohibitively expensive.
Today, thanks to falling bandwidth and fiber costs and increasingly flexible connectivity in and out of these MPLS networks, we can provide Gigabit connectivity between your office, data center, and cloud providers in a secure, fast, and affordable way. Equinix’s Cloud Exchange provides a wide range of options for private, high-speed connections to popular cloud services. New services leveraging these rich interconnects and SDN technology, such as Megaport, reduce the pain, hassle, and cost of adding new interconnects by giving you a single fiber connection that can then be virtually attached to AWS as a Direct Connect, to Azure or Office 365 as an ExpressRoute, and more.
Alongside these connectivity capabilities, telcos now offer “Network Function Virtualization” (NFV), a buzzword meaning that they can provide capabilities like firewalls, anti-virus, intrusion detection, and email services via virtualized instances running in their own data centers on the other side of the fiber, rather than on an appliance on your premises. (In some cases on-premises appliances are being replaced by virtualized x86 servers offering these services on your site but fully managed by the carrier.)
These exciting enhancements in WAN capabilities allow for a better, faster, more secure hybrid story than we believed possible several years ago. Having sub-10ms latency between an AWS EC2 instance and your on-premises NetApp filer opens up some really awesome new options!
These technologies combine to create an environment that allows for incredible control and reliability, by reducing variability and ensuring quality in our ability to connect to, configure, and deploy into cloud services.
It was once believed that cloud would take over the world and no one would run their own infrastructure. While that may be true for some customers, the reality is that servers in the cloud are still much more expensive than physical servers, and the Total Cost of Ownership (TCO) savings are only realized if you haven’t already invested in your own infrastructure and you are very disciplined in your cloud utilization. Thanks to over-provisioning and poor housekeeping habits, many customers today spend far more on their cloud bills than they would if they brought it all in house.
Hybrid cloud allows for the best of both worlds and the capabilities to maximize its potential are only getting better year by year.