Why Kubernetes is taking DevOps by storm
The Coming of Borg
In the mid-2000s, Google unleashed Borg upon its datacenters to assimilate machines and manage clusters of containers across them. The system combined high availability and efficient resource utilization with an easy way for developers to consume datacenter resources for their workloads. Everything at Google runs in containers, and by 2014 the company was spinning up over two billion containers per week. Borg has been there now for over a decade, the puppet master behind the scenes.
In 2014, Google distilled the lessons of Borg into a new open-source system, released under the now famous name Kubernetes (often abbreviated "K8s"). Like the word "cybernetics," Kubernetes takes its name from the Greek word for pilot, steersman, or navigator (which is why Google's icon for it is a ship's helm). It should be no surprise, then, that K8s is an orchestration system: it steers containerized workloads (which K8s groups into "Pods") across the compute resources it has been given control over (the physical machines or VMs that K8s calls "Nodes").
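To make the Pod idea concrete, here is a minimal sketch of a Pod manifest, the YAML description a user hands to K8s so it can schedule the workload onto a Node (the name `hello-pod` and the `nginx` image are illustrative choices, not anything from this article):

```yaml
# A minimal Pod: the smallest deployable unit in Kubernetes.
# K8s reads this spec and places the Pod on an available Node.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # hypothetical name for this example
spec:
  containers:
    - name: web
      image: nginx:1.25  # any container image works here
      ports:
        - containerPort: 80
```

A Pod can hold one or more tightly coupled containers that share a network namespace; in practice, most Pods wrap a single container, and higher-level objects manage Pods for you.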
Just a few years later, K8s had become a household name among DevOps folk. Today, not only does Google Cloud Platform (GCP) offer managed K8s as a service (GKE); AWS does too with EKS, as does Azure with AKS. If you think containers are just for the public cloud, think again. A couple of weeks ago at the opening VMworld keynote, VMware reaffirmed what it has said before: the company is making a huge bet on Kubernetes and sees it as a central part of its offerings going forward. Specifically, it now offers PKS (Pivotal Container Service), an enterprise-grade, on-prem, managed Kubernetes service. Meanwhile, Red Hat has placed its own bet on K8s with OpenShift.
Wait, but why? Why is Kubernetes the popular new kid in town? Well, by 2013, running workloads in Docker containers had become wildly popular. Containers proved to have myriad advantages over running directly on physical machines, or even on virtual machines: they boot far faster, have a much smaller footprint, and are easier to port than VMs. With lightweight containers at the ready, developers broke their code down further and further into what is now called microservice architecture, and in many cases ran each microservice in its own container.
This was great, until it wasn't: containers came with drawbacks of their own, as did running code in dozens of tiny containers scattered around the datacenter. To run code like this efficiently, the tech community needed an orchestration system capable of bringing order to the chaos. So when Google released Kubernetes in 2014, it was a sight for sore eyes. The community not only used it, but contributed to it, and eventually began extending it through a robust plugin system.
It’s easier to look backward than forward, but the future looks bright for Kubernetes. Though it is still a platform in the process of maturing, it has come a long way in its few short years. We are just now getting some of the robust complementary tools, like PKS and Istio, that are signs of that maturity. There is still plenty of room for the tools and best practices that will cement K8s as a pillar in the evolution of computing.
DevOps seems to be in a rapid state of transition, very similar to the Agile transition that the programming world went through not long ago. Business leaders want speed and agility, infrastructure teams want security and control, and developers want portability and ease of consumption. All of this drives a growing wave of container adoption, cloud native best practices, microservice architecture, and of course, Kubernetes.