Last week, Google’s Cloud Next conference kicked off with a major announcement: Cloud Services Platform is expanding and being rebranded as Anthos. The platform could already be used to deploy Kubernetes workloads on prem and in GKE (Google Kubernetes Engine). Now it goes even further, letting IT staff deploy their applications across Kubernetes running on vSphere (on prem), GCP, AWS, and even Azure.
Kicking off the GCP conference with a tool that spreads your workloads across providers (Google’s competitors) sends quite a signal. It’s a good reminder that in IT, there’s a very fine line between partners and competitors. And it’s an acknowledgement that despite the financial interests in vendor lock-in, large enterprise clients are going with a multi-cloud strategy, and being the primary tool is more pragmatic than trying to be the only tool. With Anthos you can use many clouds, but fundamentally you control it from a Google platform.
Components of Anthos
Anthos is made up of several things, but most fundamentally, it is Kubernetes with Istio. Istio is a Service Mesh technology that lets services deployed far and wide be accessed in a consistent way from your application code. It also does a lot more, such as security and analytics; check out our blog on Kubernetes and Istio for more details. For this post, we’re going to focus on Istio’s ability to make service discovery easy and consistent, and to load balance across multiple instances of your service (for example, one hosted on GCP, another on vSphere, and a third on AWS).
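As a rough sketch of what that cross-environment load balancing can look like in Istio configuration (the `inventory` host name and the `cluster` subset labels here are hypothetical illustrations, not taken from Anthos documentation):

```yaml
# DestinationRule: defines named subsets of the inventory service,
# distinguished here by a hypothetical "cluster" label on the pods.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: inventory
spec:
  host: inventory
  subsets:
  - name: gcp
    labels:
      cluster: gcp
  - name: vsphere
    labels:
      cluster: vsphere
---
# VirtualService: splits traffic for "inventory" across the subsets,
# so callers keep using one name while Istio spreads the load.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: inventory
spec:
  hosts:
  - inventory
  http:
  - route:
    - destination:
        host: inventory
        subset: gcp
      weight: 50
    - destination:
        host: inventory
        subset: vsphere
      weight: 50
```

The key point is that the calling code never changes: it addresses one logical service, and the mesh decides which environment actually serves the request.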
To be more specific, you would have Kubernetes running on each of those providers, and what I’m calling service instances are actually Kubernetes Services. But Istio also has its own concept of a service, and an Istio Service can be backed by several things. So in this case, Istio would have a service, say the Inventory Service for your ERP app, and you’d have a separate copy running in Kubernetes on GCP, Kubernetes on vSphere, and so on. Your application code in any of the Kubernetes installs has access to Istio Services via the Envoy proxy, and when the service is called, Istio makes sure your request gets served by one of the healthy instances.
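To make the per-cluster side concrete, here is a minimal sketch (the `inventory` name and ports are made up for illustration): each cluster would run an identical Kubernetes Service, and application code simply calls it by name, leaving the routing to the Envoy sidecar.

```yaml
# Deployed identically in each cluster (GCP, vSphere, AWS, ...).
# Application code just calls http://inventory; the Envoy sidecar
# injected by Istio intercepts the request and picks a healthy backend.
apiVersion: v1
kind: Service
metadata:
  name: inventory
spec:
  selector:
    app: inventory
  ports:
  - name: http
    port: 80
    targetPort: 8080
```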
One major piece of Anthos is GKE on prem, which was announced at last year’s Cloud Next conference. Before that, GKE was only available on Google’s hardware – the Kubernetes Nodes were deployed on GCE. With GKE on prem, you get Google’s managed Kubernetes, but on your own on prem hardware instead of Google’s. The Kubernetes Nodes are installed on VMs provided by vSphere.
Anthos in Action
With these components working in concert, you’ll be able to start in GCP and deploy Kubernetes across providers, with Istio baked in to combine the environments into a giant Service Mesh. This gives developers and IT teams a consistent management experience across clouds, and the potential for an extremely high degree of availability.
However, if you stopped by VMware’s booth at the conference, you saw Anthos architectural designs from a vSphere-first perspective. They envision customers using GKE in the cloud, but instead of GKE on prem, they offer PKS (Pivotal Container Service, which we have a whole blog series on), and they show their networking tool, NSX.
In that way, the providers seem to be agreeing that hybrid cloud is here, and that they need to support each other to make clean hybrid cloud features available to their customers. But battles still exist over whose tool is primary, and whose tool gets used for each given component.