Deploying Workloads on Kubernetes

Overview

Since this is the first week of our new PKS Blog Series, we’re going to start off with the basics of Kubernetes: how to deploy workloads, then expose them internally or externally. Kubernetes is an open-source container orchestration platform designed to help wrangle the complexity of a cloud native, microservice world. While you don’t have to use a microservice architecture to leverage Kubernetes, its features are especially useful once you break your application into services of some granularity. That is why, when you expose a workload, the Kubernetes object is called a Service.

Kubernetes in a Nutshell

Kubernetes is a set of several programs, some of which run on the K8s master (“K8s” is a common abbreviation for Kubernetes) and some of which run on each K8s Node. The master and its nodes make up a K8s Cluster, which you interact with via the API running on the master. The master does not run any workloads itself; rather, it takes your instructions and manages the workloads across the nodes.

The master and the nodes themselves are generally VMs but could theoretically be bare metal machines. They simply run the appropriate software and communicate together to form the Kubernetes cluster. Your application code runs inside containers, the containers are grouped into K8s Pods, and the K8s Master schedules the Pods across the Nodes.

Most commonly, you instruct your Kubernetes cluster using the CLI, affectionately referred to as kubectl (pronounced “kube control,” “kube kettle,” or even “kube cuddle”). While you can manage everything on Kubernetes imperatively through individual commands, it is more common to write YAML configuration files and use the CLI to apply them to your environment.

Kubernetes is declarative; it expects you to declare the desired state of your app, get out of the way while it goes about its business bringing itself into parity with your description, and then let it actively maintain that parity via health checks. The things you declare, such as Deployments and Services, are called Kubernetes objects, and there’s a standard syntax for describing them in YAML files. If you’d like to hear more about how to write these YAML files for Kubernetes, check out our blog post on that very topic.
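
As a quick illustration of that standard shape, here is a minimal sketch using a Namespace, one of the simplest objects; the name “demo” is just a placeholder:

apiVersion: v1   # which version of the Kubernetes API defines this object
kind: Namespace  # the type of object being declared
metadata:
  name: demo     # identifying details such as the object's name
# Most objects also have a spec section describing their desired state;
# a Namespace is simple enough not to need one.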

Deployments

The Kubernetes object that most commonly handles deployments is conveniently called a Deployment. If you are just getting started, review the documentation on Deployments and begin there. Under the hood, there is another K8s object, the ReplicaSet, that you could also manage directly, but these days it’s more common to use Deployments and let Kubernetes handle the nitty gritty of the ReplicaSets for you. They are called ReplicaSets because it is common to duplicate your Pods and split traffic across the replicas for high availability.
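
Once a Deployment is up, you can see this hierarchy for yourself; each of these standard kubectl commands lists one layer of it:

kubectl get deployments   # the Deployment you declared
kubectl get replicasets   # the ReplicaSet the Deployment manages for you
kubectl get pods          # the replicated Pods the ReplicaSet maintains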

First, you could create another Kubernetes object, a Pod Template, to describe which container image should run in the Pod and a command to run in the container once it is up. You would then create a Deployment that references it; more commonly, though, you define the Pod spec directly as part of the Deployment instead of creating a separate Pod Template first. Once a YAML file (for example, “my-deployment.yaml”) is ready to be applied, you may apply it to your cluster with the command:

kubectl apply -f my-deployment.yaml

The -f flag simply indicates that a file is being applied. As a side note, this assumes you have already authenticated the kubectl CLI with your K8s cluster.
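
To make this concrete, here is a minimal sketch of what “my-deployment.yaml” might contain; the app name “my-app” and the public nginx image are placeholder stand-ins for your own workload:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                # run three replica Pods for high availability
  selector:
    matchLabels:
      app: my-app            # manage Pods carrying this label
  template:                  # the Pod spec, defined inline
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25    # placeholder image; use your own
        ports:
        - containerPort: 80  # the port your container listens on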

There are other types of workloads that can be deployed; for example, a StatefulSet is like a Deployment except that it has features for handling persistent state. We’ll cover that in detail in a few weeks. There are also DaemonSets, which are generally for running utilities (such as logging and monitoring agents) on each Node.

Services

Services, load balancing, and scaling are all things we’ll cover in more detail in coming weeks, but it’s important to mention Services briefly here because they are needed to expose your Deployments. Deployments simply manage the Pods that run your workload, but since those Pods may be destroyed and recreated for a variety of reasons, you would not want to access them directly through their own IP addresses, which could change at any time.

This is related to the modern infrastructure principle of managing cattle, not pets: you don’t grow and groom giant monolithic VMs that never get restarted, let alone replaced. Instead, you run highly ephemeral workloads that are duplicated and destroyed as needed. When you update a Pod, you don’t actually update it; you replace it with a new version and then kill the old one.

Enter Services, a Kubernetes object that gives you a consistent way to access your Pods. When you make a request to a specific Service, Kubernetes knows which Deployment backs that Service and sends your request to an available Pod within it. You can think of a Service as the front end of a load balancer (the endpoint you send requests to from the outside) and the Deployment as the back end, organizing the containers that run your code, process the requests, and send the responses.

Services come in four types, which you can read about here, but we’re mainly going to talk about two of them for now: ClusterIP and LoadBalancer. These are the most common types; while NodePort is somewhat common as well, it is not compatible with PKS and NSX-T, so it’s less relevant to this series.

ClusterIP is the default type, and the one you’d want if you just need to expose your Service within your own cluster. This is useful for middle-tier Services that are only accessed by your other Services. For frontend Services that are accessible to the public, you’d generally use type LoadBalancer.
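
As a sketch, a Service for the hypothetical “my-app” Deployment shown earlier might look like this; omitting the type field entirely would also default to ClusterIP:

apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: ClusterIP    # change to LoadBalancer to expose the Service externally
  selector:
    app: my-app      # route traffic to Pods carrying this label
  ports:
  - port: 80         # the port the Service listens on
    targetPort: 80   # the containerPort on the backing Pods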

Note that each of these Kubernetes objects may be defined in its own YAML file, or you may put several in one file, with each object definition separated by a line containing three dashes (---).
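
For example, the hypothetical Deployment and Service sketched above could share a single file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25   # placeholder image; use your own
# a line with three dashes separates the two object definitions
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80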

Conclusion

In summary, to deploy a workload on Kubernetes, you first must create a container image with your workload ready to go. Next, you’d reference that image in a Pod Template, then use a Deployment to reference that Pod Template and configure how it should be replicated and set up. Finally, you’d use a Kubernetes Service to expose that Deployment internally (Service type ClusterIP) or externally (Service type LoadBalancer).

Hopefully, you’re starting to see how Kubernetes enables modern best practices for both ops and dev (often called cloud native), a topic that will recur throughout this series. Microservice architecture is flexible and powerful, but it can also create a hairy mess as apps grow into a network of semi-independent services. Fortunately, Kubernetes offers tools to replicate, load balance, health check, and discover those Services. We are to treat our workloads like cattle, not pets, and again Kubernetes gives us the tools to deploy and update our workloads in keeping with this model.
