PKS and vSAN Design

Today we are going to talk about storage design with PKS. Before we get into what is supported and how it works, let's make sure we understand why we need persistent storage.

According to statistics from Datadog, in 2018 seven out of ten cloud-native applications needed persistent storage. So how can an application consume persistent storage? There is a special vSphere Cloud Provider plugin that allows the consumption of storage. If you recall from one of our previous PKS blogs, applications are deployed in containers that are orchestrated by Kubernetes, and the VMs are the nodes. The Cloud Provider (Project Hatchway) allows those containers to consume vSphere storage.
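In practice, the cluster administrator exposes vSphere storage to Kubernetes through a StorageClass backed by the vCP provisioner. A minimal sketch, assuming a datastore named vsanDatastore (replace it with your own):

```yaml
# StorageClass backed by the in-tree vSphere Cloud Provider (vCP).
# "vsanDatastore" is a placeholder; use the name of your own datastore.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-thin
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin          # thin-provision the backing VMDK
  datastore: vsanDatastore  # datastore where VMDKs will be created
```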

Applications can leverage HCI benefits like capacity, performance, and device and health monitoring. HCI also provides protection and data services, consumed via policies, to ensure cloud-native applications have the right level of availability.

The picture below illustrates how the Cloud Provider is exposed to the K8s cluster via a storage plugin.


We do have some challenges when provisioning persistent storage: first, you need a platform with the right plugins for Kubernetes. Then you have to configure the K8s cluster and nodes to use the storage platform, and maintaining that platform over time can become a big challenge.

So, to save the day, we can leverage vSAN. You are probably wondering why vSAN is such a great idea with PKS. Well, think about all the benefits that you get with HCI: flexibility and control in supporting a diverse mix of traditional and cloud-native apps deployed on top of PKS. Of course, a major differentiator is the ability to use SPBM. Storage Policy Based Management is a great feature of vSAN that offers very granular control over the design. Other storage backends only support the tag-based placement capability of SPBM.

A single HCI platform with the same operational model provides a unified volume monitoring and reporting console within PKS.

SPBM is about ease and agility. Traditional architectural models relied heavily on the capabilities of an independent storage system to meet the protection and performance requirements of workloads. Unfortunately, the traditional model was overly restrictive, in part because standalone hardware-based storage solutions were not VM-aware and were limited in their ability to apply unique settings to different workloads.

Storage Policy Based Management (SPBM) lets you define requirements for a VM or a collection of VMs. This SPBM framework is the same framework used for storage arrays that support Virtual Volumes (vVols).

So, to summarize why you want vSAN with PKS and what the value add actually is: every storage product on the vSphere HCL that is supported with Kubernetes gets operational consistency, the same tooling, and SPBM, along with self-service storage provisioning through storage classes that map to SPBM policies.
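That storage class mapping is a single StorageClass parameter. A minimal sketch, assuming an SPBM policy named gold-policy already exists in vCenter (the policy name is hypothetical):

```yaml
# StorageClass that maps to an existing SPBM policy.
# "gold-policy" is a hypothetical policy name; use one defined in vCenter.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-gold
provisioner: kubernetes.io/vsphere-volume
parameters:
  storagePolicyName: gold-policy  # SPBM policy applied to every provisioned VMDK
  datastore: vsanDatastore        # optional; must be compatible with the policy
```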

PKS delivers consistent, repeatable configuration and maintenance of Hatchway.


Now that we understand why vSAN is such a good idea for PKS, let's take a look at the supported designs and topologies with vSAN.

Storage for a K8s Persistent Volume (PV) is a VMDK file on vSphere; the PV is mapped to the VMDK using the vCP (vSphere Cloud Provider) plugin. The datastore where the VMDK is stored must be accessible by all K8s Worker Nodes in the K8s cluster (in case a stateful POD is rescheduled to another location).
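From the developer's point of view, the PV is requested through a PersistentVolumeClaim, and the vCP plugin creates and attaches the backing VMDK. A minimal sketch, reusing the hypothetical vsphere-gold class from the earlier example:

```yaml
# PVC that dynamically provisions a VMDK via the vSphere Cloud Provider.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: app-data
spec:
  storageClassName: vsphere-gold  # StorageClass from the earlier sketch
  accessModes:
    - ReadWriteOnce               # VMDK-backed volumes attach to one node at a time
  resources:
    requests:
      storage: 2Gi                # size of the VMDK to create
```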


Here are some considerations to keep in mind with vSAN today. These restrictions will be changing shortly, so keep your eyes open; we will be updating them soon.

  • Using vSAN, a vSphere Cluster must start with a minimum of 3 ESXi hosts to guarantee data protection (in this case RAID 1 with Failures to Tolerate, FTT, set to 1; see the sketch after this list). A PKS AZ (Availability Zone) does not map to a vSAN Fault Domain.

  • PKS with a single Compute Cluster is currently supported with vSAN (all ESXi hosts located in the same site).

  • Caution: PKS with a vSAN stretched cluster is NOT a supported configuration as of now (there is no mapping of AZs to vSAN Fault Domains).

  • PKS with multiple Compute Clusters is not a supported configuration with vSAN-only datastores.

  • Master and Worker Nodes can be created across different ESXi clusters (the BOSH tile allows you to specify multiple persistent and ephemeral datastores for the VMs).

  • PV VMDK disks are created on only one vSAN datastore (no replication across the different vSAN datastores is performed automatically).
    • NOTE: This means that if a stateful POD dies and is rescheduled to a different AZ, the associated PV VMDK disk is no longer reachable.
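Regarding the first consideration above: with vSAN, protection settings such as Failures to Tolerate can also be expressed directly as StorageClass parameters, without referencing a named policy. A minimal sketch, again assuming a datastore named vsanDatastore:

```yaml
# StorageClass that expresses vSAN policy settings inline.
# hostFailuresToTolerate: "1" corresponds to RAID 1 / FTT=1,
# which is why a minimum of 3 ESXi hosts is required.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsan-ftt1
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  hostFailuresToTolerate: "1"  # number of host failures the object can survive
  datastore: vsanDatastore     # placeholder vSAN datastore name
```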


The first topology that we support today: vSAN Datastore / Single vSphere Compute Cluster.


The second topology that we support today: vSAN Datastores / Multiple vSphere Compute Clusters.


Please stay tuned for future blogs in our PKS series!