PKS and NSX-T Design

VMware PKS fully leverages VMware NSX-T. Let’s take a look at the supported NSX-T design and implementation. NSX-T licenses come with PKS, so there is no reason not to leverage best-in-class network virtualization.

First, let’s take a look at what open source networking offers, and what the advantages of using NSX-T are.

So if you think about it, there are a lot of open source products and other solutions out there for provisioning Kubernetes networking, but PKS comes with NSX-T included. That makes it honestly a no-brainer when it comes to the networking platform choice.

The only concern is that folks who have been using open source often like to stick with what they know. But if you step out of the box a little and look at the big picture, NSX-T gives you not only all of the networking services, but also something open source cannot provide – micro-segmentation of microservices. You will also gain all of the monitoring and troubleshooting tools built into NSX-T.

Take a look at the graphic below and see the comparison of NSX-T versus open source.

Now that we understand the benefits of leveraging NSX-T with PKS, let’s talk about the supported NSX-T topologies and design with PKS.

So with NSX-T and PKS, the first consideration is which networks NSX actually provides services for.

There are three major blocks to consider here. We need NSX to service the PKS management networks, the node networks, and the pod networks.

The graphic below shows you the major building blocks.

There are a couple more considerations for the design and integration of PKS and NSX-T. You will enable NAT mode for the node network configuration. The Node IP Block is carved up to create the networks that host the Kubernetes cluster node VMs; each network is a /24, so the block should be a multiple of /24.

The Pod IP Block is carved up to create the networks that host the Kubernetes pods belonging to the same namespace, and it should also be a multiple of /24.
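To make the /24 carving concrete, here is a small Python sketch using the standard `ipaddress` module. The CIDR ranges are hypothetical placeholders – the real blocks are whatever you configure for the Node and Pod IP Blocks in PKS.

```python
import ipaddress

# Hypothetical IP blocks for illustration only -- the real CIDRs are
# whatever you configure for the Node and Pod IP Blocks in PKS.
node_ip_block = ipaddress.ip_network("172.15.0.0/16")
pod_ip_block = ipaddress.ip_network("172.16.0.0/16")

# NSX-T carves each block into /24 networks on demand:
# one per Kubernetes cluster from the Node IP Block, and
# one per namespace from the Pod IP Block.
node_subnets = list(node_ip_block.subnets(new_prefix=24))
pod_subnets = list(pod_ip_block.subnets(new_prefix=24))

print(len(node_subnets))   # a /16 holds 256 /24 networks
print(node_subnets[0])     # first cluster's node network: 172.15.0.0/24
```

Sizing the blocks this way lets you estimate capacity up front: a /16 Node IP Block supports up to 256 cluster node networks.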

The Floating IP Pool is used for the Kubernetes master VIP, SNAT from pods, Kubernetes Services of kind LoadBalancer (L4), and Kubernetes Ingress (L7). It cannot be on the same subnet as the uplink/transit network.
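A quick way to sanity-check that last constraint is again with Python's `ipaddress` module. Both subnets below are hypothetical placeholders for whatever ranges you use in your environment.

```python
import ipaddress

# Hypothetical subnets for illustration only.
floating_ip_pool = ipaddress.ip_network("10.40.14.0/24")
uplink_network = ipaddress.ip_network("10.40.206.0/24")

# The Floating IP Pool must not share a subnet with the
# uplink/transit network, so these ranges must not overlap.
print(floating_ip_pool.overlaps(uplink_network))  # False -> valid design
```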

There is one last design consideration before we move on to the supported topologies: the T0 router has to be configured in Active-Standby mode, regardless of the networking topology.

Okay, now we finally get to the three topology options for NSX-T with PKS.

Option 1 Networking Topology: PKS Management external to NSX-T + NO-NAT

PKS Management external to NSX-T, deployed on a classic vSphere port group.

PKS Management and vSphere / NSX Management networks can be combined.

Option 2 Networking Topology: PKS Management internal to NSX-T + NO-NAT

PKS Management internal to NSX-T, deployed on a logical switch.

The tier-1 logical router and logical switch required for the PKS Management network must be created upfront.

Option 3 Networking Topology: PKS Management internal to NSX-T + NAT

PKS Management internal to NSX-T, deployed on a logical switch.

The tier-1 logical router and logical switch required for the PKS Management network must be created upfront.

DNAT rules required for PKS Management.
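Conceptually, those DNAT rules map routable external addresses to the management components' internal addresses on the logical switch. The sketch below illustrates the idea in Python; all addresses and component names are hypothetical examples, not values PKS mandates.

```python
# Conceptual sketch of Option 3 DNAT: traffic sent to a routable
# external address is translated to the internal address of a PKS
# management component on the NSX-T logical switch.
# All addresses below are hypothetical.
dnat_rules = {
    "10.40.14.2": "172.31.0.2",  # Ops Manager
    "10.40.14.3": "172.31.0.3",  # BOSH Director
    "10.40.14.4": "172.31.0.4",  # PKS API
}

def translate(dest_ip: str) -> str:
    """Return the internal address the DNAT rule maps dest_ip to,
    or the original address if no rule matches."""
    return dnat_rules.get(dest_ip, dest_ip)

print(translate("10.40.14.4"))  # hits a rule -> internal address
print(translate("10.40.14.9"))  # no rule -> unchanged
```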

This gives us our three topology options for the design of PKS and NSX-T. The third option is probably the one you will see most often in the field.

Just to summarize, here is the full view of the topology: