Microservices Security with PKS and NSX-T

The digital transformation era has brought some truly innovative technologies, and to let our developers leverage the trend toward microservices architectures, the infrastructure team is starting to support container-based deployments.

So of course the first thing that comes to mind is: how do we secure this new environment? The good news is that we have NSX-T micro-segmentation for microservices.

So how does this really work? Well, first let’s take a look at how NSX-T integrates with containers and handles the networking; once we understand that, it will be easier to dive into security.

NSX-T and Networking with Containers

NSX native container networking integrates through the Container Network Interface (CNI), a network plugin specification used by Kubernetes, OpenShift, Cloud Foundry, and Mesos. The CNI spec is leveraged as the networking option in Pivotal Container Service (PKS).
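To make the CNI piece concrete, here is a minimal sketch of what a CNI network configuration file on a node looks like. PKS installs and manages this for you, and the file path and plugin "type" shown below are placeholders rather than the actual NCP plugin name.

# Hypothetical CNI network config on a Kubernetes node.
# The "type" field names the CNI plugin binary invoked to wire up pod networking;
# the value here is a placeholder, not the real NCP plugin name.
cat <<'EOF' > /etc/cni/net.d/10-example.conf
{
  "cniVersion": "0.3.1",
  "name": "example-nsx-network",
  "type": "example-nsx-plugin"
}
EOF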

The NSX-T Container Plug-in (NCP) was built to provide direct integration with a number of environments where container-based applications could reside. The primary component of NCP runs in a container and communicates with both NSX Manager and, in the case of k8s, the Kubernetes API server. NCP monitors changes to containers and other resources, and it also manages networking resources such as logical ports, switches, routers, and security groups for the containers through the NSX API.
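Because the primary NCP component runs as a container, a quick sanity check is to look for it from kubectl. This is a hedged sketch: the "ncp" name filter and the "nsx-system" namespace are assumptions and differ between PKS and vanilla NCP deployments.

# Look for the NCP component and confirm it is up and watching the API server.
# Workload and namespace names below are assumptions; adjust to your deployment.
kubectl get pods --all-namespaces -o wide | grep -i ncp
kubectl logs -n nsx-system deployment/nsx-ncp --tail=20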

Below is Diagram 1, which describes the CNI.

Diagram 1: Container Network Interface


Now, to understand how communication happens between NSX Manager and k8s, take a look at Diagram 2 below:

Diagram 2: NSX-T Container Plug-in


With the NSX-T and k8s integration, we dynamically build separate network topologies per k8s namespace. This means every k8s namespace gets one or more logical switches (each k8s pod has its own logical port on the NSX logical switch) and one Tier-1 router. The “blue containers” near the bottom represent k8s pods, which are simply groups of one or more containers. Additionally, every node can have pods from different namespaces and different IP subnets/topologies.
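For example, creating a namespace and a pod in it is enough to watch this per-namespace topology appear. This is a minimal sketch; the namespace and image names are arbitrary, and the names NCP gives the resulting NSX objects will vary by deployment.

# Create a namespace; NCP reacts by building its logical switch(es) and Tier-1 router.
kubectl create namespace demo

# Start a pod in that namespace; it gets its own logical port on the namespace's
# logical switch and an IP from the subnet NSX IPAM allocated to "demo".
kubectl run web --image=nginx --namespace=demo

# The pod IPs listed here come from the per-namespace subnet handed out by NSX IPAM.
kubectl get pods --namespace=demo -o wide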

The Kubernetes NSX integration takes advantage of NSX-T’s optimized, high-performance east/west and north/south routing topology (including dynamic routing to the physical network).

From the security standpoint, every pod has distributed firewall (DFW) rules applied on its interface. Further solution highlights include the ability to leverage NSX IP address management (IPAM), which supplies subnets from IP blocks to k8s namespaces and individual IPs/MACs to pods.

NSX-T brings its network toolbox to the table in this integration to address visibility and troubleshooting in a container networking environment. Most network administrators and operators rely on certain tools and features to do their job effectively, and starting with NSX-T 2.0 the integration supports port counters, SPAN/port mirroring, IPFIX, the Traceflow/port connection tool, and SpoofGuard.

Looking deeper into an NSX-T and Kubernetes setup, we can view the default settings from a Kubernetes deployment. A standard out-of-the-box k8s deployment includes the default, kube-system, and kube-public namespaces. From the NSX GUI or via an API call, we can quickly locate the logical switches and Tier-1 routers associated with this basic k8s configuration and verify our NAT and no-NAT IPAM configuration.
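The same objects can be pulled over the NSX Manager REST API instead of the GUI. This is a hedged sketch, with the manager address and credentials as placeholders for your environment.

# k8s side: the out-of-the-box namespaces.
kubectl get namespaces

# NSX side: the logical switches and logical routers NCP created for them.
# nsx-mgr.example.com and admin:'password' are placeholders.
curl -k -u admin:'password' https://nsx-mgr.example.com/api/v1/logical-switches
curl -k -u admin:'password' https://nsx-mgr.example.com/api/v1/logical-routers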

The diagram below only shows a NAT topology. PKS and NSX-T support both NAT and non-NAT topologies.

Diagram 3: NSX-T topology with Kubernetes

NSX-T and Micro-segmentation with Containers

Now that we understand networking with containers, it becomes clear how we can leverage NSX-T security with PKS.

The question now is: now that we support containers and a microservices architecture, where developers split an application into many services running in possibly many containers, how do we provide security both east-west and north-south?

The idea here is to leverage the best of the DFW in NSX-T. Previously, with the classic application design, services were embedded within the application, making it harder to gain visibility and even to apply rules.

So if you think about it, architecting the application with microservices actually helps the security and infrastructure teams with segmentation, and with services separated we can look into having specific security policies per service. Now our developers can modernize their applications and we can provide security and compliance.


Building Policies and Rules with NSX-T and K8s

The data model used to describe segmentation policies between namespaces, and within namespaces, is called Network Policy and was released in Kubernetes 1.7.

NSX-T can utilize k8s Network Policies to define dynamic Security Groups and policies; in this mode, capabilities are limited to what k8s Network Policy can express.

Security Groups & Policies can be predefined on NSX. Labels are used to specify Pod’s Membership Mapping of IP based groups, egress rules, and VM based matching can be used in the policy definition.The NSX/k8s integration supports both the pre-defined label based rules and K8s network policy.

Steps to Create and Apply Policies with NSX-T and K8s

At Hydra1303 we have experience applying micro-segmentation to microservices, and here is an example of some of the steps taken to effectively separate containers and services.

Security Groups are defined in NSX with ingress and egress policies, and each Security Group can be micro-segmented to protect pods from each other.
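Diagram 4 below shows the exact CLI our k8s admin ran; as a rough, hypothetical sketch, applying the labels that drive security-group membership can be as simple as the following (the label keys and values are placeholders, not the ones from the diagram).

# Label existing pods so the NSX label-based security groups pick them up.
kubectl label pods -l app=web role=web --namespace=demo
kubectl label pods -l app=db role=db --namespace=demo

# Verify the labels the security groups will match on.
kubectl get pods --namespace=demo --show-labels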

Diagram 4: CLI to create the security group labels, executed by the k8s admin

Diagram 5: The logical view of this topology in NSX-T with k8s


After creating the labels, the k8s administrator will create the policy.

Diagram 6: Policy Example


Once the Network Policy is applied, NSX will dynamically create source & destination Security Groups and apply the right policy.
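Diagram 7 below shows this verification in the NSX GUI; the same check can be scripted, as in this hedged sketch where the manager address and credentials are placeholders.

# Confirm the Network Policy object exists on the k8s side.
kubectl get networkpolicy --namespace=demo

# List the NSGroups and distributed firewall sections NCP created from it.
# nsx-mgr.example.com and admin:'password' are placeholders.
curl -k -u admin:'password' https://nsx-mgr.example.com/api/v1/ns-groups
curl -k -u admin:'password' https://nsx-mgr.example.com/api/v1/firewall/sections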

Diagram 7: NSX GUI Policy Creation Verification

