Five is No Longer Enough! Moving past 5-tuple-based rules in NSX
When we talk about security, as a rule of thumb, the closer we can push security toward the end user or the application, the better the solution becomes and the more visibility we gain. In NSX versions prior to 6.4, we were limited to 5-tuple-based rules when configuring the NSX DFW, at least natively within what the NSX DFW could provide. This worked well when we coupled it with third-party security solutions or hair-pinned traffic upstream; however, the latter adds latency, and in today's fast-paced IT world we need to provide the lowest-latency solution possible for our workloads while keeping our data secure.
What is the 5 tuple, you ask? When we look at the columns available when creating a firewall policy in NSX, we can add the source and destination IP address(es), the source and destination port values, and the protocol. This is great until you need a more granular approach to securing your environment, or deeper visibility into it. With NSX version 6.4 and later, we can move past the 5-tuple limitations of the past. In other words, we can decouple the security policy from the traditional attributes mentioned above (the 5 tuple) and implement policy based on the specifics of an application. This can bring tremendous value to an organization by pushing layer 7 security as close to the workload as possible, reducing an attacker's ability to move laterally inside the network.
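To make the 5 tuple concrete, here is a minimal, hypothetical sketch (my own illustration, not NSX code) of how a rule that matches only on the classic 5 tuple might be modeled. The IP addresses and port numbers are made up for the example:

```python
from dataclasses import dataclass

# The classic 5 tuple: source/destination IP, source/destination port, protocol.
@dataclass(frozen=True)
class FiveTuple:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str  # "TCP", "UDP", ...

@dataclass
class Rule:
    src_ip: str        # "any" acts as a wildcard
    dst_ip: str
    dst_port: int      # 0 acts as a wildcard
    protocol: str
    action: str        # "ALLOW" or "BLOCK"
    src_port: int = 0  # source port is usually wildcarded

    def matches(self, flow: FiveTuple) -> bool:
        return ((self.src_ip in ("any", flow.src_ip)) and
                (self.dst_ip in ("any", flow.dst_ip)) and
                (self.src_port in (0, flow.src_port)) and
                (self.dst_port in (0, flow.dst_port)) and
                (self.protocol in ("any", flow.protocol)))

# Example: allow the (hypothetical) HR app VM to reach the HR DB on port 3306.
rule = Rule("10.0.1.10", "10.0.1.20", 3306, "TCP", "ALLOW")
flow = FiveTuple("10.0.1.10", "10.0.1.20", 49152, 3306, "TCP")
print(rule.matches(flow))  # True: every field the rule cares about matches
```

The limitation is visible right away: any process on the source VM that happens to connect to port 3306 matches this rule, whether or not it is actually the application we intended to allow.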
Micro-segmentation and Context-Aware Security
Micro-segmentation is a concept that sounds complicated, but simply means we are able to enforce policy at the individual workload level and provide security for traffic moving laterally. The traditional edge-security deployment methodology is no longer sufficient for organizations. The level of sophistication of today's applications has greatly increased, as has the knowledge level of those who wish to do harm to those applications. With NSX version 6.4 and later, we can look much deeper into the workloads to provide more granular security policies and enforcement. This means we can define consistent security policies across the entire environment, regardless of application or location, yielding a more secure and lower-latency solution. Traffic no longer needs to be hair-pinned upstream to a dedicated security appliance for layer 7 inspection; now it can be done directly in-kernel by NSX. Of course, we still have the ability to direct traffic upstream to be processed, as well as the option to forward traffic to a security solution that has been inserted virtually into the environment.
With context-aware security, we can define security policies based on attributes such as user identity, workload attributes such as the operating system and patch level, Microsoft Active Directory, Certification Authorities (CAs), specific types or versions of HTTP or HTTPS traffic, and many more.
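As a rough illustration of the idea (again, hypothetical code, not the NSX object model), a context-aware rule simply adds extra match conditions on top of the 5 tuple. The attribute names and the "HR-Users" group below are invented for the example:

```python
# Hypothetical illustration: a context-aware rule matches on attributes of the
# workload and application, not just the packet's network header.
def context_rule_matches(flow: dict, rule: dict) -> bool:
    """A flow here carries context attributes alongside its 5-tuple fields."""
    for key, wanted in rule.items():
        if key == "action":
            continue
        if flow.get(key) != wanted:
            return False
    return True

# Allow traffic to port 443 only when the app-level protocol really is TLS 1.2
# and the logged-in user belongs to the (hypothetical) "HR-Users" AD group.
rule = {
    "dst_port": 443,
    "protocol": "TCP",
    "app_id": "TLS",
    "tls_version": "1.2",
    "ad_group": "HR-Users",
    "action": "ALLOW",
}

good_flow = {"dst_port": 443, "protocol": "TCP", "app_id": "TLS",
             "tls_version": "1.2", "ad_group": "HR-Users"}
tunnel    = {"dst_port": 443, "protocol": "TCP", "app_id": "SSH",  # SSH hiding on 443
             "tls_version": None, "ad_group": "HR-Users"}

print(context_rule_matches(good_flow, rule))  # True
print(context_rule_matches(tunnel, rule))     # False: blocked despite port 443
```

This is exactly the gap a 5-tuple rule cannot close: both flows above are identical at the network-header level, yet only one of them is the traffic we actually meant to allow.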
To aid us in our rule deployment, we can leverage Application Rule Manager, sometimes referred to as "ARM," to analyze traffic and let it suggest rules for us. ARM can also help us determine which application security groups are recommended. We can then take the recommendations ARM gives us and have it publish the rules directly to the DFW. We have the ability to edit these, if needed, to meet the needs of our organization. This feature can help in planning our security policies, and it greatly reduces the amount of time it takes to micro-segment our applications, thus yielding a faster time to market.
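Conceptually, what ARM does is collapse many observed flows into a small set of rule suggestions. Here is a toy sketch of that aggregation step (my own simplified illustration of the idea, not ARM's actual algorithm; the VM names and ports are examples):

```python
from collections import defaultdict

# Observed flows: (src_vm, dst_vm, dst_port, protocol)
flows = [
    ("hr-app-01a", "hr-db-01a", 3306, "TCP"),
    ("hr-app-01a", "hr-db-01a", 3306, "TCP"),   # repeat connections collapse
    ("hr-web-01a", "hr-app-01a", 8443, "TCP"),
]

# Group flows by (source, destination) pair and collect the services each used.
suggested = defaultdict(set)
for src, dst, port, proto in flows:
    suggested[(src, dst)].add((proto, port))

# Each (src, dst) pair with its observed services becomes a rule suggestion.
for (src, dst), services in sorted(suggested.items()):
    svc = ", ".join(f"{p}/{port}" for p, port in sorted(services))
    print(f"ALLOW {src} -> {dst} on {svc}")
```

Running this prints one suggested allow rule per VM pair, e.g. `ALLOW hr-app-01a -> hr-db-01a on TCP/3306`, which mirrors the kind of recommendation ARM surfaces after analyzing a session.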
We will take a quick look at how services can be detected, as well as how they can be dynamically applied in the DFW rule set. In Figure 1 below, we are using Application Rule Manager to analyze the flows in our HR application (between hr-app-01a and hr-db-01a). After the objects (in this case, two virtual machines) have been selected, click Start and let ARM analyze the traffic. If you are in a lab environment, make sure traffic is being generated between the two entities.
Figure 1: ARM Session Creation
We now see that we have some flows moving between these two VMs, as shown below.
Figure 2: ARM Flow Collection
After collection has run for a sufficient amount of time, we can click the radio button next to the session name; in this case, "HRApp" is the name I gave it. We then click Stop, and lastly click Analyze. Once we click Analyze, we should see something similar to Figure 3 below.
Figure 3: ARM Session Analysis
We now let ARM do its magic! When the analysis is complete, click on the session name to bring back the results, as shown below in Figure 4.
Figure 4: ARM Rule Planning
In Figure 4 above, we can see that ARM has identified the flows between the two HR entities, analyzed them, and brought back "ARM Recommended" rules, or suggestions. These rules show the source and destination information and the specific layer 7/application-layer services used for communication between the two VMs. Once we validate that the rules are what we want, we can click the "Publish to Firewall" button in the upper right corner. That brings us to a page that looks like Figure 5 below. We name the section (in this case, HR_App), determine where we would like to insert the rule set, and then click OK.
Figure 5: ARM Publish to Firewall
We should get verification that the rules have been published to the firewall, as shown below in Figure 6.
Figure 6: ARM Publish to Firewall Verification
We can also look at the DFW rules to verify that the rules have been added and inserted in the correct place and order, and adjust as needed. We see below in Figure 7 that the HR_App policy has been added to the DFW, and that it was inserted above the "Flow Monitoring & Traffic Rules" section.
Figure 7: DFW Firewall Rules
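Placement matters because the DFW evaluates rules top-down and the first match wins. A small illustrative sketch (not NSX code; the rules and flow are invented) of why inserting the application section above a broader one changes the outcome:

```python
# First-match-wins evaluation: the first rule whose predicate matches the flow
# decides the action, and later rules are never consulted.
def evaluate(rules, flow):
    for predicate, action in rules:
        if predicate(flow):
            return action
    return "ALLOW"  # hypothetical default rule at the bottom

# A specific allow for the HR DB, and a hypothetical broader block on port 3306.
hr_app_rule = (lambda f: f["dst"] == "hr-db-01a" and f["dst_port"] == 3306, "ALLOW")
broad_block = (lambda f: f["dst_port"] == 3306, "BLOCK")

flow = {"dst": "hr-db-01a", "dst_port": 3306}

# HR_App section inserted above the broader one: the specific allow wins.
print(evaluate([hr_app_rule, broad_block], flow))  # ALLOW
# Inserted below: the broad block matches first and the app traffic is dropped.
print(evaluate([broad_block, hr_app_rule], flow))  # BLOCK
```

This is why it is worth taking the extra moment after publishing to confirm, as in Figure 7, that the new section landed where you intended in the rule order.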
That's all there is to it!
We at Hydra 1303 hope this short explanation and demonstration was helpful, and can make your life a little easier when trying to secure your environment.