How Microservices Bridge Infrastructure and Development Teams

Introduction

Coming from the infrastructure side of the datacenter, you are probably wondering: how am I going to support the new application architecture trend that developers are adopting?

You have probably heard your developers talking about microservices and requesting infrastructure that will support their new way of deploying applications.

So how can we support this cutting-edge application development architecture in house? How can we get our infrastructure ready? What we want to avoid is our developers taking matters into their own hands, going to a public cloud provider without our knowledge, and creating a shadow IT world that could jeopardize IT security and compliance for the enterprise, and also compromise intellectual property.

What Does Microservices Architecture Mean?

So we have been hearing a lot of buzz around microservices, but what does that actually mean? If you look at the development world, for many years there was one traditional way to develop applications, called a monolithic architecture.

In the monolithic/traditional architecture, developers bundled all the services for an application with the application itself, so at the end of the day every service was embedded in the app's code. What that really meant for us on the infrastructure side was that visibility into individual services was next to impossible, so applying rules and policies to these applications was challenging, to say the least.

Now, with the new architecture called microservices, developers are splitting their code and services into multiple independent instances. This brings several benefits to the application, to how services and updates are handled, and also to us on the infrastructure side.
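
To make the split concrete, here is a deliberately tiny Python sketch. The service names and functions are hypothetical, purely for illustration: in the monolith every capability ships in one deployable, so changing one piece means redeploying everything; in the microservices version each capability is its own unit that can be deployed, scaled, and secured independently.

```python
# Toy contrast (illustrative only, not any real application).

# Monolith: every capability lives in one deployable. Updating "checkout"
# means rebuilding and redeploying the whole thing.
def monolith_app(request):
    if request == "catalog":
        return "browse products"
    if request == "checkout":
        return "take payment"
    if request == "shipping":
        return "schedule delivery"

# Microservices: each capability is its own small service. Any one of them
# can be updated, scaled, or firewalled without touching the others.
def catalog_service(_):
    return "browse products"

def checkout_service(_):
    return "take payment"

def shipping_service(_):
    return "schedule delivery"

# A router stands in for the network edge that dispatches to each service.
ROUTES = {
    "catalog": catalog_service,
    "checkout": checkout_service,
    "shipping": shipping_service,
}

print(ROUTES["checkout"]("req"))  # take payment
```

The point of the sketch is the shape, not the code: each entry in `ROUTES` is something the infrastructure can now see, address, and police individually.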

Because they are literally segmenting the services, we gain visibility into each service and can apply policies and rules per instance. Not only that, we can help the application with high availability and scalability, which was doable before, but harder.

So developers are really helping us on the infrastructure side: we can now handle application security and availability much better, and provide them with better services in return.

Below is a picture that shows what the traditional architecture looks like compared to the new one.

 

Diagram 1: Traditional versus cutting-edge architectures for developing apps

Infrastructure Team Challenges

Well, now you are probably wondering: how am I going to provision an on-premises infrastructure that can support the requirements of my development team and their new architecture? At Hydra1303 we have been helping customers design and implement the datacenter infrastructure required to support this new model.

Let me tell you a little bit about how we can reach our final goal, which is helping our dev team get applications out fast and shorten the development and testing cycle.

I think the biggest challenges when it comes to infrastructure and this new model of application development are scalability, patching, security of course, visibility and monitoring, and, a big one, integration with the cloud.

The last one is really something customers are thinking about now. They may not be going to the cloud today, but they want to set up the datacenter infrastructure to be cloud ready, so that when management makes the decision, little to no effort is required to integrate with a cloud provider.

Infrastructure that Supports Microservices

Knowing the challenges we face, we can now think about what capabilities the infrastructure needs to offer to overcome them.

To support a microservices architecture on premises, here are some of the major requirements for the infrastructure, keeping in mind that our “infrastructure competitor” at this point is a cloud provider: our developers will be comparing what we provide them, as a service, with the services they can get from a cloud provider.

So how do we rise to the challenge? Our infrastructure is going to have to provide several capabilities, and I will list some of the major ones here.

  • Accelerate the deployment of Kubernetes clusters
  • Eliminate manual steps for deploying Kubernetes clusters
  • Minimize mistakes
  • Scale cluster capacity easily
  • Optimize resource utilization
  • Provide deep operational visibility and faster troubleshooting
  • Support micro-segmentation for the microservices architecture
  • Enforce security policies
  • Minimize application breaches with enhanced container security
  • Provide a secure container registry
  • Secure workloads between tenants and provide privacy

We also can’t forget that if we want to be truly cloud ready, we need to optimize workload deployment in multi-cloud environments and provide a single, consistent interface to deploy and manage Kubernetes on both vSphere and Google Cloud Platform.

The picture below shows the logical view of the infrastructure that will support all the requirements listed above and more.

Diagram 2: Logical View of the Infrastructure

Microservices Bridge Infrastructure and Development Teams

 

Solution-wise, what are we actually talking about for your datacenter? If you want to rise to the challenge and really provide your developers the high availability and flexibility they need, give them the same experience they would get from a cloud provider or better, and maintain security and control by running the platform in your own datacenter, then the solution you are looking for is VMware PKS. Let me show you a little of what PKS can do for you and how we can help you reach that goal.

PKS is a purpose-built container solution to operationalize Kubernetes for multi-cloud enterprises. It significantly simplifies the deployment and management of Kubernetes clusters with day 1 and day 2 operations support.

With hardened, production-grade capabilities, PKS takes care of your container deployment from the application layer all the way down to the infrastructure layer.

PKS is built with critical production capabilities such as high availability, auto-scaling, health checks and self-healing of underlying VMs, and rolling upgrades for Kubernetes clusters. And with constant compatibility with GKE, PKS provides the latest stable Kubernetes release, so developers have the latest features and tools available to them. This allows you to “compete” on premises with any cloud provider and give your dev team the tools they are used to using in the cloud. No more shadow IT.
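
To illustrate the rolling-upgrade idea mentioned above, here is a minimal Python simulation. This is purely conceptual, not the PKS implementation: instances are replaced one at a time, so at every step all but one instance remain available to serve traffic.

```python
# Conceptual sketch of a rolling upgrade (illustrative only): replace
# instances one at a time so serving capacity never drops below N-1.
def rolling_upgrade(instances, new_version):
    """Upgrade each instance in place, one at a time; return the state
    of the cluster after each step."""
    history = []
    for i in range(len(instances)):
        instances[i] = new_version  # recreate just this one instance
        # At this point the other len(instances)-1 instances are still serving.
        history.append(list(instances))
    return history

cluster = ["v1", "v1", "v1"]
steps = rolling_upgrade(cluster, "v2")
print(steps)
# [['v2', 'v1', 'v1'], ['v2', 'v2', 'v1'], ['v2', 'v2', 'v2']]
```

The same principle applied at cluster scale is what lets Kubernetes versions be rolled forward without taking applications offline.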

It also integrates with VMware NSX-T for advanced container networking, including micro-segmentation, ingress control, load balancing, and security policy. Through an integrated private registry, PKS secures container images via vulnerability scanning, image signing, and auditing. This gives you the ability to apply policies and micro-segmentation rules to the application on a per-service basis, all under your enterprise control and administration, ensuring regulatory and compliance requirements are met.
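
As a sketch of the kind of gate a secure registry enables, here is a small Python illustration of an admission check that only accepts images that are signed and whose worst scanned vulnerability is below a threshold. The field names are hypothetical, not the registry's actual API.

```python
# Hypothetical admission check (illustrative only): only admit container
# images that are signed and pass the vulnerability-severity threshold.
SEVERITY_RANK = {"none": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def admit_image(image: dict, max_severity: str = "medium") -> bool:
    """Return True if the image may be deployed."""
    if not image.get("signed"):
        return False  # unsigned images are always rejected
    # If the image was never scanned, assume the worst.
    worst = image.get("worst_vulnerability", "critical")
    return SEVERITY_RANK[worst] <= SEVERITY_RANK[max_severity]

print(admit_image({"signed": True, "worst_vulnerability": "low"}))    # True
print(admit_image({"signed": False, "worst_vulnerability": "none"}))  # False
print(admit_image({"signed": True, "worst_vulnerability": "high"}))   # False
```

Centralizing this kind of check in the registry is what keeps unvetted images from ever reaching the clusters.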

PKS exposes Kubernetes in its native form without adding any layers of abstraction or proprietary extensions, which lets developers use the native Kubernetes CLI that they are most familiar with.

PKS can be easily deployed and operationalized via Pivotal Operations Manager, which allows a common operating model to deploy PKS across multiple IaaS abstractions like vSphere and Google Cloud Platform.

With VMware PKS we finally have a solution that allows us to check all the boxes for developer requirements, while maintaining the security and compliance of the implementation on our infrastructure and reaching a major goal of our CIO, which was to be cloud ready.
