How container-based microservice architecture will do you a favor!

It may surprise you to learn that when your development teams tell you they are moving to a container-based microservice architecture, they are actually doing you a favor! They are organizing their code into separately executable pieces, each with a defined purpose; in other words, each piece is a microservice.

Not only does this allow you to scale and secure each piece of the application individually, but it also creates a natural separation of responsibility between ops and dev. Ops teams provide healthy, secure containers that can communicate properly, and dev teams supply the code that runs inside them for each service.

In a way, you can compare containers to the JVM (Java Virtual Machine). Java devs are often responsible for the code running on top of the JVM, but not for anything below it (anything involved in running the JVM itself). For infrastructure and ops teams it is the opposite: they are responsible for providing the JVM to devs, but not for issues with the code running on top of it.

It’s now much the same with containers, or Kubernetes (K8s) as a whole. The developers interact with Kubernetes, the containers, and the code running on top of them, but not with anything below. The ops team feeds resources into Kubernetes (primarily making more Nodes available, along with network and storage), and the developers spread their workloads across Pods and containers inside the K8s cluster. Alternatively, the ops team might manage K8s too, and the devs might just push their code into a Git repo from which it is deployed automatically through a CI/CD pipeline.
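To make that division of labor concrete, here is a minimal sketch of the kind of Kubernetes Deployment manifest a dev team might keep in their repo. The service name, image, and replica count are hypothetical, and in a CI/CD setup the pipeline would apply this rather than a human.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: orders-service              # hypothetical microservice
    spec:
      replicas: 3                       # how many Pods K8s should keep running
      selector:
        matchLabels:
          app: orders-service
      template:
        metadata:
          labels:
            app: orders-service
        spec:
          containers:
          - name: orders-service
            image: registry.example.com/orders:1.0   # hypothetical image
            ports:
            - containerPort: 8080

Kubernetes decides which Nodes those three Pods land on; the developer never has to know which VM or physical machine sits underneath.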

At a conference, I overheard a developer say to his friend as they left an exciting K8s talk, “Wow, do we even need an IT team anymore?!” So I asked him, “How would you scale up the app?” He said he’d do it with the K8s CLI, kubectl apply, specifying more instances for the service. And indeed, that would put more Pods of the same service on the Nodes (presumably VMs, maybe bare-metal machines) available to K8s. But then I asked him what he would do if the whole K8s cluster needed more resources, more Nodes, and he had no answer.
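To be fair, the developer was right about his half of the picture. Assuming a Deployment like the sketch above, scaling a service out to more Pods is a one-liner (the deployment name is hypothetical):

    kubectl scale deployment orders-service --replicas=10
    # or edit replicas in the manifest and re-apply it:
    kubectl apply -f orders-deployment.yaml

But if the Nodes themselves run out of CPU or memory, no amount of kubectl will help; someone on the ops side has to add capacity to the cluster.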

This demonstrates a disconnect between ops and dev teams, one likely created by the conveniences of the public cloud, where devs truly may not need an IT team. But the public cloud has its drawbacks (relatively high cost for consistent workloads, lack of control, compliance concerns), and most large companies still run their own datacenters. Increasingly, though, with tools like PKS and GKE On-Prem, companies are realizing the benefits of the public cloud on their own on-prem hardware.

Now more than ever, ops and dev need to understand these new modern stacks and leverage their advantages while mitigating any drawbacks. Here at Hydra we help companies that have a more traditional datacenter but are under pressure to modernize into SDDCs (Software-Defined Data Centers) using some combination of virtualization, containerization, microservice architecture, and CI/CD pipelines. This involves installing new software, but it also involves training staff and refining procedures.

Modernizing has a wide array of advantages, from security to flexibility to more efficient use of resources. And best of all, if services live in infrastructure-agnostic containers, developers will have little to no concern about where you run them, since their configurations and dependencies won’t have to change one bit. K8s integrates directly with networking technologies like NSX to provide ultra-specific micro-segmentation (firewall rules specific to each service), and since each microservice is like its own mini app, you can load balance and scale specific services instead of an entire monolithic app.
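As an illustration of what per-service segmentation looks like from the dev side, here is a minimal sketch of a Kubernetes NetworkPolicy that allows only frontend Pods to reach the (hypothetical) orders-service on its port. With NSX’s container plugin, policies like this are typically realized as distributed firewall rules, though the details depend on your CNI and setup.

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: orders-allow-frontend       # hypothetical policy name
    spec:
      podSelector:
        matchLabels:
          app: orders-service           # the service being protected
      policyTypes:
      - Ingress
      ingress:
      - from:
        - podSelector:
            matchLabels:
              app: frontend             # only frontend Pods may connect
        ports:
        - protocol: TCP
          port: 8080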
