Microservice architecture is a way of designing applications in which developers split a large application into many small, independent services, or microservices. Applications that are not composed of services are called monolithic, because the entire code base runs as one giant process.
When you start a monolithic web app, you typically start it up by running a single file, for example “python app.py” or “node app.js.” Perhaps you go one step further and split the application into three tiers: one to serve the UI, another with the core logic, and a third to interact with the database. That would be better, but it is still monolithic, and probably running on VMs.
You could start down the path with service-oriented architecture (SOA) and at least divide the app into services with specific functions. Some of these services could themselves have tiers, but they would still be relatively bulky, and would probably live on VMs. If you broke them down even further into smaller services (microservices) and ran them on containers or FaaS platforms, then you would be leveraging microservice architecture.
This approach has a wide array of advantages. For example, each service can be developed by a separate team, and that team can choose whatever dependencies, versions, and even languages are best suited for that service. Since each service runs independently, and services access each other simply by making HTTP requests, it isn’t necessary for everyone in the company to write in the same language.
However, the approach also has its drawbacks. Since services access each other by HTTP request, they need a way to keep track of where all the other services on the network are. And when new services are added, they must be registered in a way that lets all existing services know they exist and where to find them. Managing this process is called service discovery.
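To make the problem concrete, here is a minimal sketch of a service registry: a shared lookup table that service instances register with on startup and that callers consult to resolve a logical name to concrete addresses. The service names and addresses below are hypothetical, and real systems use purpose-built tools (such as Consul or etcd) or the platform’s built-in DNS rather than an in-memory dictionary.

```python
class ServiceRegistry:
    """Toy in-memory service registry, for illustration only."""

    def __init__(self):
        # Maps a logical service name to the addresses of its instances.
        self._services = {}

    def register(self, name, address):
        # A new service instance announces itself when it starts up.
        self._services.setdefault(name, []).append(address)

    def lookup(self, name):
        # Callers resolve a logical name to the current instance addresses.
        return self._services.get(name, [])


registry = ServiceRegistry()
registry.register("orders", "10.0.0.7:8080")
registry.register("orders", "10.0.0.9:8080")
print(registry.lookup("orders"))  # → ['10.0.0.7:8080', '10.0.0.9:8080']
```

The hard parts that this sketch omits (health-checking instances, removing dead ones, keeping the registry itself highly available) are exactly what dedicated service-discovery systems handle.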
Kubernetes has an aptly named object to handle this problem: the Service. Services are named that because Kubernetes expects you to use microservice architecture, though in practice you could serve one giant app as a single Service, or each tier of a three-tier app as its own Service. In any case, the Service provides an endpoint: a consistent IP address and DNS name for accessing that microservice. The Service then load-balances requests across a Deployment, which manages a set of identical Pods, each running your application code for that microservice.
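As a rough sketch, a Service fronting a Deployment might look like the manifest below. All names, labels, images, and ports here are hypothetical; the point is the shape: the Deployment keeps three identical Pods running, and the Service selects them by label and gives them one stable DNS name.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3                # keep three identical Pods running
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: example.com/orders:1.0   # hypothetical image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: orders               # other Pods can reach this at http://orders
spec:
  selector:
    app: orders              # routes to any Pod with this label
  ports:
  - port: 80
    targetPort: 8080
```

Other services in the same namespace can then call `http://orders` and let Kubernetes worry about which Pods are currently behind that name.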
Pods are ephemeral: they may be destroyed and recreated frequently, and when they are, there’s no guarantee they will come back up at the same IP address. In fact, Kubernetes normally doesn’t even keep track of which Pod is which; it just makes sure the right number of Pods is running for each Deployment. The exception to the rule is the StatefulSet, which we covered a few weeks ago in “Managing State in Kubernetes.” And for more information on configuring Services with Deployments, see our earlier post “Scaling and Load Balancing in Kubernetes.”
While Kubernetes has service discovery baked in, Istio adds to it and can do a whole lot more. It’s designed to make complex microservice applications run predictably and securely, while giving you enhanced visibility into the complex interactions going on between your microservices.
Istio has a wide range of features to help you connect, secure, control, and observe your microservices. It is probably best known for traffic management, which it handles by installing Envoy in all your Pods. Envoy is a proxy that intercepts all your HTTP requests and controls how they are routed and secured.
The Envoy proxy is installed in your Pods as what is often referred to as a “sidecar.” A sidecar is simply a secondary container, separate from your application code, which runs in the primary container. As a general best practice, you include only one container per Pod, rather than squeezing separate containers doing separate things into the same one. Sidecars are the exception, and they typically contain operational components like proxies and monitoring agents.
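To make the sidecar pattern concrete, here is a simplified, hypothetical Pod spec with two containers: the application in the primary container and a proxy in the sidecar. The names and images are illustrative only; in practice you don’t write this by hand, because Istio injects its proxy sidecar into your Pods for you.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: orders
spec:
  containers:
  - name: app                       # primary container: your application code
    image: example.com/orders:1.0   # hypothetical image
    ports:
    - containerPort: 8080
  - name: proxy                     # sidecar: intercepts the Pod's traffic
    image: envoyproxy/envoy:latest  # illustrative tag
```

Because both containers share the Pod’s network namespace, the proxy can intercept traffic to and from the application without the application knowing the proxy is there.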
Istio is also great for combining multiple Kubernetes clusters into one giant mesh that works together. While you can achieve this with Kubernetes Federated Clusters, that is a newer and less battle-tested feature, and Istio is known as the more robust and established way to go about it. Similarly, Istio can extend most of its features to external services, allowing you to access your own Kubernetes services (across multiple clusters), non-Kubernetes internal services, and even external services (such as Twilio) through the Envoy proxy.