CI/CD-Controlled Infrastructure

Wait, What? 

“Wait, what?” was my first response when I heard of using CI/CD to deploy infrastructure. I knew that modern infrastructure hosted CI/CD Pipelines to deploy code, but I had not heard of CI/CD Pipelines deploying the infrastructure itself, even the very infrastructure the CI/CD Pipeline was running on! When you step back and consider the idea of SDDCs (Software-Defined Data Centers) and Infrastructure as Code, though, managing infrastructure with a CI/CD Pipeline seems like the natural next step.  

A traditional CI/CD Pipeline is a software tool (like Jenkins) or series of tools that allows hosted software to be updated by a simple push to a Git repo. Without a CI/CD Pipeline, a developer would finish an update and push the repo, and then someone on the infrastructure team might deploy the new code by logging into the server, doing a Git pull, updating packages, restarting the application, and so on; this would often be only partially automated. With a CI/CD Pipeline, the developer merely pushes the repo, and the rest happens automatically. When this is well supported with high test coverage and solid best practices, the result is the ability to Continuously Integrate and Continuously Deploy (CI/CD) software. 

For the most part, CI/CD Pipelines have been tools set up by infrastructure teams for development teams. At the same time, infrastructure teams were increasingly embracing the Infrastructure as Code movement and creating Software-Defined Data Centers, which meshed well with public cloud offerings like GCP and AWS that were ready to be software-defined. These twin revolutions paved the way for the strategy this article describes: turning those CI/CD Pipelines back on the infrastructure itself, and making changes to it the same way a developer updates an application in a pipeline (change some text, then Git push). 

Wait, but Why? 

Okay, so now you understand what is meant by CI/CD-Controlled Infrastructure, and perhaps it even sounds cool and cutting edge, but what benefits can you expect to see from this strategy? The answer: a whole heck of a lot – from consistency and repeatability to ease of rollbacks and records of accountability, not to mention saving time and headaches.  

Defining your infrastructure in configuration files gives you a nice, clean, readable, consistent place to see the whole picture. You can easily make changes through quick text edits, rather than hunting through a variety of GUIs, or even messing with a bunch of SDKs or CLIs. Many small human errors are removed through the consistency of declaring the desired state of the infrastructure in standardized config files.  

The real magic reveals itself as you start to consider the inherent advantages of using Git for infrastructure. Have you ever wondered who keeps making certain configuration changes? Git provides a complete, attributed record of every change. Need to roll back to a previous state? No problem, Git is literally all about version history. Curious when a certain change happened, or what exactly changed? Use Git to trace each change to a commit and diff between versions. Additionally, you can use branches to organize who is working on what, and even swap between configurations.  
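To make this concrete, here is a minimal, self-contained sketch of that audit trail in action on a toy infra repo (the repo path, file name, and commit messages are all illustrative assumptions):

```shell
#!/bin/sh
# Demo of Git's audit-trail features on a throwaway infra config repo.
# All names and paths here are illustrative assumptions.
set -e
rm -rf /tmp/infra-demo && mkdir /tmp/infra-demo && cd /tmp/infra-demo
git init -q
git config user.email ops@example.com && git config user.name "Ops Team"

echo "replicas: 3" > web-service.yaml
git add . && git commit -qm "Initial state: 3 replicas"

echo "replicas: 6" > web-service.yaml
git commit -qam "Scale web service to 6 replicas"

git log --oneline             # who changed what, and when
git diff HEAD~1 HEAD          # exactly what changed between versions
git revert --no-edit HEAD     # roll back to the previous state
cat web-service.yaml          # the file is back to 3 replicas
```

Every question in the paragraph above maps to a one-line Git command, which is exactly the appeal.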

As you step back and consider all the advantages, it becomes clear that this method is an easy choice, for the same reasons a CI/CD pipeline (and a version-controlled repository) for deploying software is an easy choice.  

Okay, so How?  

This question is both easy and difficult to answer. On one hand, it’s easy – just use a standard CI/CD Pipeline and configure it to run the appropriate commands when a change is made. On the other hand, this is a new strategy, there are not a lot of best practices out there, and there are potential downsides (it could be pretty easy to spread a bad configuration far and wide).  

Let’s consider a simple example, like scaling a Service in Kubernetes. You would have a set of YAML files describing the desired state of your application. Let’s say one of the (micro) Services is currently scaled across 3 identical Pods, and you want to double the number of Pods for that Service. You would find the YAML file for that Service’s Deployment, change the replica count from 3 to 6, and then run the CLI command: kubectl apply -f myYaml.yaml 
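As a sketch, the Deployment manifest behind that Service might look like the following (all names and the image are assumptions; the apply step is guarded so the snippet is safe to run even without kubectl or a cluster):

```shell
#!/bin/sh
# Hypothetical Deployment manifest for the example; names are assumptions.
cat > my-service-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 6            # changed from 3 to double the Pod count
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: example/my-service:1.0   # placeholder image
EOF

# Apply the desired state (skipped gracefully if there is no cluster)
if command -v kubectl >/dev/null 2>&1; then
  kubectl apply -f my-service-deployment.yaml || echo "apply failed (no cluster?)"
fi
```

The key point is that the scaling change is just a one-character text edit; kubectl apply reconciles the cluster to whatever the file declares.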

Moving that process into a CI/CD pipeline is actually pretty simple. First, you’d store all your YAML files in a Git repo, then set up a webhook to trigger your pipeline whenever the dev branch of that repo is pushed. Finally, you’d configure the pipeline to run kubectl apply on any changed YAML files. Boom – CI/CD for Infra complete (though this is just a simplistic example).  
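The pipeline step itself could be sketched as below. This is a toy version: it builds a throwaway repo in place of the pipeline’s checked-out workspace, and it stubs kubectl with echo so the sketch runs anywhere (in a real pipeline, KUBECTL would simply be kubectl):

```shell
#!/bin/sh
# Sketch of the webhook-triggered pipeline step: find the YAML files
# changed in the last push and apply each one. Repo contents, file
# names, and the kubectl stub are all illustrative assumptions.
set -e
rm -rf /tmp/pipeline-demo && mkdir /tmp/pipeline-demo && cd /tmp/pipeline-demo
git init -q
git config user.email ci@example.com && git config user.name CI
echo "replicas: 3" > svc-a.yaml
echo "replicas: 2" > svc-b.yaml
git add . && git commit -qm "initial state"
echo "replicas: 6" > svc-a.yaml
git commit -qam "scale svc-a"

# Stub so the sketch runs without a cluster; swap in the real kubectl.
KUBECTL=${KUBECTL:-"echo kubectl"}

for f in $(git diff --name-only HEAD~1 HEAD -- '*.yaml'); do
  $KUBECTL apply -f "$f" >> applied.log
done
cat applied.log   # only the changed file (svc-a.yaml) was re-applied
```

Applying only the changed files keeps the pipeline fast, though some teams prefer re-applying the whole repo on every push for simplicity.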

Now, back to adding Pods to the Service. All the ops team would have to do is change the YAML file locally (assuming they’d cloned the Git repo), then push the dev branch (assuming that’s the branch with the pipeline webhook). The webhook would fire, triggering the pipeline to do its thing: running kubectl apply on the updated YAML file.  

Now imagine your whole infrastructure is described in config files, all of which can be applied in some automated way. You’d then be able to make changes across the infrastructure in the config files, push the repo, and your new infrastructure design would just roll itself out. Keep in mind, however, that this is a fairly new strategy and will undoubtedly be fraught with peril. Be sure to research and test carefully before using it in production.  

If you are ready to give it a whirl, try hooking up Terraform to GitHub or Bitbucket with Jenkins. Basically, you house your Terraform config files in a Git repo, use a CI/CD Pipeline like Jenkins to watch for changes to the repo, and configure Jenkins to apply the Terraform files when they change (on a certain branch, at least). Please do not play with this in production, or even in your dev environment. Test it out in a safe space, then, as you get comfortable, bring the strategy into test and perhaps eventually production environments.
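The shell step such a Jenkins job runs might look roughly like this sketch (it assumes Terraform is installed and the repo has already been checked out into the workspace; the flags shown are standard Terraform CLI flags for non-interactive runs):

```shell
#!/bin/sh
# Sketch of a Jenkins build step for applying Terraform changes.
# Assumes the watched branch has just been checked out into the cwd.
set -e
terraform init -input=false               # install providers/modules
terraform plan -input=false -out=tfplan   # record the proposed changes
terraform apply -input=false tfplan       # apply only the reviewed plan
```

Writing the plan to a file and applying that file (rather than running a bare terraform apply) means the pipeline applies exactly the change set that was planned, which is a useful safety net when automation, not a human, is pulling the trigger.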

If you have any further questions about these strategies, or need assistance in implementation, don’t hesitate to reach out to us. 
