In a previous article, I asked the question, What Does it Mean to be Cloud Native? I explored some general properties that all Cloud Native applications should meet. Here I want to go into more detail and give guidance on how high up the pyramid you should take your project. Not all projects need to climb to the top. As with all things in software development, the correct answer to "how high should I go?" is, "it depends." It doesn't necessarily make sense to climb to Infrastructure as Code for a single-service application. You may be fine stopping at the containerization layer. Some people even ask whether they should consider containerization at all. Let's start looking at the different layers.
Most people agree that Cloud Native applications should be containerized, and the preferred containerization software is Docker. Containerization checks off most of the boxes (properties) I talked about in my previous post. Most DevOps systems can automate the creation of containers: they can read the Dockerfile and build the container, especially if the compilation of the application happens within the container. Containers remove the dependency on the underlying hardware, so your application can run on different operating systems, promoting flexibility. When a running container exits unexpectedly, you can usually spin up another instance; the same is true should you need to scale up to handle larger loads. This covers the resiliency and scalability associated with Cloud Native apps. And since a container can run on different hardware, it can also run on various cloud providers, taking advantage of the distributed nature of cloud computing.
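As a sketch of compiling the application within the container, here is a minimal multi-stage Dockerfile. The base images are real, but the project layout and binary name are hypothetical placeholders:

```dockerfile
# Build stage: compile the application inside the container,
# so CI only needs Docker, not a local toolchain.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
# The ./cmd/server path is a hypothetical project layout.
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Runtime stage: ship only the compiled binary, not the build tools.
FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

Because the build happens inside the first stage, the same `docker build` produces identical bits on a laptop and in the CI pipeline.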
Containers provide an added benefit when it comes to development and testing. Developers can build their applications and host them in containers, and testers can run their tests against those same containers. This is the same instance of the application that was built. Once all the tests have passed, the container can be deployed. The key is that the same bits and configuration that were on the dev machine are now deployed in the cloud, reducing the likelihood of a developer saying, "it worked on my machine."
The next step up the pyramid is orchestration. There are many orchestration engines out there, but the one that has edged out the others is Kubernetes (k8s). An orchestration engine is responsible for grouping containers into a logical application and centralizing management of the system. Google spent years building highly available systems on the foundation that became Kubernetes. Since k8s is an orchestrator for containers, it checks all the same boxes, with added benefits. You also get a lot more control with k8s: scaling and resiliency become configuration. Orchestration works on desired state. You tell k8s how many instances of each container you want, and it works to maintain that state. There are also ways to automate scalability, perhaps by spinning up a new container or launching a job. Monitoring applications in k8s becomes easier, too, as there are many third-party tools designed just for that purpose. Most cloud platforms have a managed Kubernetes offering, Azure Kubernetes Service, for instance. A managed solution means you don't have to worry about the hardware or about maintaining the control plane; the provider handles that. So, for the most part, your k8s deployments should run on each provider's offering.
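To make "desired state" concrete, here is a minimal Deployment manifest sketch. The name, image, and replica count are hypothetical; the point is that the instance count is just a field k8s works to maintain:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service              # hypothetical name
spec:
  replicas: 3                   # desired state: k8s keeps three instances running
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:1.0.0   # hypothetical image
          ports:
            - containerPort: 8080
```

If a pod crashes, the controller notices the actual state has drifted from `replicas: 3` and starts a replacement.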
Orchestration is as high as some organizations need to climb. If your applications are deployed and stable, hosting in k8s is fine. You can automate deployments by updating the YAML files that define your application. K8s is great at handling upgrades, especially zero-downtime upgrades. But if your hardware changes often, or if you plan on deploying many instances as in a SaaS offering, you may need to climb to the top.
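Zero-downtime upgrades are also just configuration. As a sketch, a Deployment's rollout behavior can be tuned with a rolling-update strategy stanza like this (the specific values are illustrative assumptions):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take an old pod down before its replacement is ready
      maxSurge: 1         # allow one extra pod above the replica count during rollout
```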
Infrastructure as Code
The top of the pyramid is Infrastructure as Code (IaC). IaC is the process of managing and provisioning your application, including the hardware, by describing it in a file. This file can then be checked into your source control system, and you can track changes over time. A tool that I use to manage my environments is Terraform. Terraform has providers that let you define your infrastructure on most cloud platforms, and on-premises as well. There are also Kubernetes and Helm providers that let you define your applications alongside the infrastructure needed to run them. Being able to group your application deployment with its infrastructure is excellent for SaaS offerings. It also helps out in your DevOps environment. Wouldn't it be nice to deploy a new test environment with your nightly build? Need to stand up an environment? Run a script. Not all projects require this level of development, but it is nice to know that the option exists. This level checks all the boxes when it comes to the properties of Cloud Native that I laid out in my last post. IaC does take a commitment to get all the configurations correct, especially in large applications, but once you have it dialed in, that time commitment pays for itself.
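As a hedged sketch of pairing an application with its infrastructure in Terraform, the fragment below defines a managed Azure Kubernetes cluster and deploys a Helm chart into it. All names, sizes, and the chart path are hypothetical:

```hcl
# Managed Kubernetes cluster (Azure shown; other providers follow the same pattern).
resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"      # hypothetical
  location            = "eastus"
  resource_group_name = "example-rg"       # hypothetical
  dns_prefix          = "example"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_DS2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}

# The application ships alongside its infrastructure via the Helm provider.
resource "helm_release" "app" {
  name  = "my-app"            # hypothetical release
  chart = "./charts/my-app"   # hypothetical chart path
}
```

With this in source control, standing up a complete test environment really is just `terraform apply`, and tearing it down is `terraform destroy`.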
This is a pyramid that I came up with myself. I feel that it encapsulates how to deploy Cloud Native applications, and it provides some tools that help you at each layer. The complexity of your application and the time you are willing to commit determine how high up the pyramid you need to travel. There are other ways to achieve the same results, but this technique has proven successful on projects I have worked on in the past. Comment down below with ways you have developed and deployed Cloud Native applications.