In the previous article, we discussed the theory behind DevOps and Infrastructure as Code. It's now time to dig into the practical side of these topics and see actual results from a CI/CD pipeline that runs Terraform and deploys the infrastructure for a Kubernetes cluster capable of hosting a 5G networking solution.
The entire talk can be watched on the Xchange conference platform. It's available to all Replyers and Reply's customers.
First, in this practical article, we want to share the code used for the different Terraform demos. The main.tf file of each demo is also shown in the screenshots below.
Of course, we could do all of this manually by typing a couple of commands on our local machine (assuming AWS credentials are set up). The commands we need to do this locally are:
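Assuming the standard Terraform workflow (the same steps the pipeline later in this article automates), a minimal local run looks like the sketch below; the directory name is taken from the demo folder used in the pipeline configuration:

```shell
# Move into the demo directory containing the main.tf
cd infrastructure/03-EKS-Cluster-Deployment-Auto

# Download providers and initialise the working directory
terraform init

# Check the configuration for syntax and consistency errors
terraform validate

# Preview the changes Terraform would make
terraform plan

# Create the infrastructure (add -auto-approve to skip the confirmation prompt)
terraform apply

# Tear everything down again when finished
terraform destroy
```

These are the exact commands the automated pipeline wraps into its build, deploy and destroy stages.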
Automation couldn't be missing from one of our articles, since we have covered DevOps, CI/CD and other automation concepts in previous blogs. Below we can see what a GitLab CI pipeline looks like in the context of provisioning the infrastructure. The three stages are responsible for automatically building (init & validate), deploying and, when needed, destroying the infrastructure.
As with any code, the tools we need to trigger this pipeline automatically are our IDE (we used IntelliJ), git, and the GitLab CI setup (the .gitlab-ci.yml is attached below). Once we make a change and push the code to our repository, the pipeline is triggered and the stages are executed.
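A commit and push are all it takes to kick off the pipeline; a typical sequence looks like the following (the branch name and commit message here are purely illustrative):

```shell
# Stage the modified Terraform files
git add infrastructure/

# Record the change locally
git commit -m "Update EKS cluster configuration"

# Push to the remote; GitLab CI detects the push and starts the pipeline
git push origin main
```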
image:
  name: rockcontent/terraform-awscli-kubectl-kops:0.14.3-1.17.3-1.17.1
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
    - 'AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}'
    - 'AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}'
    - 'AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}'

stages:
  - build
  - deploy
  - destroy

before_script:
  - rm -rf .terraform
  - terraform --version
  - cd infrastructure/03-EKS-Cluster-Deployment-Auto
  - terraform init

build:
  stage: build
  script:
    - terraform validate
    - terraform plan

deploy:
  stage: deploy
  allow_failure: true
  script:
    - aws --version
    - terraform apply -auto-approve=true

destroy:
  stage: destroy
  script:
    - terraform destroy -auto-approve
  when: manual
Finally, following a successful execution of the pipeline (i.e. a green pipeline), we can browse to the EC2 section of our AWS account and see which instances have been launched. Below we can see the t2.small EC2 instances created as part of the Elastic Kubernetes Service (EKS) cluster when executing the Terraform files in the "03-EKS-Cluster-Deployment-Auto" directory. A t2.micro instance is also created by the Terraform code in the "02-AWS-EC2-Provision-Auto" directory when the .gitlab-ci.yml points to that directory (in the before_script section).
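Instead of the console, the same check can be done from the command line with the AWS CLI (which is already available in the pipeline's container image). This is a sketch assuming credentials and a default region are configured locally:

```shell
# List all running instances with their ID, type and state
aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].[InstanceId,InstanceType,State.Name]" \
  --output table
```

After a green pipeline run against the EKS demo, the t2.small worker nodes should appear in this list.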
Before you go, we would like to thank you for reaching this point; we really hope we have managed to pass some knowledge on to you. This article has delved into the practical side of DevOps and Infrastructure as Code (IaC), and you can check out our previous article for the theoretical background. If you would like to explore DevOps and IaC further, at Reply we offer the opportunity to discuss these topics and tailor them to your services and needs. Feel free to browse Reply Online Services (ROSe) at rose.reply.com and book an appointment with us.
Otherwise...
We are technology consultants at Net Reply UK. The team consists of software developers and technology enthusiasts specialising in telecommunications and technological concepts such as Software Defined Networking (SDN), Network Function Virtualisation (NFV) and DevOps. Our mission is to build the next generation of networks by leveraging the art of software and the latest technological trends. If you would like more information, feel free to reach out on LinkedIn (stelios-moschos, thayná-dorneles). Alternatively, you can learn more about us on LinkedIn (Net UK) and Twitter (Net UK).