
An AWS pipeline created with Terraform. Ansible playbooks build a Jenkins server, which analyzes code with SonarQube, then builds a container with Docker, publishes on JFrog, and deploys on Kubernetes.


terraform-aws-DevOps-pipeline

(DevOps workflow diagram)

Tools Used:

  • AWS: hosting of all CI/CD resources and permissions to those resources
  • EC2: instances that run the various servers in the application architecture
  • Maven: manages dependencies and compiles the Java code
  • Jenkins: builds and monitors automation of the application deployment process
  • Ansible: playbooks for establishing the Jenkins master and slave servers
  • SonarQube: automatic analysis of code for bugs and bad design
  • JFrog Artifactory: manages build artifacts and stores Docker images
  • Docker: containerizes the application and server
  • Kubernetes: Docker container management and fault tolerance with a load balancer
  • EKS: AWS-managed Kubernetes
  • eksctl: command-line management of EKS clusters via the AWS CLI
  • kubectl: command-line management of Kubernetes clusters

Purpose:

The purpose of this project was to learn about using Terraform in a CI/CD pipeline, along with implementing SonarQube for code analysis. Compared to the last CI/CD pipeline I worked on, I appreciated how easy Terraform made it to build and destroy all of the resources; this is especially helpful when working with AWS, where resources can be expensive if you leave them running.

Overview:

This project uses Terraform to establish a VPC, subnets, and security groups, and to build the base EC2 servers for Jenkins-Master, Jenkins-BuildSlave, and Ansible. We then run Ansible playbooks that connect to the Jenkins EC2s and prepare them for operation (installing Jenkins, Java, Maven, and Docker, and starting those services). Next, we establish a build pipeline on Jenkins so that code changes trigger builds, using a webhook on GitHub. We also create a Jenkinsfile that defines the stages to execute, starting with build and unit-test stages. We then add a SonarQube stage that reports code deficiencies, and create a Quality Gate that fails builds when thresholds are crossed. JFrog Artifactory receives the build artifacts (the .jar) and the Docker image: the build slave builds a Docker container, and we create Jenkins stages for both building and publishing it. Finally, we create Terraform scripts for EKS and its necessary security groups (resource policies, etc.). We then create Kubernetes manifest files along with a shell script to execute them; these files create a namespace, connect to JFrog, deploy to EKS, and establish a service. We add a "deploy" stage to the Jenkinsfile that runs the shell script, and at that point the pipeline is complete.

Ansible in this instance is used to quickly set up the environments of the Jenkins servers on Ubuntu distributions, giving them the ability to connect to Docker, run the local project, and run Jenkins itself. Jenkins needs to be able to build both the code and the container, since it is responsible for sending updated containers to JFrog. Ansible itself has to be set up manually, but it is the only component that does.

Jenkins Jobs running:

Screenshot 2023-11-26 at 12 07 45 PM

Jenkins multibranch pipeline job for branches:

Screenshot 2023-11-26 at 12 37 13 PM

AWS instances for Jenkins, the build slave, Ansible, and EKS:

Screenshot 2023-11-26 at 12 16 03 PM

Documentation from start to end:

As I will be deleting all of these AWS resources to avoid charges on my account, I have documented the entire process thoroughly.

Install Terraform and add it to the PATH so it can be executed from anywhere. I simply placed it in /usr/local/bin.

Install AWS CLI

Create a new IAM user and grab its credentials for the CLI

Screenshot 2023-11-24 at 10 51 38 AM

Run a test command to ensure the AWS CLI is working. Alternatively, we could connect via CloudShell.

Screenshot 2023-11-24 at 10 55 08 AM

Now we create a basic Terraform file for an EC2 instance:

Screenshot 2023-11-24 at 2 49 03 PM
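For reference, a minimal sketch of such a file; the region, AMI ID, and key name below are placeholders, not my exact values:

```hcl
# Hypothetical minimal EC2 definition; region, AMI, and key name are placeholders.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "demo" {
  ami           = "ami-0123456789abcdef0" # placeholder Ubuntu AMI
  instance_type = "t2.micro"
  key_name      = "dpp"

  tags = {
    Name = "demo-server"
  }
}
```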

Now we run terraform init, terraform validate, terraform plan, and terraform apply, which starts the EC2 instance:

Screenshot 2023-11-24 at 2 55 33 PM

Note: previously I was able to use Instance Connect by default because I had been using Amazon Linux 2 EC2 instances. This time, I had to SSH into the instance from my Bash terminal with a .pem key, since Amazon Linux 2 is approaching end of life.

Amazon Linux

ssh -i <path to pem>.pem ec2-user@<public IP>

Ubuntu

ssh -i <path to pem>.pem ubuntu@<public IP>

Additionally, we need to alter the Terraform file to create a security group that allows SSH:

Screenshot 2023-11-24 at 3 45 41 PM

We can see the security group was successfully added:

Screenshot 2023-11-24 at 4 04 02 PM

Now we continue altering the Terraform file, adding a VPC:

Screenshot 2023-11-24 at 4 49 22 PM

Two subnets in different availability zones

Screenshot 2023-11-24 at 4 51 51 PM

An internet gateway plus a route table that connects to that gateway:

Screenshot 2023-11-24 at 4 54 00 PM

Then we need to add the route table association:

Screenshot 2023-11-24 at 5 00 30 PM
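Sketching the networking pieces from the last few steps in one place (the CIDR blocks and availability-zone names are illustrative, not my exact values):

```hcl
resource "aws_vpc" "dpp_vpc" {
  cidr_block = "10.1.0.0/16"
}

# Two subnets in different availability zones
resource "aws_subnet" "subnet_1" {
  vpc_id            = aws_vpc.dpp_vpc.id
  cidr_block        = "10.1.1.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_subnet" "subnet_2" {
  vpc_id            = aws_vpc.dpp_vpc.id
  cidr_block        = "10.1.2.0/24"
  availability_zone = "us-east-1b"
}

# Internet gateway plus a route table that sends outbound traffic through it
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.dpp_vpc.id
}

resource "aws_route_table" "rt" {
  vpc_id = aws_vpc.dpp_vpc.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

# The association step: tie each subnet to the route table
resource "aws_route_table_association" "rta_1" {
  subnet_id      = aws_subnet.subnet_1.id
  route_table_id = aws_route_table.rt.id
}
```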

And we can see all these resources being formed:

Screenshot 2023-11-24 at 5 14 19 PM

I altered the Terraform so it dynamically creates EC2 instances for the Jenkins master, the build slave, and Ansible:

Screenshot 2023-11-24 at 5 22 58 PM

And here we can see those running

Screenshot 2023-11-24 at 5 31 36 PM

Roadblock (this took me a couple of hours to debug): when you SSH into an Ubuntu instance, you need to set the user to "ubuntu", not "ec2-user".

Run the following commands to install Ansible on the Ubuntu server:

sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install ansible

For the Jenkins-master setup, I'm going to use the private IP, since the public IP will change over time. These servers are in the same VPC, so connectivity is not a problem.

I create a "hosts" file on the Ansible server that contains the Jenkins master IP as well as some variables that allow the connection: a username of "ubuntu" and the dpp.pem key file, which I'll copy onto the server.

Screenshot 2023-11-24 at 6 12 28 PM
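The hosts file is roughly this shape (the IP and group name below are placeholders):

```ini
[jenkins-master]
10.1.1.10

[jenkins-master:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=/opt/dpp.pem
```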

Roadblock: if you want Ansible to have access to another EC2 instance, it needs the key-pair .pem file. I normally program on a Mac and tried to use the scp command

scp -i dpp.pem dpp.pem ubuntu@<address>:/opt

to get it into the folder of interest; however, it would only work for the default user, not when I was logged in as root. So, as you can see, I switched to Windows, downloaded MobaXterm, and imported the file that way.

image

Now, Ansible can successfully connect to the Jenkins-master:

image

I also added the slave to the hosts file

image

Now, we can add an ansible playbook to our Ansible server for setting up Jenkins:

Screenshot 2023-11-24 at 9 07 23 PM
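In outline, the playbook does something like the following; the package names and repository URL follow the standard Jenkins installation steps for Ubuntu, so treat the details as a sketch rather than my exact playbook:

```yaml
# Sketch of a Jenkins setup playbook for Ubuntu hosts; details are illustrative.
- hosts: jenkins-master
  become: true
  tasks:
    - name: Install Java (required by Jenkins)
      apt:
        name: openjdk-17-jre
        update_cache: yes

    - name: Add the Jenkins apt repository key
      apt_key:
        url: https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
        state: present

    - name: Add the Jenkins apt repository
      apt_repository:
        repo: deb https://pkg.jenkins.io/debian-stable binary/
        state: present

    - name: Install Jenkins
      apt:
        name: jenkins
        update_cache: yes

    - name: Ensure the Jenkins service is running
      service:
        name: jenkins
        state: started
        enabled: yes
```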

I add port 8080 to the ingress rules in our Terraform so we can access the Jenkins server via the browser, and it successfully loads:

Screenshot 2023-11-24 at 9 40 10 PM

Now, we want to write an Ansible playbook to install Maven on our Jenkins slave, which will be executing the tasks queued by the Jenkins master:

Screenshot 2023-11-24 at 9 42 52 PM

We want our Jenkins-master to be able to access the build server, so we add the .pem credentials:

Screenshot 2023-11-25 at 12 00 13 PM

Then we create the slave node:

Screenshot 2023-11-25 at 12 13 48 PM

I create a test job and successfully execute a test command:

Screenshot 2023-11-25 at 12 28 08 PM

Now we create a new job to run the local application. We create a test pipeline script:

Screenshot 2023-11-25 at 1 02 04 PM

Which runs successfully:

Screenshot 2023-11-25 at 1 04 10 PM

Now, we move the pipeline code into a Jenkinsfile in the local repository and connect it via SCM:

Screenshot 2023-11-25 at 1 25 39 PM

I then update the Jenkinsfile to build with Maven:

stage("build") {
    steps {
        echo "----------- build started ----------"
        sh 'mvn clean deploy -Dmaven.test.skip=true'
        echo "----------- build completed ----------"
    }
}

Then I add my GitHub repository credentials to Jenkins; however, this is only necessary if the repository is private:

Screenshot 2023-11-25 at 1 44 58 PM

I create a multibranch pipeline for the project and add dev and stage branches:

Screenshot 2023-11-25 at 2 53 04 PM

I install the Multibranch Scan Webhook Trigger plugin in Jenkins, then add a webhook on GitHub that triggers the multibranch pipeline job.

Screenshot 2023-11-25 at 2 36 59 PM Screenshot 2023-11-25 at 2 33 57 PM

Now, after a commit, we can see the webhook automatically trigger the multibranch job; it builds whichever branch the changes were made to:

Screenshot 2023-11-25 at 2 38 29 PM

Now we load up SonarQube, create a security token, and add that token to Jenkins.

Screenshot 2023-11-25 at 6 41 02 PM

Then, we add SonarQube server to Jenkins.

Screenshot 2023-11-25 at 6 44 02 PM Screenshot 2023-11-25 at 6 45 52 PM

I create an "organization" on sonarcloud.io, add a project key, and then add a sonar-project.properties file to my repo:

Screenshot 2023-11-25 at 6 58 49 PM

Then we add a stage to the Jenkinsfile for SonarQube analysis:

Screenshot 2023-11-25 at 7 09 56 PM
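The analysis stage looks roughly like this; the tool name "sonar-scanner" and the server name "sonar-server" are whatever you configured under Manage Jenkins, so treat both as placeholders:

```groovy
stage("SonarQube analysis") {
    environment {
        // "sonar-scanner" must match the scanner tool name configured in Jenkins
        scannerHome = tool 'sonar-scanner'
    }
    steps {
        // "sonar-server" is the server name configured under SonarQube servers
        withSonarQubeEnv('sonar-server') {
            sh "${scannerHome}/bin/sonar-scanner"
        }
    }
}
```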

Now we are able to get stats from SonarQube about the quality of our code, such as bugs, code smells, and others. You can add quality gates for things like code duplication or bug counts, to fail builds when thresholds are crossed. I built one for 50+ bugs in the code (however, Sonar considers a LOT of things "bugs"), then added a "Quality Gate" stage to the Jenkinsfile:

Screenshot 2023-11-25 at 9 06 49 PM
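A sketch of the Quality Gate stage, using the waitForQualityGate step from the SonarQube Scanner plugin (the timeout value is illustrative):

```groovy
stage("Quality Gate") {
    steps {
        timeout(time: 1, unit: 'HOURS') {
            // Waits for SonarQube's webhook callback; aborts the build if the gate fails
            waitForQualityGate abortPipeline: true
        }
    }
}
```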

Now I would like to use JFrog Artifactory to publish a Docker image of my application. First, we create an access token on JFrog and add it to the Jenkins credentials. We also need to install the Artifactory plugin in Jenkins.

Screenshot 2023-11-25 at 10 01 05 PM

Now, we add a stage to our Jenkinsfile to capture the .jar file created by our project and store it in JFrog Artifactory.

Screenshot 2023-11-25 at 10 07 26 PM
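Using the Artifactory plugin's scripted steps, the publish stage is roughly the following; the server credentials ID, repository name, and paths are placeholders, not my exact configuration:

```groovy
stage("Jar Publish") {
    steps {
        script {
            // "artifactory-cred" is a placeholder Jenkins credentials ID
            def server = Artifactory.newServer url: "https://mfkimbell.jfrog.io/artifactory",
                                               credentialsId: "artifactory-cred"
            // File spec: upload any jar from target/ into a placeholder repo
            def uploadSpec = """{
              "files": [
                {
                  "pattern": "target/(*).jar",
                  "target": "libs-release-local/{1}.jar"
                }
              ]
            }"""
            def buildInfo = server.upload(uploadSpec)
            server.publishBuildInfo(buildInfo)
        }
    }
}
```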

Roadblock: if at first you don't succeed... I ended up having a couple of typos that made this frustrating to figure out.

Screenshot 2023-11-25 at 10 19 06 PM

Here we can see the files successfully uploaded to JFrog:

Screenshot 2023-11-25 at 10 21 56 PM

However, we want to deploy this as a microservice, so it needs to run on Docker. First, we need to install Docker on our Jenkins slave by updating the Ansible playbook jenkins-slave-setup.yaml:

Screenshot 2023-11-25 at 10 32 34 PM

And it succeeded! (after some syntax changes not shown)

Screenshot 2023-11-25 at 10 51 59 PM

We create a Dockerfile that runs the .jar:

Screenshot 2023-11-25 at 11 02 50 PM
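The Dockerfile is essentially "copy the jar, run it"; the base image and jar name below are placeholders:

```dockerfile
# Minimal sketch: run the built jar on a JDK base image (names are placeholders)
FROM openjdk:17-jdk-slim
COPY target/demo-app.jar /app/demo-app.jar
EXPOSE 8000
ENTRYPOINT ["java", "-jar", "/app/demo-app.jar"]
```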

Here we can see our Dockerfile on the slave after committing:

Screenshot 2023-11-25 at 11 06 09 PM

I create a docker repository on JFrog:

Screenshot 2023-11-25 at 11 13 58 PM

I install the Docker Pipeline plugin in Jenkins, then add Docker Build and Docker Publish stages to the Jenkinsfile:

Screenshot 2023-11-25 at 11 23 37 PM Screenshot 2023-11-25 at 11 24 33 PM
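With the Docker Pipeline plugin, the two stages look roughly like this; the registry hostname matches my JFrog instance, but the repository path, image name, and credentials ID are placeholders:

```groovy
stage("Docker Build") {
    steps {
        script {
            // Builds the image from the Dockerfile in the workspace root
            app = docker.build("mfkimbell.jfrog.io/demo-docker-local/demo-app:${BUILD_NUMBER}")
        }
    }
}
stage("Docker Publish") {
    steps {
        script {
            // "jfrog-cred" is a placeholder credentials ID for the JFrog user
            docker.withRegistry("https://mfkimbell.jfrog.io", "jfrog-cred") {
                app.push()
            }
        }
    }
}
```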

We have been blessed with no failures:

Screenshot 2023-11-25 at 11 33 59 PM Screenshot 2023-11-25 at 11 35 07 PM Screenshot 2023-11-25 at 11 36 02 PM

We can manually start the container now:

Screenshot 2023-11-25 at 11 38 57 PM

Now, after we open port 8000 on Jenkins-Slave, we can access the application:

Screenshot 2023-11-25 at 11 43 23 PM

I renamed the original Terraform file, since we are now going to have Terraform files for EKS/Kubernetes as well as security group management:

Screenshot 2023-11-26 at 12 10 26 AM

I will not show these files in their entirety since they are long; however, they allocate the necessary resources for EKS: policies for EKS to access what it needs, an IAM role for the EC2 EKS workers, an autoscaling policy, and various resource policies for S3, the daemon (for tracking), and other things. We have an output file for our endpoint, and a variables file to keep track of IDs such as the security group, subnet, and VPC IDs.

To our original VPC Terraform file, we add lines to pull in our other Terraform files:

Screenshot 2023-11-26 at 12 34 51 AM Screenshot 2023-11-26 at 12 30 07 AM

We can see the EKS cluster has been created upon running the Terraform:

Screenshot 2023-11-26 at 12 23 51 AM

We can also see all of our EC2 instances running:

Screenshot 2023-11-26 at 12 32 20 AM

Now, we set up the AWS CLI on the build slave:

Screenshot 2023-11-26 at 12 48 41 AM

We fetch the Kubernetes cluster credentials, and now we can use kubectl commands:

Screenshot 2023-11-26 at 12 49 55 AM
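Fetching the credentials comes down to one AWS CLI call; the region and cluster name below are placeholders:

```shell
# Writes the EKS cluster's connection info into ~/.kube/config
aws eks update-kubeconfig --region us-east-1 --name dpp-eks-cluster

# Sanity check: list the worker nodes
kubectl get nodes
```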

Now we can add Kubernetes manifest files to our project: files that (1) create a namespace, (2) establish secret credentials for JFrog, (3) deploy the pods, and (4) create a service to expose the deployment.

Screenshot 2023-11-26 at 11 32 28 AM
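Condensing those manifests into one sketch; the names, image path, and replica count are placeholders, and the JFrog pull secret referenced here would be created separately from the dockercred user:

```yaml
# Sketch of the namespace + deployment + service manifests; names are placeholders.
apiVersion: v1
kind: Namespace
metadata:
  name: demo-ns
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
  namespace: demo-ns
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: mfkimbell.jfrog.io/demo-docker-local/demo-app:latest
          ports:
            - containerPort: 8000
      imagePullSecrets:
        - name: jfrog-pull-secret   # secret built from the dockercred JFrog user
---
apiVersion: v1
kind: Service
metadata:
  name: demo-service
  namespace: demo-ns
spec:
  type: LoadBalancer
  selector:
    app: demo-app
  ports:
    - port: 3082        # the port opened on the EKS side
      targetPort: 8000  # the port the container listens on
```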

We do a test run on the build slave:

Screenshot 2023-11-26 at 11 50 07 AM Screenshot 2023-11-26 at 11 53 13 AM

We can see it's in "ImagePullBackOff", since it's failing to pull the JFrog image. I make a user called dockercred on JFrog, and then I use docker login https://mfkimbell.jfrog.io to log in as that user on the build slave. We set up deploy.sh to run, and after applying service.yaml and opening up port 3082 on the EKS containers, we can see our application running:

Screenshot 2023-11-26 at 12 19 33 PM

Now, we go to our Jenkinsfile and add a stage that automatically executes deploy.sh, which applies the Kubernetes manifest files:

Screenshot 2023-11-26 at 12 29 21 PM
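The script itself just applies the manifests in order; a sketch, with placeholder manifest file names:

```shell
#!/bin/bash
# deploy.sh — apply the Kubernetes manifests in dependency order
# (the manifest file names below are placeholders)
kubectl apply -f namespace.yaml
kubectl apply -f secret.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
```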

After the commit we can see our job rerunning:

Screenshot 2023-11-26 at 12 31 49 PM

And again we see our application up and running via Kubernetes pods:

Screenshot 2023-11-26 at 12 32 46 PM

You live and you learn.... Don't leave EKS running...

Screenshot 2023-11-26 at 12 49 27 PM
