| Tool | Purpose |
| --- | --- |
| AWS | Hosting of all CI/CD resources and permissions to those resources |
| EC2 | Manages instances that run the various servers in the application architecture |
| Maven | Manages dependencies and compiles Java code |
| Jenkins | Builds and monitors automation of the application deployment process |
| Ansible | Scripts for establishing the Jenkins master and slave servers |
| SonarQube | Automatic analysis of code for bugs and bad design |
| JFrog Artifactory | Manages build artifacts and stores Docker containers |
| Docker | Containerizes the application and server |
| Kubernetes | Docker container management and fault tolerance with load balancing |
| EKS | AWS management of Kubernetes |
| eksctl | Command-line management of EKS clusters on AWS |
| kubectl | Command-line management of Kubernetes clusters |
The purpose of this project was to learn about using Terraform in a CI/CD pipeline, along with implementing SonarQube for code analysis. Compared to the last CI/CD pipeline I worked on, I appreciated how easy Terraform made it to build and destroy all of the resources. This is especially helpful when working with AWS, where resources can be expensive if you leave them running.
This project uses Terraform to establish a VPC, subnets, and security groups, and to build the base EC2 servers for Jenkins-Master, Jenkins-BuildSlave, and Ansible. Then, Ansible playbooks connect to the Jenkins EC2 instances and configure them for operation (installing Jenkins, Java, Maven, and Docker, and starting those services). We then establish a build pipeline on Jenkins so that changes in code trigger builds, using a webhook on GitHub. Additionally, we create a Jenkinsfile so we can define stages to execute, starting with build and unit-test stages. We then add a SonarQube stage that reports code deficiencies, and we create a Quality Gate to fail builds when thresholds are crossed. Next, we use JFrog Artifactory to receive the build artifacts (the .jar) and the Docker image; for the latter, we use our build slave to build a Docker container to upload as an artifact, with Jenkins stages for both building and publishing the container. Finally, we create Terraform scripts for EKS and its necessary security groups (resource policies, etc.). We then create Kubernetes manifest files along with a shell script to execute them; these files create a namespace, connect to JFrog, deploy to EKS, and establish a service. We add a "deploy" stage to the Jenkinsfile with the shell script, and at that point the pipeline is complete.
Ansible in this instance is used to quickly set up the environments for the Jenkins servers on Ubuntu distributions, with the ability to do things like connect to Docker, run the local project, and actually run Jenkins. Jenkins needs the ability to build the code and build the container, since it's responsible for sending updated containers to JFrog. Ansible has to be set up manually, but it's the only thing that needs to be.
Jenkins Jobs running:
Jenkins multi-pipeline job for branches:
AWS instances for Jenkins, build slave, Ansible, and EKS:
As I will be deleting all of these AWS resources to prevent charges on my account, I have documented this entire process thoroughly.
Install Terraform and add it to the PATH so it can be executed from anywhere. I simply added it to /usr/local/bin.
Install AWS CLI
Create a new IAM user and grab its credentials for the CLI
Run a test command to ensure the AWS CLI is working. Obviously, we could use CloudShell to connect as well.
Now we create a basic Terraform file for an EC2 instance:
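Something along these lines (a minimal sketch; the region, AMI ID, and key name are placeholders, not necessarily my exact values):

```hcl
provider "aws" {
  region = "us-east-1" # assumed region
}

resource "aws_instance" "demo" {
  ami           = "ami-0123456789abcdef0" # placeholder Ubuntu AMI ID
  instance_type = "t2.micro"
  key_name      = "dpp" # the key pair used for SSH later

  tags = {
    Name = "demo-instance"
  }
}
```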
Now we run `terraform init`, `terraform validate`, `terraform plan`, and `terraform apply`, which will start the EC2 instance:
Note: previously I was able to use Instance Connect by default because I had been using Amazon Linux 2 EC2 instances. This time, I had to SSH into the instance from my Bash terminal with a .pem key, as Amazon Linux 2 is becoming outdated soon.
Amazon Linux:

```bash
ssh -i <path to pem>.pem ec2-user@<public IP>
```

Ubuntu:

```bash
ssh -i <path to pem>.pem ubuntu@<public IP>
```
Additionally, we need to alter the Terraform file to create a security group that will allow SSH:
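A sketch of that security group (left open to the world here for the demo; in practice you would restrict the CIDR):

```hcl
resource "aws_security_group" "demo_sg" {
  name        = "demo-sg"
  description = "Allow SSH inbound"

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # open for the demo; lock down in practice
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```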
We can see the security group was successfully added:
Now we continue to alter the Terraform file, adding a VPC,
two subnets in different availability zones,
and an internet gateway plus a route table that connects to that gateway:
Then we need to add the route table association:
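For reference, those networking pieces look roughly like this in HCL (a sketch; the CIDR blocks and availability zones are illustrative, not necessarily the values I used):

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "subnet_1" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_subnet" "subnet_2" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1b"
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

resource "aws_route_table" "rt" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "rta_1" {
  subnet_id      = aws_subnet.subnet_1.id
  route_table_id = aws_route_table.rt.id
}
```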
And we can see all of these resources being created:
I altered the EC2 instance definitions to dynamically create the Jenkins master, build slave, and Ansible servers:
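A sketch of that dynamic approach, assuming a `for_each` over the server names (AMI, instance type, and resource references are placeholders consistent with the earlier sketches):

```hcl
variable "server_names" {
  type    = list(string)
  default = ["jenkins-master", "jenkins-slave", "ansible"]
}

resource "aws_instance" "servers" {
  for_each = toset(var.server_names)

  ami                    = "ami-0123456789abcdef0" # placeholder Ubuntu AMI
  instance_type          = "t2.micro"
  key_name               = "dpp"
  subnet_id              = aws_subnet.subnet_1.id
  vpc_security_group_ids = [aws_security_group.demo_sg.id]

  tags = {
    Name = each.key # one instance per server name
  }
}
```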
And here we can see those running
Roadblock that took me a couple of hours to debug: when you SSH into an Ubuntu instance, you need to set the user to "ubuntu", not "ec2-user".
Run the following commands to install Ansible on the Ubuntu server:
```bash
sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install ansible
```
For the Jenkins-master setup, I'm going to use the private IP, since the public IP will change over time. These servers are in the same VPC, so connectivity is fine.
I create a "hosts" file in the ansible server that containers the jenkins master IP as well as some variables that will allow connection, a username of "ubuntu" and I'll use the dpp.pem file which i'll copy into the server.
Roadblock: if you want Ansible to have access to another EC2 instance, it needs the key pair's .pem file. I normally program on a Mac and tried to use the scp command

```bash
scp -i dpp.pem dpp.pem ubuntu@<address>:/opt
```

to get it into my folder of interest; however, it would only work for the base user and not when I was logged in as the root user. So, as you can see, I switched to Windows, downloaded MobaXterm, and imported the file that way.
Now, Ansible can successfully connect to the Jenkins-master:
I also added the slave to the hosts file
Now, we can add an Ansible playbook to our Ansible server for setting up Jenkins:
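In outline, the playbook does something like the following; the apt repository URL and key are the publicly documented Jenkins ones, and the Java version is an assumption:

```yaml
---
- hosts: jenkins-master
  become: true
  tasks:
    - name: Install Java (Jenkins prerequisite)
      apt:
        name: openjdk-11-jdk
        state: present
        update_cache: yes

    - name: Add the Jenkins apt signing key
      apt_key:
        url: https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
        state: present

    - name: Add the Jenkins apt repository
      apt_repository:
        repo: deb https://pkg.jenkins.io/debian-stable binary/
        state: present

    - name: Install Jenkins
      apt:
        name: jenkins
        state: present
        update_cache: yes

    - name: Make sure Jenkins is started and enabled
      service:
        name: jenkins
        state: started
        enabled: yes
```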
I add port 8080 to the ingress rules in our Terraform so we can access the Jenkins server via browser, and it successfully loads:
Now, we want to write an Ansible playbook to install Maven on our Jenkins slave, which will execute tasks queued by the Jenkins master:
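That playbook is short; a sketch (package names assumed):

```yaml
---
- hosts: jenkins-slave
  become: true
  tasks:
    - name: Install Java and Maven on the build slave
      apt:
        name:
          - openjdk-11-jdk
          - maven
        state: present
        update_cache: yes
```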
We want our Jenkins-master to be able to access the build server, so we add the .pem credentials:
Then we create the slave node:
I create a test job and successfully execute a test command:
Now we create a new job to run the local application, starting with a test pipeline script:
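The test script is just a placeholder along these lines (the node label is whatever you gave the slave):

```groovy
pipeline {
    agent { label 'jenkins-slave' } // run on the build slave node

    stages {
        stage('test') {
            steps {
                echo 'Hello from the build slave'
            }
        }
    }
}
```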
Which runs successfully:
Now, we move the code into a Jenkinsfile in the local repository and connect it via SCM:
I then update the Jenkinsfile to build with Maven:
stage("build"){
steps {
echo "----------- build started ----------"
sh 'mvn clean deploy -Dmaven.test.skip=true'
echo "----------- build complted ----------"
}
}
Then I add my GitHub repository to the Jenkins credentials; however, this is only necessary if I make my repository private:
I create a multi-branch pipeline for the project and add dev and stage branches:
I install the `Multibranch Scan Webhook Trigger` plugin on Jenkins. I then add a webhook on GitHub that triggers the multi-branch pipeline job.
Now, after a commit, we can see the webhook automatically triggers the multibranch job, which builds whichever branch the changes were made on:
Now we load up SonarQube, create a security token, and add that token to Jenkins.
Then, we add SonarQube server to Jenkins.
I create an "organization" on sonarcloud.io, add a project key, and then add a sonar properties file to my repo:
Then we add a stage to the Jenkinsfile for SonarQube analysis:
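The stage looks roughly like this, assuming the SonarQube Scanner plugin and a server registered in Jenkins under the name 'SonarCloud' (use whatever name you configured):

```groovy
stage('SonarQube analysis') {
    steps {
        withSonarQubeEnv('SonarCloud') { // the server name configured in Jenkins
            sh 'mvn sonar:sonar'         // picks up the sonar properties for the project
        }
    }
}
```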
Now we are able to get stats about the quality of our code from SonarQube, like bugs, code smells, and others. You can add quality gates for things like code duplication or bug counts, to fail builds if thresholds are crossed. I built one for 50+ bugs in the code (however, Sonar considers a LOT of things "bugs"), then added a "Quality Gate" stage to the Jenkinsfile:
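A sketch of that gate stage; note that `waitForQualityGate` depends on a webhook configured in SonarQube pointing back at Jenkins:

```groovy
stage('Quality Gate') {
    steps {
        timeout(time: 5, unit: 'MINUTES') {
            // fails the build if the SonarQube quality gate reports ERROR
            waitForQualityGate abortPipeline: true
        }
    }
}
```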
Now I would like to use JFrog Artifactory to publish a Docker image of my application. First, we create an access token on JFrog and add it to the Jenkins credentials. We also need to install the `Artifactory` plugin on Jenkins.
Now, we add a stage to our Jenkinsfile to capture the `.jar` file created by our project and store it on JFrog Artifactory.
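Roughly, using the Artifactory plugin's upload spec (the server ID, repository name, and paths here are placeholders):

```groovy
stage('Jar Publish') {
    steps {
        script {
            // 'artifactory' is the server ID configured in Jenkins (placeholder)
            def server = Artifactory.server 'artifactory'
            def uploadSpec = """{
                "files": [{
                    "pattern": "target/(*).jar",
                    "target": "libs-release-local/{1}.jar"
                }]
            }"""
            server.upload spec: uploadSpec
        }
    }
}
```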
Roadblock: if at first you don't succeed... I ended up having a couple of typos that made this frustrating to figure out.
Here we can see the files successfully uploaded to JFrog:
However, we want to deploy this as a microservice, so it needs to run in Docker. First, we need to install Docker on our Jenkins slave by updating the Ansible playbook `jenkins-slave-setup.yaml`:
And it succeeded! (after some syntax changes not shown)
We create a Dockerfile to host the jar execution:
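A minimal sketch; the jar name is a placeholder, and the port matches the 8000 this app serves on:

```dockerfile
FROM openjdk:11

# copy the Maven-built artifact into the image (jar name is a placeholder)
COPY target/demo-app.jar /app/demo-app.jar

# the application is accessed on port 8000 in this project
EXPOSE 8000

ENTRYPOINT ["java", "-jar", "/app/demo-app.jar"]
```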
Here we can see our Dockerfile in our slave after committing:
I create a docker repository on JFrog:
I install the `Docker Pipeline` plugin on Jenkins, then I add `Docker Build` and `Docker Publish` stages to the Jenkinsfile:
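Sketches of those two stages using the Docker Pipeline plugin's `docker.build` and `docker.withRegistry`; the repository path and credential ID are placeholders:

```groovy
stage('Docker Build') {
    steps {
        script {
            // builds from the Dockerfile in the workspace root
            app = docker.build('mfkimbell.jfrog.io/docker-local/demo-app:latest')
        }
    }
}

stage('Docker Publish') {
    steps {
        script {
            docker.withRegistry('https://mfkimbell.jfrog.io', 'jfrog-docker-cred') {
                app.push() // pushes to the JFrog Docker repository
            }
        }
    }
}
```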
We have been blessed with no failures:
We can manually start the container now:
Now, after we open port 8000 on Jenkins-Slave, we can access the application:
I renamed the original Terraform file, since now we are going to have Terraform files for EKS/Kubernetes as well as security group management:
I will not show these files in their entirety since they are long; however, we allocate the necessary resources for EKS, policies for EKS to access what it needs, an IAM role for the EC2 EKS workers, an autoscaling policy, and various resource policies for S3, the daemon (for tracking), and other things. We have an output file for our endpoint, and we have a variable file to keep track of IDs like the security group, subnet, and VPC IDs.
To our original VPC Terraform file, we add lines to execute our other Terraform files:
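One way to wire that up is with module blocks, assuming the EKS and security-group configs live in subdirectories (paths and variable names here are illustrative; in a flat layout, Terraform picks up every .tf file in the directory automatically):

```hcl
module "sgs" {
  source = "./sg_eks" # security groups for the EKS cluster (path illustrative)
  vpc_id = aws_vpc.main.id
}

module "eks" {
  source     = "./eks" # the EKS cluster definition (path illustrative)
  vpc_id     = aws_vpc.main.id
  subnet_ids = [aws_subnet.subnet_1.id, aws_subnet.subnet_2.id]
  sg_ids     = module.sgs.security_group_ids
}
```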
We can see my EKS cluster has been created upon running the terraform file:
We can also see all of our EC2 instances running:
Now, we set up the AWS CLI on the build slave:
We get the Kubernetes credentials, and now we can use `kubectl` commands:
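Fetching the credentials is a one-liner (cluster name and region are placeholders):

```bash
# writes the cluster's connection info into ~/.kube/config
aws eks update-kubeconfig --region us-east-1 --name demo-eks-cluster

# sanity check
kubectl get nodes
```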
Now we can add Kubernetes manifest files to our project: files that (1) create a namespace, (2) establish secret credentials with JFrog, (3) deploy the pods, and (4) create a service to expose the deployment.
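Condensed sketches of those manifests; the names, image path, and replica count are placeholders, and the registry secret itself would be created separately (e.g. with `kubectl create secret docker-registry`):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
  namespace: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      imagePullSecrets:
        - name: jfrog-cred # docker-registry secret for JFrog
      containers:
        - name: demo-app
          image: mfkimbell.jfrog.io/docker-local/demo-app:latest
          ports:
            - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: demo-app
  namespace: demo-app
spec:
  type: LoadBalancer
  selector:
    app: demo-app
  ports:
    - port: 3082       # the port opened on the EKS side
      targetPort: 8000 # the port the app listens on
```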
We do a test run on the build slave:
We can see it's in "ImagePullBackOff" since it's failing to pull the JFrog image. I make a user called `dockercred` on JFrog, and then I use `docker login https://mfkimbell.jfrog.io` to log in to my user on the build slave. We set up `deploy.sh` to run, and after running `service.yaml` and opening up port 3082 on the EKS containers, we can see our application running:
Now, we go to our Jenkinsfile and add a stage to automatically execute `deploy.sh`, which applies the Kubernetes manifest files:
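The stage itself is tiny (a sketch):

```groovy
stage('Deploy') {
    steps {
        // applies the Kubernetes manifests via the shell script
        sh 'bash deploy.sh'
    }
}
```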
After the commit we can see our job rerunning:
And again we see our application up and running via Kubernetes pods:
You live and you learn.... Don't leave EKS running...