Helm_Lab_0
- Kubectl
Allows us to run commands against Kubernetes clusters to deploy applications, inspect and manage cluster resources, and view logs.
We can follow the instructions from Install and Set Up kubectl. We'll install the kubectl binary with curl:
- Download the latest release with the command:
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
- Make the kubectl binary executable.
$ chmod +x ./kubectl
- Move the binary into your PATH.
$ sudo mv ./kubectl /usr/local/bin/kubectl
- Test to ensure the version you installed is up to date:
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4",
GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean",
BuildDate:"2020-11-11T13:17:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
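The download URL above is assembled by a nested curl that fetches the latest stable version tag from stable.txt. A minimal sketch of that substitution, using a pinned version so it runs offline (the real command queries stable.txt at download time):

```shell
# Stand-in for: $(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)
STABLE="v1.19.4"
# The tag is spliced into the release path to form the final download URL.
URL="https://storage.googleapis.com/kubernetes-release/release/${STABLE}/bin/linux/amd64/kubectl"
echo "$URL"
```

Pinning the version this way is also useful when you need kubectl to match a specific cluster version.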
Before starting, let's take a quick look at Deployments.
You can create and manage a Deployment by using the Kubernetes command-line interface, Kubectl. Kubectl uses the Kubernetes API to interact with the cluster. In this module, you'll learn the most common Kubectl commands needed to create Deployments that run your applications on a Kubernetes cluster.
OK, so now that we've got the basics out of the way, let's put this to use. We'll first create a MySQL Pod via a Deployment, defined in YAML.
Create a Kubernetes config file at ~/.kube/config in your home directory:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJWekNCL3FBREFnRUNBZ0VBTUFvR0NDcUdTTTQ5QkFNQ01DTXhJVEFmQmdOVkJBTU1HR3N6Y3kxelpYSjIKWlhJdFkyRkFNVFl3TnpReE1UUTRNVEFlRncweU1ERXlNRGd3TnpFeE1qRmFGdzB6TURFeU1EWXdOekV4TWpGYQpNQ014SVRBZkJnTlZCQU1NR0dzemN5MXpaWEoyWlhJdFkyRkFNVFl3TnpReE1UUTRNVEJaTUJNR0J5cUdTTTQ5CkFnRUdDQ3FHU000OUF3RUhBMElBQkhaS0xwWEVuWVJMeTRSa1FJYmptaHNiNGdwcjJ5ZmlLVVU5QXQwRDVnNS8KY0lxLzJ5THliYnVqVXNrSzhGWEQrZStiK0xIUGNKRWlhSzBDTjZSVE1yZWpJekFoTUE0R0ExVWREd0VCL3dRRQpBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDSVFDTFR2UjVYMzVlCmVweW1KeEd5WTI3UHllb244ZHNZUjRBWEVSb0N5T1FCMlFJZ1dESlZzYnJwWGkzNUY2endpNFRKb1hRUitiaFMKN3AxZy9MbVdRWnM0TjlrPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://91.211.152.138:6443
  name: helm-poc
contexts:
- context:
    cluster: helm-poc
    namespace: sandy
    user: helm-poc
  name: helm-poc
current-context: helm-poc
kind: Config
preferences: {}
users:
- name: helm-poc
  user:
    password: 0864eeafb41906e5797fc1aad1ff60e2
    username: admin
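The certificate-authority-data field holds the cluster's CA certificate as base64-encoded PEM. You can decode it to inspect the certificate; a sketch using just the first chunk of the value above (decoding the full value yields the complete PEM block):

```shell
# First 36 characters of the certificate-authority-data value above.
CA_SNIPPET="LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t"
# Decoding reveals the PEM header; pipe the full value through
# `base64 -d | openssl x509 -text -noout` to see certificate details.
echo "$CA_SNIPPET" | base64 -d
```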
Validate that connectivity with the Kubernetes cluster is in place:
$ kubectl cluster-info
Kubernetes master is running at https://91.211.152.138:6443
CoreDNS is running at https://91.211.152.138:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://91.211.152.138:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ kubectl create namespace <team_name>
$ kubectl config set-context $(kubectl config current-context) --namespace=<team_name>
e.g. kubectl config set-context $(kubectl config current-context) --namespace=ethanhunt
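A sketch of what the command substitution expands to, assuming the kubeconfig above (where the current context is helm-poc):

```shell
# Stand-in for: $(kubectl config current-context), which prints the
# active context name from ~/.kube/config.
CURRENT_CONTEXT="helm-poc"
echo "kubectl config set-context ${CURRENT_CONTEXT} --namespace=ethanhunt"
```

After this, kubectl commands run against the ethanhunt namespace by default, with no -n flag needed.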
Step 1:
Create file mysql.yaml
Here we create the MySQL Pod as a microservice, setting the database environment variables as:
* MYSQL_ROOT_PASSWORD: password
* MYSQL_USER: root
* MYSQL_DATABASE: employeedb
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7.30
        ports:
        - containerPort: 3306
          name: mysql
          protocol: TCP
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        - name: MYSQL_USER
          value: root
        - name: MYSQL_DATABASE
          value: employeedb
Step 2:
Now apply the Deployment using the YAML file:
$ kubectl apply -f mysql.yaml
deployment.apps/mysql created
Step 3:
Verify the pod is up and running
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mysql-68cdc487d5-d2w9l 1/1 Running 0 28s
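The generated Pod name encodes its ownership chain: <deployment>-<replicaset-hash>-<pod-suffix>. A sketch splitting the name from the output above:

```shell
POD="mysql-68cdc487d5-d2w9l"
DEPLOY="${POD%%-*}"    # deployment name: everything before the first dash
SUFFIX="${POD##*-}"    # random per-Pod suffix: everything after the last dash
echo "$DEPLOY $SUFFIX"
```

The middle segment (68cdc487d5) identifies the ReplicaSet the Deployment created; it changes on each rollout, while the final suffix is unique per Pod.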
Pods are nonpermanent resources. If you use a Deployment to run your app, it can create and destroy Pods dynamically; Kubernetes creates and destroys Pods to match the desired state of your cluster.
Each Pod gets its own IP address; however, in a Deployment, the set of Pods running at one moment in time can be different from the set of Pods running that application a moment later.
This leads to a problem: if some set of Pods (call them "backends") provides functionality to other Pods (call them "frontends") inside your cluster, how do the frontends find out and keep track of which IP address to connect to, so that the frontend can use the backend part of the workload?
In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them.
Step 1:
Create file mysql-svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mysql
  name: mysql-svc
spec:
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
    targetPort: mysql
  selector:
    app: mysql
  type: ClusterIP
Step 2:
Now apply the Service using the YAML file:
$ kubectl apply -f mysql-svc.yaml
service/mysql-svc created
Step 3:
Verify the service is up and running
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql-svc ClusterIP 192.168.174.136 <none> 3306/TCP 15s
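Inside the cluster, the Service is reachable via DNS. The short name mysql-svc works from Pods in the same namespace; the fully qualified name follows the fixed pattern <service>.<namespace>.svc.<cluster-domain>. A sketch assembling it (assuming the ethanhunt namespace from earlier and the default cluster.local domain):

```shell
SVC="mysql-svc"
NAMESPACE="ethanhunt"    # assumption: the team namespace set earlier
# Standard in-cluster DNS name for a ClusterIP Service.
echo "${SVC}.${NAMESPACE}.svc.cluster.local"
```

This is why the application Deployment below can simply set DB_URL to mysql-svc rather than a Pod IP.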
Step 1:
Create file deployment.yaml for the Go web application (ot-go-webapp), which connects to MySQL through the Service created above:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: app
  name: goweb-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app
    spec:
      containers:
      - env:
        - name: DB_URL
          value: mysql-svc
        - name: DB_PORT
          value: "3306"
        - name: DB_USER
          value: root
        - name: DB_PASSWORD
          value: password
        image: opstreedevops/ot-go-webapp:v1
        name: goweb-app
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
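The application reads DB_URL, DB_PORT, DB_USER, and DB_PASSWORD to reach MySQL through the mysql-svc Service. As an illustration only (the actual connection-string format inside ot-go-webapp may differ), a Go-style MySQL DSN built from these variables would look like:

```shell
DB_URL="mysql-svc"; DB_PORT="3306"; DB_USER="root"; DB_PASSWORD="password"
# Hypothetical Go MySQL driver DSN; not taken from the app's source code.
echo "${DB_USER}:${DB_PASSWORD}@tcp(${DB_URL}:${DB_PORT})/employeedb"
```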
Step 2:
Now apply the Deployment using the YAML file:
$ kubectl apply -f deployment.yaml
Step 3:
Create the Service file deployment-svc.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: app
  name: goweb-app
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: http
  selector:
    app.kubernetes.io/name: app
  type: ClusterIP
Step 4:
Now apply the Service file:
$ kubectl apply -f deployment-svc.yaml
Step 1:
Create file ingress.yaml to expose the application outside the cluster:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app.kubernetes.io/name: app
  name: goweb-app
spec:
  rules:
  - host: opstree.com
    http:
      paths:
      - backend:
          serviceName: goweb-app
          servicePort: 8080
        path: /
        pathType: ImplementationSpecific
Step 2:
Now apply the Ingress using the YAML file:
$ kubectl apply -f ingress.yaml
To check in a web browser, add an entry to your /etc/hosts file mapping the hostname to the cluster IP address:
91.211.152.138 opstree.com
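A sketch of adding and verifying that entry; it writes to a temporary file so the example is safe to run as-is (the real target is /etc/hosts, which requires sudo):

```shell
HOSTS_FILE=$(mktemp)                          # stand-in for /etc/hosts
echo "91.211.152.138 opstree.com" >> "$HOSTS_FILE"
ENTRY=$(grep "opstree.com" "$HOSTS_FILE")     # confirm the mapping was written
echo "$ENTRY"
rm -f "$HOSTS_FILE"
```

Once the real /etc/hosts contains the entry, browsing to http://opstree.com resolves to the cluster node, and the Ingress routes the request to the goweb-app Service.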