Multi-tenant 101 #201

Merged
merged 34 commits into from
Sep 22, 2024
Commits
34 commits
43e3385
Started Helm templates
Phantom-Intruder Feb 18, 2024
d2a1158
fluentbit sidecar
Phantom-Intruder Feb 19, 2024
ea1bf7c
Helm templates cont.
Phantom-Intruder Feb 20, 2024
0f0579e
Helm templates cont.
Phantom-Intruder Feb 22, 2024
c825881
Helm template cont
Phantom-Intruder Feb 23, 2024
98abe18
Helm templates changes
Phantom-Intruder Feb 25, 2024
f6153e3
Service template changes
Phantom-Intruder Feb 28, 2024
383a5be
overriding values yaml
Phantom-Intruder Feb 29, 2024
c235eca
overriding values yaml
Phantom-Intruder Mar 1, 2024
e7e7f9c
Helm templates cont.
Phantom-Intruder Mar 3, 2024
f845994
Helm templates cont
Phantom-Intruder Mar 5, 2024
c4b78a7
New Relic changes
Phantom-Intruder Mar 6, 2024
d186759
Helm templates cont.
Phantom-Intruder Mar 7, 2024
7b5d0e1
Start multi-tenant
Phantom-Intruder Mar 9, 2024
5cee8e5
Merge branch 'tenant' of https://github.com/Phantom-Intruder/kubelabs…
Phantom-Intruder Mar 10, 2024
5280f44
Multi-tenant cont.
Phantom-Intruder Mar 10, 2024
0f26730
Multi-tenant cont.
Phantom-Intruder Mar 11, 2024
283d5e3
Multi-tenant cont.
Phantom-Intruder Mar 12, 2024
c7166c9
Multi-tenant cont.
Phantom-Intruder Mar 13, 2024
810dcf0
Multi-tenant cont.
Phantom-Intruder Mar 14, 2024
1f46cac
Start multi-tenant
Phantom-Intruder Mar 16, 2024
78e9c5a
Multi-tenant cont.
Phantom-Intruder Mar 18, 2024
1233b85
Multi-tenant cont.
Phantom-Intruder Mar 22, 2024
3fc0b36
Start multi-tenant
Phantom-Intruder Mar 23, 2024
ca5a9d7
Start multi-tenant
Phantom-Intruder Mar 24, 2024
76d97f1
Multi-tenant cont.
Phantom-Intruder Mar 27, 2024
2d19edc
Multi-tenant cont.
Phantom-Intruder Mar 28, 2024
e459cda
Multi-tenant cont.
Phantom-Intruder Mar 29, 2024
da49fc0
Tenant cont.
Phantom-Intruder Mar 30, 2024
f520d52
Multi-teant cont.
Phantom-Intruder Mar 31, 2024
10b4094
QoS
Phantom-Intruder Apr 2, 2024
6392aa6
QoS
Phantom-Intruder Apr 3, 2024
b3df2ad
Multi-tenant cont.
Phantom-Intruder Apr 4, 2024
20b669a
Finished multi-tenent
Phantom-Intruder Apr 5, 2024
14 changes: 13 additions & 1 deletion Autoscaler101/what-are-autoscalers.md
@@ -16,6 +16,18 @@ A horizontal pod autoscaler works in the same way as a VPA for the most part. It

Scaling down is handled in roughly the same way. When scaling down, HPA reduces the number of pod replicas. It terminates existing pods to bring the number of replicas in line with the configured target metric. The scaling decision is based on the comparison of the observed metric with the target value. HPA does not modify the resource specifications (CPU and memory requests/limits) of individual pods. Instead, it adjusts the number of replicas to match the desired metric target.

Now that we have thoroughly explored both types of autoscalers, let's go on to a lab where we will look at the scalers in more detail.
Before we go into the lab, since we are talking about metrics, let's take a brief look at the Quality of Service classes that Kubernetes assigns to pods:

## Quality of Service classes

In Kubernetes, Guaranteed, Burstable, and BestEffort are Quality of Service (QoS) classes that define how pods are treated in terms of resource allocation and management. These classes help Kubernetes prioritize and manage workload resources effectively. Here's what each term means:

**Guaranteed**: A pod gets the Guaranteed QoS class when every one of its containers sets CPU and memory requests and limits, and the requests equal the limits. Kubernetes reserves exactly those resources for the pod, so it is the last to be throttled or evicted when the node comes under resource pressure. Because the full amount requested is set aside whether or not it is used, you should size the requests carefully to avoid wasting resources.

**Burstable**: A pod is Burstable when at least one container sets a CPU or memory request, but the pod does not meet the criteria for Guaranteed. Pods in this class may use more resources than they request when spare capacity is available on the node, but that extra capacity is not guaranteed: CPU usage beyond the limit is throttled, and under resource pressure Burstable pods are evicted before Guaranteed ones.

**BestEffort**: A pod is BestEffort when none of its containers set any CPU or memory requests or limits. These pods have the lowest priority for resource allocation: they are not guaranteed any specific amount of CPU or memory, and they are the first to be evicted if the node runs out of resources. BestEffort pods are typically used for non-critical workloads or background tasks that can tolerate resource contention or occasional interruptions.
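
To make this concrete, here is a sketch of the container resources blocks that produce each class (the request and limit figures are just illustrative values):

```
# Guaranteed: requests and limits are set for every container and are equal
resources:
  requests:
    cpu: "500m"
    memory: "256Mi"
  limits:
    cpu: "500m"
    memory: "256Mi"

# Burstable: requests are set but lower than the limits (or only requests are set)
resources:
  requests:
    cpu: "250m"
    memory: "128Mi"
  limits:
    cpu: "500m"
    memory: "256Mi"

# BestEffort: no requests or limits at all
resources: {}
```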

Now that we have thoroughly explored both types of autoscalers and taken a brief look at how QoS classes work, let's go on to a lab where we will look at the scalers in more detail.

[Next: Autoscaler lab](../Autoscaler101/autoscaler-lab.md)
4 changes: 2 additions & 2 deletions Helm101/helm-charts.md
@@ -99,6 +99,6 @@ This template can then be used within all helm charts:
.dockerconfigjson: {{ template "imagePullSecret" . }}
```

This covers the basics of Helm charts, should you need to create one. However, only narrowly covers the full breadth of what Helm has to offer. For more tips and tricks, visit Helm [official docs](https://helm.sh/docs/howto/charts_tips_and_tricks/). Now, let's move on to Chart hooks.
This covers the basics of Helm charts, should you need to create one. However, it only narrowly covers the full breadth of what Helm has to offer. For more tips and tricks, visit the Helm [official docs](https://helm.sh/docs/howto/charts_tips_and_tricks/). In this section, we briefly touched on Helm templates as a way to start off your new Helm chart. However, templates can be a really powerful tool for reducing repetition in your deployment manifests, so in the next section we will dive deep into creating your own Helm templates.

[Next: Chart Hooks](chart-hooks.md)
[Next: Helm templates](./helm-templates.md)
171 changes: 171 additions & 0 deletions Helm101/helm-templates.md
@@ -0,0 +1,171 @@
# Helm templates

In even a small-scale organization, you would have at least a couple of applications that work together inside a Kubernetes cluster, which means you would have a minimum of 5-6 microservices. As your organization grows, you could go on to have 10, then 20, even 50 microservices, at which point a problem arises: the deployment manifests. Handling just one or two is fairly simple, but when it comes to several dozen, updating and adding new manifests can be a real problem. If you have a separate Git repository for each microservice, you will likely want to keep each deployment YAML within that repo. If your organization follows best practices, you will be required to create pull requests and have them reviewed before you merge to master. This means that if you want to do something as simple as change the image pull policy for several microservices, you will have to make the change in each repo, create a pull request, have it reviewed by someone else, and then merge the changes. That is a large number of steps that a Helm template can reduce to just one.

To start, we will need a sample application. We could use the same charts that we used in the previous section, but instead let's go with a new application altogether: nginx.

This will be our starting point:

```
# nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
```

```
# nginx-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
```

The above is a rather basic implementation of an nginx server with 3 replicas that accepts connections on port 80. Let's turn this nginx application into a Helm chart.

For starters, let's create the chart itself. Go into the folder you plan to run this from and type:

```
helm create nginx-chart
```

This will create a chart scaffold with the basic files you need (plus a few extra sample templates that you can remove). For our purposes, the relevant parts of the directory structure look like this:

```
nginx-chart/
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   └── service.yaml
└── values.yaml
```

By looking at the above structure, you should be able to see where the deployment and service YAMLs fit in. You will notice that sample YAMLs have already been created here, and that they are Go templates with placeholders instead of hardcoded values. We will be converting our existing YAMLs into this format. But first, update the Chart.yaml file to include relevant metadata for nginx if you need to; generally, the default Chart.yaml is fine. You can also optionally modify values.yaml, which is where things such as the number of replicas are managed.
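
For reference, the Chart.yaml generated by helm create looks roughly like the following (the description and version values here are placeholders you can adjust to suit your project):

```
# Chart.yaml
apiVersion: v2
name: nginx-chart
description: A Helm chart for deploying a simple nginx server
type: application
version: 0.1.0        # version of the chart itself
appVersion: "1.16.0"  # version of the application being deployed
```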

Next, we get to the templating part. We will have to convert our existing deployment yaml into a Helm template file. This is what the yaml would look like after it is converted:

```
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-nginx-deployment
  labels:
    app: nginx
spec:
  replicas: {{ .Values.nginx.replicaCount }}
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: "{{ .Values.nginx.image.repository }}:{{ .Values.nginx.image.tag }}"
        ports:
        - containerPort: {{ .Values.nginx.containerPort }}
```

The first thing to change is the naming convention: in the metadata.name field, {{ .Release.Name }}- has been added as a prefix to the deployment name. This ensures that each deployment has a unique name when installed via Helm, with .Release.Name representing the release name generated by Helm. The replica count has been replaced with {{ .Values.nginx.replicaCount }}, which lets the user set the number of replicas in the values.yaml file of the Helm chart. For the image, the hardcoded nginx:latest has been replaced with {{ .Values.nginx.image.repository }}:{{ .Values.nginx.image.tag }}, so the image repository and tag can also be specified in values.yaml. Finally, the hardcoded container port 80 has been replaced with {{ .Values.nginx.containerPort }}, allowing the user to specify the container port in values.yaml.
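
To make the substitution concrete, if this chart were installed with the release name my-nginx-release and the default values shown further down, the top of the rendered manifest would come out roughly like this:

```
metadata:
  name: my-nginx-release-nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
```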

These changes make the Helm template more flexible and configurable, allowing you to customize the deployment according to your requirements using the values.yaml file. Now let's take a look at the service YAML and how it would look after it is converted:

```
# templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: {{ .Values.nginx.servicePort }}
    targetPort: {{ .Values.nginx.containerPort }}
  type: {{ .Values.nginx.serviceType }}
```

Similar to the deployment template, the service name has been prefixed with {{ .Release.Name }}- to ensure uniqueness when installed via Helm. The hardcoded service port 80 has been changed to {{ .Values.nginx.servicePort }}, so the service port can be set in the values.yaml file. We also replaced the hardcoded target port 80 with {{ .Values.nginx.containerPort }}, which should match the container port defined in the deployment template. Finally, the hardcoded service type ClusterIP has been replaced with {{ .Values.nginx.serviceType }}, letting you choose the appropriate service type based on your environment or requirements.

Now that we have defined both the deployment and the service in a template format, let's take a look at what the overriding values file would look like:

```
nginx:
  replicaCount: 3
  image:
    repository: nginx
    tag: latest
  containerPort: 80
  servicePort: 80
  serviceType: ClusterIP
```

In this values.yaml file, replicaCount specifies the number of replicas for the nginx deployment, while image.repository and image.tag specify the Docker image repository and tag for the nginx container. containerPort specifies the port on which the nginx container listens, and servicePort specifies the port exposed by the nginx service. Finally, serviceType specifies the type of Kubernetes service to create for nginx. You might want to change this to NodePort or LoadBalancer if you plan to provide external access (or use kubectl port forwarding with the default ClusterIP).
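
Before installing anything, it can be useful to render the chart locally and confirm that these values slot into the templates the way you expect. A quick check might look like this (assuming the chart directory is ./nginx-chart, as created earlier):

```
helm template my-nginx-release ./nginx-chart --values values.yaml
```

This prints the fully rendered deployment and service manifests to standard output without creating anything in the cluster.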

With this structure, users can now install your Helm chart, and they'll be able to customize the number of replicas and the Nginx image tag through the values.yaml file. Let's go ahead and do the install using the below command:

```
helm install my-nginx-release ./nginx-chart --values values.yaml
```

Make sure you run the above command from the directory that contains values.yaml. This will create a release called "my-nginx-release" in your Kubernetes cluster, based on the chart with the values.yaml overrides applied. You should be able to run and test the Nginx server that comes up as a result. However, you will notice that we have gone out of our way to define templates and overriding files for something that a simple YAML file could have accomplished. There is more code now than before. So what is the advantage?

For starters, you get all the perks that come with using Helm charts. But now you also have a template you can use to generate additional Helm releases. For example, if you want to run another Nginx server with different arguments (a different number of replicas, a different image version, a different port, etc.), you can reuse this template. This is especially true if you are working in an organization that has multiple services requiring different Nginx setups. You could even consider a situation where your organization has 10+ microservices whose pods are largely boilerplate, where the only things likely to change are the name of the microservice and the image that spins up in the container. In a situation like that, you could easily create a values file with a handful of lines that override the Helm template.

Let's try this. Create a new values-new.yaml and set the below values:

```
nginx:
  replicaCount: 2
  image:
    repository: nginx
    tag: alpine3.18-perl
  containerPort: 80
  servicePort: 8080
  serviceType: ClusterIP
```

The new YAML has a changed replica count, uses a different image tag, and exposes the service on port 8080 instead of 80. To deploy it, you can use the same install command as before, changing only the release name and the values file:

```
helm install my-nginx-release-alpine ./nginx-chart --values values-new.yaml
```

Only the release name and the values file that gets picked up change here. In the same way, you could create different values files with different overriding properties and end up with any number of nginx releases, each configured differently.
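
To sanity-check that both releases are running side by side, you can list them and, if you like, port-forward to one of the services and curl it. A rough sketch, assuming the release and service names used above:

```
helm list
kubectl get deployments,services

# forward a local port to the first release's service and test it
kubectl port-forward svc/my-nginx-release-nginx-service 8080:80
curl http://localhost:8080
```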

This brings us to the end of the section on the powerful use of Helm templates. Now, let's move on to Chart hooks.

[Next: Chart Hooks](chart-hooks.md)