Commit
Adding some tutorials and a quickstart guide for PSKE
Showing 7 changed files with 535 additions and 8 deletions.
42 changes: 42 additions & 0 deletions
...en/docs/20-container/10-managed-kubernetes/01-introduction/limits-and-quotas.md
@@ -0,0 +1,42 @@
---
title: "Limits"
linkTitle: "Limits"
weight: 20
date: 2024-01-19
---

In the tables below you will find important information about limits in Kubernetes and about Compute Quotas; the latter can be increased via a support ticket.

Resources marked with an asterisk (*) are theoretical limits in Kubernetes. We recommend not exceeding these limits with a single deployment; instead, distribute the deployment across multiple clusters.

### Cluster

| Resource | Limit |
| --- | --- |
| Nodes* | 5,000 |
| Pods* | 110,000 |
| Containers* | 300,000 |

### Node

| Resource | Limit |
| --- | --- |
| Pods* | 110 |
| Max. Volumes | 128 |

### Cilium (CNI)

| Resource | Limit |
| --- | --- |
| Identities | 64,000 |

All endpoints (Pods, Services, etc.) that are managed by Cilium are assigned an identity.
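To see how many identities are currently in use, you can count the CiliumIdentity objects (a sketch; it assumes the Cilium CRDs are readable with your kubeconfig):

`kubectl get ciliumidentities.cilium.io --no-headers | wc -l`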

### Compute Quotas

| Resource | Limit |
| --- | --- |
| Cores | 256 |
| RAM | 512 GB |
| Floating IPs | 10 |
| Instances | 500 |
| Max. Volumes | 1000 |
| Max. Volume Size | 4000 GB |
| Max. Volume Snapshots | 99 |
229 changes: 229 additions & 0 deletions
content/en/docs/20-container/10-managed-kubernetes/01-introduction/quickstart.md
Large diffs are not rendered by default.
5 changes: 2 additions & 3 deletions
...nt/en/docs/20-container/10-managed-kubernetes/03-tutorials/forward-source-ip.md
129 changes: 129 additions & 0 deletions
...en/docs/20-container/10-managed-kubernetes/03-tutorials/permanent-kubeconfig.md
@@ -0,0 +1,129 @@
---
title: "Permanent Kubeconfig"
linkTitle: "Permanent Kubeconfig"
weight: 20
date: 2024-01-18
description: >
  Create a kubeconfig with unlimited lifetime
---

# General

By default you can only download kubeconfigs with a maximum lifetime of 24 hours from the Gardener dashboard. With this guide you can create your own permanent kubeconfig for your cluster.

## Step 1: Create a service account

The service account name will be the user name in the kubeconfig. Here we create the service account in the kube-system namespace because we will bind it to a ClusterRole. If you want a kubeconfig with namespace-level limited access, create the service account in the required namespace instead.

`kubectl -n kube-system create serviceaccount perm-cluster-admin`
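To confirm that the service account exists (an optional check, not required for the next steps):

`kubectl -n kube-system get serviceaccount perm-cluster-admin`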

## Step 2: Create a secret for the service account

Since Kubernetes version 1.24, the secret for a service account has to be created separately, with the annotation `kubernetes.io/service-account.name` and the type `kubernetes.io/service-account-token`.
Hence we create a YAML manifest for a secret named perm-cluster-admin-secret with the corresponding annotation and type.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: perm-cluster-admin-secret
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: perm-cluster-admin
type: kubernetes.io/service-account-token
```
And apply the created YAML with
`kubectl apply -f perm-cluster-admin-secret.yaml`
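The token controller then populates the secret. To verify that a token has been issued (an optional check; the token appears in the Data section of the output):

`kubectl -n kube-system describe secret perm-cluster-admin-secret`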

## Step 3: Create a cluster role

Now continue with creating a ClusterRole with limited privileges on cluster objects. Add the required object access as per your requirements; refer to the Kubernetes service account and RBAC documentation for more information.
If you want to create a namespace-scoped role, you can use a Role instead of a ClusterRole.
Create the following YAML to create the ClusterRole:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: perm-cluster-admin
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
```

`kubectl apply -f perm-cluster-admin.yaml`

## Step 4: Create a cluster role binding

The following YAML is a ClusterRoleBinding that binds the perm-cluster-admin service account to the perm-cluster-admin ClusterRole.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: perm-cluster-role-binding-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: perm-cluster-admin
subjects:
- kind: ServiceAccount
  name: perm-cluster-admin
  namespace: kube-system
```

Apply with:

`kubectl apply -f perm-cluster-role-binding-admin.yaml`
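You can check that the binding grants the expected access by impersonating the service account (an optional sanity check; impersonation requires sufficient rights for your own user):

`kubectl auth can-i list pods --as=system:serviceaccount:kube-system:perm-cluster-admin`

This should print `yes`.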

## Step 5: Get all cluster details & secrets

We will retrieve all the required kubeconfig details and save them in shell variables. Then we will substitute them directly into the kubeconfig YAML.
If you used different names for the resources, replace them accordingly.

```bash
# Service account token (decoded), current context, and cluster details:
SA_SECRET_TOKEN=$(kubectl -n kube-system get secret/perm-cluster-admin-secret -o=go-template='{{.data.token}}' | base64 --decode)
CLUSTER_NAME=$(kubectl config current-context)
CURRENT_CLUSTER=$(kubectl config view --raw -o=go-template='{{range .contexts}}{{if eq .name "'''${CLUSTER_NAME}'''"}}{{ index .context "cluster" }}{{end}}{{end}}')
CLUSTER_CA_CERT=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}"{{with index .cluster "certificate-authority-data" }}{{.}}{{end}}"{{ end }}{{ end }}')
CLUSTER_ENDPOINT=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}{{ .cluster.server }}{{end}}{{ end }}')
```
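Before continuing, it is worth confirming that each variable is non-empty, for example (the first number is the length of the token, so the token itself is not printed):

`echo "${#SA_SECRET_TOKEN} ${CLUSTER_NAME} ${CURRENT_CLUSTER} ${CLUSTER_ENDPOINT}"`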

## Step 6: Generate the kubeconfig with the variables

Now fill in the variables of the kubeconfig.yaml accordingly:

```yaml
apiVersion: v1
kind: Config
current-context: ${CLUSTER_NAME}
contexts:
- name: ${CLUSTER_NAME}
  context:
    cluster: ${CLUSTER_NAME}
    user: perm-cluster-admin
clusters:
- name: ${CLUSTER_NAME}
  cluster:
    certificate-authority-data: ${CLUSTER_CA_CERT}
    server: ${CLUSTER_ENDPOINT}
users:
- name: perm-cluster-admin
  user:
    token: ${SA_SECRET_TOKEN}
```
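One way to perform the substitution is with `envsubst` from the GNU gettext package (a sketch; it assumes the template above is saved as kubeconfig-template.yaml and that the variables from step 5 are set in the current shell):

```bash
# Export the variables so envsubst can see them, then render the template.
export SA_SECRET_TOKEN CLUSTER_NAME CLUSTER_CA_CERT CLUSTER_ENDPOINT
envsubst < kubeconfig-template.yaml > kubeconfig.yaml
```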

## Step 7: Validate the generated kubeconfig

To validate the kubeconfig, run kubectl with it and check that the request is authenticated:

`kubectl get pods --kubeconfig=kubeconfig.yaml`
45 changes: 45 additions & 0 deletions
...kubernetes/03-tutorials/proxy-protocol-external-resolve-of-internal-services.md
@@ -0,0 +1,45 @@
---
title: "Proxy Protocol - External resolve of internal services"
linkTitle: "Proxy Protocol - External resolve of internal services"
weight: 4
date: 2024-01-18
---

When using a Load Balancer with the Proxy Protocol in Kubernetes, applications within the cluster can have problems accessing each other.

If you are using a Load Balancer with the Proxy Protocol in your Kubernetes cluster, you may encounter issues when a pod inside the cluster tries to access another Ingress/Service through the Load Balancer's external IP. The reason for this is that kube-proxy adds an iptables rule for the external IP address of the Load Balancer, redirecting traffic around the Load Balancer. This leads to an error because the pod establishing the connection does not speak the Proxy Protocol and, in this case, communicates directly with the Ingress controller, which expects a Proxy Protocol header.

### Solution

To resolve this issue, add an additional annotation to the Load Balancer Service. This annotation sets a hostname for the Load Balancer:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-load-balancer
  annotations:
    loadbalancer.openstack.org/hostname: 192.168.1.100.nip.io
spec:
  type: LoadBalancer
```
If you use your own domain instead of a nip.io address, create an A record for the Load Balancer IP address. Because the Service now publishes a hostname instead of an IP address, kube-proxy does not install the iptables shortcut, so in-cluster clients resolve the hostname and send their traffic through the Load Balancer.
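With nip.io the hostname already resolves to the IP address embedded in it, which you can confirm with, for example:

`dig +short 192.168.1.100.nip.io`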

### Outlook

This issue was fixed in the upstream Kubernetes project for v1.20, but the fix was later reverted. There is an open issue that addresses the Load Balancer behavior for v1.28; the improvement proposal is KEP-1860.

### Explanation

The Proxy Protocol is used by Load Balancers and Ingress controllers to identify the real client IP. It prepends additional information that identifies the client to the TCP connection.
cert-manager, for example, does not speak the Proxy Protocol but needs to complete a TLS certificate request. If traffic is sent directly to the Services, cert-manager cannot connect to its endpoint and perform the initial self-check.

### References

- Kubernetes issue with the Proxy Protocol: https://github.com/kubernetes/kubernetes/issues/66607
- Proxy Protocol specification: http://www.haproxy.org/download/1.8/doc/proxy-protocol.txt
- Pull request that fixed the issue in v1.20: https://github.com/kubernetes/kubernetes/pull/92312
- Issue addressing the Load Balancer behavior in v1.28: https://github.com/kubernetes/enhancements/issues/1860
- Improvement proposal KEP-1860: https://github.com/kubernetes/enhancements/tree/master/keps/sig-network/1860-kube-proxy-IP-node-binding
83 changes: 83 additions & 0 deletions
content/en/docs/20-container/10-managed-kubernetes/03-tutorials/snapshot-pvcs.md
@@ -0,0 +1,83 @@
---
title: "Volume Snapshots"
linkTitle: "Volume Snapshots"
weight: 20
date: 2024-01-18
---

### Volume Snapshots

Volume snapshots are a way to back up the contents of a volume at a specific point in time. They are represented in Kubernetes by the VolumeSnapshot resource type.

To create a snapshot of a volume, create a VolumeSnapshot resource. In it, you specify the persistent volume claim that you want to back up. You can also optionally specify a name for the snapshot.

Here is an example of a VolumeSnapshot resource:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-volume-snapshot
spec:
  source:
    persistentVolumeClaimName: my-volume
```
Once you have created the VolumeSnapshot resource, the snapshot is taken. The snapshot controller creates a matching VolumeSnapshotContent object, and the CSI driver performs the snapshot on the underlying storage backend; the snapshot data is not stored on the node itself.
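A common use of a snapshot is restoring it into a new volume. As a sketch (assuming a CSI storage class that supports snapshots; the storage class name is a placeholder):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-volume-restored
spec:
  storageClassName: my-storage-class  # placeholder: use your CSI storage class
  dataSource:
    name: my-volume-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi  # must be at least the size of the original volume
```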

### Volume Group Snapshots

Volume group snapshots are a way to snapshot multiple volumes together at the same point in time. They are represented by the VolumeGroupSnapshot resource type, which at the time of writing is an alpha API (`groupsnapshot.storage.k8s.io/v1alpha1`) from the CSI external-snapshotter project and must be supported by your CSI driver.

To create a group snapshot, create a VolumeGroupSnapshot resource. In it, you specify a label selector that matches the persistent volume claims you want to snapshot together. You can also optionally specify a volume group snapshot class.

Here is an example of a VolumeGroupSnapshot resource:
```yaml
apiVersion: groupsnapshot.storage.k8s.io/v1alpha1
kind: VolumeGroupSnapshot
metadata:
  name: my-volume-group-snapshot
spec:
  source:
    selector:
      matchLabels:
        app: my-app
```
Once you have created the VolumeGroupSnapshot resource, the controller takes an individual VolumeSnapshot of every matching volume and groups them together; as above, the snapshot data lives on the storage backend, not on the node.
### Solutions for Volume Snapshots

There are a variety of solutions for volume snapshots in Kubernetes. One option is to use the native Kubernetes API; another is to use a third-party tool or service.

#### Native Kubernetes API

The native Kubernetes API provides a simple way to create volume snapshots. However, it is not as flexible as a third-party tool or service; for example, it has no built-in scheduling or retention policies.

#### Third-Party Tools and Services

A number of third-party tools and services support volume snapshots in Kubernetes. They often offer additional features and flexibility, such as scheduled backups and off-cluster storage, that the native Kubernetes API does not.

Here are some examples of third-party tools and services that support volume snapshots in Kubernetes:

- Portworx
- Velero
- NetApp Trident
- Rook

#### Examples of Using Volume Snapshots
Volume snapshots can be used for a variety of purposes, including:

- Data backup: snapshots can back up data in case of a failure or disaster.
- Data migration: snapshots can migrate data from one storage system to another.
- Data recovery: snapshots can recover data that was accidentally deleted or corrupted.

Here are some examples of using volume snapshots:

- A corporate application uses a persistent volume to store data. The application is regularly backed up with a snapshot. If the application fails, the snapshot is used to restore it.
- A cloud service provider offers a service that creates volume snapshots for its customers. Customers use the snapshots to back up and migrate data.
- A research laboratory needs to store large amounts of data. The laboratory uses volume snapshots to back up data in the cloud, where other researchers can then access and use it.