Update install instructions for OKE on cloudscale.ch and Exoscale #333

Merged · 1 commit · Jun 10, 2024
115 changes: 35 additions & 80 deletions docs/modules/ROOT/partials/install/bootstrap-nodes.adoc
@@ -68,23 +68,26 @@ openshift-install --dir "${INSTALLER_DIR}" \
wait-for bootstrap-complete --log-level debug
----

. Remove bootstrap node and provision infra nodes
. Remove bootstrap node and provision remaining nodes
+
[source,bash,subs="attributes+"]
----
cat > override.tf <<EOF
module "cluster" {
ifeval::["{provider}" == "exoscale"]
storage_count = 0
endif::[]
worker_count = 0
additional_worker_groups = {}
}
EOF
rm override.tf
terraform apply

popd
----
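+
For context: `rm override.tf` deletes the override written earlier in the install flow to keep the worker (and, on Exoscale, storage) counts at zero during bootstrap. Judging from the lines removed in this hunk, that file looks roughly like this:
+
[source,bash,subs="attributes+"]
----
# Sketch of the override.tf created during bootstrap; deleting it lets
# `terraform apply` provision the remaining (infra, worker, storage) nodes.
cat > override.tf <<EOF
module "cluster" {
ifeval::["{provider}" == "exoscale"]
  storage_count            = 0
endif::[]
  worker_count             = 0
  additional_worker_groups = {}
}
EOF
----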

. Approve infra certs
. Review and merge the LB hieradata MR (listed in Terraform output `hieradata_mr`) and run Puppet on the LBs after the deploy job has completed
+
[source,bash]
----
for fqdn in "${LB_FQDNS[@]}"; do
ssh "${fqdn}" sudo puppetctl run
done
----
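+
A slight variation on the loop above, if you prefer to stop at the first failed run (assumes `puppetctl run` exits non-zero on failure):
+
[source,bash]
----
# Abort as soon as a Puppet run fails on one of the LBs.
for fqdn in "${LB_FQDNS[@]}"; do
    ssh "${fqdn}" sudo puppetctl run || { echo "Puppet run failed on ${fqdn}" >&2; break; }
done
----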

. Approve node certs
+
[source,bash]
----
@@ -97,11 +100,29 @@ include::partial$install/approve-node-csrs.adoc[]
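+
The `approve-node-csrs` partial itself isn't visible in this diff; approving pending node CSRs on OpenShift 4 typically looks something like the following sketch (not necessarily the partial's exact content):
+
[source,bash]
----
# Approve all currently pending CSRs; repeat after a few minutes, since the
# serving-certificate CSRs only appear once the client CSRs are approved.
oc get csr -ojson | \
  jq -r '.items[] | select(.status == {}) | .metadata.name' | \
  xargs --no-run-if-empty oc adm certificate approve
----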
+
[source,bash]
----
kubectl get nodes -lnode-role.kubernetes.io/worker
kubectl label node -lnode-role.kubernetes.io/worker \
node-role.kubernetes.io/infra=""
kubectl get node -ojson | \
jq -r '.items[] | select(.metadata.name | test("infra-")).metadata.name' | \
xargs -I {} kubectl label node {} node-role.kubernetes.io/infra=
----
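+
To double-check the result (a plain verification step, not part of the change):
+
[source,bash]
----
# The infra nodes should now show both the worker and infra roles.
kubectl get nodes -l node-role.kubernetes.io/infra
----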

ifeval::["{provider}" == "exoscale"]
. Label and taint storage nodes
+
include::partial$label-taint-storage-nodes.adoc[]
endif::[]
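+
On Exoscale, you can verify that the storage nodes picked up their role and taint with something like this (verification sketch only):
+
[source,bash]
----
# Storage nodes should carry the storage role and the
# storagenode=True:NoSchedule taint applied by the partial.
kubectl get nodes -l node-role.kubernetes.io/storage
kubectl describe nodes -l node-role.kubernetes.io/storage | grep Taints
----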

. Label worker nodes
+
[source,bash]
----
kubectl get node -ojson | \
jq -r '.items[] | select(.metadata.name | test("infra|master|storage-")|not).metadata.name' | \
xargs -I {} kubectl label node {} node-role.kubernetes.io/app=
----
+
[NOTE]
At this point you may want to add extra labels to the additional worker groups, if there are any.
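+
For example, a hypothetical additional worker group whose node names contain `cpu-heavy` could be labeled with the same pattern (the group name and label key here are placeholders, not something defined by this repository):
+
[source,bash]
----
# Placeholder example for an additional worker group; adjust the name
# pattern and the label to whatever the group actually needs.
kubectl get node -ojson | \
  jq -r '.items[] | select(.metadata.name | test("cpu-heavy")).metadata.name' | \
  xargs -I {} kubectl label node {} node-role.kubernetes.io/cpu-heavy=
----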

. Enable proxy protocol on ingress controller
+
[source,bash]
@@ -121,76 +142,10 @@ This step isn't necessary if you've disabled the proxy protocol on the load-bala
By default, PROXY protocol is enabled through the VSHN Commodore global defaults.
====
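+
The patch itself is elided between the hunks above; on OpenShift 4, enabling the PROXY protocol on an ingress controller that uses the HostNetwork endpoint publishing strategy generally looks roughly like this (a sketch, not necessarily the exact command used in these docs):
+
[source,bash]
----
# Sketch: switch the default ingress controller to the PROXY protocol
# (assumes the HostNetwork endpoint publishing strategy).
kubectl -n openshift-ingress-operator patch ingresscontroller default \
  --type=merge \
  -p '{"spec":{"endpointPublishingStrategy":{"type":"HostNetwork","hostNetwork":{"protocol":"PROXY"}}}}'
----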

. Review and merge the LB hieradata MR (listed in Terraform output `hieradata_mr`) and run Puppet on the LBs after the deploy job has completed
+
[source,bash]
----
for fqdn in "${LB_FQDNS[@]}"; do
ssh "${fqdn}" sudo puppetctl run
done
----

. Wait for installation to complete
+
[source,bash]
----
openshift-install --dir ${INSTALLER_DIR} \
wait-for install-complete --log-level debug
----
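+
Once the installer reports completion, a quick sanity check (optional; assumes the installer kubeconfig is still in use):
+
[source,bash]
----
# The cluster version should be Available and all cluster operators
# should report Available=True, Progressing=False, Degraded=False.
kubectl get clusterversion
kubectl get clusteroperators
----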

ifeval::["{provider}" == "exoscale"]
. Provision storage nodes
+
[source,bash]
----
cat > override.tf <<EOF
module "cluster" {
worker_count = 0
additional_worker_groups = {}
}
EOF
terraform apply
----

. Approve storage certs
+
include::partial$install/approve-node-csrs.adoc[]

. Label and taint storage nodes
+
include::partial$label-taint-storage-nodes.adoc[]
endif::[]

. Provision worker nodes
+
[source,bash]
----
rm override.tf
terraform apply

popd
----

. Approve worker certs
+
include::partial$install/approve-node-csrs.adoc[]

. Label worker nodes
+
[source,bash,subs="attributes"]
----
kubectl label --overwrite node -lnode-role.kubernetes.io/worker \
node-role.kubernetes.io/app=""
kubectl label node -lnode-role.kubernetes.io/infra \
node-role.kubernetes.io/app-
ifeval::["{provider}" == "exoscale"]
kubectl label node -lnode-role.kubernetes.io/storage \
node-role.kubernetes.io/app-
endif::[]

# This should show the worker nodes only
kubectl get nodes -l node-role.kubernetes.io/app
----
+
[NOTE]
At this point you may want to add extra labels to the additional worker groups, if there are any.
11 changes: 3 additions & 8 deletions docs/modules/ROOT/partials/label-taint-storage-nodes.adoc
@@ -1,13 +1,8 @@
[source,bash,subs="attributes"]
----
kubectl {kubectl_extra_args} label --overwrite node -lnode-role.kubernetes.io/worker \
node-role.kubernetes.io/storage=""
kubectl {kubectl_extra_args} label node -lnode-role.kubernetes.io/infra \
node-role.kubernetes.io/storage-
ifdef::delabel_app_nodes[]
kubectl {kubectl_extra_args} label node -lnode-role.kubernetes.io/app \
node-role.kubernetes.io/storage-
endif::delabel_app_nodes[]
kubectl get node -ojson | \
jq -r '.items[] | select(.metadata.name | test("storage-")).metadata.name' | \
xargs -I {} kubectl {kubectl_extra_args} label node {} node-role.kubernetes.io/storage=

kubectl {kubectl_extra_args} taint node -lnode-role.kubernetes.io/storage \
storagenode=True:NoSchedule