Merge pull request #363 from appuio/update/osd-replace-docs
Update change storage node size documentation
DebakelOrakel authored Nov 6, 2024
2 parents 0fba7c0 + 045e73a commit 7e82a6c
Showing 2 changed files with 17 additions and 3 deletions.
@@ -100,8 +100,17 @@ echo $NODES_TO_REPLACE
[source,bash]
----
terraform state rm "module.cluster.module.storage.random_id.node_id"
-terraform state rm "module.cluster.module.storage.exoscale_compute.nodes"
+terraform state rm "module.cluster.module.storage.exoscale_compute_instance.nodes"
----
+
[NOTE]
====
If the cluster is using a dedicated hypervisor, you may also need to remove the anti-affinity group from the Terraform state.
[source,bash]
----
terraform state rm "module.cluster.module.storage.exoscale_anti_affinity_group.anti_affinity_group[0]"
----
====
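+
Before re-running Terraform, it can help to confirm that the entries are really gone from the state. This check isn't part of the upstream docs; the `grep` pattern is an assumption based on the module paths used above:
+
[source,bash]
----
# Hypothetical sanity check: should print no remaining storage node entries
terraform state list | grep "module.cluster.module.storage" || true
----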

. Run Terraform to spin up replacement nodes
+
9 changes: 7 additions & 2 deletions docs/modules/ROOT/partials/storage-ceph-backfilling.adoc
@@ -9,9 +9,12 @@ If the storage cluster is mostly idle, you can speed up backfilling by temporari
[source,bash]
----
kubectl --as=cluster-admin -n syn-rook-ceph-cluster exec -it deploy/rook-ceph-tools -- \
-    ceph config set osd osd_max_backfills 10 <1>
+    ceph config set osd osd_mclock_override_recovery_settings true <1>
+kubectl --as=cluster-admin -n syn-rook-ceph-cluster exec -it deploy/rook-ceph-tools -- \
+    ceph config set osd osd_max_backfills 10 <2>
----
-<1> The number of PGs which are allowed to backfill in parallel.
+<1> Allow overwriting `osd_max_backfills`.
+<2> The number of PGs which are allowed to backfill in parallel.
Adjust up or down depending on client load on the storage cluster.
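
While backfilling runs, you can watch its progress in the cluster status. This example isn't from the original docs; it only assumes the `rook-ceph-tools` deployment used above:

[source,bash]
----
# Overall cluster health; recovery/backfill progress appears in the io:
# and progress: sections while PGs are backfilling
kubectl --as=cluster-admin -n syn-rook-ceph-cluster exec -it deploy/rook-ceph-tools -- \
    ceph -s
----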

After backfilling is completed, you can remove the configuration with
@@ -20,6 +23,8 @@ After backfilling is completed, you can remove the configuration with
----
kubectl --as=cluster-admin -n syn-rook-ceph-cluster exec -it deploy/rook-ceph-tools -- \
ceph config rm osd osd_max_backfills
kubectl --as=cluster-admin -n syn-rook-ceph-cluster exec -it deploy/rook-ceph-tools -- \
ceph config rm osd osd_mclock_override_recovery_settings
----
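
To double-check that both settings are back to their defaults, you can query them afterwards (a hypothetical verification step, not part of the original docs):

[source,bash]
----
# Both commands should report the cluster's default values again
kubectl --as=cluster-admin -n syn-rook-ceph-cluster exec -it deploy/rook-ceph-tools -- \
    ceph config get osd osd_max_backfills
kubectl --as=cluster-admin -n syn-rook-ceph-cluster exec -it deploy/rook-ceph-tools -- \
    ceph config get osd osd_mclock_override_recovery_settings
----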
====
+
