diff --git a/Documentation/Getting-Started/quickstart.md b/Documentation/Getting-Started/quickstart.md
index bac7e5f9b0d0..633a761f018d 100644
--- a/Documentation/Getting-Started/quickstart.md
+++ b/Documentation/Getting-Started/quickstart.md
@@ -36,7 +36,7 @@ To configure the Ceph storage cluster, at least one of these local storage opti
 A simple Rook cluster is created for Kubernetes with the following `kubectl` commands and [example manifests](https://github.com/rook/rook/blob/master/deploy/examples).
 
 ```console
-$ git clone --single-branch --branch v1.13.9 https://github.com/rook/rook.git
+$ git clone --single-branch --branch v1.13.10 https://github.com/rook/rook.git
 cd rook/deploy/examples
 kubectl create -f crds.yaml -f common.yaml -f operator.yaml
 kubectl create -f cluster.yaml
diff --git a/Documentation/Storage-Configuration/Monitoring/ceph-monitoring.md b/Documentation/Storage-Configuration/Monitoring/ceph-monitoring.md
index 150ba573dac5..e2c48ddb3340 100644
--- a/Documentation/Storage-Configuration/Monitoring/ceph-monitoring.md
+++ b/Documentation/Storage-Configuration/Monitoring/ceph-monitoring.md
@@ -44,7 +44,7 @@ There are two sources for metrics collection:
 From the root of your locally cloned Rook repo, go the monitoring directory:
 
 ```console
-$ git clone --single-branch --branch v1.13.9 https://github.com/rook/rook.git
+$ git clone --single-branch --branch v1.13.10 https://github.com/rook/rook.git
 cd rook/deploy/examples/monitoring
 ```
 
diff --git a/Documentation/Upgrade/rook-upgrade.md b/Documentation/Upgrade/rook-upgrade.md
index be7fb6f4a45a..569689d27d59 100644
--- a/Documentation/Upgrade/rook-upgrade.md
+++ b/Documentation/Upgrade/rook-upgrade.md
@@ -78,11 +78,11 @@ With this upgrade guide, there are a few notes to consider:
 
 Unless otherwise noted due to extenuating requirements, upgrades from one patch release of Rook to
 another are as simple as updating the common resources and the image of the Rook operator. For
-example, when Rook v1.13.9 is released, the process of updating from v1.13.0 is as simple as running
+example, when Rook v1.13.10 is released, the process of updating from v1.13.0 is as simple as running
 the following:
 
 ```console
-git clone --single-branch --depth=1 --branch v1.13.9 https://github.com/rook/rook.git
+git clone --single-branch --depth=1 --branch v1.13.10 https://github.com/rook/rook.git
 cd rook/deploy/examples
 ```
 
@@ -94,7 +94,7 @@ Then, apply the latest changes from v1.13, and update the Rook Operator image.
 
 ```console
 kubectl apply -f common.yaml -f crds.yaml
-kubectl -n rook-ceph set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.13.9
+kubectl -n rook-ceph set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.13.10
 ```
 
 As exemplified above, it is a good practice to update Rook common resources from the example
@@ -129,7 +129,7 @@ In order to successfully upgrade a Rook cluster, the following prerequisites mus
 ## Rook Operator Upgrade
 
 The examples given in this guide upgrade a live Rook cluster running `v1.12.11` to
-the version `v1.13.9`. This upgrade should work from any official patch release of Rook v1.12 to any
+the version `v1.13.10`. This upgrade should work from any official patch release of Rook v1.12 to any
 official patch release of v1.13.
 
 Let's get started!
@@ -156,7 +156,7 @@ by the Operator. Also update the Custom Resource Definitions (CRDs).
 Get the latest common resources manifests that contain the latest changes.
 
 ```console
-git clone --single-branch --depth=1 --branch v1.13.9 https://github.com/rook/rook.git
+git clone --single-branch --depth=1 --branch v1.13.10 https://github.com/rook/rook.git
 cd rook/deploy/examples
 ```
 
@@ -195,7 +195,7 @@ The largest portion of the upgrade is triggered when the operator's image is upd
 When the operator is updated, it will proceed to update all of the Ceph daemons.
 
 ```console
-kubectl -n $ROOK_OPERATOR_NAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.13.9
+kubectl -n $ROOK_OPERATOR_NAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.13.10
 ```
 
 ### **3. Update Ceph CSI**
@@ -225,16 +225,16 @@ watch --exec kubectl -n $ROOK_CLUSTER_NAMESPACE get deployments -l rook_cluster=
 ```
 
 As an example, this cluster is midway through updating the OSDs. When all deployments report `1/1/1`
-availability and `rook-version=v1.13.9`, the Ceph cluster's core components are fully updated.
+availability and `rook-version=v1.13.10`, the Ceph cluster's core components are fully updated.
 
 ```console
 Every 2.0s: kubectl -n rook-ceph get deployment -o j...
 
-rook-ceph-mgr-a         req/upd/avl: 1/1/1      rook-version=v1.13.9
-rook-ceph-mon-a         req/upd/avl: 1/1/1      rook-version=v1.13.9
-rook-ceph-mon-b         req/upd/avl: 1/1/1      rook-version=v1.13.9
-rook-ceph-mon-c         req/upd/avl: 1/1/1      rook-version=v1.13.9
-rook-ceph-osd-0         req/upd/avl: 1//        rook-version=v1.13.9
+rook-ceph-mgr-a         req/upd/avl: 1/1/1      rook-version=v1.13.10
+rook-ceph-mon-a         req/upd/avl: 1/1/1      rook-version=v1.13.10
+rook-ceph-mon-b         req/upd/avl: 1/1/1      rook-version=v1.13.10
+rook-ceph-mon-c         req/upd/avl: 1/1/1      rook-version=v1.13.10
+rook-ceph-osd-0         req/upd/avl: 1//        rook-version=v1.13.10
 rook-ceph-osd-1         req/upd/avl: 1/1/1      rook-version=v1.12.11
 rook-ceph-osd-2         req/upd/avl: 1/1/1      rook-version=v1.12.11
 ```
@@ -246,13 +246,13 @@ An easy check to see if the upgrade is totally finished is to check that there i
 # kubectl -n $ROOK_CLUSTER_NAMESPACE get deployment -l rook_cluster=$ROOK_CLUSTER_NAMESPACE -o jsonpath='{range .items[*]}{"rook-version="}{.metadata.labels.rook-version}{"\n"}{end}' | sort | uniq
 This cluster is not yet finished:
   rook-version=v1.12.11
-  rook-version=v1.13.9
+  rook-version=v1.13.10
 This cluster is finished:
-  rook-version=v1.13.9
+  rook-version=v1.13.10
 ```
 
 ### **5. Verify the updated cluster**
 
-At this point, the Rook operator should be running version `rook/ceph:v1.13.9`.
+At this point, the Rook operator should be running version `rook/ceph:v1.13.10`.
 
 Verify the CephCluster health using the [health verification doc](health-verification.md).
diff --git a/deploy/charts/rook-ceph/values.yaml b/deploy/charts/rook-ceph/values.yaml
index 711e11b69c81..412533db9ec8 100644
--- a/deploy/charts/rook-ceph/values.yaml
+++ b/deploy/charts/rook-ceph/values.yaml
@@ -7,7 +7,7 @@ image:
   repository: rook/ceph
   # -- Image tag
   # @default -- `master`
-  tag: v1.13.9
+  tag: v1.13.10
   # -- Image pull policy
   pullPolicy: IfNotPresent
 
diff --git a/deploy/examples/direct-mount.yaml b/deploy/examples/direct-mount.yaml
index f6deb1582652..fa6aafa2ebfd 100644
--- a/deploy/examples/direct-mount.yaml
+++ b/deploy/examples/direct-mount.yaml
@@ -18,7 +18,7 @@ spec:
       dnsPolicy: ClusterFirstWithHostNet
       containers:
         - name: rook-direct-mount
-          image: rook/ceph:v1.13.9
+          image: rook/ceph:v1.13.10
           command: ["/bin/bash"]
           args: ["-m", "-c", "/usr/local/bin/toolbox.sh"]
           imagePullPolicy: IfNotPresent
diff --git a/deploy/examples/images.txt b/deploy/examples/images.txt
index 252460836e72..81c98f165d4e 100644
--- a/deploy/examples/images.txt
+++ b/deploy/examples/images.txt
@@ -8,4 +8,4 @@
  registry.k8s.io/sig-storage/csi-provisioner:v4.0.0
  registry.k8s.io/sig-storage/csi-resizer:v1.10.0
  registry.k8s.io/sig-storage/csi-snapshotter:v7.0.1
- rook/ceph:v1.13.9
+ rook/ceph:v1.13.10
diff --git a/deploy/examples/multus-validation.yaml b/deploy/examples/multus-validation.yaml
index 133dbe745dd3..a6e50c0cecef 100644
--- a/deploy/examples/multus-validation.yaml
+++ b/deploy/examples/multus-validation.yaml
@@ -101,7 +101,7 @@ spec:
       serviceAccountName: rook-ceph-multus-validation
       containers:
         - name: multus-validation
-          image: rook/ceph:v1.13.9
+          image: rook/ceph:v1.13.10
           command: ["rook"]
           args:
             - "multus"
diff --git a/deploy/examples/operator-openshift.yaml b/deploy/examples/operator-openshift.yaml
index f448331da1d9..7b2ee43d5bfa 100644
--- a/deploy/examples/operator-openshift.yaml
+++ b/deploy/examples/operator-openshift.yaml
@@ -663,7 +663,7 @@ spec:
       serviceAccountName: rook-ceph-system
       containers:
         - name: rook-ceph-operator
-          image: rook/ceph:v1.13.9
+          image: rook/ceph:v1.13.10
           args: ["ceph", "operator"]
           securityContext:
             runAsNonRoot: true
diff --git a/deploy/examples/operator.yaml b/deploy/examples/operator.yaml
index d168584f0372..728e1927985e 100644
--- a/deploy/examples/operator.yaml
+++ b/deploy/examples/operator.yaml
@@ -588,7 +588,7 @@ spec:
       serviceAccountName: rook-ceph-system
       containers:
         - name: rook-ceph-operator
-          image: rook/ceph:v1.13.9
+          image: rook/ceph:v1.13.10
           args: ["ceph", "operator"]
           securityContext:
             runAsNonRoot: true
diff --git a/deploy/examples/osd-purge.yaml b/deploy/examples/osd-purge.yaml
index f170004292e2..4d289d19a5a9 100644
--- a/deploy/examples/osd-purge.yaml
+++ b/deploy/examples/osd-purge.yaml
@@ -28,7 +28,7 @@ spec:
       serviceAccountName: rook-ceph-purge-osd
       containers:
         - name: osd-removal
-          image: rook/ceph:v1.13.9
+          image: rook/ceph:v1.13.10
           # TODO: Insert the OSD ID in the last parameter that is to be removed
           # The OSD IDs are a comma-separated list. For example: "0" or "0,2".
           # If you want to preserve the OSD PVCs, set `--preserve-pvc true`.
diff --git a/deploy/examples/toolbox-job.yaml b/deploy/examples/toolbox-job.yaml
index 9bc8ac0667c9..094e9a9bb064 100644
--- a/deploy/examples/toolbox-job.yaml
+++ b/deploy/examples/toolbox-job.yaml
@@ -10,7 +10,7 @@ spec:
     spec:
       initContainers:
        - name: config-init
-          image: rook/ceph:v1.13.9
+          image: rook/ceph:v1.13.10
           command: ["/usr/local/bin/toolbox.sh"]
           args: ["--skip-watch"]
           imagePullPolicy: IfNotPresent
@@ -29,7 +29,7 @@ spec:
             mountPath: /var/lib/rook-ceph-mon
       containers:
         - name: script
-          image: rook/ceph:v1.13.9
+          image: rook/ceph:v1.13.10
           volumeMounts:
             - mountPath: /etc/ceph
               name: ceph-config
diff --git a/deploy/examples/toolbox-operator-image.yaml b/deploy/examples/toolbox-operator-image.yaml
index cb1a102f273a..c77bc7a80564 100644
--- a/deploy/examples/toolbox-operator-image.yaml
+++ b/deploy/examples/toolbox-operator-image.yaml
@@ -24,7 +24,7 @@ spec:
       dnsPolicy: ClusterFirstWithHostNet
       containers:
         - name: rook-ceph-tools-operator-image
-          image: rook/ceph:v1.13.9
+          image: rook/ceph:v1.13.10
           command:
             - /bin/bash
             - -c
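
For reference, the net effect of this version bump on a running cluster is the patch-release upgrade already documented in the `rook-upgrade.md` changes above. A minimal recap sketch, using only commands from those docs and assuming the default `rook-ceph` operator and cluster namespaces in place of `$ROOK_OPERATOR_NAMESPACE`/`$ROOK_CLUSTER_NAMESPACE`:

```console
# Fetch the example manifests for the new patch release
git clone --single-branch --depth=1 --branch v1.13.10 https://github.com/rook/rook.git
cd rook/deploy/examples

# Apply the latest common resources and CRDs, then update the operator image
kubectl apply -f common.yaml -f crds.yaml
kubectl -n rook-ceph set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.13.10

# The upgrade is finished when every deployment reports the single version v1.13.10
kubectl -n rook-ceph get deployment -l rook_cluster=rook-ceph \
  -o jsonpath='{range .items[*]}{"rook-version="}{.metadata.labels.rook-version}{"\n"}{end}' | sort | uniq
```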