From 8965e678b29d388a00fd2dc75922b1038363343f Mon Sep 17 00:00:00 2001
From: Alexandr Stefurishin
Date: Tue, 29 Oct 2024 12:31:47 +0300
Subject: [PATCH 1/5] [docs] Mention about CloudEphemeral node type restriction during ReplicatedStoragePool creation (#193)

Signed-off-by: Alexandr Stefurishin
---
 docs/USAGE.md    | 14 ++++++--------
 docs/USAGE_RU.md | 14 ++++++--------
 2 files changed, 12 insertions(+), 16 deletions(-)

diff --git a/docs/USAGE.md b/docs/USAGE.md
index a04a1b6e..3d9d1ee4 100644
--- a/docs/USAGE.md
+++ b/docs/USAGE.md
@@ -56,19 +56,17 @@ spec:
   thinPoolName: thin-pool
 ```
 
-> **Caution!** All `LVMVolumeGroup` resources in the `spec` of the `ReplicatedStoragePool` resource must reside on different nodes. (You may not refer to multiple `LVMVolumeGroup` resources located on the same node).
+Before working with `LINSTOR`, the controller will validate the provided configuration. If an error is detected, it will report the cause of the error.
 
-The `sds-replicated-volume-controller` will then process the `ReplicatedStoragePool` resource defined by the user and create the corresponding `Storage Pool` in the `Linstor` backend.
+Invalid `Storage Pools` will not be created in `LINSTOR`.
 
-> The name of the `Storage Pool` being created will match the name of the created `ReplicatedStoragePool` resource.
-> 
-> The `Storage Pool` will be created on the nodes defined in the LVMVolumeGroup resources.
+For all `LVMVolumeGroup` resources listed in the `spec` of the `ReplicatedStoragePool` resource, the following rules must be met:
+ - They must reside on different nodes. You may not refer to multiple `LVMVolumeGroup` resources located on the same node.
+ - All nodes must be of a type other than `CloudEphemeral` (see [Node types](https://deckhouse.io/products/kubernetes-platform/documentation/v1/modules/040-node-manager/#node-types)).
 
 Information about the controller's progress and results is available in the `status` field of the created `ReplicatedStoragePool` resource.
 
-> Before working with `LINSTOR`, the controller will validate the provided configuration. If an error is detected, it will report the cause of the error.
-> 
-> Invalid `Storage Pools` will not be created in `LINSTOR`.
+The `sds-replicated-volume-controller` will then process the `ReplicatedStoragePool` resource defined by the user and create the corresponding `Storage Pool` in the `LINSTOR` backend. The name of the `Storage Pool` being created will match the name of the created `ReplicatedStoragePool` resource. The `Storage Pool` will be created on the nodes defined in the `LVMVolumeGroup` resources.
 
 #### Updating the `ReplicatedStoragePool` resource
 
diff --git a/docs/USAGE_RU.md b/docs/USAGE_RU.md
index d9621193..318b704c 100644
--- a/docs/USAGE_RU.md
+++ b/docs/USAGE_RU.md
@@ -56,19 +56,17 @@ spec:
   thinPoolName: thin-pool
 ```
 
-> Внимание! Все ресурсы `LVMVolumeGroup`, указанные в `spec` ресурса `ReplicatedStoragePool`, должны быть на разных узлах. (Запрещено указывать несколько ресурсов `LVMVolumeGroup`, которые расположены на одном и том же узле).
+Перед фактической работой с `LINSTOR` контроллер провалидирует предоставленную ему конфигурацию и в случае ошибки предоставит информацию о причинах неудачи.
 
-Результатом обработки ресурса `ReplicatedStoragePool` станет создание необходимого `Storage Pool` в бэкенде `LINSTOR`.
+Невалидные `Storage Pool`'ы не будут созданы в `LINSTOR`.
 
-> Имя созданного `Storage Pool` будет соответствовать имени созданного ресурса `ReplicatedStoragePool`.
-> -> Узлы, на которых будет создан `Storage Pool`, будут взяты из ресурсов LVMVolumeGroup. +Для всех ресурсов `LVMVolumeGroup`, указанных в `spec` ресурса `ReplicatedStoragePool` должны быть соблюдены следующие правила: + - Они должны быть на разных узлах. Запрещено указывать несколько ресурсов `LVMVolumeGroup`, которые расположены на одном и том же узле. + - Все узлы должны иметь тип отличный от `CloudEphemeral` (см. [Типы узлов](https://deckhouse.ru/products/kubernetes-platform/documentation/v1/modules/040-node-manager/#%D1%82%D0%B8%D0%BF%D1%8B-%D1%83%D0%B7%D0%BB%D0%BE%D0%B2)) Информацию о ходе работы контроллера и ее результатах можно посмотреть в поле `status` созданного ресурса `ReplicatedStoragePool`. -> Перед фактической работой с `LINSTOR` контроллер провалидирует предоставленную ему конфигурацию и в случае ошибки предоставит информацию о причинах неудачи. -> -> Невалидные `Storage Pool'ы` не будут созданы в `LINSTOR`. +Результатом обработки ресурса `ReplicatedStoragePool` станет создание необходимого `Storage Pool` в бэкенде `LINSTOR`. Имя созданного `Storage Pool` будет соответствовать имени созданного ресурса `ReplicatedStoragePool`. Узлы, на которых будет создан `Storage Pool`, будут взяты из ресурсов LVMVolumeGroup. #### Обновление ресурса `ReplicatedStoragePool` From b66516aace167368aae8a334e0af6f714cd73a91 Mon Sep 17 00:00:00 2001 From: Makeev Ivan <1791673+Ranger-X@users.noreply.github.com> Date: Fri, 1 Nov 2024 19:13:53 +0400 Subject: [PATCH 2/5] [monitoring] Fix alerts for LinstorSchedulerAdmission pods (#195) Signed-off-by: Ivan.Makeev --- ...scheduler-admission.yaml => linstor-scheduler-admission.tpl} | 2 ++ 1 file changed, 2 insertions(+) rename monitoring/prometheus-rules/{linstor-scheduler-admission.yaml => linstor-scheduler-admission.tpl} (94%) diff --git a/monitoring/prometheus-rules/linstor-scheduler-admission.yaml b/monitoring/prometheus-rules/linstor-scheduler-admission.tpl similarity index 94% rename from monitoring/prometheus-rules/linstor-scheduler-admission.yaml rename to monitoring/prometheus-rules/linstor-scheduler-admission.tpl index 4fc9a6e2..3d8e4b08 100644 --- a/monitoring/prometheus-rules/linstor-scheduler-admission.yaml +++ b/monitoring/prometheus-rules/linstor-scheduler-admission.tpl @@ -1,3 +1,4 @@ +{{- if and (ne "dev" .Values.global.deckhouseVersion) (semverCompare "<1.64" .Values.global.deckhouseVersion) }} - name: kubernetes.linstor.scheduler_state rules: - alert: D8LinstorSchedulerAdmissionPodIsNotReady @@ -34,3 +35,4 @@ The recommended course of action: 1. Retrieve details of the Deployment: `kubectl -n d8-sds-replicated-volume describe deploy linstor-scheduler-admission` 2. 
View the status of the Pod and try to figure out why it is not running: `kubectl -n d8-sds-replicated-volume describe pod -l app=linstor-scheduler-admission`
+{{- end }}

From 0ca6a8ff05ba5d460de062f0620142c56730663c Mon Sep 17 00:00:00 2001
From: Dmitri Popkov <45928193+dmitrpopkov@users.noreply.github.com>
Date: Sun, 3 Nov 2024 13:11:38 +0300
Subject: [PATCH 3/5] [docs] Tech reqs addon for sds replicated (#197)

Signed-off-by: Dmitri Popkov <45928193+dmitrpopkov@users.noreply.github.com>
---
 docs/README.md    | 8 ++++++--
 docs/README_RU.md | 6 +++++-
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/docs/README.md b/docs/README.md
index 3829187c..a20f8052 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -265,9 +265,11 @@ kubectl get sc replicated-storage-class
 
 ## System requirements and recommendations
 
-### Requirements
+### Requirements
+(Applies to single-zone and multi-zone deployments)
 - Stock kernels shipped with the [supported distributions](https://deckhouse.io/documentation/v1/supported_versions.html#linux).
-- High-speed 10Gbps network.
+- High-speed 10Gbps network or faster.
+- Network latency between nodes must be between 0.5 ms and 1 ms to achieve the highest performance.
 - Do not use another SDS (Software defined storage) to provide disks to our SDS.
 
 ### Recommendations
@@ -275,3 +277,5 @@ kubectl get sc replicated-storage-class
 
 - Avoid using RAID. The reasons are detailed in the [FAQ](./faq.html#why-is-it-not-recommended-to-use-raid-for-disks-that-are-used-by-the-sds-replicated-volume-module).
 - Use local physical disks. The reasons are detailed in the [FAQ](./faq.html#why-do-you-recommend-using-local-disks-and-not-nas).
+
+- For the cluster to remain operational, albeit with degraded performance, network latency between nodes should not exceed 20 ms.
diff --git a/docs/README_RU.md b/docs/README_RU.md
index 2319faa3..ca93a4a7 100644
--- a/docs/README_RU.md
+++ b/docs/README_RU.md
@@ -268,8 +268,10 @@ kubectl get sc replicated-storage-class
 ## Системные требования и рекомендации
 
 ### Требования
+(Применительно как к однозональным кластерам, так и к кластерам с использованием нескольких зон доступности)
 - Использование стоковых ядер, поставляемых вместе с [поддерживаемыми дистрибутивами](https://deckhouse.ru/documentation/v1/supported_versions.html#linux);
-- Использование сети 10Gbps.
+- Использование сети 10Gbps и выше.
+- Сетевая задержка между узлами должна быть от 0.5 мс до 1 мс для достижения максимальной производительности.
 - Не использовать другой SDS (Software defined storage) для предоставления дисков нашему SDS
 
 ### Рекомендации
@@ -277,3 +279,5 @@ kubectl get sc replicated-storage-class
 
 - Не использовать RAID. Причины подробнее раскрыты в нашем [FAQ](./faq.html#почему-не-рекомендуется-использовать-raid-для-дисков-которые-используются-модулем-sds-replicated-volume).
 - Использовать локальные "железные" диски. Причины подробнее раскрыты в нашем [FAQ](./faq.html#почему-вы-рекомендуете-использовать-локальные-диски-не-nas).
+
+- Для штатного функционирования кластера, но с деградацией производительности, сетевая задержка не должна превышать 20 мс между узлами.
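The latency figures introduced by the patch above (0.5-1 ms for peak performance, no more than 20 ms for the cluster to stay operational) can be roughly sanity-checked before enabling the module. The standalone Go sketch below is not part of this patch series or of the module's code; it measures TCP connection setup time to a peer storage node as a crude proxy for inter-node latency, and the peer address and port are placeholders.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// measureConnectRTT dials addr several times over TCP and returns the average
// time the connection setup took — a rough stand-in for inter-node latency.
func measureConnectRTT(addr string, attempts int) (time.Duration, error) {
	var total time.Duration
	for i := 0; i < attempts; i++ {
		start := time.Now()
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			return 0, err
		}
		total += time.Since(start)
		_ = conn.Close()
	}
	return total / time.Duration(attempts), nil
}

func main() {
	// Placeholder: substitute the address of another storage node and any TCP port reachable from here.
	const peer = "192.168.1.10:22"

	rtt, err := measureConnectRTT(peer, 5)
	if err != nil {
		fmt.Println("connection attempt failed:", err)
		return
	}
	fmt.Printf("average TCP connect time to %s: %v\n", peer, rtt)

	switch {
	case rtt <= time.Millisecond:
		fmt.Println("within the 0.5-1 ms range required for peak performance")
	case rtt <= 20*time.Millisecond:
		fmt.Println("cluster remains operational, but performance will degrade")
	default:
		fmt.Println("above the 20 ms limit mentioned in the requirements")
	}
}
```

ICMP-based tools such as `ping` give a more direct measurement; the sketch only avoids the need for raw-socket privileges.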
From a1ae840eb5db69bc509372d8da4f2a47cf7889a5 Mon Sep 17 00:00:00 2001 From: Vasily Oleynikov Date: Sat, 9 Nov 2024 17:28:39 +0500 Subject: [PATCH 4/5] [internal] Add backup control label (#196) Signed-off-by: v.oleynikov --- crds/replicatedstoragebackup.yaml | 1 + crds/replicatedstorageclass.yaml | 1 + crds/replicatedstoragepool.yaml | 1 + 3 files changed, 3 insertions(+) diff --git a/crds/replicatedstoragebackup.yaml b/crds/replicatedstoragebackup.yaml index c0ea74bd..1dbd58e7 100644 --- a/crds/replicatedstoragebackup.yaml +++ b/crds/replicatedstoragebackup.yaml @@ -5,6 +5,7 @@ metadata: labels: heritage: deckhouse module: storage + backup.deckhouse.io/cluster-config: "true" spec: group: storage.deckhouse.io scope: Cluster diff --git a/crds/replicatedstorageclass.yaml b/crds/replicatedstorageclass.yaml index a0d85763..eaa5c0b5 100644 --- a/crds/replicatedstorageclass.yaml +++ b/crds/replicatedstorageclass.yaml @@ -5,6 +5,7 @@ metadata: labels: heritage: deckhouse module: storage + backup.deckhouse.io/cluster-config: "true" spec: group: storage.deckhouse.io scope: Cluster diff --git a/crds/replicatedstoragepool.yaml b/crds/replicatedstoragepool.yaml index f0a14cb2..b0d2a290 100644 --- a/crds/replicatedstoragepool.yaml +++ b/crds/replicatedstoragepool.yaml @@ -5,6 +5,7 @@ metadata: labels: heritage: deckhouse module: storage + backup.deckhouse.io/cluster-config: "true" spec: group: storage.deckhouse.io scope: Cluster From 5069896772e33ece3deae8aae3c816cba94eb5d0 Mon Sep 17 00:00:00 2001 From: kneumoin Date: Fri, 15 Nov 2024 01:03:17 +0300 Subject: [PATCH 5/5] [controller] Add finalizers to StorageClass (#192) Signed-off-by: Neumoin, Konstantin --- .../src/go.sum | 6 - .../pkg/controller/controller_suite_test.go | 4 +- .../controller/replicated_storage_class.go | 399 ++++++++++++------ .../replicated_storage_class_test.go | 36 +- images/webhooks/src/go.sum | 4 - 5 files changed, 302 insertions(+), 147 deletions(-) diff --git a/images/sds-replicated-volume-controller/src/go.sum b/images/sds-replicated-volume-controller/src/go.sum index 68f138fc..a468f1a4 100644 --- a/images/sds-replicated-volume-controller/src/go.sum +++ b/images/sds-replicated-volume-controller/src/go.sum @@ -9,14 +9,8 @@ github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/deckhouse/sds-node-configurator/api v0.0.0-20240805103635-969dc811217b h1:EYmHWTWcWMpyxJGZK05ZxlIFnh9s66DRrxLw/LNb/xw= -github.com/deckhouse/sds-node-configurator/api v0.0.0-20240805103635-969dc811217b/go.mod h1:H71+9G0Jr46Qs0BA3z3/xt0h9lbnJnCEYcaCJCWFBf0= -github.com/deckhouse/sds-node-configurator/api v0.0.0-20240919102704-a035b4a92e77 h1:Y3vswUk/rnCpkZzWBk+Mlr9LtMg6EI5LkQ4GvgHCslI= -github.com/deckhouse/sds-node-configurator/api v0.0.0-20240919102704-a035b4a92e77/go.mod h1:H71+9G0Jr46Qs0BA3z3/xt0h9lbnJnCEYcaCJCWFBf0= github.com/deckhouse/sds-node-configurator/api v0.0.0-20240925090458-249de2896583 h1:HQd5YFQqoHj/CQwBKFCyuVCQmNV0PdML8QJiyDka4fQ= github.com/deckhouse/sds-node-configurator/api v0.0.0-20240925090458-249de2896583/go.mod h1:H71+9G0Jr46Qs0BA3z3/xt0h9lbnJnCEYcaCJCWFBf0= -github.com/deckhouse/sds-replicated-volume/api v0.0.0-20240812165341-a73e664454b9 
h1:keKcnq6do7yxGZHeNERhhx3dH1/wQmj+x5vxcWH3CcI= -github.com/deckhouse/sds-replicated-volume/api v0.0.0-20240812165341-a73e664454b9/go.mod h1:6yz0RtbkLVJtK2DeuvgfaqBZRl5V5ax1WsfPF5pbnvo= github.com/donovanhide/eventsource v0.0.0-20210830082556-c59027999da0 h1:C7t6eeMaEQVy6e8CarIhscYQlNmw5e3G36y7l7Y21Ao= github.com/donovanhide/eventsource v0.0.0-20210830082556-c59027999da0/go.mod h1:56wL82FO0bfMU5RvfXoIwSOP2ggqqxT+tAfNEIyxuHw= github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g= diff --git a/images/sds-replicated-volume-controller/src/pkg/controller/controller_suite_test.go b/images/sds-replicated-volume-controller/src/pkg/controller/controller_suite_test.go index 25c03881..3ee1eb06 100644 --- a/images/sds-replicated-volume-controller/src/pkg/controller/controller_suite_test.go +++ b/images/sds-replicated-volume-controller/src/pkg/controller/controller_suite_test.go @@ -18,6 +18,7 @@ package controller_test import ( "context" + "slices" "testing" . "github.com/LINBIT/golinstor/client" @@ -37,7 +38,7 @@ import ( ) const ( - testNamespaceConst = "test-namespace" + testNamespaceConst = "" testNameForAnnotationTests = "rsc-test-annotation" ) @@ -227,6 +228,7 @@ func getAndValidateSC(ctx context.Context, cl client.Client, replicatedSC srv.Re Expect(*storageClass.AllowVolumeExpansion).To(BeTrue()) Expect(*storageClass.VolumeBindingMode).To(Equal(volumeBindingMode)) Expect(*storageClass.ReclaimPolicy).To(Equal(corev1.PersistentVolumeReclaimPolicy(replicatedSC.Spec.ReclaimPolicy))) + Expect(slices.Contains(storageClass.ObjectMeta.Finalizers, controller.StorageClassFinalizerName)).To(BeTrue()) return storageClass } diff --git a/images/sds-replicated-volume-controller/src/pkg/controller/replicated_storage_class.go b/images/sds-replicated-volume-controller/src/pkg/controller/replicated_storage_class.go index 215818c8..8bfe22a7 100644 --- a/images/sds-replicated-volume-controller/src/pkg/controller/replicated_storage_class.go +++ b/images/sds-replicated-volume-controller/src/pkg/controller/replicated_storage_class.go @@ -20,6 +20,7 @@ import ( "context" "errors" "fmt" + "maps" "reflect" "slices" "strings" @@ -46,10 +47,13 @@ import ( const ( ReplicatedStorageClassControllerName = "replicated-storage-class-controller" - ReplicatedStorageClassFinalizerName = "replicatedstorageclass.storage.deckhouse.io" - StorageClassProvisioner = "replicated.csi.storage.deckhouse.io" - StorageClassKind = "StorageClass" - StorageClassAPIVersion = "storage.k8s.io/v1" + // TODO + ReplicatedStorageClassFinalizerName = "replicatedstorageclass.storage.deckhouse.io" + // TODO + StorageClassFinalizerName = "storage.deckhouse.io/sds-replicated-volume" + StorageClassProvisioner = "replicated.csi.storage.deckhouse.io" + StorageClassKind = "StorageClass" + StorageClassAPIVersion = "storage.k8s.io/v1" ZoneLabel = "topology.kubernetes.io/zone" StorageClassLabelKeyPrefix = "class.storage.deckhouse.io" @@ -157,69 +161,79 @@ func NewReplicatedStorageClass( return c, err } -func ReconcileReplicatedStorageClassEvent(ctx context.Context, cl client.Client, log logger.Logger, cfg *config.Options, request reconcile.Request) (bool, error) { - log.Trace(fmt.Sprintf("[ReconcileReplicatedStorageClassEvent] Try to get ReplicatedStorageClass with name: %s", request.Name)) +func ReconcileReplicatedStorageClassEvent( + ctx context.Context, + cl client.Client, + log logger.Logger, + cfg *config.Options, + request reconcile.Request, +) (bool, error) { + log.Trace(fmt.Sprintf("[ReconcileReplicatedStorageClassEvent] 
Try to get ReplicatedStorageClass with name: %s", + request.Name)) replicatedSC, err := GetReplicatedStorageClass(ctx, cl, request.Namespace, request.Name) if err != nil { if k8serrors.IsNotFound(err) { - log.Info(fmt.Sprintf("[ReconcileReplicatedStorageClassEvent] ReplicatedStorageClass with name: %s not found. Finish reconcile.", request.Name)) + log.Info(fmt.Sprintf("[ReconcileReplicatedStorageClassEvent] "+ + "ReplicatedStorageClass with name: %s not found. Finish reconcile.", request.Name)) return false, nil } - return true, fmt.Errorf("[ReconcileReplicatedStorageClassEvent] error getting ReplicatedStorageClass: %w", err) + return true, fmt.Errorf("error getting ReplicatedStorageClass: %w", err) } - shouldRequeue, err := ReconcileReplicatedStorageClass(ctx, cl, log, cfg, replicatedSC) + sc, err := GetStorageClass(ctx, cl, replicatedSC.Name) if err != nil { - replicatedSC.Status.Phase = Failed - replicatedSC.Status.Reason = err.Error() - log.Trace(fmt.Sprintf("[ReconcileReplicatedStorageClass] update ReplicatedStorageClass %+v", replicatedSC)) - if updateErr := UpdateReplicatedStorageClass(ctx, cl, replicatedSC); updateErr != nil { - // save err and add new error to it - err = errors.Join(err, updateErr) - err = fmt.Errorf("[ReconcileReplicatedStorageClassEvent] error after ReconcileReplicatedStorageClass and error after UpdateReplicatedStorageClass: %w", err) - shouldRequeue = true + if k8serrors.IsNotFound(err) { + log.Info("[ReconcileReplicatedStorageClassEvent] StorageClass with name: " + + replicatedSC.Name + " not found.") + } else { + return true, fmt.Errorf("error getting StorageClass: %w", err) } } - return shouldRequeue, err -} + if sc != nil && sc.Provisioner != StorageClassProvisioner { + return false, fmt.Errorf("Reconcile StorageClass with provisioner %s is not allowed", sc.Provisioner) + } -func ReconcileReplicatedStorageClass(ctx context.Context, cl client.Client, log logger.Logger, cfg *config.Options, replicatedSC *srv.ReplicatedStorageClass) (bool, error) { + // Handle deletion if replicatedSC.ObjectMeta.DeletionTimestamp != nil { - log.Info("[ReconcileReplicatedStorageClass] ReplicatedStorageClass with name: " + replicatedSC.Name + " is marked for deletion. Removing it.") - switch replicatedSC.Status.Phase { - case Failed: - log.Info("[ReconcileReplicatedStorageClass] StorageClass with name: " + replicatedSC.Name + " was not deleted because the ReplicatedStorageClass is in a Failed state. Deleting only finalizer.") - case Created: - sc, err := GetStorageClass(ctx, cl, replicatedSC.Namespace, replicatedSC.Name) - if err != nil { - if k8serrors.IsNotFound(err) { - log.Info("[ReconcileReplicatedStorageClass] StorageClass with name: " + replicatedSC.Name + " not found. No need to delete it.") - break - } - return true, fmt.Errorf("[ReconcileReplicatedStorageClass] error getting StorageClass: %s", err.Error()) - } - - log.Info("[ReconcileReplicatedStorageClass] StorageClass with name: " + replicatedSC.Name + " found. Deleting it.") - if err := DeleteStorageClass(ctx, cl, sc); err != nil { - return true, fmt.Errorf("[ReconcileReplicatedStorageClass] error DeleteStorageClass: %s", err.Error()) + log.Info("[ReconcileReplicatedStorageClass] ReplicatedStorageClass with name: " + + replicatedSC.Name + " is marked for deletion. 
Removing it.") + shouldRequeue, err := ReconcileDeleteReplicatedStorageClass(ctx, cl, log, replicatedSC, sc) + if err != nil { + if updateErr := updateReplicatedStorageClassStatus(ctx, cl, log, replicatedSC, Failed, err.Error()); updateErr != nil { + err = errors.Join(err, updateErr) + err = fmt.Errorf("[ReconcileReplicatedStorageClassEvent] error after "+ + "ReconcileDeleteReplicatedStorageClass and error after UpdateReplicatedStorageClass: %w", err) + shouldRequeue = true } - log.Info("[ReconcileReplicatedStorageClass] StorageClass with name: " + replicatedSC.Name + " deleted.") } + return shouldRequeue, err + } - log.Info("[ReconcileReplicatedStorageClass] Removing finalizer from ReplicatedStorageClass with name: " + replicatedSC.Name) - - replicatedSC.ObjectMeta.Finalizers = RemoveString(replicatedSC.ObjectMeta.Finalizers, ReplicatedStorageClassFinalizerName) - if err := UpdateReplicatedStorageClass(ctx, cl, replicatedSC); err != nil { - return true, fmt.Errorf("[ReconcileReplicatedStorageClass] error UpdateReplicatedStorageClass after removing finalizer: %s", err.Error()) + // Normal reconciliation + shouldRequeue, err := ReconcileReplicatedStorageClass(ctx, cl, log, cfg, replicatedSC, sc) + if err != nil { + if updateErr := updateReplicatedStorageClassStatus(ctx, cl, log, replicatedSC, Failed, err.Error()); updateErr != nil { + err = errors.Join(err, updateErr) + err = fmt.Errorf("[ReconcileReplicatedStorageClassEvent] error after ReconcileReplicatedStorageClass"+ + "and error after UpdateReplicatedStorageClass: %w", err) + shouldRequeue = true } - - log.Info("[ReconcileReplicatedStorageClass] Finalizer removed from ReplicatedStorageClass with name: " + replicatedSC.Name) - return false, nil } + return shouldRequeue, err +} + +func ReconcileReplicatedStorageClass( + ctx context.Context, + cl client.Client, + log logger.Logger, + cfg *config.Options, + replicatedSC *srv.ReplicatedStorageClass, + oldSC *storagev1.StorageClass, +) (bool, error) { log.Info("[ReconcileReplicatedStorageClass] Validating ReplicatedStorageClass with name: " + replicatedSC.Name) zones, err := GetClusterZones(ctx, cl) @@ -230,51 +244,51 @@ func ReconcileReplicatedStorageClass(ctx context.Context, cl client.Client, log valid, msg := ValidateReplicatedStorageClass(replicatedSC, zones) if !valid { - err := fmt.Errorf("[ReconcileReplicatedStorageClass] Validation of ReplicatedStorageClass %s failed for the following reason: %s", replicatedSC.Name, msg) + err := fmt.Errorf("[ReconcileReplicatedStorageClass] Validation of "+ + "ReplicatedStorageClass %s failed for the following reason: %s", replicatedSC.Name, msg) return false, err } - log.Info("[ReconcileReplicatedStorageClass] ReplicatedStorageClass with name: " + replicatedSC.Name + " is valid") - - log.Info("[ReconcileReplicatedStorageClass] Try to get StorageClass with name: " + replicatedSC.Name) - oldSC, err := GetStorageClass(ctx, cl, replicatedSC.Namespace, replicatedSC.Name) - if err != nil { - if !k8serrors.IsNotFound(err) { - return true, fmt.Errorf("[ReconcileReplicatedStorageClass] error getting StorageClass: %w", err) - } - } + log.Info("[ReconcileReplicatedStorageClass] ReplicatedStorageClass with name: " + + replicatedSC.Name + " is valid") - log.Trace("[ReconcileReplicatedStorageClass] Check if virtualization module is enabled and if the ReplicatedStorageClass has VolumeAccess set to Local") + log.Trace("[ReconcileReplicatedStorageClass] Check if virtualization module is enabled and if " + + "the ReplicatedStorageClass has VolumeAccess set to 
Local") var virtualizationEnabled bool if replicatedSC.Spec.VolumeAccess == VolumeAccessLocal { - virtualizationEnabled, err = GetVirtualizationModuleEnabled(ctx, cl, log, types.NamespacedName{Name: ControllerConfigMapName, Namespace: cfg.ControllerNamespace}) + virtualizationEnabled, err = GetVirtualizationModuleEnabled(ctx, cl, log, + types.NamespacedName{Name: ControllerConfigMapName, Namespace: cfg.ControllerNamespace}) if err != nil { err = fmt.Errorf("[ReconcileReplicatedStorageClass] error GetVirtualizationModuleEnabled: %w", err) return true, err } - log.Trace(fmt.Sprintf("[ReconcileReplicatedStorageClass] ReplicatedStorageClass has VolumeAccess set to Local and virtualization module is %t", virtualizationEnabled)) + log.Trace(fmt.Sprintf("[ReconcileReplicatedStorageClass] ReplicatedStorageClass has VolumeAccess set "+ + "to Local and virtualization module is %t", virtualizationEnabled)) } + newSC := GetNewStorageClass(replicatedSC, virtualizationEnabled) + if oldSC == nil { - log.Info("[ReconcileReplicatedStorageClass] StorageClass with name: " + replicatedSC.Name + " not found. Create it.") - newSC := GetNewStorageClass(replicatedSC, virtualizationEnabled) + log.Info("[ReconcileReplicatedStorageClass] StorageClass with name: " + + replicatedSC.Name + " not found. Create it.") log.Trace(fmt.Sprintf("[ReconcileReplicatedStorageClass] create StorageClass %+v", newSC)) if err = CreateStorageClass(ctx, cl, newSC); err != nil { - return true, fmt.Errorf("[ReconcileReplicatedStorageClass] error CreateStorageClass %s: %w", replicatedSC.Name, err) + return true, fmt.Errorf("error CreateStorageClass %s: %w", replicatedSC.Name, err) } log.Info("[ReconcileReplicatedStorageClass] StorageClass with name: " + replicatedSC.Name + " created.") } else { - log.Info("[ReconcileReplicatedStorageClass] StorageClass with name: " + replicatedSC.Name + " found. Update it if needed.") - - shouldRequeue, err := UpdateStorageClassIfNeeded(ctx, cl, log, replicatedSC, oldSC, virtualizationEnabled) + log.Info("[ReconcileReplicatedStorageClass] StorageClass with name: Update " + replicatedSC.Name + + " storage class if needed.") + shouldRequeue, err := UpdateStorageClassIfNeeded(ctx, cl, log, newSC, oldSC) if err != nil { - return shouldRequeue, fmt.Errorf("[ReconcileReplicatedStorageClass] error updateStorageClassIfNeeded: %w", err) + return shouldRequeue, fmt.Errorf("error updateStorageClassIfNeeded: %w", err) } } replicatedSC.Status.Phase = Created replicatedSC.Status.Reason = "ReplicatedStorageClass and StorageClass are equal." 
if !slices.Contains(replicatedSC.ObjectMeta.Finalizers, ReplicatedStorageClassFinalizerName) { - replicatedSC.ObjectMeta.Finalizers = append(replicatedSC.ObjectMeta.Finalizers, ReplicatedStorageClassFinalizerName) + replicatedSC.ObjectMeta.Finalizers = append(replicatedSC.ObjectMeta.Finalizers, + ReplicatedStorageClassFinalizerName) } log.Trace(fmt.Sprintf("[ReconcileReplicatedStorageClassEvent] update ReplicatedStorageClass %+v", replicatedSC)) if err = UpdateReplicatedStorageClass(ctx, cl, replicatedSC); err != nil { @@ -285,6 +299,47 @@ func ReconcileReplicatedStorageClass(ctx context.Context, cl client.Client, log return false, nil } +func ReconcileDeleteReplicatedStorageClass( + ctx context.Context, + cl client.Client, + log logger.Logger, + replicatedSC *srv.ReplicatedStorageClass, + sc *storagev1.StorageClass, +) (bool, error) { + switch replicatedSC.Status.Phase { + case Failed: + log.Info("[ReconcileDeleteReplicatedStorageClass] StorageClass with name: " + replicatedSC.Name + + " was not deleted because the ReplicatedStorageClass is in a Failed state. Deleting only finalizer.") + case Created: + if sc == nil { + log.Info("[ReconcileDeleteReplicatedStorageClass] StorageClass with name: " + replicatedSC.Name + + " no need to delete.") + break + } + log.Info("[ReconcileDeleteReplicatedStorageClass] StorageClass with name: " + replicatedSC.Name + + " found. Deleting it.") + + if err := DeleteStorageClass(ctx, cl, sc); err != nil { + return true, fmt.Errorf("error DeleteStorageClass: %w", err) + } + log.Info("[ReconcileDeleteReplicatedStorageClass] StorageClass with name: " + replicatedSC.Name + + " deleted.") + } + + log.Info("[ReconcileDeleteReplicatedStorageClass] Removing finalizer from ReplicatedStorageClass with name: " + + replicatedSC.Name) + + replicatedSC.ObjectMeta.Finalizers = RemoveString(replicatedSC.ObjectMeta.Finalizers, + ReplicatedStorageClassFinalizerName) + if err := UpdateReplicatedStorageClass(ctx, cl, replicatedSC); err != nil { + return true, fmt.Errorf("error UpdateReplicatedStorageClass after removing finalizer: %w", err) + } + + log.Info("[ReconcileDeleteReplicatedStorageClass] Finalizer removed from ReplicatedStorageClass with name: " + + replicatedSC.Name) + return false, nil +} + func GetClusterZones(ctx context.Context, cl client.Client) (map[string]struct{}, error) { nodes := v1.NodeList{} if err := cl.List(ctx, &nodes); err != nil { @@ -368,7 +423,7 @@ func UpdateReplicatedStorageClass(ctx context.Context, cl client.Client, replica return nil } -func CompareStorageClasses(oldSC, newSC *storagev1.StorageClass) (bool, string) { +func CompareStorageClasses(newSC, oldSC *storagev1.StorageClass) (bool, string) { var ( failedMsgBuilder strings.Builder equal = true @@ -477,8 +532,10 @@ func GenerateStorageClassFromReplicatedStorageClass(replicatedSC *srv.Replicated Name: replicatedSC.Name, Namespace: replicatedSC.Namespace, OwnerReferences: nil, - Finalizers: nil, + Finalizers: []string{StorageClassFinalizerName}, ManagedFields: nil, + Labels: map[string]string{ManagedLabelKey: ManagedLabelValue}, + Annotations: nil, }, AllowVolumeExpansion: &allowVolumeExpansion, Parameters: storageClassParameters, @@ -497,14 +554,17 @@ func GetReplicatedStorageClass(ctx context.Context, cl client.Client, namespace, Namespace: namespace, }, replicatedSC) + if err != nil { + return nil, err + } + return replicatedSC, err } -func GetStorageClass(ctx context.Context, cl client.Client, namespace, name string) (*storagev1.StorageClass, error) { +func GetStorageClass(ctx 
context.Context, cl client.Client, name string) (*storagev1.StorageClass, error) { sc := &storagev1.StorageClass{} err := cl.Get(ctx, client.ObjectKey{ - Name: name, - Namespace: namespace, + Name: name, }, sc) if err != nil { @@ -515,49 +575,97 @@ func GetStorageClass(ctx context.Context, cl client.Client, namespace, name stri } func DeleteStorageClass(ctx context.Context, cl client.Client, sc *storagev1.StorageClass) error { - return cl.Delete(ctx, sc) + finalizers := sc.ObjectMeta.Finalizers + switch len(finalizers) { + case 0: + return cl.Delete(ctx, sc) + case 1: + if finalizers[0] != StorageClassFinalizerName { + return fmt.Errorf("deletion of StorageClass with finalizer %s is not allowed", finalizers[0]) + } + sc.ObjectMeta.Finalizers = nil + if err := cl.Update(ctx, sc); err != nil { + return fmt.Errorf("error updating StorageClass to remove finalizer %s: %w", + StorageClassFinalizerName, err) + } + return cl.Delete(ctx, sc) + } + // The finalizers list contains more than one element — return an error + return fmt.Errorf("deletion of StorageClass with multiple(%v) finalizers is not allowed", finalizers) } -func RemoveString(slice []string, s string) (result []string) { - for _, value := range slice { - if value != s { - result = append(result, value) +// areSlicesEqualIgnoreOrder compares two slices as sets, ignoring order +func areSlicesEqualIgnoreOrder(a, b []string) bool { + if len(a) != len(b) { + return false + } + + set := make(map[string]struct{}, len(a)) + for _, item := range a { + set[item] = struct{}{} + } + + for _, item := range b { + if _, found := set[item]; !found { + return false } } - return + + return true } -func ReconcileStorageClassLabelsAndAnnotationsIfNeeded(ctx context.Context, cl client.Client, oldSC, newSC *storagev1.StorageClass) error { - if !reflect.DeepEqual(oldSC.Labels, newSC.Labels) || !reflect.DeepEqual(oldSC.Annotations, newSC.Annotations) { - oldSC.Labels = newSC.Labels - oldSC.Annotations = newSC.Annotations - return cl.Update(ctx, oldSC) +func updateStorageClassMetaDataIfNeeded( + ctx context.Context, + cl client.Client, + newSC, oldSC *storagev1.StorageClass, +) error { + needsUpdate := !maps.Equal(oldSC.Labels, newSC.Labels) || + !maps.Equal(oldSC.Annotations, newSC.Annotations) || + !areSlicesEqualIgnoreOrder(newSC.Finalizers, oldSC.Finalizers) + + if !needsUpdate { + return nil } - return nil + + oldSC.Labels = maps.Clone(newSC.Labels) + oldSC.Annotations = maps.Clone(newSC.Annotations) + oldSC.Finalizers = slices.Clone(newSC.Finalizers) + + return cl.Update(ctx, oldSC) } -func canRecreateStorageClass(oldSC, newSC *storagev1.StorageClass) (bool, string) { +func canRecreateStorageClass(newSC, oldSC *storagev1.StorageClass) (bool, string) { newSCCopy := newSC.DeepCopy() oldSCCopy := oldSC.DeepCopy() - // We can recreate StorageClass only if the following parameters are not equal. If other parameters are not equal, we can't recreate StorageClass and users must delete ReplicatedStorageClass resource and create it again manually. + // We can recreate StorageClass only if the following parameters are not equal. + // If other parameters are not equal, we can't recreate StorageClass and + // users must delete ReplicatedStorageClass resource and create it again manually. 
delete(newSCCopy.Parameters, QuorumMinimumRedundancyWithPrefixSCKey) delete(oldSCCopy.Parameters, QuorumMinimumRedundancyWithPrefixSCKey) return CompareStorageClasses(newSCCopy, oldSCCopy) } -func recreateStorageClassIfNeeded(ctx context.Context, cl client.Client, log logger.Logger, oldSC, newSC *storagev1.StorageClass) (isRecreated, shouldRequeue bool, err error) { +func recreateStorageClassIfNeeded( + ctx context.Context, + cl client.Client, + log logger.Logger, + newSC, oldSC *storagev1.StorageClass, +) (isRecreated, shouldRequeue bool, err error) { equal, msg := CompareStorageClasses(newSC, oldSC) log.Trace(fmt.Sprintf("[recreateStorageClassIfNeeded] msg after compare: %s", msg)) if equal { - log.Info("[recreateStorageClassIfNeeded] Old and new StorageClass are equal. No need to recreate StorageClass.") + log.Info("[recreateStorageClassIfNeeded] Old and new StorageClass are equal." + + "No need to recreate StorageClass.") return false, false, nil } - log.Info("[recreateStorageClassIfNeeded] ReplicatedStorageClass and StorageClass are not equal. Check if StorageClass can be recreated.") - canRecreate, msg := canRecreateStorageClass(oldSC, newSC) + log.Info("[recreateStorageClassIfNeeded] ReplicatedStorageClass and StorageClass are not equal." + + "Check if StorageClass can be recreated.") + canRecreate, msg := canRecreateStorageClass(newSC, oldSC) if !canRecreate { - err := fmt.Errorf("[recreateStorageClassIfNeeded] The StorageClass cannot be recreated because its parameters are not equal: %s", msg) + err := fmt.Errorf("[recreateStorageClassIfNeeded] The StorageClass cannot be recreated because "+ + "its parameters are not equal: %s", msg) return false, false, err } @@ -578,56 +686,105 @@ func recreateStorageClassIfNeeded(ctx context.Context, cl client.Client, log log func GetNewStorageClass(replicatedSC *srv.ReplicatedStorageClass, virtualizationEnabled bool) *storagev1.StorageClass { newSC := GenerateStorageClassFromReplicatedStorageClass(replicatedSC) - newSC.Labels = map[string]string{ManagedLabelKey: ManagedLabelValue} if replicatedSC.Spec.VolumeAccess == VolumeAccessLocal && virtualizationEnabled { - newSC.Annotations = map[string]string{StorageClassVirtualizationAnnotationKey: StorageClassVirtualizationAnnotationValue} + if newSC.Annotations == nil { + newSC.Annotations = make(map[string]string, 1) + } + newSC.Annotations[StorageClassVirtualizationAnnotationKey] = StorageClassVirtualizationAnnotationValue } return newSC } -func GetUpdatedStorageClass(replicatedSC *srv.ReplicatedStorageClass, oldSC *storagev1.StorageClass, virtualizationEnabled bool) *storagev1.StorageClass { - newSC := GenerateStorageClassFromReplicatedStorageClass(replicatedSC) - - newSC.Labels = make(map[string]string, len(oldSC.Labels)) - for k, v := range oldSC.Labels { - newSC.Labels[k] = v +func DoUpdateStorageClass( + newSC *storagev1.StorageClass, + oldSC *storagev1.StorageClass, +) { + // Copy Labels from oldSC to newSC if they do not exist in newSC + if len(oldSC.Labels) > 0 { + if newSC.Labels == nil { + newSC.Labels = maps.Clone(oldSC.Labels) + } else { + updateMap(newSC.Labels, oldSC.Labels) + } } - newSC.Labels[ManagedLabelKey] = ManagedLabelValue - newSC.Annotations = make(map[string]string, len(oldSC.Annotations)) - for k, v := range oldSC.Annotations { - newSC.Annotations[k] = v - } + copyAnnotations := maps.Clone(oldSC.Annotations) + delete(copyAnnotations, StorageClassVirtualizationAnnotationKey) - if replicatedSC.Spec.VolumeAccess == VolumeAccessLocal && virtualizationEnabled { - 
newSC.Annotations[StorageClassVirtualizationAnnotationKey] = StorageClassVirtualizationAnnotationValue - } else { - delete(newSC.Annotations, StorageClassVirtualizationAnnotationKey) + // Copy relevant Annotations from oldSC to newSC, excluding StorageClassVirtualizationAnnotationKey + if len(copyAnnotations) > 0 { + if newSC.Annotations == nil { + newSC.Annotations = copyAnnotations + } else { + updateMap(newSC.Annotations, copyAnnotations) + } } - if len(newSC.Annotations) == 0 { - newSC.Annotations = nil + // Copy Finalizers from oldSC to newSC, avoiding duplicates + if len(oldSC.Finalizers) > 0 { + finalizersSet := make(map[string]struct{}, len(newSC.Finalizers)) + for _, f := range newSC.Finalizers { + finalizersSet[f] = struct{}{} + } + for _, f := range oldSC.Finalizers { + if _, exists := finalizersSet[f]; !exists { + newSC.Finalizers = append(newSC.Finalizers, f) + finalizersSet[f] = struct{}{} + } + } } - - return newSC } -func UpdateStorageClassIfNeeded(ctx context.Context, cl client.Client, log logger.Logger, replicatedSC *srv.ReplicatedStorageClass, oldSC *storagev1.StorageClass, virtualizationEnabled bool) (bool, error) { - newSC := GetUpdatedStorageClass(replicatedSC, oldSC, virtualizationEnabled) +func UpdateStorageClassIfNeeded( + ctx context.Context, + cl client.Client, + log logger.Logger, + newSC *storagev1.StorageClass, + oldSC *storagev1.StorageClass, +) (bool, error) { + DoUpdateStorageClass(newSC, oldSC) log.Trace(fmt.Sprintf("[UpdateStorageClassIfNeeded] old StorageClass %+v", oldSC)) log.Trace(fmt.Sprintf("[UpdateStorageClassIfNeeded] updated StorageClass %+v", newSC)) - isRecreated, shouldRequeue, err := recreateStorageClassIfNeeded(ctx, cl, log, oldSC, newSC) - if err != nil { + isRecreated, shouldRequeue, err := recreateStorageClassIfNeeded(ctx, cl, log, newSC, oldSC) + if err != nil || isRecreated { return shouldRequeue, err } - if !isRecreated { - err := ReconcileStorageClassLabelsAndAnnotationsIfNeeded(ctx, cl, oldSC, newSC) - if err != nil { - return true, err - } + if err := updateStorageClassMetaDataIfNeeded(ctx, cl, newSC, oldSC); err != nil { + return true, err } return shouldRequeue, nil } + +func RemoveString(slice []string, s string) (result []string) { + for _, value := range slice { + if value != s { + result = append(result, value) + } + } + return +} + +func updateReplicatedStorageClassStatus( + ctx context.Context, + cl client.Client, + log logger.Logger, + replicatedSC *srv.ReplicatedStorageClass, + phase string, + reason string, +) error { + replicatedSC.Status.Phase = phase + replicatedSC.Status.Reason = reason + log.Trace(fmt.Sprintf("[updateReplicatedStorageClassStatus] update ReplicatedStorageClass %+v", replicatedSC)) + return UpdateReplicatedStorageClass(ctx, cl, replicatedSC) +} + +func updateMap(dst, src map[string]string) { + for k, v := range src { + if _, exists := dst[k]; !exists { + dst[k] = v + } + } +} diff --git a/images/sds-replicated-volume-controller/src/pkg/controller/replicated_storage_class_test.go b/images/sds-replicated-volume-controller/src/pkg/controller/replicated_storage_class_test.go index e9594034..21e27b3e 100644 --- a/images/sds-replicated-volume-controller/src/pkg/controller/replicated_storage_class_test.go +++ b/images/sds-replicated-volume-controller/src/pkg/controller/replicated_storage_class_test.go @@ -112,7 +112,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Name: testName, Namespace: testNamespaceConst, OwnerReferences: nil, - Finalizers: nil, + Finalizers: 
[]string{controller.StorageClassFinalizerName}, ManagedFields: nil, Labels: map[string]string{ "storage.deckhouse.io/managed-by": "sds-replicated-volume", @@ -150,7 +150,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { } Expect(err).NotTo(HaveOccurred()) - sc, err := controller.GetStorageClass(ctx, cl, testNamespaceConst, testName) + sc, err := controller.GetStorageClass(ctx, cl, testName) Expect(err).NotTo(HaveOccurred()) Expect(sc).NotTo(BeNil()) Expect(sc.Name).To(Equal(testName)) @@ -185,7 +185,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { err = controller.DeleteStorageClass(ctx, cl, storageClass) Expect(err).NotTo(HaveOccurred()) - sc, err := controller.GetStorageClass(ctx, cl, testName, testNamespaceConst) + sc, err := controller.GetStorageClass(ctx, cl, testName) Expect(err).NotTo(BeNil()) Expect(errors.IsNotFound(err)).To(BeTrue()) Expect(sc).To(BeNil()) @@ -207,7 +207,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { } Expect(err).NotTo(HaveOccurred()) - sc, err = controller.GetStorageClass(ctx, cl, testNamespaceConst, testName) + sc, err = controller.GetStorageClass(ctx, cl, testName) Expect(err).NotTo(HaveOccurred()) Expect(sc).NotTo(BeNil()) Expect(sc.Name).To(Equal(testName)) @@ -381,7 +381,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(err).NotTo(HaveOccurred()) Expect(reflect.ValueOf(resources[testName]).IsZero()).To(BeTrue()) - sc, err := controller.GetStorageClass(ctx, cl, testNamespaceConst, testName) + sc, err := controller.GetStorageClass(ctx, cl, testName) Expect(err).To(HaveOccurred()) Expect(errors.IsNotFound(err)).To(BeTrue()) Expect(sc).To(BeNil()) @@ -432,7 +432,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(err).NotTo(HaveOccurred()) Expect(requeue).To(BeFalse()) - storageClass, err := controller.GetStorageClass(ctx, cl, testNamespaceConst, testName) + storageClass, err := controller.GetStorageClass(ctx, cl, testName) Expect(err).NotTo(HaveOccurred()) Expect(storageClass).NotTo(BeNil()) Expect(storageClass.Name).To(Equal(testName)) @@ -680,7 +680,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { replicatedSC = getAndValidateNotReconciledRSC(ctx, cl, testName) - storageClass, err := controller.GetStorageClass(ctx, cl, testNamespaceConst, testName) + storageClass, err := controller.GetStorageClass(ctx, cl, testName) Expect(err).To(HaveOccurred()) Expect(errors.IsNotFound(err)).To(BeTrue()) Expect(storageClass).To(BeNil()) @@ -699,7 +699,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(slices.Contains(resource.Finalizers, controller.ReplicatedStorageClassFinalizerName)).To(BeTrue()) - storageClass, err = controller.GetStorageClass(ctx, cl, testNamespaceConst, testName) + storageClass, err = controller.GetStorageClass(ctx, cl, testName) Expect(err).NotTo(HaveOccurred()) Expect(storageClass).NotTo(BeNil()) Expect(storageClass.Name).To(Equal(testName)) @@ -764,7 +764,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { resFinalizers := strings.Join(resource.Finalizers, "") Expect(strings.Contains(resFinalizers, controller.ReplicatedStorageClassFinalizerName)) - storageClass, err := controller.GetStorageClass(ctx, cl, testNamespaceConst, testName) + storageClass, err := controller.GetStorageClass(ctx, cl, testName) Expect(err).NotTo(HaveOccurred()) Expect(storageClass).NotTo(BeNil()) 
Expect(storageClass.Name).To(Equal(testName)) @@ -788,7 +788,11 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { }, } - failedMessage := "[ReconcileReplicatedStorageClass] error updateStorageClassIfNeeded: [recreateStorageClassIfNeeded] The StorageClass cannot be recreated because its parameters are not equal: Old StorageClass and New StorageClass are not equal: ReclaimPolicy are not equal (Old StorageClass: Retain, New StorageClass: not-equal" + failedMessage := "error updateStorageClassIfNeeded: " + + "[recreateStorageClassIfNeeded] The StorageClass cannot be recreated because its parameters are not equal: " + + "Old StorageClass and New StorageClass are not equal: ReclaimPolicy are not equal " + + "(Old StorageClass: not-equal, New StorageClass: Retain" + err := cl.Create(ctx, &replicatedSC) if err == nil { defer func() { @@ -818,7 +822,7 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(replicatedSCafterReconcile.Name).To(Equal(testName)) Expect(replicatedSCafterReconcile.Status.Phase).To(Equal(controller.Failed)) - storageClass, err := controller.GetStorageClass(ctx, cl, testNamespaceConst, testName) + storageClass, err := controller.GetStorageClass(ctx, cl, testName) Expect(err).NotTo(HaveOccurred()) Expect(storageClass).NotTo(BeNil()) Expect(storageClass.Name).To(Equal(testName)) @@ -1570,7 +1574,8 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(storageClass.Annotations).NotTo(BeNil()) Expect(storageClass.Annotations[controller.StorageClassVirtualizationAnnotationKey]).To(Equal(controller.StorageClassVirtualizationAnnotationValue)) - scResourceAfterUpdate := controller.GetUpdatedStorageClass(&replicatedSC, storageClass, virtualizationEnabled) + scResourceAfterUpdate := controller.GetNewStorageClass(&replicatedSC, virtualizationEnabled) + controller.DoUpdateStorageClass(scResourceAfterUpdate, storageClass) Expect(scResourceAfterUpdate).NotTo(BeNil()) Expect(scResourceAfterUpdate.Annotations).To(BeNil()) @@ -1662,7 +1667,8 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(err).NotTo(HaveOccurred()) Expect(virtualizationEnabled).To(BeTrue()) - scResource := controller.GetUpdatedStorageClass(&replicatedSC, storageClass, virtualizationEnabled) + scResource := controller.GetNewStorageClass(&replicatedSC, virtualizationEnabled) + controller.DoUpdateStorageClass(scResource, storageClass) Expect(scResource).NotTo(BeNil()) Expect(scResource.Annotations).NotTo(BeNil()) Expect(len(scResource.Annotations)).To(Equal(2)) @@ -1727,8 +1733,8 @@ var _ = Describe(controller.ReplicatedStorageClassControllerName, func() { Expect(err).NotTo(HaveOccurred()) Expect(virtualizationEnabled).To(BeFalse()) - scResourceAfterUpdate := controller.GetUpdatedStorageClass(&replicatedSC, storageClass, virtualizationEnabled) - Expect(scResourceAfterUpdate).NotTo(BeNil()) + scResourceAfterUpdate := controller.GetNewStorageClass(&replicatedSC, virtualizationEnabled) + controller.DoUpdateStorageClass(scResourceAfterUpdate, storageClass) Expect(scResourceAfterUpdate.Annotations).NotTo(BeNil()) Expect(len(scResourceAfterUpdate.Annotations)).To(Equal(1)) Expect(scResourceAfterUpdate.Annotations[controller.DefaultStorageClassAnnotationKey]).To(Equal("true")) diff --git a/images/webhooks/src/go.sum b/images/webhooks/src/go.sum index 751e914e..8de9b056 100644 --- a/images/webhooks/src/go.sum +++ b/images/webhooks/src/go.sum @@ -7,10 +7,6 @@ github.com/davecgh/go-spew v1.1.0/go.mod 
h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= -github.com/deckhouse/sds-node-configurator/api v0.0.0-20240805103635-969dc811217b h1:EYmHWTWcWMpyxJGZK05ZxlIFnh9s66DRrxLw/LNb/xw= -github.com/deckhouse/sds-node-configurator/api v0.0.0-20240805103635-969dc811217b/go.mod h1:H71+9G0Jr46Qs0BA3z3/xt0h9lbnJnCEYcaCJCWFBf0= -github.com/deckhouse/sds-node-configurator/api v0.0.0-20240919102704-a035b4a92e77 h1:Y3vswUk/rnCpkZzWBk+Mlr9LtMg6EI5LkQ4GvgHCslI= -github.com/deckhouse/sds-node-configurator/api v0.0.0-20240919102704-a035b4a92e77/go.mod h1:H71+9G0Jr46Qs0BA3z3/xt0h9lbnJnCEYcaCJCWFBf0= github.com/deckhouse/sds-node-configurator/api v0.0.0-20240925090458-249de2896583 h1:HQd5YFQqoHj/CQwBKFCyuVCQmNV0PdML8QJiyDka4fQ= github.com/deckhouse/sds-node-configurator/api v0.0.0-20240925090458-249de2896583/go.mod h1:H71+9G0Jr46Qs0BA3z3/xt0h9lbnJnCEYcaCJCWFBf0= github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g=
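Patch 5 above makes StorageClass deletion conditional on its finalizers: the controller strips its own `storage.deckhouse.io/sds-replicated-volume` finalizer before deleting, and refuses to delete a StorageClass that still carries foreign finalizers. A minimal standalone sketch of the same pattern using the controller-runtime client (illustrative only, not the module's actual `DeleteStorageClass` implementation; the package and function names are invented) might look like this:

```go
package finalizerexample

import (
	"context"
	"fmt"
	"slices"

	storagev1 "k8s.io/api/storage/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// Finalizer owned by the controller, as added to StorageClass objects in patch 5.
const ownFinalizer = "storage.deckhouse.io/sds-replicated-volume"

// deleteOwnedStorageClass mirrors the pattern from patch 5: remove our own
// finalizer when it is the only one, then delete; refuse to delete when
// finalizers owned by other components are still present.
func deleteOwnedStorageClass(ctx context.Context, cl client.Client, sc *storagev1.StorageClass) error {
	switch {
	case len(sc.Finalizers) == 0:
		return cl.Delete(ctx, sc)
	case len(sc.Finalizers) == 1 && sc.Finalizers[0] == ownFinalizer:
		sc.Finalizers = nil
		if err := cl.Update(ctx, sc); err != nil {
			return fmt.Errorf("removing finalizer %s from StorageClass %s: %w", ownFinalizer, sc.Name, err)
		}
		return cl.Delete(ctx, sc)
	default:
		foreign := slices.DeleteFunc(slices.Clone(sc.Finalizers), func(f string) bool {
			return f == ownFinalizer
		})
		return fmt.Errorf("refusing to delete StorageClass %s: foreign finalizers present: %v", sc.Name, foreign)
	}
}
```

Keeping deletion behind an explicitly owned finalizer ensures the controller never races another component that still has cleanup to perform on the same StorageClass.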