Releases: libopenstorage/stork

23.7.0

02 Aug 17:03

Improvements

  • Updated golang, aws-iam-authenticator and google-cloud-sdk versions to resolve 20 Critical and 105 High vulnerabilities reported by the JFrog scanner. #1458

  • Added the kubelogin utility to the Stork container. #1448

Bug Fixes

  • Issue: Due to occasional delays in deleting the bound pod, the previous 30-second timeout proved insufficient, resulting in backup failures.
    User Impact: Backups of volumes bound with the "WaitForFirstConsumer" mode sometimes failed with timeout errors.
    Resolution: The timeout has been extended to five minutes so that deletion of the pod created to bind a "WaitForFirstConsumer" volume no longer hits timeout errors. See the sketch after this list. #1454

  • Issue: During cleanup after a failed KDMP/localsnapshot backup, the volumesnapshot/volumesnapshotcontent objects were not being removed.
    User Impact: Stale volumesnapshot/volumesnapshotcontent objects accumulated unnecessarily.
    Resolution: Volumesnapshot/volumesnapshotcontent cleanup is now performed even when a KDMP/localsnapshot backup fails. #295

  • Issue: When a native CSI backup failed with a timeout error, the volumesnapshotcontent was not deleted.
    User Impact: volumesnapshotcontent objects accumulated after native CSI backup failures.
    Resolution: The volumesnapshotcontent is now also deleted when the backup fails. #1460
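
A minimal sketch of the bound-pod deletion wait, not Stork's actual code: the clientset, pod name, and five-second poll interval are assumptions; only the five-minute timeout comes from the note above.

```go
package example

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodDeletion polls until the bound pod is gone, giving up after five
// minutes instead of the previous 30-second window.
func waitForPodDeletion(ctx context.Context, client kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // pod is gone, stop waiting
		}
		return false, err // keep polling while the pod still exists
	})
}
```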

23.6.0

27 Jun 07:18

New Features

  • Added support for an NFS share backup location for applicationbackup and applicationrestore in Stork. This feature is currently supported only with the PX-Backup product. #1434

Bug Fixes

  • Issue: Update calls were made to the volumesnapshotschedule every 10 seconds even when there was nothing new to update.
    User Impact: With many volumesnapshotschedules running, this put unnecessary load on the API server.
    Resolution: Updates are now skipped when pruning leaves the volume snapshot list unchanged; see the sketch after this list. #1415

  • Issue: The restore size was taken from the volumesnapshot size, and for some CSI drivers the PVC did not get bound when the volumesnapshot size was smaller than the source volume size.
    User Impact: CSI restores failed with some storage provisioners.
    Resolution: The restore volume size is now updated only when the volumesnapshot size is greater than the source volume size. #1445

  • Issue: A MySQL application can show inconsistent data after being restored.
    User Impact: A MySQL application may fail to run after a restore operation because of data inconsistency in the backup.
    Resolution: Fixed the data inconsistency by holding the table lock for the required interval while the backup is in progress. #1436
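
The volumesnapshotschedule fix above boils down to a skip-if-unchanged check before calling Update. A rough sketch of that pattern; the Snapshot type and function name are hypothetical stand-ins, not Stork's API.

```go
package example

import (
	apiequality "k8s.io/apimachinery/pkg/api/equality"
)

// Snapshot is a stand-in for the entries kept on a volumesnapshotschedule status.
type Snapshot struct {
	Name    string
	Created string
}

// needsUpdate reports whether pruning actually changed the stored snapshot list,
// so the caller can skip the Update call (and the API server round trip) when
// nothing changed.
func needsUpdate(current, pruned []Snapshot) bool {
	return !apiequality.Semantic.DeepEqual(current, pruned)
}
```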

23.5.0

31 May 07:47

New Features

  • You can now provide namespace labels in the MigrationSchedule spec. This lets you select the namespaces to be migrated by label. #1395
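
The exact MigrationSchedule field is not named in these notes, so the following is only an illustration of the idea: a client-go sketch of how a namespace label selector resolves to the concrete list of namespaces a migration would cover.

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// namespacesByLabel resolves a label selector (for example "tier=prod") to the
// names of the namespaces that currently match it.
func namespacesByLabel(ctx context.Context, client kubernetes.Interface, selector string) ([]string, error) {
	nsList, err := client.CoreV1().Namespaces().List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return nil, err
	}
	names := make([]string, 0, len(nsList.Items))
	for _, ns := range nsList.Items {
		names = append(names, ns.Name)
	}
	return names, nil
}
```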

Improvements

  • Stork now supports the ignoreOwnerReferences parameter in the Migration and MigrationSchedule objects. This parameter enables Stork to skip the owner reference check and migrate all resources, and it removes the ownerReference while applying the resource. This allows migrating all the Kubernetes resources managed and owned by an application Operator’s CR. #1398
    NOTE: You need to update the storkctl binary for this change to take effect.
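
As an illustration of the ignoreOwnerReferences behavior (a sketch under assumed types, not Stork's implementation), dropping the owner reference from a resource before it is applied on the destination cluster can be as simple as:

```go
package example

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// stripOwnerReferences clears metadata.ownerReferences so a resource owned by an
// application Operator's CR can be applied on the destination cluster without
// referencing an owner that does not exist there.
func stripOwnerReferences(obj *unstructured.Unstructured) {
	obj.SetOwnerReferences(nil)
}
```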

Bug Fixes

  • Issue: Restoring from a backup that was taken from a previously restored PVC was failing for CSI volumes.
    User Impact: You could not restore a backup that had been taken from a previously restored CSI PVC.
    Resolution: You can now successfully perform CSI restores using backups taken from already restored PVCs. #1409

  • Issue: You may encounter webhook errors when attempting to modify StatefulSets in order to set Stork as the scheduler.
    User Impact: Webhook-related errors may give the impression that the pod scheduler feature does not work properly.
    Resolution: Removed the webhook for StatefulSets and Deployments, as Stork already has a webhook for pods that handles setting Stork as the scheduler. #1373

  • Issue: Incorrect migration status was reported for service account deletion on the source cluster.
    User Impact: The expected deletion of the service account on the destination cluster, based on migration status, did not occur.
    Resolution: The purged status is no longer displayed for resources that are merged on the destination cluster. #1368

  • Issue: storkctl create clusterpair did not honor the port provided on the CLI.
    User Impact: You could not create bidirectional clusterpairs.
    Resolution: The port provided on the CLI is now used in the endpoints when creating a clusterpair. #1383
    NOTE: You need to update the storkctl binary for this change to take effect.

  • Issue: Stork deleted pods running on nodes in the Degraded or Portworx StorageDown state.
    User Impact: Applications were disrupted, even though drivers like Portworx support running applications on StorageDown nodes.
    Resolution: Stork no longer deletes pods running on StorageDown nodes. #1385

23.4.0

05 May 22:43

New Features

  • You can now apply an exclude label in the MigrationSchedule spec to exclude specific resources from migration. #1339

Improvements

  • The Stork service no longer accepts the old TLS versions 1.0 and 1.1. #1348

  • Stork now creates the default-migration-policy schedule policy, which is set to an interval of 30 minutes instead of 1 minute. #1346

  • Stork now skips migrating the OCP specific (system:) ClusterRole and ClusterRoleBinding resources on OpenShift. #1347

  • Stork now uses a default QPS of 1000 and a Burst of 2000 for its Kubernetes client; see the sketch after this list. #1356, #1378

  • Updated moby package to fix vulnerability CVE-2023-28840. #1381
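
For context on the QPS and Burst improvement above, here is a hedged client-go sketch; the values mirror the release note, while the constructor and surrounding code are assumptions rather than Stork's actual setup.

```go
package example

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newClient raises the client-side rate limits so bulk operations are not
// throttled by client-go's defaults (QPS 5, Burst 10).
func newClient(cfg *rest.Config) (kubernetes.Interface, error) {
	cfg.QPS = 1000
	cfg.Burst = 2000
	return kubernetes.NewForConfig(cfg)
}
```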

Bug Fixes

  • Issue: The Stork monitoring controller repeatedly executed a high number of ListPods API calls, resulting in considerable memory consumption.
    User Impact: When the cluster contains a large number of pods, the extra ListPods calls triggered by Stork monitoring lead to substantial memory usage.
    Resolution: The health monitoring process now retrieves the pod list from a cache, reducing Stork's overall memory usage; a client-go informer sketch follows this list. #1321, #1340, #1390

  • Issue: Some CRD plurals do not follow standard pluralization rules, so API calls made during migration or backup derived the wrong plural.
    User Impact: These CRDs were neither migrated nor backed up properly, which affects disaster recovery, backup, and restore of applications that depend on the CRs.
    Resolution: API calls now use the correct CRD plurals fetched from the cluster; a discovery-based sketch follows this list. #1361
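
The pod-cache approach from the monitoring fix, sketched with a client-go shared informer (illustrative only; the resync period and function name are assumptions):

```go
package example

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	corelisters "k8s.io/client-go/listers/core/v1"
)

// newPodLister starts a shared informer so pod lookups are served from a local,
// event-driven cache instead of repeated ListPods calls to the API server.
func newPodLister(client kubernetes.Interface, stopCh <-chan struct{}) corelisters.PodLister {
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	lister := factory.Core().V1().Pods().Lister()
	factory.Start(stopCh)
	factory.WaitForCacheSync(stopCh)
	return lister
}

// Subsequent lookups hit only the cache, for example:
//   pods, err := lister.Pods("default").List(labels.Everything())
// (labels is "k8s.io/apimachinery/pkg/labels")
```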
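
And a sketch of the plurals fix: asking the cluster, via the discovery API, for the real resource plural of a kind instead of guessing it from English pluralization rules (illustrative; not Stork's exact code).

```go
package example

import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/restmapper"
)

// pluralFor returns the resource plural registered on the cluster for a given
// GroupVersionKind, as reported by the discovery API.
func pluralFor(dc discovery.DiscoveryInterface, gvk schema.GroupVersionKind) (string, error) {
	groupResources, err := restmapper.GetAPIGroupResources(dc)
	if err != nil {
		return "", err
	}
	mapper := restmapper.NewDiscoveryRESTMapper(groupResources)
	mapping, err := mapper.RESTMapping(gvk.GroupKind(), gvk.Version)
	if err != nil {
		return "", err
	}
	return mapping.Resource.Resource, nil
}
```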

23.3.1

26 Apr 11:41

Improvements

  • Issue: Backing up a large number of Kubernetes resources resulted in failures due to limits on gRPC request sizes, Kubernetes custom resource sizes, or etcd payload sizes.
    User Impact: If a backup contained a large number of Kubernetes resources, it could fail to complete because of errors related to these size limits.
    Resolution: To prevent errors related to gRPC size limits, etcd payload limits, and custom resource size, the resource information was removed from both the ApplicationBackup and ApplicationRestore CRs.

23.3.0

06 Apr 07:19

Improvements

  • Stork now supports cluster pairing for Oracle Kubernetes Engine (OKE) clusters. #1331
  • Stork now supports bidirectional cluster pairing for Asynchronous DR migration.
  • Stork will update StorageClass on PV objects after the PVC migration. #1320
  • Fixed CVE-2020-26160 vulnerability by updating the JWT package. #1343

Bug Fixes

  • Issue: Failback to primary cluster failed when an app used Portworx CSI volumes, as the volumeHandle pointed to the old volume ID.
    User Impact: App did not come up on the primary cluster after failback.
    Resolution: Recreate the PVs and specify the correct volume name in the volumeHandle field of the spec. As a result, the app will use properly bound PVCs and will come up without any issue. #1355

  • Issue: Could not find resources for the Watson Knowledge Catalog.
    User Impact: Migration failed for the Watson Knowledge Catalog.
    Resolution: Use the proper plurals for the CRDs for a successful migration process for the Watson Knowledge Catalog. #1326

  • Issue: Service and service account updates were not reflected on the destination cluster.
    User Impact: Migration failed to keep the updated resources on the destination cluster.
    Resolution: During migration, service updates are now synced, secrets associated with the service account are merged, and the AutomountServiceAccountToken parameter is updated for the service account on the destination cluster. #1326

23.2.1

29 Mar 16:20

Bug Fixes

  • Issue: px-backup needs a way to discover the custom admin namespace configured in the Stork deployment.
    User Impact: px-backup users were unable to use the custom admin namespace configured in the Stork deployment.
    Resolution: Added a stork-controller-config ConfigMap in the kube-system namespace with the details of the admin namespace; see the sketch after this list. #1310

  • Issue: Rule command executor pods were always created and started in the kube-system namespace.
    User Impact: Users had concerns about running the Stork rule pods in the kube-system namespace.
    Resolution: Rule command executor pods now run in the namespace where Stork is deployed. #1338
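
A sketch of how a consumer such as px-backup could read the admin namespace from the new ConfigMap; the data key "admin-ns" is a placeholder for illustration, not a documented key.

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// adminNamespace reads the admin namespace that Stork publishes in the
// stork-controller-config ConfigMap in kube-system.
func adminNamespace(ctx context.Context, client kubernetes.Interface) (string, error) {
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "stork-controller-config", metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	// "admin-ns" is a placeholder key; inspect the ConfigMap for the actual key name.
	return cm.Data["admin-ns"], nil
}
```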

23.2.0

10 Mar 01:02

Notes:

  • Starting with 23.2.0, the naming scheme for Stork releases has changed. Release numbers are now based on the year and month of the release.
  • Customers upgrading to Stork 23.2.0 will need to update storkctl on their cluster. This is required to correctly set up migration with Auto Suspend.

Improvements

  • Stork now migrates all CRDs in an API group when a CR exists for any kind in that group, even if the CR is of a different kind. #1269
  • Stork will update the owner reference for PVC objects on the destination cluster. #1269
  • Added support for gke-gcloud-auth-plugin required for authenticating with GKE. #1312

Bug Fixes

  • Issue: Users were unable to migrate Confluent Kafka resources during failback.
    User Impact: Confluent Kafka application failback was unable to bring up the application.
    Resolution: Stork now removes any finalizers on the CRs when deleting the resource during migration so that a new version of the resource can be recreated; a sketch of the finalizer cleanup follows this list. #1295

  • Issue: Migration was suspended after failback if autosuspend was enabled.
    User Impact: After failback, the existing migration schedule was not being resumed, which caused the secondary cluster not to sync.
    Resolution: With the fix, the primary cluster's migration schedule will correctly detect if migration can be resumed to secondary. #1282

  • Issue: The Confluent Kafka operator was not able to recognize service sub resources during Async-DR migration.
    User Impact: Application pods for Confluent Kafka were not able to start.
    Resolution: Stork will not migrate Service resources if the owner reference field is set. #1269

  • Issue: Stork was throwing the error: "Error migrating volumes: Operation cannot be fulfilled on migrations.stork.libopenstorage.org: the object has been modified; please apply your changes to the latest version and try again."
    User Impact: This error caused unnecessary confusion during migration.
    Resolution: Stork no longer raises this event as it retries the failed operation. #1272, #1293

  • Issue: Users were not allowed to take backups based on namespace labels.
    User Impact: Users had to manually select a static list of namespaces for backup schedules; dynamic selection of the namespace list based on a namespace label was not possible.
    Resolution: With namespace label support, users can specify a namespace label so that the list of namespaces carrying those labels is selected dynamically for backups. #1258, #1315

  • Issue: The Rancher project association for Kubernetes resources in a Rancher environment was not backed up and restored.
    User Impact: Because the project configurations were not restored, some applications failed to come up.
    Resolution: Project settings are now backed up and applied during the restore. Users can also change to a different project with project mapping during restore. #1294, #1318

  • Issue: Users could not specify, in the storage class mapping, that the default storage class configured on the restore cluster should be used.
    User Impact: Users were not able to use the default storage class for restores.
    Resolution: Users can now set use-default-storage-class as the destination storage class in the storage class mapping to use the default storage class configured on the restore cluster; a sketch of how the default is resolved follows this list. #1288
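
A sketch of the finalizer cleanup described in the Confluent Kafka fix above (illustrative; the dynamic-client flow and names are assumptions, not Stork's code):

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// forceDelete clears finalizers on a custom resource and then deletes it, so a
// fresh copy can be recreated during migration even when the operator that
// normally handles those finalizers is not running.
func forceDelete(ctx context.Context, client dynamic.Interface, gvr schema.GroupVersionResource, ns, name string) error {
	res := client.Resource(gvr).Namespace(ns)
	obj, err := res.Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	obj.SetFinalizers(nil)
	if _, err := res.Update(ctx, obj, metav1.UpdateOptions{}); err != nil {
		return err
	}
	return res.Delete(ctx, name, metav1.DeleteOptions{})
}
```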
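
And a sketch of resolving use-default-storage-class on the restore cluster, using the standard default-class annotation (illustrative; not Stork's implementation):

```go
package example

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// defaultStorageClass returns the StorageClass marked as the cluster default,
// which is what a use-default-storage-class mapping would resolve to.
func defaultStorageClass(ctx context.Context, client kubernetes.Interface) (string, error) {
	scList, err := client.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
	if err != nil {
		return "", err
	}
	for _, sc := range scList.Items {
		if sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true" {
			return sc.Name, nil
		}
	}
	return "", fmt.Errorf("no default StorageClass configured on this cluster")
}
```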

Stork 2.12.4

23 Feb 02:18

Improvements

  • Updated the container image to fix the libksba vulnerability CVE-2022-47629.

Stork 2.12.3

28 Jan 06:37

Improvements