Releases: libopenstorage/stork

Stork 2.12.2

20 Dec 05:38

New Features

  • Stork now tries to schedule application pods that use sharedV4 service volumes onto nodes where a volume replica does not exist. These nodes mount the volume over NFS, so application pods are not restarted when a failover happens. You can use the StorageClass parameter stork.libopenstorage.org/preferRemoteNodeOnly: "true" to strictly enforce this behavior. #1222
  • The Operator now sets Stork as the scheduler for the px-csi-ext pods when running Operator 1.10.1 or newer. If Stork detects that a px-csi-ext pod is running on an offline Portworx node, it deletes the pod; when Stork then receives a scheduling request for it, it can place the pod on a node where Portworx is operational. #1213
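
For reference, a sketch of such a StorageClass (the sharedv4 service parameters shown are typical Portworx settings; the name and replication values are placeholders):

```yaml
# Illustrative StorageClass for a sharedV4 service volume.
# stork.libopenstorage.org/preferRemoteNodeOnly strictly enforces scheduling
# application pods only on nodes that do not hold a volume replica.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-sharedv4-svc          # placeholder name
provisioner: pxd.portworx.com
parameters:
  repl: "2"
  sharedv4: "true"
  sharedv4_svc_type: "ClusterIP"
  stork.libopenstorage.org/preferRemoteNodeOnly: "true"
```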

Improvements

  • Introduced a dynamic shared informer cache for StorageClass and ApplicationRegistration CRs to improve migration times in clusters which hit high API server rate limits. #1227
  • Added support for migrating the MongoDB Enterprise Operator's MongoDBOpsManager and MongoDB CRs. #1245

Bug Fixes

  • Issue: Resource transformation for ResourceQuota was failing with the error server could not find the requested resource.
    User Impact: Resource transformation of ResourceQuota failed during migrations.
    Resolution: Stork now makes the API calls with the correct resource kind, so ResourceQuota objects can be transformed. #1209
  • Issue: Stork hit a nil pointer panic when SkipDeletedNamespaces was not set in a Migration or MigrationSchedule object and a migration was requested for a deleted namespace.
    User Impact: The Stork pod restarted and migrations did not succeed.
    Resolution: The nil panic is now handled in Stork. #1241
  • Issue: Not all storage-provisioner annotations were removed during PVC restore, which caused PVCs to remain in an unbound state in the generic restore case.
    User Impact: Restoring backups taken on a GKE cluster to an AKS cluster failed.
    Resolution: Stork now removes both the "volume.kubernetes.io/storage-provisioner" and "volume.beta.kubernetes.io/storage-provisioner" annotations during PVC restore before applying the PVC. #1225
  • Issue: Snapshots triggered as part of a schedule were queued and retried indefinitely if they got an error from the storage driver.
    User Impact: Retries of multiple snapshot requests put additional load on the storage driver.
    Resolution: Limited the number of snapshots in an error state that are triggered as part of a schedule. Stork now deletes older snapshot requests which are in an error state. #1231
  • Issue: When Stork creates snapshots as part of a schedule, it names each snapshot by appending a timestamp to the name of the schedule. If the length of the snapshot schedule name plus the suffix was greater than 63 characters, the snapshot operation would fail.
    User Impact: Stork failed to trigger a snapshot for a schedule with a long name.
    Resolution: Stork now truncates the names of snapshots created from a snapshot schedule. #1231

Stork 2.12.1

10 Nov 21:21

Bug Fixes

  • Issue: Restoring a Portworx volume provisioned with the in-tree provisioner (kubernetes.io/portworx-volume) as a Portworx volume using the CSI provisioner (pxd.portworx.com) did not add the csi section to the PV spec.
    User Impact: A Portworx volume with the CSI provisioner would not be identified as a CSI volume.
    Resolution: While restoring, Stork now adds the csi section to the PersistentVolume. #1195

Stork 2.11.5

26 Oct 21:45

Improvements

Stork 2.12.0

24 Oct 19:21

New Features

  • Using ResourceTransformation for migration, users can now define a set of rules that modify the Kubernetes resources before they are migrated to the destination cluster. #1130
  • Users can now perform live migrations of KubeVirt VMs using Portworx volumes. #1117
  • Added support for backing up webhook configurations. #1155
  • Added support for backup and restore of CRDs belonging to the same group. #1180
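
As a sketch, a ResourceTransformation rule that modifies Service objects before migration might look like the following (treat the exact paths, values, and names as illustrative rather than a verified schema):

```yaml
# Illustrative ResourceTransformation: change the Service type to
# LoadBalancer on the destination cluster during migration.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: ResourceTransformation
metadata:
  name: transform-services       # placeholder name
  namespace: default
spec:
  objects:
  - resource: "/v1/Service"
    paths:
    - path: "spec.type"
      value: "LoadBalancer"
      type: "string"
      operation: "modify"
```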

Improvements

Bug Fixes

  • Issue: The Google SDK packaged inside the Stork container is incompatible with Python 3.9.
    User Impact: Stork cannot use the Google SDK to authenticate with the Google API server to fetch refresh tokens for a GKE cluster's kubeconfig.
    Resolution: Updated the Google SDK version used by the Stork container to version 399. #1152

  • Issue: There were too many calls to volumesnapshotdata from the external-snapshotter library.
    User Impact: On setups like Rancher, making too many calls to the volumesnapshotdata API causes an out-of-memory error in kube-api server pods.
    Resolution: Implemented the FindSnapshot plugin interface in the Portworx driver, which can filter out a single volumesnapshotdata object for the required volumesnapshot. #1158

  • Issue: Stork would crash with a nil panic when initializing apimachinery schemes.
    User Impact: The Stork pod could restart while initializing at startup.
    Resolution: Kubernetes apimachinery schemes are now initialized before starting any Kubernetes watch streams. #1162

  • Issue: CVEs reported in Stork: CVE-2021-25741, CVE-2021-32690, and CVE-2022-28948.
    Resolution: Updated the Kubernetes client-go libraries to 1.21.5 in Stork to fix the above CVEs. #1167

  • Issue: A repetitive call to delete a backup of GCE persistent disks would silently cause backup deletion to fail.
    User Impact: After deleting a backup, the Kubernetes resource objects would not get deleted from the objectstore.
    Resolution: Stork now handles the case where GCE returns NotFound errors on snapshot deletion. #1189

  • Issue: Unable to get the objectlock configuration for the NetApp S3 objectstore.
    User Impact: Unable to add a NetApp S3 objectstore as a backup location in px-backup.
    Resolution: Added an error handling check when getting the objectlock config, based on the error returned by the NetApp S3 objectstore. #1164

  • Issue: Restoring a LoadBalancer service with a NodePort would fail.
    User Impact: Applications using a LoadBalancer needed a manual way of restoring the service.
    Resolution: Stork now resets the NodePort of a LoadBalancer service before restoring it on the application cluster. #1171

  • Issue: Post-exec rules for generic backups were executed before backup completion.
    User Impact: Since the post-exec rule ran before the snapshot, the backup could lose app consistency.
    Resolution: Post-exec rules are now executed after successful completion of backups triggered using the generic backup workflow. #1172

  • Issue: The secret token for the service account was not applied during restore.
    User Impact: Bringing up the application sometimes failed if the application looked for the old secret of the service account token.
    Resolution: Stork now resets the service account UID annotation of the secret so that Kubernetes updates it to the new UID of the restored service account. #1178

  • Issue: A ClusterRole was not backed up if it was bound to a RoleBinding at the namespace level.
    User Impact: Users were not able to back up a ClusterRole bound to a RoleBinding at the namespace level.
    Resolution: Stork now backs up ClusterRoles even if they are bound to RoleBindings at the namespace level. #1181

  • Issue: Annotations */storage-class and */provisioned-by were not updated when PVCs were restored using a StorageClass mapping.
    User Impact: Incorrect annotations may cause confusion over whether a PVC has been restored correctly.
    Resolution: Stork now updates the annotations on the PVC based on the values provided in the StorageClass mapping. #1187

  • Issue: The restored service account secret was not updated with the latest token value.
    User Impact: The token value in the service account secret still referred to the old value, making the restored application unable to start.
    Resolution: Stork now resets the token to empty so that a new token is populated during restore, based on the new service account on the restore cluster. #1191

Stork 2.11.4

15 Sep 11:46

Announcements

  • We have deprecated the EncryptionKey parameter in the BackupLocation CR. To trigger an encrypted backup, use the new EncryptionV2Key parameter in the BackupLocation CR.
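
As a sketch, a BackupLocation using the new key might look like the following (the placement of encryptionV2Key and the S3 details here are illustrative placeholders, not a verified schema):

```yaml
# Illustrative BackupLocation with the new EncryptionV2Key parameter.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: BackupLocation
metadata:
  name: my-backuplocation        # placeholder
  namespace: kube-system
location:
  type: s3
  encryptionV2Key: "<encryption-passphrase>"   # replaces deprecated EncryptionKey
  path: "my-bucket"
  s3Config:
    region: us-east-1
    endpoint: "s3.amazonaws.com"
    accessKeyID: "<access-key>"
    secretAccessKey: "<secret-key>"
```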

Bug Fixes

  • Issue: Backups triggered from PX-Backup would not explicitly set the EncryptionKey field in ApplicationBackup.
    User Impact: Backups triggered from PX-Backup were not encrypted.
    Resolution: To support backups and restores between previously non-encrypted incremental backups and the new encrypted format, the existing EncryptionKey parameter in the BackupLocation CR had to be deprecated. The new EncryptionV2Key should be used to encrypt any subsequent backups.

  • Issue: v1beta1 CRDs are not supported from Kubernetes 1.22, so CRDs were not getting backed up on 1.22 onwards.
    User Impact: CRD backup was broken from Kubernetes v1.22.0 onwards.
    Resolution: Added support for the v1 API to fetch CRDs as well, fixing CRD backups.

Stork 2.11.3

02 Sep 17:51

Bug Fixes

  • Issue: The Google SDK packaged inside the Stork container is incompatible with Python 3.9.
    User Impact: Stork cannot use the Google SDK to authenticate with the Google API server to fetch refresh tokens for a GKE cluster's kubeconfig.
    Resolution: Updated the Google SDK used by the Stork container to version 399.

Stork 2.11.2

26 Jul 17:01

Improvements

  • Users can now specify project mappings between the source and destination clusters in the ClusterPair object. Stork migrates resources between the clusters while translating project-related information using these mappings. Currently, Stork handles annotations, labels, and namespace selectors on NetworkPolicy objects and on any pod spec when transforming resources across projects between two clusters. #1113

  • Stork can now migrate manually created Endpoints resources. Users can also set the annotation "stork.libopenstorage.org/include-resource" to collect auto-created Endpoints. "stork.libopenstorage.org/include-resource" is only supported for Kubernetes Endpoints resources. #1115

  • Fixed the following vulnerabilities: CVE-2022-23806, CVE-2022-24921, CVE-2022-23772, CVE-2021-44716
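
To opt an auto-created Endpoints resource into migration, the annotation can be added to its metadata; a minimal sketch (the names, addresses, and the "true" value are illustrative placeholders):

```yaml
# Illustrative Endpoints object with the include-resource annotation set,
# so Stork collects it during migration even though it was auto-created.
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service               # placeholder
  namespace: demo
  annotations:
    stork.libopenstorage.org/include-resource: "true"
subsets:
- addresses:
  - ip: 192.0.2.10
  ports:
  - port: 8080
```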

Bug Fixes

  • Issue: Stork only collected network policies that did not have a CIDR set.
    User Impact: In a DR setup where the two target clusters have the same CIDR network, a user may want to migrate all of the network policies; however, Stork failed to do so.
    Resolution: Stork can now collect all network policies, regardless of whether a CIDR is set, if skipNetworkPolicyCheck is set in the migration spec. #1108
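
A minimal Migration spec with the check disabled might look like this (the clusterPair and namespace names are placeholders, and the placement of skipNetworkPolicyCheck follows the release note rather than a verified schema):

```yaml
# Illustrative Migration that collects all network policies, including
# those with CIDRs set, by skipping the network policy check.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: Migration
metadata:
  name: migrate-with-netpols     # placeholder
  namespace: demo
spec:
  clusterPair: remote-cluster    # placeholder
  namespaces:
  - demo
  includeResources: true
  startApplications: false
  skipNetworkPolicyCheck: true
```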

Stork 2.11.1

22 Jul 17:39

Bug Fixes

  • Issue: The v1 version of VolumeSnapshot was not supported.
    User Impact: CSI backups were failing on Kubernetes 1.20.0+.
    Resolution: Added support for both v1 and v1beta1 VolumeSnapshot. #1088

  • Issue: The ObjectLock configuration check was failing for FlashBlade, IBM, and Cloudian objectstores.
    User Impact: Users were not able to add a FlashBlade, IBM, or Cloudian objectstore bucket as a BackupLocation target.
    Resolution: Fixed the error handling when getting the object lock configuration via the API. #1119

  • Issue: Generic backups to an objectstore with self-signed certificates enabled as the BackupLocation target were failing.
    User Impact: Users were not able to take generic backups with such an objectstore as the BackupLocation target.
    Resolution: Added a fix to create valid secrets and pass them as the self-signed certificate to the generic backup job. #1120

Stork 2.11.0

16 Jun 01:47

New Features

  • Added support for migrating OpenShift Virtualization VM objects between clusters. Provide the namespace where the VirtualMachine resides, and Stork will migrate the VM CRs and the underlying PVCs to the target cluster. #1095
  • Added support for backup and restore of OpenShift Virtualization VM objects. #1095
  • Added support for using AWS IAM roles through BackupLocation CR. Stork will use the IAM roles assigned to the instance or to the stork pod while accessing AWS resources. #1099

Improvements

  • Stork pods now print the current leader for easier debugging and log collection. #1085
  • Updated Stork SDK APIs to work with the Portworx driver over IPv6. #1086
  • If a namespace gets deleted after MigrationSchedule creation, migration continues for the other namespaces. #1090
  • Improved performance of the storkctl activate and storkctl deactivate migration operations. Users can now tune the qps and burst values to improve DR resource activation time. #1094

Bug Fixes

  • Issue: Stork failed to schedule pods that have PVCs with the WaitForFirstConsumer VolumeBindingMode.
    User Impact: Pods using a PVC with the WaitForFirstConsumer VolumeBindingMode stayed in the Pending state.
    Resolution: The Stork scheduler now allows scheduling pods even when the PVCs they use are in the Pending state due to the WaitForFirstConsumer VolumeBindingMode. #1097

  • Issue: The migration driver called the CloudMigrateStatus API without a taskID, enumerating all migrations.
    User Impact: Enumerating all migration objects for a single migration status put extra load on the Portworx kvdb, which could cause slow transitions of migration stages.
    Resolution: The Stork migration controller now issues the CloudMigrateStatus call with a migration task ID to reduce the pressure on Portworx. #1098

  • Issue: VolumeOnly migration would query all Kubernetes resources in a namespace.
    User Impact: VolumeOnly migration took significant time to migrate PV and PVC objects.
    Resolution: The Stork migration controller now queries only PV and PVC objects, instead of all Kubernetes resources, when handling volumeOnly migrations. #1106

  • Issue: Stork scheduled pods that were using Portworx Direct Access volumes on a non-Portworx node.
    User Impact: Pods using Portworx Direct Access volumes stayed in the Pending state.
    Resolution: The Stork scheduler now recognizes pods backed by Direct Access volumes and ensures they get scheduled on a Portworx node. #1107

Errata

  • Backup and restore of VM objects is not supported across namespaces when the original VM that was backed up is still running in the same Kubernetes cluster.

Stork 2.10.0

25 May 19:26

New Features

  • Added support for Object Lock enabled buckets. This feature is currently supported only with the PX-Backup product. #1047

Improvements

  • Use a cmdexecutor image that matches the Stork deployment image. #1084
  • Use a kopiaexecutor image that matches the Stork deployment image. #1076

Bug Fixes

  • Issue: CSI-based backups were failing on Kubernetes 1.22+.
    User Impact: From Kubernetes 1.22 onwards, the CSIDriver v1beta1 APIs are removed, which caused CSI backups to fail.
    Resolution: Added support for the v1 APIs for the CSI driver. #1079

  • Issue: The restore size for EBS volumes was displayed incorrectly.
    User Impact: This led to ambiguous behavior for users, as the reported restore size was less than the backup size for an EBS volume.
    Resolution: Fixed the reported restore size for EBS volumes. #1069

  • Issue: Generic backup options were triggering a generic backup instead of a CSI backup.
    Resolution: Added the missing BackupType in the ApplicationBackup CR to trigger the appropriate backup. #1066