
Stork 2.12.2

@puresandra puresandra released this 20 Dec 05:38
· 592 commits to master since this release

New Features

  • Stork now tries to schedule application pods that use sharedV4 service volumes on nodes where no volume replica exists. Such nodes mount the volume over NFS, so when a failover happens, application pods do not need to be restarted. To strictly enforce this behavior, use the StorageClass parameter stork.libopenstorage.org/preferRemoteNodeOnly: "true". #1222
  • When running Operator 1.10.1 or newer, the Operator now sets Stork as the scheduler for the px-csi-ext pods. If Stork detects that a px-csi-ext pod is running on an offline Portworx node, it deletes the pod; when Stork then receives a scheduling request for the replacement pod, it can place it on a node where Portworx is operational. #1213
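The preferRemoteNodeOnly behavior described above is requested through a StorageClass parameter. A minimal sketch follows; the preferRemoteNodeOnly key is the one introduced in this release, while the metadata name and the other provisioner parameters are illustrative examples of a sharedV4 service volume StorageClass, not values prescribed by these notes:

```yaml
# Sketch only: preferRemoteNodeOnly is the new parameter; other fields are illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sharedv4-prefer-remote
provisioner: pxd.portworx.com
parameters:
  sharedv4: "true"
  sharedv4_svc_type: "ClusterIP"
  stork.libopenstorage.org/preferRemoteNodeOnly: "true"
```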

Improvements

  • Introduced a dynamic shared informer cache for StorageClass and ApplicationRegistration CRs to improve migration times in clusters that hit high API server rate limits. #1227
  • Added support for migrating the MongoDB Enterprise Operator's MongoDBOpsManager and MongoDB CRs. #1245

Bug Fixes

  • Issue: Resource transformation for ResourceQuota failed with the error server could not find the requested resource.
    User Impact: Resource transformation of ResourceQuota objects failed during migrations.
    Resolution: Stork now makes API calls with the correct resource kind, so ResourceQuota objects can be transformed. #1209
  • Issue: Stork hit a nil pointer panic when SkipDeletedNamespaces was not set in the Migration or MigrationSchedule object and a migration was requested for a deleted namespace.
    User Impact: The Stork pod restarted and migrations did not succeed.
    Resolution: Stork now handles the nil pointer. #1241
  • Issue: Not all storage-provisioner annotations were removed during PVC restore, which left the PVC in an unbound state in the generic restore case.
    User Impact: Restoring backups taken on a GKE cluster to an AKS cluster failed.
    Resolution: Stork now removes both the "volume.kubernetes.io/storage-provisioner" and "volume.beta.kubernetes.io/storage-provisioner" annotations during PVC restore before applying the object. #1225
  • Issue: Snapshots triggered as part of a schedule were queued and retried indefinitely if they received an error from the storage driver.
    User Impact: Retries of multiple snapshot requests put additional load on the storage driver.
    Resolution: Limited the number of scheduled snapshots that can remain in an error state; Stork now deletes older snapshot requests that are in an error state. #1231
  • Issue: When Stork creates snapshots as part of a schedule, it names each snapshot by appending a timestamp to the name of the schedule. If the length of the schedule name plus the suffix exceeded 63 characters, the snapshot operation failed.
    User Impact: Stork failed to trigger snapshots for schedules with long names.
    Resolution: Snapshot names created from a snapshot schedule are now truncated to fit the limit. #1231
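The PVC restore fix above boils down to dropping both the GA and the legacy beta storage-provisioner annotation keys before re-applying the restored PVC. A minimal sketch of that idea (the annotation keys are the real Kubernetes ones; the helper name and surrounding code are illustrative, not Stork's actual implementation):

```go
package main

import "fmt"

// stripProvisionerAnnotations removes both the GA and the legacy beta
// storage-provisioner annotations so the target cluster's provisioner
// can bind the restored PVC. Illustrative helper, not Stork's code.
func stripProvisionerAnnotations(annotations map[string]string) map[string]string {
	delete(annotations, "volume.kubernetes.io/storage-provisioner")
	delete(annotations, "volume.beta.kubernetes.io/storage-provisioner")
	return annotations
}

func main() {
	pvcAnnotations := map[string]string{
		"volume.kubernetes.io/storage-provisioner":      "pd.csi.storage.gke.io",
		"volume.beta.kubernetes.io/storage-provisioner": "pd.csi.storage.gke.io",
		"custom/annotation":                             "kept",
	}
	// Only annotations unrelated to the source cluster's provisioner remain.
	fmt.Println(stripProvisionerAnnotations(pvcAnnotations))
}
```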
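The snapshot-name fix above reflects the Kubernetes 63-character limit on names used as label values: the schedule name must be trimmed so that the appended timestamp suffix still fits. A sketch of that truncation (function name and layout are illustrative, not Stork's actual implementation):

```go
package main

import "fmt"

// truncateSnapshotName builds a snapshot name from a schedule name plus a
// timestamp suffix, trimming the schedule name so the result never exceeds
// the 63-character Kubernetes limit. Illustrative helper, not Stork's code.
func truncateSnapshotName(scheduleName, suffix string) string {
	const maxLen = 63
	if len(scheduleName)+len(suffix) <= maxLen {
		return scheduleName + suffix
	}
	return scheduleName[:maxLen-len(suffix)] + suffix
}

func main() {
	long := "very-long-snapshot-schedule-name-that-exceeds-the-kubernetes-name-limit"
	name := truncateSnapshotName(long, "-20221220053800")
	fmt.Println(name, len(name)) // the result stays within 63 characters
}
```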