Stork 2.12.0
New Features
- Using ResourceTransformation for migration, users can now define a set of rules that modify Kubernetes resources before they are migrated to the destination cluster (see the sketch after this list). #1130
- Users can now perform live migrations of KubeVirt VMs using Portworx volumes. #1117
- Backup of webhook configuration. #1155
- Backup and restore of CRDs belonging to the same group. #1180
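As an illustration of the ResourceTransformation feature above, here is a minimal Go sketch that builds such a rule set as an unstructured object. The field names under `spec` are assumptions based on the feature description, not the authoritative schema.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func main() {
	// Hypothetical ResourceTransformation that rewrites a Service's type to
	// ClusterIP before migration; the spec field names are illustrative.
	rt := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "stork.libopenstorage.org/v1alpha1",
		"kind":       "ResourceTransformation",
		"metadata":   map[string]interface{}{"name": "svc-to-clusterip"},
		"spec": map[string]interface{}{
			"objects": []interface{}{
				map[string]interface{}{
					"resource": "/v1/Service",
					"paths": []interface{}{
						map[string]interface{}{
							"path":      "spec.type",
							"value":     "ClusterIP",
							"type":      "string",
							"operation": "modify",
						},
					},
				},
			},
		},
	}}
	fmt.Println("would apply transformation:", rt.GetName())
}
```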
Improvements
- The following vulnerabilities were fixed by updating the base image: CVE-2022-27774 and CVE-2022-1292.
Bug Fixes
- Issue: The Google SDK packaged inside the Stork container is incompatible with Python 3.9.
  User Impact: Stork cannot use the Google SDK to authenticate with the Google API server to fetch refresh tokens for a GKE cluster's kubeconfig.
  Resolution: Updated the Google SDK used by the Stork container to version 399. #1152
- Issue: The external-snapshotter library made too many calls to the volumesnapshotdata API.
  User Impact: On setups like Rancher, making too many calls to the volumesnapshotdata API causes an out-of-memory error in kube-api server pods.
  Resolution: Implemented the FindSnapshot plugin interface in the Portworx driver, which can filter out the single volumesnapshotdata object for the required volumesnapshot; the sketch below illustrates the idea. #1158
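A minimal sketch of the lookup idea, using simplified stand-in types rather than the actual external-snapshotter interfaces: match only the volumesnapshotdata that backs the requested volumesnapshot instead of repeatedly listing them all.

```go
package sketch

// VolumeSnapshotData is a simplified stand-in for the real API type.
type VolumeSnapshotData struct {
	Name         string
	SnapshotName string // the VolumeSnapshot this data object backs
}

// findSnapshotData returns just the one VolumeSnapshotData for a given
// snapshot, avoiding repeated full listings against the API server.
func findSnapshotData(all []VolumeSnapshotData, snapshot string) *VolumeSnapshotData {
	for i := range all {
		if all[i].SnapshotName == snapshot {
			return &all[i]
		}
	}
	return nil
}
```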
- Issue: Stork crashed with a nil panic when initializing apimachinery schemes.
  User Impact: The Stork pod may restart while initializing at startup.
  Resolution: Kubernetes apimachinery schemes are now initialized before any Kubernetes watch streams are started, as sketched below. #1162
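A minimal sketch of the ordering fix using standard client-go calls; the surrounding setup is illustrative, not Stork's actual startup code.

```go
package main

import (
	"k8s.io/apimachinery/pkg/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
)

func main() {
	// Register all built-in Kubernetes types before any watch streams start,
	// so a concurrent decode never races against scheme registration.
	scheme := runtime.NewScheme()
	if err := clientgoscheme.AddToScheme(scheme); err != nil {
		panic(err)
	}
	// ...only now start the informers/watches that decode via this scheme.
}
```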
- Issue: CVEs reported in Stork: CVE-2021-25741, CVE-2021-32690, and CVE-2022-28948.
  Resolution: Updated the Kubernetes client-go libraries in Stork to 1.21.5 to fix the above CVEs. #1167
- Issue: A repeated call to delete a backup of GCE persistent disks would silently cause backup deletion to fail.
  User Impact: After deleting a backup, the Kubernetes resource objects would not get deleted from the objectstore.
  Resolution: Stork now handles the case where GCE returns NotFound errors on snapshot deletion (see the sketch below). #1189
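A minimal sketch of the error-handling pattern, assuming the google.golang.org/api/googleapi error type; the helper name is illustrative.

```go
package sketch

import (
	"errors"
	"net/http"

	"google.golang.org/api/googleapi"
)

// isNotFound reports whether err is a GCE API 404, so a repeated snapshot
// delete can treat "already deleted" as success instead of failing.
func isNotFound(err error) bool {
	var apiErr *googleapi.Error
	return errors.As(err, &apiErr) && apiErr.Code == http.StatusNotFound
}
```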
- Issue: Unable to get the objectlock configuration for a NetApp S3 objectstore.
  User Impact: Unable to add a NetApp S3 objectstore as a backup location in px-backup.
  Resolution: Added an error handling check for getting the objectlock config based on the error returned by the NetApp S3 objectstore, as sketched below. #1164
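A minimal sketch using the aws-sdk-go S3 client; the non-AWS error codes checked here are assumptions about what an S3-compatible store such as NetApp might return.

```go
package sketch

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/s3"
)

// bucketHasObjectLock treats both the standard AWS "no configuration" code
// and codes a non-AWS objectstore may return for an unsupported object-lock
// query as "no object lock", instead of failing backup-location validation.
func bucketHasObjectLock(svc *s3.S3, bucket string) (bool, error) {
	_, err := svc.GetObjectLockConfiguration(&s3.GetObjectLockConfigurationInput{
		Bucket: aws.String(bucket),
	})
	if err == nil {
		return true, nil
	}
	if aerr, ok := err.(awserr.Error); ok {
		switch aerr.Code() {
		case "ObjectLockConfigurationNotFoundError", "NotImplemented", "MethodNotAllowed":
			return false, nil // assumed codes for "not configured/unsupported"
		}
	}
	return false, err
}
```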
- Issue: Restoring a LoadBalancer service with a NodePort would fail.
  User Impact: Applications using LoadBalancer services needed a manual way to restore the service.
  Resolution: Stork now resets the NodePort of a LoadBalancer service before restoring it on the application cluster (see the sketch below). #1171
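A minimal sketch of the reset, using the standard core/v1 types; the helper name is illustrative.

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// clearNodePorts zeroes cluster-assigned ports on a restored Service so the
// destination cluster can allocate fresh, non-conflicting ones on create.
func clearNodePorts(svc *corev1.Service) {
	for i := range svc.Spec.Ports {
		svc.Spec.Ports[i].NodePort = 0
	}
	// HealthCheckNodePort is also cluster-assigned for LoadBalancer services.
	svc.Spec.HealthCheckNodePort = 0
}
```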
- Issue: Post-exec rules for generic backups were executed before backup completion.
  User Impact: Since the post-exec rule ran before the snapshot, the backup could lose app consistency.
  Resolution: Post-exec rules are now executed after successful completion for backups triggered using the generic backup workflow, as sketched below. #1172
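A minimal sketch of the corrected ordering; all function names are illustrative stubs, not Stork's actual rule-execution API.

```go
package main

import (
	"context"
	"fmt"
)

// Illustrative stubs standing in for Stork's rule and backup machinery.
func runPreExecRule(ctx context.Context) error          { fmt.Println("pre-exec"); return nil }
func startBackup(ctx context.Context) error             { fmt.Println("backup started"); return nil }
func waitForBackupCompletion(ctx context.Context) error { fmt.Println("backup done"); return nil }
func runPostExecRule(ctx context.Context) error         { fmt.Println("post-exec"); return nil }

// runGenericBackup shows the fixed ordering: the post-exec rule runs only
// after the backup has completed successfully.
func runGenericBackup(ctx context.Context) error {
	if err := runPreExecRule(ctx); err != nil {
		return err
	}
	if err := startBackup(ctx); err != nil {
		return err
	}
	if err := waitForBackupCompletion(ctx); err != nil {
		return err
	}
	return runPostExecRule(ctx)
}

func main() { _ = runGenericBackup(context.Background()) }
```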
- Issue: The secret token for the service account was not applied during restore.
  User Impact: Bringing up the application sometimes fails if the application looks for the old secret of the service account token.
  Resolution: Stork now resets the service account UID annotation of the secret so that Kubernetes updates it to the new UID of the restored service account (see the sketch below). #1178
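A minimal sketch of the annotation reset, using the well-known kubernetes.io/service-account.uid key from core/v1; the helper name is illustrative.

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// resetServiceAccountUID drops the stale UID annotation from a restored
// token secret so Kubernetes re-binds it to the UID of the service account
// created in the restore cluster.
func resetServiceAccountUID(secret *corev1.Secret) {
	delete(secret.Annotations, corev1.ServiceAccountUIDKey) // "kubernetes.io/service-account.uid"
}
```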
- Issue: A ClusterRole was not backed up if it was bound to a RoleBinding at the namespace level.
  User Impact: Users were unable to back up a ClusterRole bound to a RoleBinding at the namespace level.
  Resolution: Stork now backs up ClusterRoles even if they are bound to RoleBindings at the namespace level, as sketched below. #1181
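A minimal sketch of the collection logic, using rbac/v1 types; the helper name is illustrative.

```go
package sketch

import rbacv1 "k8s.io/api/rbac/v1"

// clusterRolesToBackup collects names of ClusterRoles referenced by
// namespace-scoped RoleBindings, so those cluster-scoped objects are
// included in the backup alongside the namespace's resources.
func clusterRolesToBackup(bindings []rbacv1.RoleBinding) map[string]bool {
	names := map[string]bool{}
	for _, rb := range bindings {
		if rb.RoleRef.Kind == "ClusterRole" {
			names[rb.RoleRef.Name] = true
		}
	}
	return names
}
```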
- Issue: The */storage-class and */provisioned-by annotations were not updated when PVCs were restored using a StorageClass mapping.
  User Impact: Incorrect annotations may cause confusion over whether a PVC has been restored correctly.
  Resolution: Stork now updates the annotations on the PVC based on the values provided in the StorageClass mapping (see the sketch below). #1187
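A minimal sketch of the annotation update; the exact annotation keys are elided as "*/..." in the note above, so they are passed in by the caller here rather than hard-coded.

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// updateRestoredPVCAnnotations rewrites the storage-class and provisioned-by
// style annotations on a restored PVC to the destination values from the
// StorageClass mapping. classKey and provKey carry the full annotation keys.
func updateRestoredPVCAnnotations(pvc *corev1.PersistentVolumeClaim, classKey, provKey, newClass, newProvisioner string) {
	if pvc.Annotations == nil {
		pvc.Annotations = map[string]string{}
	}
	pvc.Annotations[classKey] = newClass
	pvc.Annotations[provKey] = newProvisioner
}
```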
- Issue: The restored service account secret was not updated with the latest token value.
  User Impact: The token value in the service account secret still referred to the old value, making the restored application unable to start.
  Resolution: Stork now resets the token to empty so that a new token is generated during restore based on the new service account in the restore cluster, as sketched below. #1191
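A minimal sketch of the token reset, using the well-known "token" data key from core/v1; the helper name is illustrative.

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// clearStaleToken empties the token on a restored service-account secret so
// the token controller in the destination cluster fills in a fresh token
// bound to the restored service account.
func clearStaleToken(secret *corev1.Secret) {
	delete(secret.Data, corev1.ServiceAccountTokenKey) // "token"
}
```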