Releases: libopenstorage/stork
24.2.1
Enhancement
- Stork now supports Azure China environment for Azure backup locations. For more information, see Add Azure backup location.
Bug Fixes
- Issue: If you were running Portworx Backup version 2.6.0 and upgraded Stork to version 24.1.0, selecting the default VSC in the Create Backup window resulted in a `VSC Not Found` error.
  User Impact: Backup operations failed.
  Resolution: You can now choose the default VSC in the Create Backup window and create backups successfully. #1744
- Issue: If you deployed Portworx Enterprise with PX-Security enabled, took a backup to an NFS backup location, and then attempted a restore, the restore failed.
  User Impact: Users were unable to restore backups from NFS backup locations for PX-Security-enabled Portworx volumes.
  Resolution: This issue is now fixed. #1733
24.2.0
Enhancements
- Enhanced Disaster Recovery User Experience
  In this latest Stork release, the user experience has been improved significantly, with a particular focus on failover and failback operations. These enhancements simplify the process while ensuring efficiency and reliability.
  You can now perform a failover or failback operation using the following `storkctl` commands:
  - To perform a failover operation: `storkctl perform failover -m <migration-schedule> -n <migration-schedule-namespace>`
  - To perform a failback operation: `storkctl perform failback -m <migration-schedule> -n <migration-schedule-namespace>`
  For more information on the enhanced approach, refer to the documentation.
- The Portworx driver has been updated with optimized API calls, reducing the time taken to schedule pods and to monitor pods that need to be rescheduled when Portworx is down on a node.
Bug Fixes
- Issue: Migration schedules in the `admin` namespace were updated with true or false for the `applicationActivated` field when a namespace was activated or deactivated, even if they did not migrate that namespace.
  User Impact: Unrelated migration schedules were getting suspended.
  Resolution: Stork now updates the `applicationActivated` field only for migration schedules that migrate at least one of the namespaces being activated or deactivated. #1718
- Issue: Updating a VolumeSnapshotSchedule resulted in a version mismatch error from Kubernetes when the update was applied to a previous version of the resource.
  User Impact: When the number of VolumeSnapshotSchedules was high, Stork logs were flooded with these warning messages.
  Resolution: VolumeSnapshotSchedule updates now use a patch, avoiding the version mismatch error. #1665
- Issue: Similar volume snapshot names were created when VolumeSnapshotSchedule frequencies matched and trimming produced similar substrings.
  User Impact: For one volume, a snapshot might not be taken but could still be marked as successful.
  Resolution: A four-digit random suffix is now added to volume snapshot names to avoid name collisions between snapshots created by different VolumeSnapshotSchedules. #1686
- Issue: Stork relies on Kubernetes DNS to locate services, but it assumed the `.svc.cluster.local` domain for Kubernetes services.
  User Impact: Clusters with a modified Kubernetes DNS domain were not able to use Stork.
  Resolution: Stork now works on clusters with a modified Kubernetes DNS domain. #1629
- Issue: Resource transformation for Custom Resources (CRs) was not supported.
  User Impact: This blocked some necessary transformations for resources required at the destination site.
  Resolution: Resource transformation for CRs is now supported. #1705
Known Issues
- Issue: If you use the `storkctl perform failover` command to perform a failover operation, Stork might not be able to scale down the KubeVirt pods, which could cause the operation to fail.
  Workaround: Perform the failover operation by following the manual procedure in the documentation.
24.1.0
Enhancements
- Stork now supports KubeVirt VMs for Portworx backup and restore operations. You can initiate VM-specific backups by setting the `backupObjectType` to `VirtualMachine`. Stork automatically includes associated resources, such as PVCs, secrets, and ConfigMaps used as volumes and user data, in VM backups. Stork also applies default freeze/thaw rules during VM backup operations to ensure filesystem-consistent backups (a sketch follows this list).
- Cloud Native backups now automatically default to CSI or KDMP with LocalSnapshot, depending on the type of schedules they create.
- Previously in Stork, for CSI backups, you were limited to selecting a single VSC from the dropdown under the `CSISnapshotClassName` field. You can now select a VSC for each provisioner via the `CSISnapshotClassMap` (a sketch follows this list).
- The creation of a default VSC by Stork is now optional.
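For the KubeVirt VM backup enhancement above, here is a minimal sketch of how `backupObjectType` might be set on an ApplicationBackup CR. The placement of the field and the surrounding values are assumptions for illustration, not the authoritative schema; consult the Stork/PX-Backup documentation for the exact format.

```yaml
# Hedged sketch: VM-specific backup by setting backupObjectType to
# VirtualMachine. Field placement and surrounding values are assumed.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: ApplicationBackup
metadata:
  name: vm-backup
  namespace: vm-namespace
spec:
  backupLocation: my-backup-location   # assumed pre-existing location
  namespaces:
    - vm-namespace
  backupObjectType: VirtualMachine     # triggers VM-aware backup
```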
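And for the per-provisioner VSC selection, a sketch of what a `CSISnapshotClassMap` could look like; the map's exact location in the backup spec and the provisioner/class names here are illustrative assumptions:

```yaml
# Hedged sketch: one VolumeSnapshotClass per CSI provisioner.
# Exact field location and names are assumptions.
csiSnapshotClassMap:
  ebs.csi.aws.com: ebs-snapclass
  pd.csi.storage.gke.io: gce-pd-snapclass
```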
Bug Fixes
- Issue: Canceling an ongoing backup initiated by PX-Backup halted the post-execution rule.
  User Impact: This interruption caused the I/O processes on the application to stop, or the post-execution rule execution to cease.
  Resolution: Stork now executes and removes the post-execution rule CR as part of the cleanup procedure for the application backup CR. #1602
- Issue: Generic KDMP backup/restore pods became unresponsive in environments where Istio is enabled.
  User Impact: Generic KDMP backup and restore failed in Istio-enabled environments.
  Resolution: Relaxed the Istio webhook checks for the Stork-created KDMP generic backup/restore pods (a sketch of the common pattern follows this list). Additionally, the underlying issue causing job pod freezes has been resolved in Kubernetes version 1.28 and Istio version 1.19. #1623
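For context, a common way to keep short-lived job pods out of the Istio mesh is to disable sidecar injection on the pod template. This is a general Istio pattern, shown as an assumption about the kind of adjustment involved, not necessarily the exact change Stork made; the job name and image below are hypothetical:

```yaml
# General Istio pattern (assumption, not necessarily Stork's exact fix):
# opting a job's pods out of sidecar injection so they can run to
# completion without the Envoy sidecar interfering.
apiVersion: batch/v1
kind: Job
metadata:
  name: kdmp-backup-job                        # hypothetical name
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"       # standard Istio annotation
    spec:
      restartPolicy: Never
      containers:
        - name: backup
          image: example.com/kdmp-executor:latest   # hypothetical image
```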
23.11.0
New Features
- You can now create and delete schedule policies and migration schedules using the new `storkctl` CLI feature. This enables you to seamlessly create and delete SchedulePolicy and MigrationSchedule resources, enhancing the DR setup process.
  In addition to the existing support for cluster pairs, you can now efficiently manage all necessary resources through `storkctl`. This update ensures a faster and simpler setup process, with built-in validations. By eliminating the need for manual YAML file edits, the feature significantly reduces the likelihood of errors, providing a more robust and user-friendly experience for DR resource management in Kubernetes clusters.
- The new StorageClass parameter `preferRemoteNode` enhances scheduling flexibility for SharedV4 Service Volumes. By setting this parameter to `false`, you can disable anti-hyperconvergence during scheduling, giving you increased flexibility to tailor Stork's scheduling behavior to your application's needs; see the sketch below.
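A minimal sketch of where the parameter lives, assuming a Portworx SharedV4 service volume StorageClass; the other parameter values are illustrative:

```yaml
# Minimal sketch: disabling anti-hyperconvergence for SharedV4 service
# volumes via the new preferRemoteNode parameter. Other parameter
# values here are illustrative assumptions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-sharedv4-svc
provisioner: pxd.portworx.com
parameters:
  repl: "2"
  sharedv4: "true"
  sharedv4_svc_type: "ClusterIP"
  preferRemoteNode: "false"   # disable anti-hyperconvergence scheduling
```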
Bug Fixes
- Issue: Exclusion of Kubernetes resources such as deployments, statefulsets, and so on was not successful during migration.
  User Impact: Using labels to exclude resources proved ineffective when the resource was managed by an operator that reset user-defined labels.
  Resolution: The new `excludeResourceTypes` feature allows you to exclude certain types of resources from migration, providing a more effective solution than labels (see the example after this list). #1554
- Issue: The `applicationrestore` resource created using `storkctl` always restored to a namespace with the same name as the source, so users could not restore to a different namespace.
  User Impact: Users were unable to restore applications to any namespace other than the one with the same name as the source.
  Resolution: `storkctl` now accepts a namespace mapping as a parameter, allowing you to restore to a different namespace as needed. #1545
- Issue: The `storkctl create clusterpair` command was not functioning properly with HTTPS PX endpoints.
  User Impact: Migrations between clusters with SSL-enabled PX endpoints were not successful.
  Resolution: The issue has been addressed; both HTTPS and HTTP endpoints are now accepted as source (`src-ep`) and destination (`dest-ep`) when using `storkctl create clusterpair`. #1537
- Issue: The PostgreSQL operator generated an error about the pre-existence of a service account, role, and role bindings following a migration.
  User Impact: Users were unable to scale up a PostgreSQL application installed via the OpenShift Operator Hub after completing the migration.
  Resolution: Service accounts, roles, and role bindings with an owner reference set are now excluded from migration, allowing PostgreSQL pods to come up successfully. #1560
- Issue: A ResourceTransformation Custom Resource (RT CR) entered a failed state when a transform rule included either int or bool as a data type.
  User Impact: Migrations involving resource transformation would not succeed.
  Resolution: Resolved the parsing problem associated with `int` and `bool` types. #1532
- Issue: Stork pods crashed continuously when the cluster contained an RT CR with a rule type set as slice and the operation set as add.
  User Impact: The Stork service experienced ongoing disruptions.
  Resolution: Type assertion is now used to prevent the panic, and the problematic `SetNestedField` method has been replaced with `SetNestedStringSlice` to avoid panics in such scenarios. You can also temporarily resolve the problem by removing the RT CR from the application cluster. #1530
- Issue: Stork crashed when attempting to clone an application with CSI volumes using Portworx.
  User Impact: Users were unable to clone applications if PVCs in the namespaces used Portworx CSI volumes.
  Resolution: A patch now handles CSI volumes with Portworx, ensuring the stability of application cloning. #1591
- Issue: When setting up a migration schedule in the `admin` namespace with pre/post-execution rules, the rules had to be created in both the `admin` namespace and every namespace undergoing migration.
  User Impact: The user experience was less intuitive because identical rules had to be created across multiple namespaces.
  Resolution: The process is now simplified: rules only need to be added in the migration schedule's namespace. #1569
- Issue: Stork was not honoring locator volume labels correctly when scheduling pods.
  User Impact: When `preferRemoteNodeOnly` was initially set to `true`, pods sometimes failed to schedule. This was particularly noticeable when the Portworx volume setting `preferRemoteNodeOnly` was later changed to `false` and no remote nodes were available for scheduling.
  Resolution: Even when no remote nodes are available for scheduling, pods can now be scheduled on a node that holds a replica of the volume. #1606
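For the `excludeResourceTypes` fix above, here is a minimal sketch of how the field might appear in a Migration spec. The field name comes from the release note, while its exact placement and the resource-type values are illustrative assumptions:

```yaml
# Minimal sketch (assumptions noted above): excluding resource types
# from a migration instead of relying on labels.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: Migration
metadata:
  name: migrate-app
  namespace: demo
spec:
  clusterPair: remote-cluster        # assumed existing ClusterPair
  namespaces:
    - demo
  includeResources: true
  startApplications: false
  excludeResourceTypes:              # resource kinds to skip
    - Deployment
    - StatefulSet
```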
Known Issues
- Issue: In Portworx version 3.0.4, several migration tests fail in auth-enabled environments.
  User Impact: You may experience failed migrations, impacting data transfer and management processes.
  Resolution: The issue has been resolved in Portworx version 3.1.0. Users experiencing this problem are advised to upgrade to version 3.1.0 to ensure smooth migration operations and avoid permission-related errors during data migration.
- Issue: When using the `storkctl create clusterpair` command, HTTPS endpoints for Portworx were not functioning properly.
  User Impact: This issue affects migrations attempted between clusters where `px` endpoints are secured with SSL. As a result, migrations could not be carried out successfully in environments using secure HTTPS connections.
  Resolution: In the upcoming Portworx 3.1.0 release, the `storkctl create clusterpair` command will be updated to accept both HTTP and HTTPS endpoints, allowing the specification of either `src-ep` or `dest-ep` with the appropriate scheme. This update ensures successful cluster pairing and migration in environments with SSL-secured `px` endpoints.
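For reference, once that update lands, pairing with SSL-secured endpoints might look like the following sketch. The `src-ep` and `dest-ep` parameters are named in the note above; the kubeconfig flags, endpoint values, and port are illustrative assumptions:

```
storkctl create clusterpair migration-pair \
  --namespace kube-system \
  --src-kube-file /root/src-kubeconfig \
  --dest-kube-file /root/dest-kubeconfig \
  --src-ep https://px-source.example.com:9021 \
  --dest-ep https://px-dest.example.com:9021
```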
23.9.1
Bug Fixes
- Issue: The generic backup of some PVCs in KDMP was failing due to the inclusion of certain read-only directories and files.
  User Impact: Difficulties restoring the snapshot, as the restoration of these read-only directories and files resulted in permission-denied errors.
  Resolution: Introduced the --ignore-file option in KDMP backup, enabling you to specify a list of files and directories to be excluded during snapshot creation. This ensures that during restoration, these excluded files and directories are not restored. #1572
  Format for adding the ignore file list:
  ```
  KDMP_EXCLUDE_FILE_LIST: |
    <storageClassName1>=<dir-list>,<file-list1>,...
    <storageClassName2>=<dir-list>,<file-list1>,...
  ```
  Sample for adding the ignore file list:
  ```
  KDMP_EXCLUDE_FILE_LIST: |
    px-db=dir1,file1,dir2
    mysql=dir1,file1,dir2
  ```
- Issue: The backup process did not terminate when an invalid post-execution rule was applied, leading to occasional failures in updating the failure status on the application backup CR.
  User Impact: Backups with invalid post-execution rules were not failing as expected.
  Resolution: A thorough check now ensures that backups with invalid post-execution rules are appropriately marked as failed, accompanied by a clear error message. #1582
23.9.0
New Feature
Enhanced support for KubeVirt VMs in Portworx Backup
This feature facilitates the backup and restoration of KubeVirt VMs through Portworx Backup. When KubeVirt VMs are included in the backup object, the restoration process transforms the VMs, incorporating the `DataVolumeTemplate` and adjusting masquerade interface configurations to ensure a successful restore.
Bug Fixes
- Issue: Occasionally, the restore process encountered a `resourceBackup CR already exists` error when the reconciler re-entered.
  User Impact: The restore operation was unsuccessful due to this error.
  Resolution: The already-exists error is now ignored during the creation of the `ResourceBackup` CR. #1482
- Issue: Adding Tencent Cloud object storage failed during the object-lock support check due to a discrepancy in error reporting when object lock is not supported.
  User Impact: Users were unable to use Tencent Cloud object storage as a backup location for backup and restore operations.
  Resolution: Appropriate error-handling checks have been added to the verification of object-lock support for buckets in Tencent Cloud object storage. #1478
- Issue: A warning event was recorded in the applicationbackup CR when the S3 bucket already existed.
  User Impact: The warning event caused confusion for users.
  Resolution: The system now refrains from generating a warning event if the S3 object store returns the `ErrCodeBucketAlreadyExists` code. #1481
- Issue: Backing up the `kube-system` namespace is not a supported feature; however, for `all-namespace` backups or backups based on namespace labels, the `kube-system` namespace was inadvertently included.
  User Impact: Inclusion of the `kube-system` namespace in backups caused complications during the restore process.
  Resolution: The `kube-system` namespace is now excluded from `all-namespace` backups and backups based on namespace labels. #1506
- Issue: CSI-based restores failed on setups with the `csi-snapshot-webhook` admission webhook. The failure was caused by an already-exists error when creating the `volumesnapshotclass` resource.
  User Impact: Users were unable to perform CSI-based restores on setups featuring the `csi-snapshot-webhook` admission webhook.
  Resolution: A pre-check via a `get` call has been added before the `create` call. The `create` call now occurs only if the `get` call fails with a `NotFound` error, preventing conflicts with existing resources. #1567
23.8.0
New Features
- Stork now supports both asynchronous and synchronous DR migration for applications managed by operators. When `startApplications` is set to `false` for migrations, Stork ensures that application pods remain inactive in the destination cluster after migration. Additionally, Stork provides the flexibility to scale down applications by modifying Custom Resource (CR) specifications, using the suspend options feature. For applications controlled by cluster-wide operators that do not support scaling down via CR spec modifications, Stork offers a "stash strategy" to prevent application pods from becoming active prematurely during migration, ensuring a seamless transition to the destination cluster. #1451
- Stork now supports import workflows using DataExport CRs, so you can seamlessly transfer data from one PVC to another within the same cluster using rsync; a sketch follows this list.
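A minimal sketch of a DataExport CR for a PVC-to-PVC transfer, assuming the `kdmp.portworx.com/v1alpha1` API group and an rsync transfer type; the exact field names should be verified against the Stork/KDMP documentation:

```yaml
# Minimal sketch (field names assumed, verify against the docs):
# copying data from one PVC to another in the same cluster via rsync.
apiVersion: kdmp.portworx.com/v1alpha1
kind: DataExport
metadata:
  name: import-pvc-data
  namespace: demo
spec:
  type: rsync                      # assumed transfer type
  source:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: source-pvc
    namespace: demo
  destination:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: dest-pvc
    namespace: demo
```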
Improvement
- Stork now enables you to optimize resource utilization and ensure a more efficient and targeted restoration of applications. You can specify `resourceTypes` during the application backup process for a more granular selection of resources to include in the backup. When initiating an application restore, you can also choose specific resources to restore, providing greater flexibility in the recovery process; see the sketch below.
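A minimal sketch of selecting resource types at backup time; the placement of `resourceTypes` in the spec and the surrounding values are illustrative assumptions:

```yaml
# Minimal sketch (placement of resourceTypes assumed): backing up only
# selected resource kinds along with the volumes.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: ApplicationBackup
metadata:
  name: granular-backup
  namespace: demo
spec:
  backupLocation: my-backup-location   # assumed existing location
  namespaces:
    - demo
  resourceTypes:                       # only these kinds are backed up
    - PersistentVolumeClaim
    - Deployment
```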
Bug Fixes
- Issue: `ReplicaSets` that had been migrated and were not managed by Deployments could not be activated or deactivated using `storkctl`.
  User Impact: Users had to manually scale the `ReplicaSet` up or down using `kubectl`.
  Resolution: As part of the activation or deactivation process, `storkctl` can now scale the migrated `ReplicaSet` up or down. #1471
- Issue: When using Stork 23.7, KubeVirt pods enter a `CrashLoopBackOff` state, accompanied by the following error messages within the pod logs:
  ```
  /usr/bin/virt-launcher-monitor: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /etc/px_statfs.so)
  /usr/bin/virt-launcher-monitor: /lib64/libc.so.6: version `GLIBC_2.33' not found (required by /etc/px_statfs.so)
  ```
  User Impact: KubeVirt VMs become unusable.
  Resolution: This issue has been resolved in Stork version 23.8. For any existing virt-launcher pods experiencing the `CrashLoopBackOff` state due to this bug, follow these steps after upgrading to Stork 23.8:
  - Stop the KubeVirt VM.
  - Restart the KubeVirt VM. #1493
- Issue: Following migration, an ECK application installed using an operator requires all associated Custom Resource Definitions (CRDs) to be present in order to start successfully.
  User Impact: Users experienced difficulties scaling up the Elasticsearch application due to the absence of essential CRDs after the migration.
  Resolution: The migration process now includes the associated CRDs for a Custom Resource, preventing obstacles to scaling up the ECK application post-migration. #1494
- Issue: Following migration, applications controlled by Custom Resources (CRs) started automatically if the operator was already running in a different namespace.
  User Impact: Applications in the destination cluster started unexpectedly, contrary to the desired behavior of remaining inactive with `startApplications: false`.
  Resolution: A stashing strategy has been implemented for the CR content, storing it in a ConfigMap. The CR specification is applied only after the migration is activated, allowing the applications to start as intended. #1451
23.7.3
Bug Fix
Issue: Migrations using a cluster pair created with the `--unidirectional` option failed due to the absence of object store information in the destination cluster.
User Impact: Users couldn't run migrations with a unidirectional cluster pair.
Resolution: Stork now creates the object store information in the destination cluster so that migrations succeed. #1501, #1507, #1510
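For context, a unidirectional cluster pair is created along these lines. The `--unidirectional` flag comes from the note above; the other arguments are illustrative assumptions:

```
storkctl create clusterpair one-way-pair \
  --namespace kube-system \
  --src-kube-file /root/src-kubeconfig \
  --dest-kube-file /root/dest-kubeconfig \
  --unidirectional
```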
23.7.2
Bug Fix
Issue: If the status is larger than the maximum etcd request size (1.5 MB), the update of the KDMP generic backup status in the `Volumebackup` CR fails.
User Impact: At times, the failure of the update to the `Volumebackup` CR caused the KDMP backup to also fail.
Resolution: Stork now refrains from updating the actual status in the `Volumebackup` CR if it is large; instead, the status is written to the log of the job pod. #303
23.7.1
Bug Fixes
- Issue: The `aws-iam-authenticator` binary size was zero in the Stork container.
  User Impact: If a kubeconfig using `aws-iam-authenticator` was in use, creating a cluster pair on Amazon EKS failed.
  Resolution: Updated the `aws-iam-authenticator` download and the curl options to make sure the binary is downloaded successfully. #1472
- Issue: Updates made to the parameter associated with large resources in the `stork-controller-config` configmap were not preserved when the Stork pod restarted.
  User Impact: Whenever Stork pods restarted, users needed to re-apply the large-resource parameter in the `stork-controller-config` configmap.
  Resolution: The updated value of the large-resource parameter in the `stork-controller-config` configmap now persists across Stork pod restarts. #1473
- Issue: The backup object had an empty volume size for Portworx volumes when the Stork version was newer than 23.6.0 but the Portworx version was older than 3.0.0.
  User Impact: Portworx volumes on versions below 3.0.0 displayed a volume size of zero.
  Resolution: Volume size retrieval is now handled regardless of the Stork and Portworx versions. #1474