VolumeSnapshots not cleaned up after backup is completed #7556
Opened by @abh:

Using `snapshotMoveData: true`, Velero seems to leave VolumeSnapshots behind after the backup has completed. I'm not sure if this is a bug or a deliberate feature, but it makes the feature unusable in my environment (backing up Ceph volumes to an S3 store). This has some overlap with #7550.

As best I can tell, this schedule makes a snapshot of every volume in the chosen namespaces, but only volumes associated with pods annotated with `backup.velero.io/backup-volumes` are backed up and, more importantly for me, deleted. This leaves behind all the snapshots that were not backed up. (Some of them in our system have very high data churn, so the Ceph cluster fills up.) I'd expect either every PVC to be backed up by the volume snapshot + data mover feature (or an option to annotate or label the PVCs that should be), or only the annotated ones; but most importantly, I'd expect every snapshot that Velero creates when using the snapshotMoveData feature to be cleaned up when the backup is done.

Debug bundle attached below.
bundle-2024-03-20-17-52-52.tar.gz
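For context, a minimal sketch of the pieces involved, assuming the Velero 1.13 CLI and placeholder namespace, pod, and volume names:

```sh
# Opt-in annotation referenced above; only volumes of pods annotated like
# this were backed up and had their snapshots cleaned up.
kubectl -n my-app annotate pod my-pod \
  backup.velero.io/backup-volumes=data-volume

# A schedule with snapshot data movement enabled (the CLI equivalent of
# setting snapshotMoveData: true in the Schedule's backup template).
velero schedule create nightly \
  --schedule="0 2 * * *" \
  --include-namespaces my-app \
  --snapshot-move-data

# After a backup completes, list the VolumeSnapshots Velero left behind.
kubectl get volumesnapshot --all-namespaces
```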
Comments
From the description, "Therefore, please double check the prerequisites for data mover backups": 1. the CSI plugin is installed; 2. the EnableCSI feature gate is enabled; 3. …
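A quick way to check the first two prerequisites from the cluster side (a sketch, assuming Velero runs in the default `velero` namespace):

```sh
# 1. The CSI plugin should appear among the server's init container images.
kubectl -n velero get deployment velero \
  -o jsonpath='{.spec.template.spec.initContainers[*].image}'

# 2. The EnableCSI feature gate should appear in the server's arguments.
kubectl -n velero get deployment velero \
  -o jsonpath='{.spec.template.spec.containers[0].args}' | grep EnableCSI
```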
Thanks @Lyndon-Li & @blackpiglet! I must have specified the wrong …. This bundle has more data, and the backups from ….
This log bundle still doesn't have the data mover backups.
I thought these were it? From the bundle: … and …
Indeed, the backup enabled the snapshot data move feature, but I also didn't find snapshot-data-move-related logs.
This is the installation command I used (plus editing the deployment and daemonset to increase the memory limits): …
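The command itself isn't preserved above; for reference, a typical Velero 1.13 install with CSI and the node agent looks roughly like this (a sketch: provider, bucket, region, credentials path, and plugin versions are placeholders):

```sh
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.9.0,velero/velero-plugin-for-csi:v0.7.0 \
  --bucket my-backup-bucket \
  --backup-location-config region=us-east-1 \
  --secret-file ./credentials-velero \
  --features=EnableCSI \
  --use-node-agent
```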
Please use the newer version of the Velero CSI plugin.
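One way to swap the plugin without reinstalling is the Velero CLI's plugin commands, which edit the server deployment's init containers (a sketch, assuming the image name used at install time):

```sh
# Replace the outdated CSI plugin with the release that matches Velero 1.13.
velero plugin remove velero-plugin-for-csi
velero plugin add velero/velero-plugin-for-csi:v0.7.0
```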
Oh! Thank you; I will upgrade. 0.3.0 is documented as the one to use for 1.13 at https://velero.io/docs/v1.13/csi/
Thanks for the feedback. I will update the document.
Upgrading to 0.7.0 made the snapshots get deleted as expected. 🎉🥳 Thank you so much for the prompt assistance. @blackpiglet, I'll leave it to you to close this issue, or keep it and maybe add a feature to have the plugin or Velero check that the other component is within an expected version range. I now have new feature requests around how the PVCs are chosen to be snapshotted and moved, but that's a separate issue. :-)
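For anyone verifying the same fix, a quick check after the next run (a sketch; backup and namespace names are placeholders):

```sh
# Confirm the backup completed and the snapshot data was moved.
velero backup describe my-backup --details

# No Velero-created VolumeSnapshots should be left once the backup finishes.
kubectl get volumesnapshot -n my-app
```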
Close this issue for now.