The disk import via 'DataVolume' fails for a warm migration of a Windows VM #3430
This looks very similar to a bug that was recently fixed: #3385. Is there any chance you can try with a version that includes that change? It looks like it was backported to CDI releases 1.58, 1.59, and 1.60.
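To confirm which CDI version is actually deployed before retrying, a quick check along these lines should work (a sketch, assuming the default CDI resource name `cdi` and the default `cdi` namespace; adjust both for your cluster):

```sh
# Read the version CDI reports for itself via its cluster-scoped custom resource
kubectl get cdi cdi -o jsonpath='{.status.observedVersion}'

# Alternatively, inspect the image tag on the CDI deployment (default namespace: cdi)
kubectl -n cdi get deployment cdi-deployment \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```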
Hello @mrnold, thanks a lot for your reply and for pointing to the respective PR. I will try with one of those newer CDI releases and share the results.
Hello @mrnold, the migration completed successfully without those errors with CDI v1.58. Thanks a lot for your help. However, I am facing a new issue now: after the DV migration completes, the VM comes up in the 'Running' state, but when I try to take its console the guest fails to boot. JFI: secureBoot is kept 'false' in the VM template.
What could be causing this post-migration boot issue?
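For reference, a minimal sketch of where secure boot lives in a KubeVirt VirtualMachine template, per KubeVirt's documented EFI firmware settings; the surrounding structure is an assumption and not copied from this issue's actual template:

```yaml
# Hypothetical excerpt of a KubeVirt VirtualMachine spec, showing the
# firmware fields relevant to the secureBoot value mentioned above.
spec:
  template:
    spec:
      domain:
        firmware:
          bootloader:
            efi:
              secureBoot: false  # as kept in the reporter's VM template
        # Note: if secureBoot were set to true, KubeVirt would additionally
        # require the SMM feature (features.smm.enabled: true).
```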
Long shot, but maybe @lyarwood or @vsibirsk are able to help.
Usually that means something didn't work right in the VM conversion step: either the VirtIO drivers weren't installed, or the virtual hardware configuration was not quite as expected. Forklift is probably the right place to continue the discussion.
What happened:
I have a DV whose source is vddk (spec.source.vddk), which I am using to migrate a Windows VM from VMware to KubeVirt. If the Windows VM is in the 'powered on' state before starting the migration, the (warm) migration doesn't start and the corresponding import pod fails. Attaching the file importer-plan-winserver2k19-warm-powered-on.log, which contains the corresponding errors. If the Windows VM is in the 'powered off' state before starting the migration, the (cold) migration completes without any issue.
For Linux VMs, both 'warm' and 'cold' migrations work without any issue.
I tried the migration with vSphere administrator privileges (full access), but I still get the same errors. A sketch of the kind of DataVolume involved is shown below.
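For context, a minimal sketch of a warm-migration VDDK DataVolume, following the CDI documentation linked in the reproduction steps below; all names, URLs, UUIDs, thumbprints, and snapshot IDs here are placeholders, not values from this issue:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: vddk-dv                                                      # placeholder name
spec:
  source:
    vddk:
      backingFile: "[datastore1] winserver2k19/winserver2k19.vmdk"   # disk path in vSphere
      url: "https://vcenter.example.com"                             # vCenter endpoint
      uuid: "52260566-b032-36cb-55b1-79bf29e30490"                   # source VM UUID (placeholder)
      thumbprint: "20:6C:8A:..."                                     # vCenter TLS thumbprint
      secretRef: "vddk-credentials"                                  # Secret with vCenter credentials
      initImageURL: "registry.example.com/vddk-init:latest"          # image providing the VDDK library
  checkpoints:                 # present only for warm migration
    - previous: ""
      current: "snapshot-101"  # vSphere snapshot ID (placeholder)
  finalCheckpoint: false       # set to true for the last delta copy
  storage:
    resources:
      requests:
        storage: 32Gi
```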
What you expected to happen:
Both 'cold' and 'warm' migrations of Windows VMs should work, just as they do for Linux VMs.
How to reproduce it (as minimally and precisely as possible):
A DV with spec.source.vddk is auto-created, similar to this example: https://github.com/kubevirt/containerized-data-importer/blob/main/doc/datavolumes.md#vddk-data-volume, after a Forklift Migration CR is created (a sketch of such a CR follows below). The resulting DV's disk import fails for the Windows VM if the VM is in the powered-on state; the same Windows VM's cold migration works, and for Linux VMs both 'cold' and 'warm' migrations work, as explained earlier.
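A minimal sketch of the Forklift CRs involved, based on Forklift's documented forklift.konveyor.io/v1beta1 API; all names and namespaces are placeholders, and the network/storage map references a real Plan needs are omitted for brevity:

```yaml
# Plan describing what to migrate; warm: true requests a warm migration.
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: winserver2k19-plan         # placeholder
  namespace: konveyor-forklift
spec:
  warm: true                       # warm migration of a powered-on VM
  provider:
    source:
      name: vmware-provider        # placeholder vSphere provider
      namespace: konveyor-forklift
    destination:
      name: host                   # placeholder KubeVirt provider
      namespace: konveyor-forklift
  # map.network / map.storage references omitted for brevity
  vms:
    - name: winserver2k19          # placeholder VM name
---
# Migration CR that starts the plan; creating it triggers the
# auto-creation of the VDDK DataVolume described above. For warm
# migrations, the final cutover can be scheduled via spec.cutover.
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: winserver2k19-warm         # placeholder
  namespace: konveyor-forklift
spec:
  plan:
    name: winserver2k19-plan
    namespace: konveyor-forklift
```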
Environment:
- CDI version (use `kubectl get deployments cdi-deployment -o yaml`):
- Kubernetes version (use `kubectl version`):
- Kernel (e.g. `uname -a`): N/A