[vioscsi] Fix SendSRB notify regression #1150
Conversation
Fixes a regression in SendSRB of [vioscsi]. The regression was introduced in virtio-win#684 between 7dc052d and fdf56dd, specifically due to commits f1338bb and fdf56dd (considered together). Signed-off-by: benyamin-codez <[email protected]>
Note my edit above, lots of distractions atm... 8^d
Edit: Still got it wrong. Thanks @frwbr for the correction!
@vrozenfe @JonKohler @sb-ntnx @frwbr @foxmox All the checks have passed. If there are no concerns, I'll move this from draft to review in about 12 hours.
Hi, many thanks for opening this targeted PR! I applied your patch on top of current master (22d0908) and ran the reproducer from #756 (comment) again (details on setup below). With this patch, I have not seen the issue again. I have one comment:
It has been some time since I last looked into the code -- but isn't it the call to ... ? The call to ...
Setup details:
Guest: Windows 10
```shell
./qemu-stable-9.0.1/qemu-system-x86_64 \
  -accel kvm \
  -name 'win10-vioscsi-versions,debug-threads=on' \
  -chardev 'socket,id=qmp,path=/var/run/qemu-server/151.qmp,server=on,wait=off' \
  -mon 'chardev=qmp,mode=control' \
  -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
  -mon 'chardev=qmp-event,mode=control' \
  -pidfile /var/run/qemu-server/151.pid \
  -smbios 'type=1,uuid=32b5c31e-ec75-4d62-9d0b-a756f876e943' \
  -smp '4,sockets=1,cores=4,maxcpus=4' \
  -nodefaults \
  -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' \
  -vnc 'unix:/var/run/qemu-server/151.vnc,password=on' \
  -cpu 'qemu64,+aes,enforce,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vpindex,+kvm_pv_eoi,+kvm_pv_unhalt,+pni,+popcnt,+sse4.1,+sse4.2,+ssse3' \
  -m 8192 \
  -object 'iothread,id=iothread-virtioscsi0' \
  -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
  -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
  -device 'pci-bridge,id=pci.3,chassis_nr=3,bus=pci.0,addr=0x5' \
  -device 'vmgenid,guid=98ea6dc4-8be3-4c04-9b90-ce97bd6ba7b2' \
  -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
  -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
  -device 'VGA,id=vga,bus=pci.0,addr=0x2,edid=off' \
  -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' \
  -device 'virtio-scsi-pci,id=virtioscsi0,bus=pci.3,addr=0x1,iothread=iothread-virtioscsi0' \
  -drive 'file=/dev/pve/vm-151-disk-1,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on' \
  -device 'scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0' \
  -device 'ahci,id=ahci0,multifunction=on,bus=pci.0,addr=0x7' \
  -drive 'file=/dev/pve/vm-151-disk-0,if=none,id=drive-sata0,format=raw,cache=none,aio=io_uring,detect-zeroes=on' \
  -device 'ide-hd,bus=ahci0.0,drive=drive-sata0,id=sata0,bootindex=100' \
  -netdev 'type=tap,id=net0,ifname=tap151i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown' \
  -device 'e1000,mac=BA:FF:FF:84:10:E7,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=102' \
  -rtc 'driftfix=slew,base=localtime' \
  -machine 'hpet=off,type=pc-i440fx-9.0' \
  -global 'kvm-pit.lost_tick_policy=discard'
```
LGTM in general
@JonKohler & @frwbr, thank you for the feedback. Friedrich, you are of course correct re: the above.
I will update the post. I have way too many distractions at the moment... I've also been going down that rabbit hole... I think my brain might need rebasing before I'm done... 8^O Anyway, I'm sure I'll get there in the next week or two, just in time for viostor to start getting mainstream multiqueue support and become the new default...!! That being said, I have enjoyed the edification and improving what I can, and it's been nice touching C again.
Vadim, any issues with this one...? Regards,
Sorry for the delay in response. All the best,
My pleasure, Vadim. Happy to help. This one should bring welcome relief to many once merged and packaged. Best regards,
Backports fixes and improvements from vioscsi PRs virtio-win#1150 and virtio-win#1162. The virtqueue struct vq was also removed in favour of adaptExt->vq[QueueNumber], which results in a minor performance increase. Signed-off-by: benyamin-codez <[email protected]>
Related issues:
#756
#623
#907
...
[likely others]
Regression was introduced in #684 between 7dc052d and fdf56dd.
Specifically due to commits f1338bb and fdf56dd (considered together).
Prior to the regression, we did not issue virtqueue_notify() inside the spinlock. This behaviour has been restored.
Prior to the regression, we issued virtqueue_kick_prepare() inside the spinlock. This behaviour has been restored.
Note: We also do not want to issue virtqueue_notify() for a corrupt or failed buffer, so this has been removed where virtqueue_add_buf() does not return SUCCESS. The virtqueue_add_buf() routine will return 0 on SUCCESS or otherwise a negative number, usually -28 (ENOSPC).
Freedom for Windows guests once held captive...! 8^D
cc: @vrozenfe @JonKohler @sb-ntnx @frwbr @foxmox
Related external issues (at least confounded by this regression):
https://bugzilla.proxmox.com/show_bug.cgi?id=4295
https://bugzilla.kernel.org/show_bug.cgi?id=199727
...
[likely others]