Fix packet forwarding between vz and socket_vmnet #2680
Conversation
@balajiv113 can you review?
Example iperf3 run - case 1: 2 lima vms:
Running iperf3 on the host, testing the server vm. Tested on M1 Pro.
Retry logs on client:
Retry logs on server:
Example iperf3 run - case 2: 2 lima vms:
Tested on M1 Pro.
Retry logs on client:
Retry logs on server:
Example iperf3 run - case 3: lima vm running. Tested on M1 Pro.
Retry logs on server vm:
I think the idea behind this change is good.
I have a few comments though.
Force-pushed from 79c83f9 to e8e80c8.
Example test output:
// QEMUPacketConn converts raw network packet to a QEMU supported network packet.
type QEMUPacketConn struct {
	unixConn net.Conn
}

func forwardPackets(qemuConn *qemuPacketConn, vzConn *packetConn) {
Do we need our custom forwardPackets ??
I would still prefer to use inetproxy itself. Unless it doesn't work even after wrapping
We could simply wrap the packetConn with the fileconn. This way retry on ENOBUFS are present.
https://github.com/lima-vm/lima/blob/master/pkg/vz/network_darwin.go#L49
Do we need our custom forwardPackets ??
Yes, for 2 reasons:
- tcpproxy hides errors during io.Copy(), making debugging impossible
- we don't need tcpproxy since copying bytes between the sockets is trivial
I would still prefer to use inetproxy itself. Unless it doesn't work even after wrapping
Can you explain why?
Looking at https://github.com/inetaf/tcpproxy:
Proxy TCP connections based on static rules, HTTP Host headers, and SNI server names (Go package or binary)
We don't do any of that. We used only a tiny bit of tcpproxy for copying bytes around, and this is better done in lima itself, where we can implement it in the best way for lima and change it easily when needed.
We could simply wrap the packetConn with the fileconn. This way retry on ENOBUFS are present. https://github.com/lima-vm/lima/blob/master/pkg/vz/network_darwin.go#L49
Adding another layer of wrapping to keep an unneeded dependency does not sound like the right way to me.
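For context, the retry logic being discussed is small. Here is a minimal sketch of a retry-on-ENOBUFS write loop; the helper name writeWithRetry and the retry delay are assumptions made for this example, not the exact code in this PR:

```go
package forwarding

import (
	"errors"
	"net"
	"syscall"
	"time"
)

// writeRetryDelay is an assumed value for this sketch; the delay used by
// the actual implementation may differ.
const writeRetryDelay = 100 * time.Microsecond

// writeWithRetry writes one packet to the datagram socket, retrying while
// the kernel reports ENOBUFS (an expected, transient condition on BSD based
// systems when the socket buffer is full).
func writeWithRetry(conn net.Conn, pkt []byte) error {
	for {
		_, err := conn.Write(pkt)
		if err == nil {
			return nil
		}
		if errors.Is(err, syscall.ENOBUFS) {
			// Socket buffer is full; wait briefly and retry the same packet.
			time.Sleep(writeRetryDelay)
			continue
		}
		return err
	}
}
```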
Force-pushed from e8e80c8 to b6ada4c.
Please squash the commits.
Force-pushed from b6ada4c to d49ac31.
Thanks
We used an external package (tcpproxy) for proxying between unix stream and datagram sockets. This package cannot handle the ENOBUFS error, an expected condition on BSD based systems, and worse, it hides errors and silently stops forwarding packets when a write to the vz socket fails with ENOBUFS [1].
Fix the issues by replacing tcpproxy with a simpler and more direct implementation that will be easier to maintain.
Fixes:
- Fix error handling when a write to the vz datagram socket fails with ENOBUFS. We retry the write until it succeeds, with a very short sleep between retries. A similar solution is used in gvisor-tap-vsock [2].
- Fix error handling when we cannot read a packet header or body from the socket_vmnet stream socket. Previously we logged an error and continued to send corrupted packets to vz from the point of the failure.
- Fix error handling when writing a packet to the socket_vmnet stream socket returns after writing a partial packet. Now we handle short writes and write the complete packet. Previously this would break the protocol and continue to send corrupted packets from the point of the failure.
- Log an error if forwarding packets from vz to socket_vmnet or from socket_vmnet to vz fails.
Simplification:
- Use binary.Read() and binary.Write() to read and write the qemu packet header.
Visibility:
- Make QEMUPacketConn private since it is an implementation detail of vz when using socket_vmnet.
Testing:
- Add a packet forwarding test covering the happy path in 10 milliseconds.
[1] lima-vm/socket_vmnet#39
[2] containers/gvisor-tap-vsock#370
Signed-off-by: Nir Soffer <[email protected]>
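To illustrate the "Simplification" item above, here is a rough sketch of reading and writing a length-prefixed packet with binary.Read()/binary.Write(). The 4-byte big-endian length prefix matches qemu's stream socket framing but is an assumption for this sketch, and the readFrame/writeFrame helpers are invented names, not lima's actual API:

```go
package forwarding

import (
	"encoding/binary"
	"io"
)

// readFrame reads one length-prefixed frame from the stream socket.
// io.ReadFull ensures we never return a partial frame, which would corrupt
// every following packet on the stream.
func readFrame(r io.Reader) ([]byte, error) {
	var size uint32
	if err := binary.Read(r, binary.BigEndian, &size); err != nil {
		return nil, err
	}
	frame := make([]byte, size)
	if _, err := io.ReadFull(r, frame); err != nil {
		return nil, err
	}
	return frame, nil
}

// writeFrame writes the length header and then the frame body. A fuller
// implementation (as described in the commit message) also handles short
// writes so a partially written frame never breaks the stream protocol.
func writeFrame(w io.Writer, frame []byte) error {
	if err := binary.Write(w, binary.BigEndian, uint32(len(frame))); err != nil {
		return err
	}
	_, err := w.Write(frame)
	return err
}
```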