I’ve written a simple TCP forwarder that sits between 2 computers and forwards messages back and forth between them.
The program is working okay as long as I send small messages. For example, 1KB messages work fine. But when I send a larger message that has to be split across multiple Ethernet frames the program seems to lose some of the data. For example, if I send a 16KB hunk to the OFP program, it might only receive 12KB or 13KB of the message. It forwards that data along just fine, but the remainder of the message never seems to show up. Wireshark traces show the data is being sent to my app correctly, but the app doesn’t see all of that data.
As a different test, I modified the “IPv4 TCP socket local IP: send + recv” test in the ofp/example/socket program so I could connect to it from a remote computer. I made the buf in _receive_tcp() larger so I could send it a bigger message. I ran the ofp/example/socket/socket program and connected from a remote computer (actually, I was running 2 VMware VMs). When I sent a 4,500 byte message from the remote computer, the socket program only received 3,964 bytes. This happens every time. It’s 100% reproducible. 3,964 bytes seems to be the maximum size it can receive. I could see in a Wireshark trace that a 4,500 byte Ethernet frame arrived on the fp0 interface, but it was truncated somewhere between the Ethernet driver and the socket program.
I also ran the same test with debugging turned on (“debug 0x1f” from the CLI). I could see that the entire message was received by ODP. I think it was 4,552 bytes at that point. The program always crashes if I receive a large message with debugging turned on. It complains “Too long message!” and then crashes.
So that’s what I’m seeing. Is there a limit on the size of Ethernet frames? Do I need to force it to run with MTU=1500? Or maybe is there an easy way to increase the limitation within the software at build time or run time?
Here's an analysis from Bogdan:
So, you have a large MTU and want to transfer big eth frames, right?
First thing to check is SHM_PKT_POOL_BUFFER_SIZE: OFP creates an ODP packet pool of (SHM_PKT_POOL_SIZE / SHM_PKT_POOL_BUFFER_SIZE) elements, each SHM_PKT_POOL_BUFFER_SIZE bytes.
(Of course, ODP adds headroom (ODP_CONFIG_PACKET_HEADROOM) and tailroom (ODP_CONFIG_PACKET_TAILROOM) per packet and does some alignment, but that is not important in this case.)
From this packet pool buffer size you need to subtract the Ethernet, IP (or IPv6 plus extension headers), and TCP header sizes; the remaining size must be large enough to accommodate the maximum payload you want to transport.
Note: the issue was created automatically with bugzilla2github tool
Bugzilla Bug ID: 91
Date: 2016-08-09 23:22:55 +0200
From: Justin Riggs <[email protected]>
To: Sorin Vultureanu <[email protected]>
Last updated: 2016-11-22 15:33:20 +0100
Bugzilla Comment ID: 174
Date: 2016-08-09 23:22:55 +0200
From: Justin Riggs <[email protected]>