Adjust packet mbuf size to reflect MTU #609
Merged
Due to #606 we needed to increase the packet buffer size to accommodate jumbo frames, as that is what host-to-host communication uses. This in turn raised the hugepage requirement of the dpservice pod from 4G to 16G on our compute nodes.
So instead, I created two memory pools: one for 1500 MTU and the other for 9100 MTU.
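Roughly, the two pools end up looking like this. This is only a minimal sketch, not the exact code in this PR; the pool names, mbuf counts, and cache size are assumptions, and the 1518/9118 data-room sizes are taken from the figures mentioned below:

```c
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Assumed sizes: 1500 MTU + 14 B Ethernet header + 4 B FCS = 1518,
 * 9100 MTU + 18 B = 9118; each pool also reserves the standard headroom. */
#define MBUF_SIZE_STD   (1518 + RTE_PKTMBUF_HEADROOM)
#define MBUF_SIZE_JUMBO (9118 + RTE_PKTMBUF_HEADROOM)

static struct rte_mempool *pool_std;
static struct rte_mempool *pool_jumbo;

static int create_pools(int socket_id, unsigned int nb_mbufs)
{
	/* Pool used in normal operation (1500 MTU traffic only). */
	pool_std = rte_pktmbuf_pool_create("mbuf_pool_1500", nb_mbufs,
					   256 /* cache */, 0 /* priv */,
					   MBUF_SIZE_STD, socket_id);
	if (pool_std == NULL)
		return -1;

	/* Pool used only in pf1-proxy mode (jumbo frames, 9100 MTU). */
	pool_jumbo = rte_pktmbuf_pool_create("mbuf_pool_9100", nb_mbufs,
					     256, 0, MBUF_SIZE_JUMBO, socket_id);
	if (pool_jumbo == NULL)
		return -1;

	return 0;
}
```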
In normal operation, dpservice only encounters 1500 MTU packets, yet the packet buffer size is set to `RTE_MBUF_DEFAULT_BUF_SIZE` from `rte_mbuf.h`:
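For reference, the upstream DPDK definition is roughly the following (the headroom value is the build-time default; exact location and values may vary across DPDK versions):

```c
/* From DPDK's rte_mbuf.h: 2048 bytes of data room plus the headroom
 * (RTE_PKTMBUF_HEADROOM, 128 by default), i.e. 2176 bytes per mbuf,
 * even though a 1500 MTU frame only needs 1518 bytes of data room. */
#define RTE_MBUF_DEFAULT_DATAROOM  2048
#define RTE_MBUF_DEFAULT_BUF_SIZE  \
	(RTE_MBUF_DEFAULT_DATAROOM + RTE_PKTMBUF_HEADROOM)
```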
So this PR lowers the buffer size to decrease memory requirements in normal mode and uses jumbo frames only in pf1-proxy mode. Unfortunately, `dpservice-dump` does not know which mode is active, so it always has to use the bigger variant; it only uses a small ring buffer, though, so the allocation is not huge.

I have deployed the 1518 size on an OSC cluster and it has been running for two weeks now without visible problems.
The 9118 pool has been tested on OSC, but not in its current state (i.e. using a TAP device); I chose to do it this way to keep this PR separate from the big changes to the proxy.