
Adjust packet mbuf size to reflect MTU #609

Merged: 2 commits from feature/mempool_size into main on Sep 30, 2024
Conversation

@PlagueCZ (Contributor) commented on Sep 26, 2024

Due to #606 we needed to increase the packet buffer size to accommodate jumbo frames, since that is what host-to-host communication uses. This in turn raised the hugepage requirement of the dpservice pod from 4G to 16G on our compute nodes.

So instead, I created two memory pools: one for 1500 MTU and one for 9100 MTU.

In normal operation, dpservice only encounters 1500 MTU packets, yet the packet buffer size is set to RTE_MBUF_DEFAULT_BUF_SIZE (rte_mbuf.h):

```c
#define RTE_MBUF_DEFAULT_DATAROOM  2048
#define RTE_MBUF_DEFAULT_BUF_SIZE  \
        (RTE_MBUF_DEFAULT_DATAROOM + RTE_PKTMBUF_HEADROOM)
/* RTE_PKTMBUF_HEADROOM is 128 by default, so each mbuf buffer is 2176 bytes */
```

So this PR lowers the buffer size to decrease memory requirements in normal mode and uses jumbo-sized buffers only in pf1-proxy mode. Unfortunately, dpservice-dump does not know which mode is active, so it always has to use the bigger variant; it only uses a small ring buffer though, so the allocation is not huge.
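For illustration, creating the two pools could look roughly like the sketch below. This is not the actual dpservice code: the pool names, the cache size, and the mtu_to_buf_size() helper are hypothetical (the 350000 mbuf count matches the pool size mentioned in the comments below). Only the size arithmetic is taken from DPDK conventions: MTU + 14-byte Ethernet header + 4-byte FCS, hence 1518 and 9118, plus DPDK's mandatory headroom.

```c
#include <stdbool.h>
#include <stdint.h>

#include <rte_ether.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

/* Hypothetical helper: mbuf data room size for a given MTU.
 * On-wire frame = MTU + Ethernet header (14 B) + FCS (4 B),
 * so 1500 -> 1518 and 9100 -> 9118; DPDK additionally reserves
 * RTE_PKTMBUF_HEADROOM in front of the packet data. */
static uint16_t mtu_to_buf_size(uint16_t mtu)
{
	return mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + RTE_PKTMBUF_HEADROOM;
}

static int create_pkt_pools(bool pf1_proxy_enabled)
{
	/* Standard pool, sized for normal (1500 MTU) traffic. */
	struct rte_mempool *pool_std = rte_pktmbuf_pool_create(
		"pkt_pool_1500", 350000 /* mbufs */, 256 /* per-core cache */,
		0 /* priv size */, mtu_to_buf_size(1500), rte_socket_id());
	if (!pool_std)
		return -1;

	/* Jumbo pool, only needed when pf1-proxy handles 9100 MTU frames. */
	if (pf1_proxy_enabled) {
		struct rte_mempool *pool_jumbo = rte_pktmbuf_pool_create(
			"pkt_pool_9100", 350000, 256, 0,
			mtu_to_buf_size(9100), rte_socket_id());
		if (!pool_jumbo)
			return -1;
	}
	return 0;
}
```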

I have deployed the 1518 size on an OSC cluster and it has been running for two weeks now without visible problems.

The 9118 pool has been tested in OSC, but not in the current state (i.e. using a TAP device); I chose to do it this way to keep this PR separate from the big changes to the proxy.

@github-actions bot added the size/XS and enhancement labels on Sep 26, 2024
@github-actions bot added the size/S label and removed size/XS on Sep 26, 2024
@PlagueCZ changed the title from "Decrease packet mbuf size to reflect MTU" to "Adjust packet mbuf size to reflect MTU" on Sep 26, 2024
@PlagueCZ marked this pull request as ready for review on September 26, 2024 01:00
@PlagueCZ requested a review from a team as a code owner on September 26, 2024 01:00
@PlagueCZ marked this pull request as draft on September 26, 2024 12:54
@github-actions bot added the size/M label and removed size/S on Sep 26, 2024
@PlagueCZ force-pushed the feature/mempool_size branch 2 times, most recently from b3a752c to fe1178c, on September 26, 2024 16:14
@PlagueCZ marked this pull request as ready for review on September 26, 2024 16:32
@PlagueCZ (Contributor, Author) commented:

Some hard data from the Prometheus exporter:

  • old dpservice (with a 900k-packet memory pool) had HeapSize: 3G and AllocSize: 2.5G
  • new dpservice (with a 350k-packet memory pool) has HeapSize: 2G and AllocSize: 1.5G
  • new dpservice with pf1-proxy (and thus an additional jumbo pool) has HeapSize: 3G and AllocSize: 2G

@PlagueCZ (Contributor, Author) commented:

I had to change the Meson handling of the ENABLE_ definitions (i.e. make them all visible to the C++ code as well), because ENABLE_PF1_PROXY now affects the dpdk_layer structure, which in turn is accessed by the gRPC C++ code. That was causing strange errors, as the C++ side of course used a different structure definition.
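To illustrate the failure mode (this is a hypothetical layout, not the actual dpservice structure): if a struct contains a conditionally compiled member, every translation unit that touches it must see the same definition of the flag, otherwise the C and C++ sides disagree on the struct's size and field offsets.

```c
/* Sketch only -- field names are made up for illustration. */
struct dpdk_layer {
	struct rte_mempool *pkt_pool;        /* standard 1500-MTU pool */
#ifdef ENABLE_PF1_PROXY
	struct rte_mempool *jumbo_pkt_pool;  /* jumbo pool, proxy mode only */
#endif
	int nr_lines;                        /* any member after the #ifdef shifts
	                                        if the flag differs per language */
};
```

In Meson this typically means defining the flag for both languages, e.g. something like add_project_arguments('-DENABLE_PF1_PROXY', language: ['c', 'cpp']), rather than only for the C sources.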

@guvenc (Collaborator) left a comment:

Nice that we could reduce the memory footprint of dpservice.

@guvenc merged commit 6d6d43c into main on Sep 30, 2024
6 checks passed
@guvenc deleted the feature/mempool_size branch on September 30, 2024 08:46