
Improve memory usage by moving ZMQ serialize buffer from ZmqProducerStateTable to ZmqClient #955

Merged
merged 2 commits into sonic-net:master on Nov 25, 2024

Conversation

liuh-80
Contributor

@liuh-80 liuh-80 commented Nov 23, 2024

Why I did it

Every ZmqProducerStateTable allocates a 16MB buffer; this can be improved by sharing the same buffer in ZmqClient.

How I did it

Improve memory usage by moving the ZMQ serialize buffer from ZmqProducerStateTable to ZmqClient.
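
A minimal, self-contained sketch of the idea (not the actual swss-common code; the 16MB value for MQ_RESPONSE_MAX_COUNT and the getSendBuffer() accessor are assumptions for illustration, taken from the PR description and diff):

```cpp
#include <cstddef>
#include <vector>

// Assumed buffer size for illustration; the real constant comes from the
// swss-common headers (the PR description cites 16MB per table).
constexpr size_t MQ_RESPONSE_MAX_COUNT = 16 * 1024 * 1024;

// Sketch: the serialize buffer lives in ZmqClient and is allocated once.
class ZmqClient
{
public:
    ZmqClient() { m_sendbuffer.resize(MQ_RESPONSE_MAX_COUNT); }
    std::vector<char>& getSendBuffer() { return m_sendbuffer; }

private:
    std::vector<char> m_sendbuffer;
};

// Sketch: each producer table borrows the client's buffer instead of
// resizing its own 16MB m_sendbuffer in initialize().
class ZmqProducerStateTable
{
public:
    explicit ZmqProducerStateTable(ZmqClient &client) : m_client(client) {}

    void send()
    {
        std::vector<char> &buffer = m_client.getSendBuffer();
        // ... serialize key/field-value tuples into `buffer`,
        // then hand it to the ZMQ socket ...
        (void)buffer;
    }

private:
    ZmqClient &m_client;
};
```

With the buffer owned by ZmqClient, N producer tables that share one client use a single 16MB buffer instead of N separate ones.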

Work item tracking

How to verify it

Passes all test cases.

Which release branch to backport (provide reason below if selected)

  • 201811
  • 201911
  • 202006
  • 202012
  • 202106
  • 202111

Description for the changelog

Improve memory usage by moving the ZMQ serialize buffer from ZmqProducerStateTable to ZmqClient.

Link to config_db schema for YANG module changes

A picture of a cute animal (not mandatory but encouraged)

@liuh-80 liuh-80 marked this pull request as ready for review November 23, 2024 03:15
@liuh-80 liuh-80 requested a review from ganglyu November 23, 2024 03:15
@@ -38,8 +38,6 @@ ZmqProducerStateTable::ZmqProducerStateTable(RedisPipeline *pipeline, const stri

void ZmqProducerStateTable::initialize(DBConnector *db, const std::string &tableName, bool dbPersistence)
{
m_sendbuffer.resize(MQ_RESPONSE_MAX_COUNT);
Contributor Author

@liuh-80 liuh-80 Nov 23, 2024


This line causes the GNMI memory issue: it allocates 16MB of memory for every ZmqProducerStateTable.

However, by adding debug logs I confirmed the dtor is invoked by GNMI, so this memory should be released.

Even so, the GNMI service memory keeps increasing rapidly until it reaches 1.3GB.

It seems related to Go memory management or the Go SWIG wrapper: some memory allocated and released on the C++ side is not freed on the Go side.

Contributor

I don't understand: if the dtor is invoked, why is this memory not released?

Contributor Author

I'm not sure why; this can only be reproduced on starlab.

@liuh-80 liuh-80 merged commit 6bac82b into sonic-net:master Nov 25, 2024
17 checks passed