Unbounded RSS memory usage (possibly from RocksDB) in Kvrocks #1765
-
Would you mind providing the version of Kvrocks? Different versions use different versions of RocksDB, and the configuration in our system may also have changed.
-
@xiaofan8421 Thanks for your detailed analysis. Could you share the Y-axis values of this picture? I'm not sure whether it's a memory fragmentation issue caused by compaction.
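To separate memory that RocksDB itself tracks from allocator overhead, something like the following C++ sketch can help (it assumes direct access to the underlying rocksdb::DB handle, which is not how you would normally query a running Kvrocks instance; the property names are standard RocksDB properties):

#include <iostream>
#include <string>

#include "rocksdb/db.h"

// Print the memory RocksDB itself accounts for, so it can be compared
// against the process RSS reported by pmap. Anything RSS holds beyond
// these numbers is likely allocator fragmentation or untracked buffers.
void DumpRocksDBMemory(rocksdb::DB* db) {
  const char* props[] = {
      "rocksdb.cur-size-all-mem-tables",     // active + immutable memtables
      "rocksdb.block-cache-usage",           // data/index/filter blocks in cache
      "rocksdb.block-cache-pinned-usage",    // blocks pinned by iterators etc.
      "rocksdb.estimate-table-readers-mem",  // index/filter memory outside the cache
  };
  for (const char* prop : props) {
    std::string value;
    if (db->GetProperty(prop, &value)) {
      std::cout << prop << " = " << value << " bytes" << std::endl;
    }
  }
}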
-
facebook/rocksdb#3216 (comment) I think this is similar to the problem we're seeing. But I caught a cold and don't have the bandwidth to dive into it; maybe you can take a look.
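If the suspicion is that RSS is being held by the glibc allocator rather than by RocksDB (fragmentation or retained free lists), one debugging experiment is to force the allocator to report and trim its heap. This is not something Kvrocks exposes; it is just a glibc-specific sketch one could compile into a test build:

#include <malloc.h>  // glibc-specific: malloc_stats(), malloc_trim()

#include <cstdio>

// Debugging experiment (glibc only): if RSS drops noticeably after
// malloc_trim(), the "extra" memory was free heap retained by glibc
// (fragmentation / per-thread arenas), not live RocksDB allocations.
// Running with MALLOC_ARENA_MAX=2 or under jemalloc would be the usual
// follow-up experiments in that case.
void InspectGlibcHeap() {
  malloc_stats();                  // prints per-arena in-use vs. total to stderr
  int released = malloc_trim(0);   // return unused heap pages to the OS
  std::printf("malloc_trim released memory: %s\n", released ? "yes" : "no");
}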
-
I'm interested in this as well. We mainly switched from Redis to Kvrocks to reduce memory usage and, more importantly, to constrain it if possible, regardless of the loss in throughput/performance. With the default configuration (8 workers, …) and the container limited to 12Gi at most, we are a little concerned about it.
-
Background:
Our expected RSS memory usage is 1G (metadata block cache) + 1G (subkey block cache) + 4 × 64M (all memtables) + others ≈ 3G (maybe below that; anything above 3G is unexpected).
But in fact, RSS on our Debian 11 server is almost 6.6G and is still growing slowly, which is far beyond what our team expected.
On the other hand, the same Kvrocks version running on our CentOS 7 server is fine: about 3G of RSS, and the slow growth does not happen there.
Our reproduction benchmark: our internal binary stress test, which uses PSYNC to exercise this confusing problem.
Our RocksDB cache configuration in Kvrocks (see the sketch after this list for how these settings roughly map onto RocksDB options):
rocksdb.metadata_block_cache_size 1024
rocksdb.subkey_block_cache_size 1024
rocksdb.share_metadata_and_subkey_block_cache yes
rocksdb.block_size 16384
rocksdb.cache_index_and_filter_blocks yes
rocksdb.max_write_buffer_number 4
rocksdb.write_buffer_size 64
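For reference, here is roughly how cache settings like these correspond to RocksDB options. This is only a sketch of typical RocksDB API usage, not Kvrocks' actual code (the per-column-family wiring and the exact sizing of the shared cache are Kvrocks implementation details):

#include "rocksdb/cache.h"
#include "rocksdb/options.h"
#include "rocksdb/table.h"

// Roughly what the cache-related settings above correspond to in RocksDB.
// Note what the block cache does and does not bound: data/index/filter
// blocks are charged to it, but memtables, compaction I/O buffers,
// iterator pins and allocator overhead are not.
rocksdb::Options BuildOptionsSketch() {
  rocksdb::Options options;

  // metadata_block_cache_size / subkey_block_cache_size (MiB); a single
  // cache because share_metadata_and_subkey_block_cache is yes.
  auto block_cache = rocksdb::NewLRUCache(2048ULL << 20);

  rocksdb::BlockBasedTableOptions table_options;
  table_options.block_cache = block_cache;
  table_options.block_size = 16384;                    // rocksdb.block_size
  table_options.cache_index_and_filter_blocks = true;  // charge index/filter to the cache
  options.table_factory.reset(
      rocksdb::NewBlockBasedTableFactory(table_options));

  options.write_buffer_size = 64ULL << 20;  // rocksdb.write_buffer_size (MiB)
  options.max_write_buffer_number = 4;      // rocksdb.max_write_buffer_number
  return options;
}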
Reproduction machine:
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
Here is the Kvrocks RSS memory, collected with the pmap tool:
First, we suspected that Kvrocks itself was OOMing because of its own memory leak, but I compiled it with Google ASan and after running for a few days it reported nothing, so that looks OK.
Second, I did some further research. Besides the pmap output above, I dumped the contents of some Kvrocks memory segments, for example:
Third, I noticed an unexpected phenomenon in the monitoring system: RSS memory grows rapidly while RocksDB compaction runs at midnight.
Based on the above findings, I think it is RocksDB that uses most of this RSS memory, but why is it unbounded with the configuration above? I have no idea for now. After all, we have already made the block cache shared (metadata/subkey/index/filter).
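For what it's worth, the block cache only bounds the blocks charged to it; memtables, compaction scratch buffers, table-reader working memory and allocator fragmentation live outside that limit. RocksDB itself offers a way to charge memtables against the block cache and to make the cache limit strict; whether and how Kvrocks wires this up depends on the version, so the following is only a sketch of the raw RocksDB API, not Kvrocks' configuration:

#include <memory>

#include "rocksdb/cache.h"
#include "rocksdb/options.h"
#include "rocksdb/table.h"
#include "rocksdb/write_buffer_manager.h"

// Sketch: put more of RocksDB's memory under one explicit budget.
// strict_capacity_limit makes the block cache fail inserts instead of
// growing past its capacity, and a WriteBufferManager constructed with
// the cache charges memtable memory against that same cache.
rocksdb::Options BoundedMemoryOptionsSketch() {
  rocksdb::LRUCacheOptions cache_opts;
  cache_opts.capacity = 2ULL << 30;         // 2 GiB shared block cache
  cache_opts.strict_capacity_limit = true;  // refuse inserts rather than exceed it
  auto cache = rocksdb::NewLRUCache(cache_opts);

  rocksdb::BlockBasedTableOptions table_opts;
  table_opts.block_cache = cache;
  table_opts.cache_index_and_filter_blocks = true;

  rocksdb::Options options;
  options.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table_opts));
  options.write_buffer_manager = std::make_shared<rocksdb::WriteBufferManager>(
      512ULL << 20 /* memtable budget */, cache);
  return options;
}

Even with all of that, compaction working memory and allocator behavior (glibc vs. jemalloc) remain outside the cache budget, which may explain the Debian/CentOS difference reported above.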