
first version of tests that reproduce mem leak #2001

Open
wants to merge 4 commits into base: master
Conversation

@grusev (Collaborator) commented Nov 14, 2024

Reference Issues/PRs

The tests contain logic that can catch large memory leaks by repeating a certain operation N times and triggering garbage collection after each repetition. Since a memory leak builds up iteratively, the success of this approach depends on how large the leak is: small leaks will not be captured unless the operation is repeated many, many times, which can be very costly in terms of time.
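
For orientation, a minimal sketch of this kind of check follows. This is illustrative only, not the implementation in this PR: the helper name assert_no_large_leak, the use of psutil for RSS measurements, and the one-second pause are assumptions.

import gc
import time

import psutil


def assert_no_large_leak(process_func, number_iterations, max_total_growth_bytes):
    # Run the suspected operation repeatedly, forcing garbage collection after
    # each run, and fail if the process RSS keeps growing past the threshold.
    proc = psutil.Process()
    initial_rss = proc.memory_info().rss
    for i in range(number_iterations):
        process_func()    # the operation suspected of leaking
        gc.collect()      # give the garbage collector a chance to reclaim memory
        time.sleep(1)     # short pause so freed memory is reflected in RSS
        growth = proc.memory_info().rss - initial_rss
        print(f"Iter {i}: growth since start {growth / 2**20:.2f}MB")
        assert growth < max_total_growth_bytes, (
            f"possible memory leak: process grew {growth} bytes after {i + 1} iterations")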

When this approach is used to prepare a new test, it is advised to fine-tune the check procedure's parameters for that particular case. To do so, execute the test from the command line with pytest's "-s" option so that the output of the print statements is not hidden. See the procedure's docstring for more details.

The output you should expect from the procedure should look like:

Start check process for memory leaks
Num iterrations: 30
Maxumum memory growth/lost to the examined process: 4808662528
Maxumum machine memory utilization: 4808662528
Lets pause for 10 secs so GC to kick in
Available memory 19730.07MB/[20688478208]

Process initial RSS 8671.72MB/[9092956160]
Starting watched code ...........
Lets pause for 7 secs so GC to kick in
Iter No[0] : Process did added (or if - number means cleaned) 1014.14MB/[1063407616] . Avail memory: 18793.03MB/[19705921536] Used Mem: 61.0%
Overall stats : Process growth since start 1014.14MB/[1063407616] AVG growth per iter 1014.14MB/[1063407616.0]
Minimum growth so far: 1014.14MB/[1063407616]
Maximum growth so far: 1014.14MB/[1063407616]
Number of times there was 50% drop in memory: 0
Starting watched code ...........
Lets pause for 7 secs so GC to kick in
Iter No[1] : Process did added (or if - number means cleaned) 990.93MB/[1039065088] . Avail memory: 17372.98MB/[18216894464] Used Mem: 63.9%
Overall stats : Process growth since start 2005.07MB/[2102472704] AVG growth per iter 1002.54MB/[1051236352.0]
Minimum growth so far: 1014.14MB/[1063407616]
Maximum growth so far: 2005.07MB/[2102472704]
Number of times there was 50% drop in memory: 0

..........................................................
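
For reference, the "AVG growth per iter" figure appears to be the total growth divided by the number of completed iterations; for the second iteration above:

avg_growth_bytes = 2102472704 / 2   # = 1051236352.0 bytes, i.e. ~1002.54MB, matching the output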

What does this implement or fix?

Any other comments?

Checklist

Checklist for code changes...
  • Have you updated the relevant docstrings, documentation and copyright notice?
  • Is this contribution tested against all ArcticDB's features?
  • Do all exceptions introduced raise appropriate error messages?
  • Are API changes highlighted in the PR description?
  • Is the PR labelled as enhancement or bug so it appears in autogenerated release notes?


def check_process_memory_leaks(process_func , number_iterrations, max_total_mem_lost_treshold, max_machine_memory_percentage) -> np.int64:
"""
This check accepts a function which will be called iterrativly 'number_iterrations'. During this
Review comment (Collaborator):

typo: iterratively/iteratively and number_iterrations/number_iterations

print(df)
del df

check_process_memory_leaks(proc_to_examine, 30, 4808662528, 80.0)
Review comment (Collaborator):

I would make 4808662528 into a variable and break down how it was calculated, so it is easier to see the value that is being tested.
Same goes for 808662528 below.
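
A hedged sketch of what that suggestion could look like (the variable names and the derivation comment are placeholders, not the actual calculation behind 4808662528):

MAX_TOTAL_MEM_LOST_THRESHOLD_BYTES = 4808662528   # TODO: document the derivation, e.g. iterations * expected per-iteration allocation
MAX_MACHINE_MEMORY_PERCENTAGE = 80.0

check_process_memory_leaks(proc_to_examine, 30, MAX_TOTAL_MEM_LOST_THRESHOLD_BYTES, MAX_MACHINE_MEMORY_PERCENTAGE)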
