out of memory #30
I am using a VM (VMware) running Postgres 12.4 on Docker, on Debian Buster.
Hi, sorry for the long delay in answering. Can you share a bit more about what exactly you're doing? Have you tried running the same jobs without pg_background, for instance using multiple connections and dispatching the work with […]? In any case, it seems that your server / Docker image is configured to allow overcommit. It might be a better option here to disallow overcommit and let the processes fail with an […] error.
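If overcommit is indeed the culprit, a minimal sketch of disabling it on the Docker host could look like the following (assumption: the `vm.overcommit_*` sysctls are not namespaced, so they must be set on the host, not inside the container, and the exact ratio depends on your VM's RAM and swap):

```shell
# Assumption: the kernel is overcommitting memory, so the OOM killer
# kills a backend instead of malloc() failing cleanly with ENOMEM.
# Mode 2 = never overcommit; allocations beyond the commit limit fail.
sysctl -w vm.overcommit_memory=2
# Commit limit = swap + overcommit_ratio% of RAM; tune to your machine.
sysctl -w vm.overcommit_ratio=80
```

With `vm.overcommit_memory=2`, a failed allocation surfaces as an "out of memory" ERROR on the affected backend rather than the whole process being killed, which is usually easier to diagnose and recover from.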
I have several XML extractions from table xml to tables A, B, C, D, E and F, which I am running in parallel. They take about 1 minute to run, and I have a pg_sleep of 20 seconds in my loop.

My aim is to process a total of 1000 extractions, i.e. 1000 batches of 6 at a time. When I run the extractions serially it takes 6 hours to complete; using pg_background should reduce this to 1 hour. However, when I am testing on 100 files it rarely gets past processing 10 files without getting OOM errors. I made all the postgresql.conf and other OS-level changes suggested for this problem, but to no avail. I then saw that another user had logged the same issue, and you replied that you would modify your code to fix it. Would it be possible to have some config file where you could set a parameter to increase memory, rather than a C code modification?
Thanks.
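For reference, the fan-out described above can be sketched with pg_background's documented functions (`pg_background_launch`, which returns the worker's pid, and `pg_background_result`, which blocks until that worker finishes). The table names and the elided extraction query are placeholders, not the reporter's actual code:

```sql
-- Sketch: launch one background worker per target table, then collect
-- results. The INSERT ... SELECT body is a placeholder for the real
-- XML extraction query.
DO $$
DECLARE
  pids int[] := '{}';
  pid  int;
  tbl  text;
BEGIN
  FOREACH tbl IN ARRAY ARRAY['a', 'b', 'c', 'd', 'e', 'f'] LOOP
    pid := pg_background_launch(
      format('INSERT INTO %I SELECT ... FROM xml', tbl));
    pids := pids || pid;
  END LOOP;

  -- pg_background_result() waits for each worker and returns its
  -- command tag for statements that produce no result set.
  FOREACH pid IN ARRAY pids LOOP
    PERFORM * FROM pg_background_result(pid) AS r(result text);
  END LOOP;
END $$;
```

Note that each launched worker is a full backend with its own `work_mem`, `maintenance_work_mem`, etc., so six concurrent extractions can use several times the memory of one serial run, which is consistent with the OOM appearing only in the parallel case.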