
out of memory #30

Open
xencloudtech opened this issue Sep 18, 2020 · 5 comments

Comments

@xencloudtech

I have several XML extractions from a source XML table into tables A, B, C, D, E, and F, which I run in parallel. Each takes about 1 minute, and I have a `pg_sleep` of 20 seconds in my loop.

My aim is to process 1000 extractions in total, i.e. 1000 × 6 jobs. Run serially, the whole batch takes 6 hours to complete; using pg_background should reduce this to about 1 hour. However, when testing on 100 files it rarely gets past 10 files before hitting OOM errors. I applied all the postgresql.conf and OS-level suggestions for this problem, but to no avail. I then saw that another user logged the same issue and you replied that you would modify your code to fix it. Would it be possible to have a config parameter to increase the memory limit, rather than requiring a C code modification?
Thanks.
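For context, a minimal sketch of the kind of dispatch loop described above, assuming the pg_background extension's documented `pg_background_launch` / `pg_background_result` functions; the table names and the `extract_xml` helper are invented placeholders, not the actual workload:

```sql
-- Sketch only: launch one background worker per target table, wait,
-- then collect the command tags. All object names are hypothetical.
DO $$
DECLARE
    tgt  text;
    pid  integer;
    pids integer[] := '{}';
BEGIN
    FOREACH tgt IN ARRAY ARRAY['table_a','table_b','table_c',
                               'table_d','table_e','table_f'] LOOP
        pid := pg_background_launch(
            format('INSERT INTO %I SELECT extract_xml(doc) FROM xml_source',
                   tgt));
        pids := pids || pid;
    END LOOP;

    PERFORM pg_sleep(20);  -- the 20-second throttle mentioned above

    FOREACH pid IN ARRAY pids LOOP
        -- For non-SELECT commands the result is the command tag.
        PERFORM * FROM pg_background_result(pid) AS (result text);
    END LOOP;
END $$;
```

Each `pg_background_launch` call starts a separate background worker with its own memory footprint, which is why six concurrent extractions can consume several times the memory of one serial run.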
memory

@xencloudtech
Author

```
$ cat /proc/meminfo
MemTotal:       16426848 kB
MemFree:        10400020 kB
MemAvailable:   15375700 kB
Buffers:          362292 kB
Cached:          4695016 kB
SwapCached:        72024 kB
Active:          2021964 kB
Inactive:        3324020 kB
Active(anon):     137116 kB
Inactive(anon):   289436 kB
Active(file):    1884848 kB
Inactive(file):  3034584 kB
Unevictable:          16 kB
Mlocked:              16 kB
SwapTotal:       8384508 kB
SwapFree:        8201212 kB
Dirty:                52 kB
Writeback:             0 kB
AnonPages:        214652 kB
Mapped:           224088 kB
Shmem:            137868 kB
Slab:             524068 kB
SReclaimable:     389468 kB
SUnreclaim:       134600 kB
KernelStack:        7300 kB
PageTables:        12952 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    16597932 kB
Committed_AS:    8356732 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
Percpu:             6688 kB
HardwareCorrupted:     0 kB
AnonHugePages:     47104 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB
DirectMap4k:      792448 kB
DirectMap2M:    15984640 kB
DirectMap1G:     2097152 kB
```

@xencloudtech
Author

meminfo

@xencloudtech
Author

I am using a VM (VMware) running PostgreSQL 12.4 on Docker.

@xencloudtech
Author

```
4.19.0-1-amd64 #1 SMP Debian 4.19.12-1 (2018-12-22) x86_64 GNU/Linux
```

Debian buster

@rjuju
Collaborator

rjuju commented May 16, 2021

Hi,

Sorry for the long delay in answering. Can you share a bit more about what exactly you're doing? Have you tried running the same jobs without pg_background, for instance using multiple connections and dispatching the work with `SELECT ... FOR UPDATE SKIP LOCKED`? I'm wondering whether this is due to pg_background, or whether your jobs are simply consuming more resources than you have (which seems more likely).
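As an illustration of that dispatch pattern, a sketch assuming a hypothetical `extraction_queue` table with `id` and `status` columns; each plain connection runs this statement in a loop to claim one pending job at a time without blocking the other workers:

```sql
-- Sketch only: claim-and-process one job per iteration. SKIP LOCKED
-- makes concurrently running workers skip rows another worker holds.
WITH job AS (
    SELECT id
    FROM extraction_queue            -- hypothetical queue table
    WHERE status = 'pending'
    ORDER BY id
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
UPDATE extraction_queue q
SET status = 'done'                  -- the real extraction would go here
FROM job
WHERE q.id = job.id
RETURNING q.id;
```

With N connections running this loop, at most N jobs are in flight at once, so memory use stays bounded by the number of connections rather than by the number of queued extractions.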

In any case, it seems that your server / Docker image is configured to allow overcommit. It might be a better option here to disallow overcommit and let the processes fail with an out-of-memory error when a problem occurs. That will be less disruptive to the rest of your workload, and may reveal some important details in the memory contexts of the affected backend(s).
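Disallowing overcommit on a Linux host uses the standard kernel sysctls; a sketch follows, where the ratio value is only a starting point to tune for the workload (these must be set on the host, not inside the container):

```shell
# Sketch: strict overcommit accounting (mode 2), so allocations fail
# with ENOMEM instead of triggering the OOM killer later.
sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_ratio=80

# Persist across reboots (filename is an arbitrary example):
echo "vm.overcommit_memory=2" >> /etc/sysctl.d/99-postgres.conf
echo "vm.overcommit_ratio=80" >> /etc/sysctl.d/99-postgres.conf
```

This is a config fragment rather than a runnable script; it requires root on the host, and inside a Docker container these sysctls are read-only.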
