openvas stuck with 100% CPU in destroy_scan_globals #1271
Comments
Hey @dhedberg, thanks for the bug report and sorry for the late reply. We tried to replicate the issue, but without success. We suspect this may be a bug in glib. Does this issue still happen, or have you found a solution since then? Do you have a minimal reproducer, so we can replicate the issue?
Thanks for looking into this. As far as I can tell from our logs and CPU usage monitoring, the issue still persists. For now we work around it with a cronjob that regularly kills any lingering `openvas --scan-start` processes that match tasks marked as Done in the API. We also automatically update to the latest stable images (at least) once a week. I don't have a minimal reproducer at hand. We don't do much customization: we create a custom port list from the IANA one with one port removed, a custom scan config from "Full and fast" with around 10 OIDs disabled, and have a few tasks that scan a bunch of IP ranges with SSH auth enabled. Usually only one task runs at a time.
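A cleanup job along these lines can be sketched as follows. This is a minimal sketch, not the poster's actual cronjob: the 24-hour threshold is an assumption, and the real workaround also cross-checks the task state in the API, which is omitted here.

```shell
# Hypothetical cleanup sketch: kill `openvas --scan-start` processes that
# have been running longer than a threshold.
is_stale() {
    # $1 = elapsed seconds since process start; the 24h threshold is an assumption
    [ "$1" -gt 86400 ]
}

for pid in $(pgrep -f 'openvas --scan-start'); do
    # etimes = elapsed time since the process started, in seconds
    age=$(ps -o etimes= -p "$pid" | tr -d ' ')
    if is_stale "${age:-0}"; then
        kill "$pid"
    fi
done
```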
Hi all, I seem to have the same issue.

```
OSPD[7] 2023-09-11 11:04:31,470: INFO: (ospd.command.command) Scan 3b93c961-e15f-460e-859a-15fa12c26368 added to the queue in position 2.
```

Checking with `ps fax` as root in the container (`root@ospd-openvas:/ospd-openvas# ps fax`), the scan process seems to stay in memory and is never cleaned up. Restarting the Docker container helps, but somewhere ospd-openvas is not freeing the memory on its own. The state column shows R (R = running or runnable, on run queue), and especially that last part is what is happening here. Is this a bug in ospd-openvas?

Debian 12 (bookworm). Extra information from top:

```
Tasks: 169 total, 4 running, 165 sleeping, 0 stopped, 0 zombie
1153249 docker  30  10  10.8t  16228  7556 R 100.0  0.1  9,57 openvas --scan-start 57278eb0-2a35-456b-8661-ec049570ea49
```

Scans keep running indefinitely. Running strace on the PID produces no output. The process is not a zombie; somewhere it is simply never getting killed.
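For reference, the state letter comes from the STAT column of `ps`. A quick way to list only processes currently in state R (running/runnable) is something like the following sketch; the `filter_running` helper name is made up for illustration.

```shell
# Print processes whose STAT column starts with R (running or runnable).
# The header line is skipped so the filter can also be tested on canned input.
filter_running() {
    awk 'NR > 1 && $2 ~ /^R/'
}

ps -eo pid,stat,pcpu,args | filter_running
```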
I've tried with the default docker-compose file on a Debian 12 server. The VM runs on Amazon, by the way, but that makes no difference: I previously used Debian 11.5 on Amazon (same architecture) with the same docker-compose file, and that one worked just fine.
I am seeing the same thing with our deployment using our helm chart: https://github.com/fpm-git/Greenbone-Community-Edition-Helm |
We've been investigating this, and it appears to be similar to the OP but with a more recent build of OpenVAS. Including the quoted bit below.

We only seem to see it when authenticated scans are being used.
Hi @ArnoStiefvater, Would you be able to debug if given a core-dump on |
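For anyone who wants to provide that, one way to get a core from the stuck process is sketched below. This is a hypothetical helper, assuming the process has no SIGABRT handler; `gcore <pid>` (from gdb) is a non-destructive alternative if gdb is installed in the container, though killing the process is acceptable here since it is stuck anyway.

```shell
# Hypothetical: make the stuck scan process dump core so the core file can
# be attached to the issue.
request_core() {
    ulimit -c unlimited 2>/dev/null || true  # allow core files of any size
    kill -s ABRT "$1"  # SIGABRT's default action: abort with a core dump
}
```

Where the kernel writes the core file depends on `/proc/sys/kernel/core_pattern`, which inside containers often points at a handler on the host.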
We're on the latest greenbone/ospd-openvas:stable, dated 2022-12-12T14:39:05.303393489Z, and we're seeing an issue where the openvas processes hang around after the tasks have been marked as done, taking up 100% of a CPU core.
The container is running in kubernetes with a deployment converted (with some small adaptations) from the community container docker-compose file.
Example:
The task is already marked as finished. Attaching with gdb gives:
It seems to be stuck in:
… which I think corresponds to
in glib/ghash.c
That's as far as I have dug; I'm guessing it might be some sort of memory corruption or concurrency issue?