lots of zombie processes? #106
Another instance, with about 6K zombie processes per fuzzing process:

```
localuser@bot12b:~/archive/logs$ ps ax | wc -l
18278
localuser@bot12b:~/archive/logs$ ps ax | grep '\[duk\] <defunct>' | wc -l
5909
localuser@bot12b:~/archive/logs$ ps ax | grep '\[jq\] <defunct>' | wc -l
11817
localuser@bot12b:~/archive/logs$ ps ax | grep ' <defunct>' | wc -l
17727
localuser@bot12b:~/archive/logs$ pgrep -af angora/bin/fuzzer
4940 /angora/bin/fuzzer --sync_afl -i inputs -o outputs -t ./lava-ang/bin/jq.tt -j 2 --time_limit 9.0 -- ./lava-ang/bin/jq @@
30609 /angora/bin/fuzzer --sync_afl -i inputs -o outputs -t ./lava-ang/bin/jq.tt -j 2 --time_limit 9.0 -- ./lava-ang/bin/jq @@
31005 /angora/bin/fuzzer --sync_afl -i inputs -o outputs -t ./lava-ang/bin/duk.tt -j 2 --time_limit 2.0 -- ./lava-ang/bin/duk @@
```
I'm seeing roughly one zombie per minute, which suggests this isn't a high-frequency hot path. Capturing the spawned child in the `Forksrv` struct and waiting on it in `Drop` seems to take care of it. One potential issue I noticed: zombies sometimes persist for up to ≈40s before they're collected. This might be fine, and just mean the fork server isn't dropped immediately.
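If that ≈40s latency ever mattered, the child's status could also be polled without blocking via `std`'s `Child::try_wait`. A minimal standalone sketch (not Angora code; the `true` binary and the function name are just illustrative stand-ins):

```rust
use std::process::{Command, Stdio};
use std::thread::sleep;
use std::time::Duration;

// Spawn a short-lived child and poll it with `try_wait`, which
// collects the exit status (clearing the zombie) without blocking.
fn reap_nonblocking() -> bool {
    let mut child = Command::new("true")
        .stdout(Stdio::null())
        .spawn()
        .expect("failed to spawn child");
    for _ in 0..500 {
        match child.try_wait().expect("try_wait failed") {
            Some(status) => return status.success(), // zombie reaped here
            None => sleep(Duration::from_millis(10)),
        }
    }
    false // child never exited within ~5s
}
```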
```diff
---
 fuzzer/src/executor/forksrv.rs | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/fuzzer/src/executor/forksrv.rs b/fuzzer/src/executor/forksrv.rs
index bec8098..0dc209d 100644
--- a/fuzzer/src/executor/forksrv.rs
+++ b/fuzzer/src/executor/forksrv.rs
@@ -23,6 +23,7 @@ pub struct Forksrv {
     path: String,
     pub socket: UnixStream,
     uses_asan: bool,
+    child: std::process::Child,
 }

 impl Forksrv {
@@ -48,7 +49,7 @@ impl Forksrv {
         let mut envs_fk = envs.clone();
         envs_fk.insert(ENABLE_FORKSRV.to_string(), String::from("TRUE"));
         envs_fk.insert(FORKSRV_SOCKET_PATH_VAR.to_string(), socket_path.to_owned());
-        match Command::new(&target.0)
+        let child = match Command::new(&target.0)
             .args(&target.1)
             .stdin(Stdio::null())
             .envs(&envs_fk)
@@ -59,7 +60,7 @@ impl Forksrv {
             .pipe_stdin(fd, is_stdin)
             .spawn()
         {
-            Ok(_) => (),
+            Ok(child) => child,
             Err(e) => {
                 error!("FATAL: Failed to spawn child. Reason: {}", e);
                 panic!();
@@ -88,6 +89,7 @@ impl Forksrv {
             path: socket_path.to_owned(),
             socket,
             uses_asan,
+            child,
         }
     }
@@ -167,6 +169,10 @@ impl Drop for Forksrv {
         if self.socket.write(&fin).is_err() {
             debug!("Fail to write socket !! FIN ");
         }
+        match self.child.wait() {
+            Ok(s) => debug!("Forksrv child reaped with status {:?}", s),
+            Err(e) => warn!("Forksrv child wait failure: {}", e),
+        }
         let path = Path::new(&self.path);
         if path.exists() {
             if fs::remove_file(&self.path).is_err() {
```
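Stripped of the Angora specifics, the patch boils down to the following standalone pattern (a sketch with illustrative names; `ReapOnDrop` and the `true` target are not Angora code):

```rust
use std::process::{Child, Command};

// Minimal stand-in for the patched Forksrv: own the spawned Child and
// reap it in Drop, so no zombie outlives the fork server handle.
struct ReapOnDrop {
    child: Child,
}

impl Drop for ReapOnDrop {
    fn drop(&mut self) {
        // wait() blocks until the child exits and collects its status,
        // removing the <defunct> entry from the process table.
        match self.child.wait() {
            Ok(status) => eprintln!("child reaped with status {:?}", status),
            Err(e) => eprintln!("child wait failure: {}", e),
        }
    }
}

fn spawn_and_reap() -> u32 {
    let child = Command::new("true").spawn().expect("spawn failed");
    let pid = child.id();
    drop(ReapOnDrop { child }); // Drop runs wait() here
    pid
}
```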
I think I'm running into issues where Angora might be failing because it is not reaping zombie child processes, filling up the process table, and then becoming unable to launch new processes. It appears that the fork server does read the status of the child processes, so there must be another invocation that doesn't check the exit codes?
Do you know where this might be originating from?
Here's an example with base64 from LAVA-M; the number of defunct processes just keeps growing over time.
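A quick way to watch that growth (assuming a POSIX shell with `ps` and `grep`; the bracket in the pattern keeps the monitoring pipeline from matching its own command line in the `ps` output):

```shell
# Print a timestamped zombie count every 10 seconds; Ctrl-C to stop.
while true; do
    printf '%s zombies: %s\n' "$(date +%T)" \
        "$(ps ax | grep -c '<defunc[t]>')"
    sleep 10
done
```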
And here's the error log from another instance on the same host: