
Allow restricting the number of parallel linker invocations #9157

Open
luser opened this issue Feb 9, 2021 · 17 comments
Labels
A-jobserver (Area: jobserver, concurrency, parallelism)
A-linkage (Area: linker issues, dylib, cdylib, shared libraries, so)
C-feature-request (Category: proposal for a feature. Before PR, ping rust-lang/cargo if this is not `Feature accepted`)
S-needs-design (Status: Needs someone to work further on the design for the feature or fix. NOT YET accepted.)

Comments

@luser
Contributor

luser commented Feb 9, 2021

In CI at my work, we ran into a situation where rustc would get OOM-killed while linking example binaries:

error: linking with `cc` failed: exit code: 1
  |
  = note: "cc" <…>
  = note: collect2: fatal error: ld terminated with signal 9 [Killed]
          compilation terminated.

We were able to mitigate this by using a builder with more available memory, but that's an unfortunate workaround. We could dial down the parallelism of the whole build by explicitly passing -jN, but that would make the non-linking parts of the build slower by leaving CPU cores idle.

It would be ideal if we could explicitly ask cargo to lower the number of parallel linker invocations it will spawn. Compile steps are generally CPU-intensive, but linking is usually much more memory-intensive. In the extreme case, for large projects like Firefox and Chromium where the vast majority of code gets linked into a single binary, that link step far outweighs any other part of the build in terms of memory usage.

In terms of prior art, ninja has a concept of "pools" that allow expressing this sort of restriction in a more generic way:

Pools allow you to allocate one or more rules or edges a finite number of concurrent jobs which is more tightly restricted than the default parallelism.
This can be useful, for example, to restrict a particular expensive rule (like link steps for huge executables), or to restrict particular build statements which you know perform poorly when run concurrently.
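
For concreteness, a minimal sketch of what this looks like in a Ninja build file (the pool name and depth are illustrative):

    # Allow at most two concurrent link jobs, regardless of -j.
    pool link_pool
      depth = 2

    rule link
      command = $ld -o $out $in
      pool = link_pool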

The Ninja feature was originally motivated by Chromium's switch to Ninja and its support for distributed builds: compile jobs can run on remote build nodes, so there may be capacity to spawn many more of them in parallel, but link jobs must run on the local machine and therefore want a lower limit.

If this were implemented, one could imagine a further step whereby cargo could estimate how heavy individual linker invocations are by the number of crates they link, and attempt to set a reasonable default value based on that and the amount of available system memory.

@luser luser added the C-feature-request label Feb 9, 2021
@luser
Contributor Author

luser commented Feb 9, 2021

I believe this would also be useful for people using sccache in distributed compilation mode, as they can hit an exaggerated version of this problem, similar to the Chromium case described above: far more build capacity for compiling than for linking.

@ehuss ehuss added the A-jobserver and A-linkage labels Feb 10, 2021
@Be-ing

Be-ing commented Feb 27, 2021

I have no idea if this would be practical, but could cargo automatically monitor memory usage to adjust how many concurrent threads to use?

@levkk

levkk commented Oct 26, 2021

I've managed to work around this by enabling swap. Linking time did not suffer visibly. On Ubuntu, I followed this guide.
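
For reference, the usual steps in such guides look something like this (the swap size is illustrative):

    sudo fallocate -l 8G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile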

@sagudev

sagudev commented Mar 16, 2023

Proposed Solution

Adding a --link-jobs option to specify the number of jobs used for linking. The option would default to the number of parallel jobs.

Here is what the help output would look like (the -j option is shown for comparison):

-j, --jobs <N>                Number of parallel jobs, defaults to # of CPUs
--link-jobs <N>               Number of parallel jobs for linking, defaults to # of parallel jobs
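
Usage would then look something like this (hypothetical, since the flag does not exist yet), keeping compilation fully parallel while capping concurrent links:

    cargo build --jobs 16 --link-jobs 2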

@weihanglo weihanglo added the S-needs-design label Oct 31, 2023
@weihanglo
Member

@rustbot claim

@epage
Contributor

epage commented Nov 2, 2023

@weihanglo see also #7480

@weihanglo
Member

FWIW, the Cabal community had a discussion about this a while back: haskell/cabal#1529

@weihanglo
Member

weihanglo commented Nov 3, 2023

Potentially the unstable rustc flag -Zno-link can separate the linking phase from the rest of compilation (see #9019), and then Cargo could control the parallelism of linker invocations. Somebody needs to take a look at the status of -Zno-link/-Zlink-only in rustc (and that is very likely me).
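
A rough sketch of the two-phase flow those flags are meant to enable (nightly only, and hedged: the exact invocation shape depends on the current status of the flags):

    # Phase 1: compile without linking; emits a serialized .rlink file.
    rustc -Z no-link main.rs
    # Phase 2: later, run only the link step (potentially under a concurrency cap).
    rustc -Z link-only main.rlink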

@epage
Contributor

epage commented Nov 3, 2023

As this is focusing on the problem of OOM, I'm going to close in favor of #12912 so we keep the conversations in one place.

@epage epage closed this as not planned Nov 3, 2023
@weihanglo
Member

FWIW, setting split-debuginfo = "packed" or "unpacked" in the profile should reduce the linker's memory usage. In my experiment it roughly halved the memory usage per invocation.
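
For reference, that setting goes in the profile section of Cargo.toml, e.g.:

    [profile.dev]
    split-debuginfo = "unpacked"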

Something we might want to keep an eye on in rustc: rust-lang/rust#48762

@luser
Contributor Author

luser commented Nov 7, 2023

As this is focusing on the problem of OOM, I'm going to close in favor of #12912 so we keep the conversations in one place.

I suppose, although this is a very specific problem and I'm doubtful that the generic mechanisms being discussed in that issue will really help address it.

@weihanglo weihanglo reopened this Nov 7, 2023
@weihanglo
Member

Thanks. Reopened, as it might need both #9019 and #12912, and maybe other upstream work in rust-lang/rust, to make this happen.

@weihanglo
Member

FWIW, there is a --no-keep-memory flag for the GNU linker. I haven't tried it, but it might help until we make some progress on this.

https://linux.die.net/man/1/ld
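
If anyone wants to experiment, one way to pass that flag through cargo is via rustflags in .cargo/config.toml (a sketch; the target triple is an example):

    [target.x86_64-unknown-linux-gnu]
    rustflags = ["-C", "link-arg=-Wl,--no-keep-memory"]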

@weihanglo
Member

rust-lang/rust#117962 has made it into nightly. It could alleviate the pain of linker OOM to some extent.

@luser
Contributor Author

luser commented Dec 13, 2023

FWIW, there is a --no-keep-memory flag for the GNU linker. I haven't tried it, but it might help until we make some progress on this.

I suspect this will make performance much worse in the average case, unfortunately.

@soloturn
Contributor

I think this can be closed, as it is the linker's business not to run out of memory. mold does this, for example; see rui314/mold#1319 and its MOLD_JOBS environment variable. This avoids cargo trying to do everything and doing nothing well...
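
A sketch of using mold with that cap (assumes clang and mold are installed; the target triple is an example):

    # .cargo/config.toml
    [target.x86_64-unknown-linux-gnu]
    linker = "clang"
    rustflags = ["-C", "link-arg=-fuse-ld=mold"]

and then:

    MOLD_JOBS=1 cargo build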

@weihanglo weihanglo removed their assignment Oct 20, 2024
@epage
Contributor

epage commented Oct 21, 2024

For something like that, jobserver support in a linker would be a big help, so we could coordinate between rustc and the linker on how many threads are available to use.

That also only addresses the number of parallel threads, not actual memory consumption. As with threads, any solution will likely need coordination between the linker and rustc/cargo.

atodorov added a commit to gluwa/polkadot-sdk that referenced this issue Nov 14, 2024
What I am seeing with the move to Linode VMs is essentially this:
rust-lang/cargo#9157
rust-lang/cargo#12912

Because these new VMs have more CPU cores (16 new vs 4 old),
compilation is faster; however, this causes cargo to be overzealous and
spawn too many linker processes, which consume all of the available
memory (on a 64 GB VM) and cause an OOM error, forcing the kernel to
kill the linker process and causing cargo to fail!

Another alternative, which works, is using `--jobs 8`; however, that is
less optimal because it leaves CPU capacity unused and also affects the
number of parallel threads when executing the test suite!

WARNING: using `--release` is not an option because it breaks tests. The
polkadot-sdk code uses the `defensive!` macro, which is designed to panic
when running in debug mode, and multiple test scenarios rely on this
behavior via `#[should_panic]`!

WARNING: we still need the 64 GB of memory!
atodorov added a commit to gluwa/polkadot-sdk that referenced this issue Nov 15, 2024