v1.18: accounts-db: fix 8G+ memory spike during hash calculation (backport of #1308) (#1318)

accounts-db: fix 8G+ memory spike during hash calculation (#1308)

We were accidentally doing several thousand 4MB allocations - even
during incremental hashing - which added up to 8G+ memory spikes over
~2s every ~30s.

Fix by using Vec::new() in the identity function. Empirically, 98%+ of
reduces join arrays with fewer than 128 elements, and only the last few
reduces join large vecs. Because realloc grows capacity exponentially we
don't see pathological reallocation: each reduce does at most one
realloc (and often zero, thanks to the exponential growth).

(cherry picked from commit 2c71685)
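The allocation behavior the message relies on can be sketched in plain std Rust (a minimal standalone illustration, not Agave code; the `accum`/`batch` names are hypothetical):

```rust
fn main() {
    // Vec::new() performs no heap allocation until the first element is
    // pushed, so an identity closure returning it is effectively free.
    let empty: Vec<u64> = Vec::new();
    assert_eq!(empty.capacity(), 0);

    // Simulate one reduce step: an accumulator absorbing a small batch.
    // `extend` reserves the needed capacity up front, so joining a batch
    // into an empty vec costs at most one allocation.
    let mut accum: Vec<u64> = Vec::new();
    let batch: Vec<u64> = (0..100).collect();
    accum.extend(batch);
    assert!(accum.capacity() >= 100);

    // Subsequent growth is exponential (capacity roughly doubles), which
    // is why later, larger joins see at most one realloc and often none.
    let before = accum.capacity();
    accum.push(0);
    assert!(accum.capacity() >= before);

    println!("final len = {}", accum.len());
}
```

By contrast, `Vec::with_capacity(max_bin)` in the identity closure allocates immediately every time the closure runs, which is what multiplied into the spike.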

Co-authored-by: Alessandro Decina <[email protected]>
mergify[bot] and alessandrod authored Jun 17, 2024
1 parent 6a36903 commit c027cfc
Showing 1 changed file with 7 additions and 3 deletions.
10 changes: 7 additions & 3 deletions accounts-db/src/accounts_hash.rs

@@ -838,9 +838,13 @@ impl<'a> AccountsHasher<'a> {
                 accum
             })
             .reduce(
-                || DedupResult {
-                    hashes_files: Vec::with_capacity(max_bin),
-                    ..Default::default()
+                || {
+                    DedupResult {
+                        // Allocate with Vec::new() so that no allocation actually happens. See
+                        // https://github.com/anza-xyz/agave/pull/1308.
+                        hashes_files: Vec::new(),
+                        ..Default::default()
+                    }
                 },
                 |mut a, mut b| {
                     a.lamports_sum = a
