Aggressively shrink ancient storages when shrink isn't too busy. #2946
Conversation
Force-pushed from 9a2729a to 838e063.
Force-pushed from 4d869c2 to 003ba1f.
@dmakarov, where did you end up on this? We can move this forward Monday. Hopefully you have a machine running this?
I had it running on my dev machine for a few days. Now I added stat counters and will restart it with the new stats. I'd like to experiment a bit more with this.
Force-pushed from b18e934 to 1bfd6d0.
Please add a test that shows we add an ancient storage to shrink. It may be helpful to refactor the shrink fn so that it calculates the storages to shrink in a separate fn, so we can just check the output of that fn. Or you can do a fuller test which actually verifies the capacity is what you expect after running shrink. There should be tests that do this similarly already; look for tests that call …
Yes, I'm working on it.
This PR should have zero effect unless skipping rewrites is enabled via CLI.
The idea in the PR looks good to me.
I don't see too much downside for adding one ancient to shrink when we are not busy.
accounts-db/src/accounts_db.rs
Outdated
&& *capacity == store.capacity()
&& Self::is_candidate_for_shrink(self, &store)
{
    *capacity = 0;
Can we not overload the u64, and instead create an enum to indicate whether this storage is pre- or post-shrunk?
I don't know. Isn't capacity checked for being 0 in other logic, so that if we add an enum we still have to set capacity to 0 here for other code to work correctly?
We are overloading the 0 here, yes. We could do an enum and it would be much clearer what we're trying to do:
{AlreadyShrunk, CanBeShrunk(capacity: u64)}
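For illustration, that suggestion might look roughly like the sketch below. The type and helper names are hypothetical, not the actual accounts_db code, and Rust needs a struct-style variant to carry a named `capacity` field:

```rust
/// Hypothetical sketch of the suggested enum (names are illustrative).
enum AncientShrinkState {
    /// Already shrunk in an earlier pass; skip it.
    AlreadyShrunk,
    /// Still a candidate; remembers the capacity observed when it was selected.
    CanBeShrunk { capacity: u64 },
}

/// Returns true only if the entry is still a candidate and the storage has not
/// changed size since it was recorded.
fn still_shrinkable(entry: &AncientShrinkState, current_capacity: u64) -> bool {
    match entry {
        AncientShrinkState::AlreadyShrunk => false,
        AncientShrinkState::CanBeShrunk { capacity } => *capacity == current_capacity,
    }
}
```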
Could also just remove the element from the vec, but that could be expensive. I was assuming marking it as 'already shrunk' would be sufficient. Maybe none of this is necessary because we'll see that the new capacity doesn't match the old capacity and skip it anyway... Then we don't need to iter_mut at all and we can just iter. That seems simplest of all, and we already have to handle that case anyway.
This does cause us to look up way more storages.
An oldie but goodie: https://en.wikichip.org/wiki/schlemiel_the_painter%27s_algorithm
What is the suggested change? Not to change capacity?
An enum is fine with me. So is iterating. Alternatively, keep the vec sorted in reverse and pop the last one off the end, reducing the count. This would not require a re-allocation and would avoid revisiting ancient storages we already previously shrunk.
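A rough sketch of that last alternative; the data and tuple layout here are made up for illustration:

```rust
fn main() {
    // (slot, dead_bytes) pairs, purely illustrative.
    let mut candidates: Vec<(u64, u64)> = vec![(10, 500), (11, 900), (12, 300)];

    // Sort ascending by dead bytes so the best candidate ends up last...
    candidates.sort_by_key(|&(_slot, dead_bytes)| dead_bytes);

    // ...then pop() returns the best remaining candidate in O(1): no
    // re-allocation, and entries already taken are never revisited.
    while let Some((slot, dead_bytes)) = candidates.pop() {
        println!("would shrink slot {slot} ({dead_bytes} dead bytes)");
    }
}
```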
It looks like I need to rebase to fix the vulnerability check errors.
Rebased to resolve conflicts. I'm still working on unit tests. When ready, I'll renew review requests. Thanks.
    &mut ancient_slot_infos.best_slots_to_shrink,
);
// Reverse the vector so that the elements with the largest
// dead bytes are poped first when used to extend the
popped
// Reverse the vector so that the elements with the largest
// dead bytes are poped first when used to extend the
// shrinking candidates.
self.best_ancient_slots_to_shrink.write().unwrap().reverse();
Probably reverse them while the vec is still local, before swapping.
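A minimal sketch of that suggestion, assuming a struct and field shaped roughly like the one in the diff (names simplified, not the real AccountsDb code):

```rust
use std::sync::RwLock;

// Simplified stand-in for the AccountsDb field in the diff above.
struct Db {
    best_ancient_slots_to_shrink: RwLock<Vec<(u64, u64)>>,
}

impl Db {
    fn publish(&self, mut local: Vec<(u64, u64)>) {
        // Reverse while the data is still local, outside any lock...
        local.reverse();
        // ...then install it under the write lock in one short critical section.
        *self.best_ancient_slots_to_shrink.write().unwrap() = local;
    }
}
```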
accounts-db/src/accounts_db.rs
Outdated
@@ -1463,6 +1468,11 @@ pub struct AccountsDb {
    /// Flag to indicate if the experimental accounts lattice hash is enabled.
    /// (For R&D only; a feature-gate also exists to turn this on and make it a part of consensus.)
    pub is_experimental_accumulator_hash_enabled: AtomicBool,

    /// These are the ancient storages that could be valuable to shrink.
    /// sorted by largest dead bytes to smallest
I think they are sorted now smallest dead bytes to largest? I don't see where we are getting that sort order just from the diffs here, and I don't quite remember.
I just added sorting in another commit. I had to add a field to the tuple, so that we actually sort the elements by the amount of dead bytes.
// assumed to be in reverse order.
if shrink_slots.len() < SHRINK_INSERT_ANCIENT_THRESHOLD {
    let mut ancients = self.best_ancient_slots_to_shrink.write().unwrap();
    while let Some((slot, capacity)) = ancients.pop()
The `pop` is beautiful compared to my hacky original impl!
@@ -182,7 +183,8 @@ impl AncientSlotInfos {
    self.best_slots_to_shrink = Vec::with_capacity(self.shrink_indexes.len());
    for info_index in &self.shrink_indexes {
@dmakarov sorry to go in circles... reverse is probably right and simplest. If you look at `sort_shrink_indexes_by_bytes_saved`, I think we are already iterating from most bytes saved to least. So, after reversing, `best_slots_to_shrink` will be sorted correctly without the addition of a new field.
I think it's sorted on capacity not on the amount of dead bytes, though. Isn't it?
// dead bytes are popped first when used to extend the
// shrinking candidates.
self.best_slots_to_shrink.sort_by(|a, b| b.2.cmp(&a.2));
self.best_slots_to_shrink.reverse();
woohoo!
Should we reverse the Vec, or use a VecDeque and pop_front instead?
It's an option. How strongly do you feel about it for this PR?
Not strong.
I'll change it to a deque in a follow-up PR.
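For reference, a minimal sketch of the deque variant under discussion; the data is illustrative, not the real candidate list:

```rust
use std::collections::VecDeque;

fn main() {
    // Candidates kept best-first, purely illustrative (slot, capacity) pairs.
    let mut best: VecDeque<(u64, u64)> = VecDeque::from(vec![(12, 900), (10, 500), (11, 300)]);

    // pop_front() takes the best remaining candidate directly, with no need to
    // reverse a Vec and pop from the back.
    while let Some((slot, capacity)) = best.pop_front() {
        println!("next ancient candidate: slot {slot}, capacity {capacity}");
    }
}
```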
lgtm
Sorry, had to update a comment.
lgtm
lgtm
…a-xyz#2946)
* Tweak ancient packing algorithm
* Minor change
* Feedback
* Remove redundancy
* Correction
* Revert correction
* Loop
* Add test
* Fix clippy
* Comments
* Comment
* Comments
* Pop ancients
* Revert
* Checks
* Move reverse
* Typo
* Popped
* Sort
* Format
* Revert sort, back to reverse
* Fix comment
Problem
Ancient packing when skipping rewrites has some non-ideal behavior.
It can sometimes be true that an ancient storage never meets the 90%(?) threshold for shrinking. However, every dead account that an ancient storage keeps around forces that account to remain in the in-memory index and starts a chain reaction of other accounts, such as zero-lamport accounts, that must also be kept alive.
Summary of Changes
Add another slot for shrinking when the number of shrink candidate slots is too small (less than 10). The additional slot is the one whose storage has the largest number of dead bytes. This aggressively shrinks ancient storages even when they are below the normal threshold, and keeps the system moving towards the ideal of storing each non-zero-lamport account exactly once and having no zero-lamport accounts.
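In rough terms, the added behavior looks like the sketch below. The helper and its signature are hypothetical; the real change lives in the AccountsDb shrink path and also re-checks each storage's recorded capacity before using it:

```rust
const SHRINK_INSERT_ANCIENT_THRESHOLD: usize = 10;

/// Hypothetical helper: when the normal shrink candidate list is small, top it
/// up with the ancient storages that currently have the most dead bytes.
fn maybe_add_ancients(
    shrink_slots: &mut Vec<u64>,
    // (slot, capacity) pairs, best candidate last so pop() returns it first.
    best_ancient_slots_to_shrink: &mut Vec<(u64, u64)>,
) {
    while shrink_slots.len() < SHRINK_INSERT_ANCIENT_THRESHOLD {
        match best_ancient_slots_to_shrink.pop() {
            Some((slot, _capacity)) => shrink_slots.push(slot),
            None => break,
        }
    }
}
```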
Reworked #2849.