Aggressively shrink ancient storages when shrink isn't too busy. #2946
@@ -79,6 +79,9 @@ struct AncientSlotInfos {
     total_alive_bytes_shrink: Saturating<u64>,
     /// total alive bytes across all slots
     total_alive_bytes: Saturating<u64>,
+    /// slots that have dead accounts and thus the corresponding slot
+    /// storages can be shrunk
+    best_slots_to_shrink: Vec<(Slot, u64, u64)>,
 }

 impl AncientSlotInfos {
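For orientation, here is a minimal, stand-alone sketch (not code from the PR) of what each element of the new `best_slots_to_shrink` field holds, assuming the `(slot, capacity, dead_bytes)` layout that the later hunk pushes into it; the `Slot` alias here is a hypothetical stand-in for the crate's `Slot` type (a `u64`):

```rust
/// Hypothetical alias standing in for the crate's `Slot` type.
type Slot = u64;

/// One shrink candidate, in the layout the later hunk pushes:
/// (slot, storage capacity in bytes, dead bytes in that storage).
type BestSlotToShrink = (Slot, u64, u64);

fn main() {
    // e.g. slot 100 has a 1 MiB storage of which 256 KiB is dead account data
    let candidates: Vec<BestSlotToShrink> = vec![(100, 1_048_576, 262_144)];
    assert_eq!(candidates[0].2, 262_144);
}
```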
@@ -177,8 +180,11 @@ impl AncientSlotInfos {
                 * tuning.percent_of_alive_shrunk_data
                 / 100,
         );
+        self.best_slots_to_shrink = Vec::with_capacity(self.shrink_indexes.len());
         for info_index in &self.shrink_indexes {
             let info = &mut self.all_infos[*info_index];
+            let dead_bytes = info.capacity - info.alive_bytes;
+            self.best_slots_to_shrink.push((info.slot, info.capacity, dead_bytes));
             if bytes_to_shrink_due_to_ratio.0 >= threshold_bytes {
                 // we exceeded the amount to shrink due to alive ratio, so don't shrink this one just due to 'should_shrink'
                 // It MAY be shrunk based on total capacity still.

Review comment: @dmakarov sorry to go in circles... reverse is probably right and simplest. If you look at

Review comment: I think it's sorted on capacity not on the amount of dead bytes, though. Isn't it?
@@ -188,6 +194,10 @@ impl AncientSlotInfos {
                 bytes_to_shrink_due_to_ratio += info.alive_bytes;
             }
         }
+        // Sort the vector so that the elements with the largest
+        // dead bytes are popped first when used to extend the
+        // shrinking candidates.
+        self.best_slots_to_shrink.sort_by(|a, b| b.2.cmp(&a.2));
     }

     /// after this function, only slots that were chosen to shrink are marked with
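Taken together, the two hunks above build one `(slot, capacity, dead_bytes)` tuple per shrink candidate and then order the list by dead bytes, largest first. Below is a self-contained sketch of that selection step, using hypothetical slot data rather than real `AccountsDb` state:

```rust
/// Hypothetical alias standing in for the crate's `Slot` type.
type Slot = u64;

/// Collect (slot, capacity, dead_bytes) and order by dead bytes, largest first,
/// mirroring the `sort_by(|a, b| b.2.cmp(&a.2))` in the diff above.
fn best_slots_to_shrink(infos: &[(Slot, u64, u64)]) -> Vec<(Slot, u64, u64)> {
    // each input tuple is (slot, capacity, alive_bytes) -- a shape chosen for this sketch
    let mut best: Vec<(Slot, u64, u64)> = infos
        .iter()
        .map(|&(slot, capacity, alive_bytes)| (slot, capacity, capacity - alive_bytes))
        .collect();
    best.sort_by(|a, b| b.2.cmp(&a.2));
    best
}

fn main() {
    let infos: [(Slot, u64, u64); 3] = [(10, 1000, 900), (11, 2000, 500), (12, 1500, 1500)];
    let best = best_slots_to_shrink(&infos);
    // slot 11 has 1500 dead bytes, slot 10 has 100, slot 12 has none
    assert_eq!(best, vec![(11, 2000, 1500), (10, 1000, 100), (12, 1500, 0)]);
}
```

The comparator `b.2.cmp(&a.2)` compares the third tuple element in reverse, which is what puts the largest dead-byte counts at the front of the list.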
@@ -396,7 +406,12 @@ impl AccountsDb {
         self.shrink_ancient_stats
             .slots_considered
             .fetch_add(sorted_slots.len() as u64, Ordering::Relaxed);
-        let ancient_slot_infos = self.collect_sort_filter_ancient_slots(sorted_slots, &tuning);
+        let mut ancient_slot_infos = self.collect_sort_filter_ancient_slots(sorted_slots, &tuning);
+
+        std::mem::swap(
+            &mut *self.best_ancient_slots_to_shrink.write().unwrap(),
+            &mut ancient_slot_infos.best_slots_to_shrink,
+        );

         if ancient_slot_infos.all_infos.is_empty() {
             return; // nothing to do
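The `std::mem::swap` publishes the freshly computed candidate list into the shared field and takes back the previously published one in a single write-lock acquisition, with no cloning. A minimal sketch of that pattern, assuming a `RwLock<Vec<_>>` field; the `Db` struct and `publish` method here are hypothetical stand-ins, not the PR's API:

```rust
use std::mem;
use std::sync::RwLock;

// Hypothetical container standing in for AccountsDb; only the field's role
// (a lock-protected list of (slot, capacity, dead_bytes) tuples) is taken from the diff.
struct Db {
    best_ancient_slots_to_shrink: RwLock<Vec<(u64, u64, u64)>>,
}

impl Db {
    /// Publish a freshly computed candidate list while taking ownership of the
    /// previously published one, all under one write-lock acquisition.
    fn publish(&self, fresh: &mut Vec<(u64, u64, u64)>) {
        mem::swap(&mut *self.best_ancient_slots_to_shrink.write().unwrap(), fresh);
    }
}

fn main() {
    let db = Db { best_ancient_slots_to_shrink: RwLock::new(vec![(1, 10, 5)]) };
    let mut fresh = vec![(2, 20, 15), (3, 30, 25)];
    db.publish(&mut fresh);
    // the lock now holds the new list; the old one ended up in `fresh`
    assert_eq!(*db.best_ancient_slots_to_shrink.read().unwrap(), vec![(2, 20, 15), (3, 30, 25)]);
    assert_eq!(fresh, vec![(1, 10, 5)]);
}
```

Swapping rather than assigning means the previous allocation is moved into the local value and dropped after the write lock has been released.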
Review comment: i think they are sorted now smallest dead bytes to largest? I don't see where we are getting that sort order just from the diffs here and I don't quite remember.

Review comment: I just added sorting in another commit. I had to add a field to the tuple, so that we actually sort the elements by the amount of dead bytes.
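The disagreement in this thread is whether the candidates come out ordered by storage capacity or by dead bytes, which is why a dead-bytes field was added to the tuple and sorted on explicitly. A small illustration (hypothetical numbers, not taken from the PR) of how the two orderings can differ:

```rust
fn main() {
    // Hypothetical (slot, capacity, alive_bytes) entries: slot 1 has the larger
    // capacity, but slot 2 has more dead bytes (capacity - alive_bytes).
    let a = (1u64, 1_000u64, 950u64); // dead = 50
    let b = (2u64, 600u64, 100u64); // dead = 500

    let mut by_capacity = vec![a, b];
    by_capacity.sort_by(|x, y| y.1.cmp(&x.1)); // largest capacity first: slot 1

    let mut by_dead_bytes = vec![a, b];
    by_dead_bytes.sort_by(|x, y| (y.1 - y.2).cmp(&(x.1 - x.2))); // largest dead bytes first: slot 2

    assert_eq!(by_capacity[0].0, 1);
    assert_eq!(by_dead_bytes[0].0, 2);
}
```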