
test: reshardingV3 test loop for buffered receipts #12490

Merged: 10 commits into near:master from resharding-test-receipt-buffered, Nov 22, 2024

Conversation

@Trisfald Trisfald (Contributor) commented Nov 20, 2024

New test loop test for buffered receipts during resharding.

  • Added a new static shard to the starting shard layout to facilitate cross-shard testing.
    New state before resharding:
    accounts for shard s2.v3: ["account0"]
    accounts for shard s1.v3: ["account1", "account2"]
    accounts for shard s0.v3: ["account3", "account4", "account5", "account6", "account7", "near"]

  • test_resharding_v3_split_parent_buffered_receipts fails when buffered receipts are present in the resharded shard; it passes when the target shard does not change at the epoch boundary.
  • Changed loop_action to execute multiple functions rather than just one (see the sketch below).
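
A minimal sketch of the multi-function loop_action, assuming the test-loop harness accepts a single boxed closure per iteration; the `LoopAction` alias and `compose_loop_actions` helper are illustrative names, not the actual nearcore test-loop API:

```rust
// Illustrative composition of several per-iteration checks into one action.
type LoopAction = Box<dyn Fn()>;

fn compose_loop_actions(actions: Vec<LoopAction>) -> LoopAction {
    Box::new(move || {
        for action in &actions {
            action(); // run every registered action on each loop iteration
        }
    })
}

fn main() {
    let composed = compose_loop_actions(vec![
        Box::new(|| println!("check delayed receipts")),
        Box::new(|| println!("check buffered receipts")),
    ]);
    composed(); // both checks run in registration order
}
```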

@Trisfald Trisfald requested a review from a team as a code owner November 20, 2024 15:22

codecov bot commented Nov 21, 2024

Codecov Report

Attention: Patch coverage is 0% with 4 lines in your changes missing coverage. Please review.

Project coverage is 69.96%. Comparing base (7da5c8f) to head (9a458b8).
Report is 1 commit behind head on master.

Files with missing lines                      Patch %   Lines
integration-tests/src/test_loop/builder.rs    0.00%     4 Missing ⚠️
Additional details and impacted files

@@           Coverage Diff           @@
##           master   #12490   +/-   ##
=======================================
  Coverage   69.96%   69.96%
=======================================
  Files         838      838
  Lines      169135   169139      +4
=======================================
+ Hits       118338   118342      +4
- Misses      45666    45669      +3
+ Partials     5131     5128      -3
Flag                      Coverage   Δ
backward-compatibility    0.16%      <ø> (ø)
db-migration              0.16%      <ø> (?)
genesis-check             1.28%      <ø> (ø)
linux                     69.28%     <0.00%> (+<0.01%) ⬆️
linux-nightly             69.49%     <0.00%> (-0.01%) ⬇️
macos                     51.09%     <ø> (-0.01%) ⬇️
pytests                   1.59%      <ø> (ø)
sanity-checks             1.39%      <ø> (ø)
unittests                 69.79%     <0.00%> (+<0.01%) ⬆️
upgradability             0.20%      <ø> (ø)

Flags with carried forward coverage won't be shown.


@Trisfald Trisfald (Contributor, Author) commented:

@shreyan-gupta I've added the outgoing receipts tests (I hope I'm testing them correctly) and they seem to work fine out of the box.

@shreyan-gupta shreyan-gupta (Contributor) left a comment:

Absolutely lovely structuring! Makes it so much easier to read and figure out what's going on while making the components reusable!

```rust
    shard_id,
    &epoch_manager.get_shard_layout(&tip.epoch_id).unwrap(),
);
let congestion_info = &client_actor
```

@shreyan-gupta (Contributor) commented:

Along with the checks for congestion_info, I think we should independently verify whether the correct trie key entries are present.

  • For delayed receipts we can check the DelayedReceiptIndices trie key and see whether the start_index differs from the end_index.
  • For buffered receipts it would be the BufferedReceiptIndices trie key, and we can do something similar for the expected shard (see the sketch below).
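
A hedged sketch of these checks, assuming the values read from the two trie keys expose start and end positions; the `QueueIndices` struct, its field names, and the helper functions below are illustrative stand-ins, not the actual nearcore types:

```rust
use std::collections::HashMap;

// Stand-in for the value stored under the DelayedReceiptIndices /
// BufferedReceiptIndices trie keys; real nearcore field names may differ.
struct QueueIndices {
    start_index: u64,
    end_index: u64,
}

/// The delayed receipt queue is non-empty when start and end indices differ.
fn delayed_receipts_present(indices: &QueueIndices) -> bool {
    indices.start_index != indices.end_index
}

/// Same idea for buffered receipts, keyed by the expected target shard.
fn buffered_receipts_present(by_shard: &HashMap<u64, QueueIndices>, target_shard: u64) -> bool {
    by_shard
        .get(&target_shard)
        .map_or(false, |i| i.start_index != i.end_index)
}

fn main() {
    let delayed = QueueIndices { start_index: 3, end_index: 5 };
    assert!(delayed_receipts_present(&delayed)); // 3 != 5: receipts pending

    let mut buffered = HashMap::new();
    buffered.insert(1u64, QueueIndices { start_index: 0, end_index: 0 });
    assert!(!buffered_receipts_present(&buffered, 1)); // empty queue for shard 1
}
```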

@shreyan-gupta (Contributor) commented:

Given this difference in checks, we can consider splitting the delayed receipt and buffered receipt checks into separate functions if that looks more appropriate; otherwise I'm fine with having them in the same function as well.

@Trisfald (Contributor, Author) commented:

Interesting! Let me see if I can expand this check easily (here or in another PR).

@Trisfald (Contributor, Author) commented:

Done!

```rust
txs_vec.push((tx.get_hash(), tip.height));
txs.set(txs_vec);
submit_tx(&node_datas, &"account0".parse().unwrap(), tx);
for _ in 0..(CALLS_PER_BLOCK_HEIGHT / signer_ids.len()
    + (CALLS_PER_BLOCK_HEIGHT % signer_ids.len()).min(1))
```

@shreyan-gupta (Contributor) commented:

Would it be beneficial to round-robin across signer_ids?

```rust
for i in 0..CALLS_PER_BLOCK_HEIGHT {
    let signer_id = &signer_ids[i % signer_ids.len()];
    // ...
}
```

@Trisfald (Contributor, Author) commented:

It should not make any difference I guess 🤔

@shreyan-gupta (Contributor) commented:

The current logic `0..(CALLS_PER_BLOCK_HEIGHT / signer_ids.len() + (CALLS_PER_BLOCK_HEIGHT % signer_ids.len()).min(1))` can exceed CALLS_PER_BLOCK_HEIGHT, right?

@Trisfald (Contributor, Author) commented:

It does, yes; it's trying to be fair to all accounts. Maybe it should respect CALLS_PER_BLOCK_HEIGHT more strictly rather than fairness, to make the expectations clearer. Let me modify this part.
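
For concreteness, a worked example of the overshoot, assuming the range body runs once per signer; the constant value below is illustrative:

```rust
const CALLS_PER_BLOCK_HEIGHT: usize = 5; // illustrative value

fn main() {
    let signer_count = 2;
    // Ceiling division: 5 / 2 + min(5 % 2, 1) = 2 + 1 = 3 calls per signer.
    let per_signer = CALLS_PER_BLOCK_HEIGHT / signer_count
        + (CALLS_PER_BLOCK_HEIGHT % signer_count).min(1);
    // Across two signers that is 6 calls, exceeding CALLS_PER_BLOCK_HEIGHT.
    assert!(per_signer * signer_count > CALLS_PER_BLOCK_HEIGHT);
}
```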

@Trisfald Trisfald added this pull request to the merge queue Nov 22, 2024
Merged via the queue into near:master with commit dcc67b2 Nov 22, 2024
28 checks passed
@Trisfald Trisfald deleted the resharding-test-receipt-buffered branch November 22, 2024 14:22