feat(bundle-proposer): static batch per bundle #1566
Conversation
Walkthrough

The pull request includes updates to several files, primarily focusing on versioning and configuration changes.
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (5)
rollup/conf/config.json (2)
Line range hint 1-109: Security: Avoid hardcoding private keys in configuration files.

Multiple private keys are hardcoded in the configuration:
- gas_oracle_sender_signer_config.private_key
- commit_sender_signer_config.private_key
- finalize_sender_signer_config.private_key
While these appear to be test values (repeating patterns like "13131..."), it's crucial to:
- Use environment variables or secure key management systems
- Document the expected format and requirements
- Add validation for production deployments
Consider refactoring to use environment variables:
"private_key_signer_config": { - "private_key": "1313131313131313131313131313131313131313131313131313131313131313" + "private_key": "${ORACLE_SIGNER_KEY}" }
Chain monitor configuration needs documentation and production values
Based on the codebase analysis, the chain monitor is a critical component for batch status verification with the following findings:
The configuration in rollup/internal/config/relayer.go lacks inline documentation for its fields:
- Enabled: Controls chain monitor activation
- TimeOut: HTTP request timeout in seconds
- TryTimes: Number of retry attempts
- BaseURL: Chain monitor API endpoint

Current configuration in rollup/conf/config.json uses development values:
- Base URL points to localhost:8750
- Retry and timeout values may need adjustment for production
The component is used for critical operations:
- Verifies batch status before finalization
- Affects bundle processing
- Has metrics for monitoring failures
Please:
- Add documentation for each configuration field in rollup/internal/config/relayer.go (see the sketch below)
- Update the base URL in config.json to use a production endpoint
- Review timeout and retry values based on production requirements
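As a hedged illustration of the documentation being requested, the fields could be annotated along these lines (the field names come from the review above; the struct name, JSON tags, and comment wording are assumptions rather than the actual contents of relayer.go):

```go
package config

// ChainMonitor holds the chain monitor client settings used during batch
// status verification. (Sketch only; the real struct in relayer.go may differ.)
type ChainMonitor struct {
	// Enabled controls chain monitor activation; when false, batch status
	// checks against the chain monitor are skipped.
	Enabled bool `json:"enabled"`
	// TimeOut is the HTTP request timeout in seconds for chain monitor calls.
	TimeOut int `json:"timeout"`
	// TryTimes is the number of retry attempts before a request is considered failed.
	TryTimes int `json:"try_times"`
	// BaseURL is the chain monitor API endpoint (use a production URL outside local testing).
	BaseURL string `json:"base_url"`
}
```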
🔗 Analysis chain
Line range hint 64-69: Document chain monitor configuration requirements.

The new chain monitor configuration needs:
- Documentation for the purpose and requirements of each setting
- Production-ready base URL (currently set to localhost)
- Guidance on appropriate timeout and retry values
Run the following to check for related documentation:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Description: Look for chain monitor documentation

# Search for chain monitor documentation
rg -A 10 "chain.*monitor" --type md

# Check for related configuration documentation
fd -e md | xargs rg "Configuration.*monitor|Monitor.*configuration" -A 10
```

Length of output: 207
Script:

```bash
#!/bin/bash
# Let's try a broader search for configuration documentation and chain monitor references

# Search for any chain monitor related files
fd -t f | rg -i "chain.*monitor|monitor.*chain"

# Search for configuration documentation in markdown and source files
rg -i "chain.*monitor|monitor.*chain" -A 5

# Look for configuration examples or templates
fd "config" | rg "json|yaml|yml|example|template"
```

Length of output: 12688
rollup/internal/controller/watcher/bundle_proposer.go (1)
27-27: Consider adding alternative batch processing guarantees.

With the removal of the timeout-based bundle proposal mechanism, consider implementing alternative guarantees to ensure batches don't get stuck in the system (one option is sketched below):
- Add monitoring alerts for batch processing delays
- Implement a background health check for stalled batches
- Consider adding a max age limit for unbundled batches
This would help maintain system reliability while keeping the simplified configuration.
Also applies to: 129-130, 166-167
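A minimal sketch of the background health-check idea, assuming hypothetical hooks for looking up the oldest unbundled batch and for alerting (none of these names exist in the codebase):

```go
package watcher

import (
	"context"
	"time"
)

// stalledBatchChecker is a hypothetical background guard: it periodically looks
// for batches that have stayed unbundled longer than maxAge and raises an alert.
type stalledBatchChecker struct {
	maxAge          time.Duration
	oldestUnbundled func(ctx context.Context) (time.Time, bool, error) // injected lookup
	alert           func(age time.Duration)                            // injected alerting hook
}

func (c *stalledBatchChecker) run(ctx context.Context, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			createdAt, found, err := c.oldestUnbundled(ctx)
			if err != nil || !found {
				continue // nothing to check or lookup failed; try again next tick
			}
			if age := time.Since(createdAt); age > c.maxAge {
				c.alert(age) // batch has exceeded the allowed unbundled age
			}
		}
	}
}
```

Injecting the lookup and alert functions keeps the checker testable and independent of the database and metrics layers.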
rollup/internal/controller/watcher/bundle_proposer_test.go (1)
45-48: Update test case names to match the new field name.

The test case names still reference the old field name maxBatchNumPerBundle while the field has been renamed to batchNumPerBundle. For consistency, consider renaming these test cases:
- "maxBatchNumPerBundleIs0" → "batchNumPerBundleIs0"
- "maxBatchNumPerBundleIs1" → "batchNumPerBundleIs1"
Also applies to: 52-55
rollup/tests/rollup_test.go (1)
116-116: Consider adding test cases for bundle size limits.

The current value of 1000000 for BatchNumPerBundle is extremely high and might not effectively test the bundle size limiting behavior. Consider adding test cases that:
- Verify behavior when the bundle size limit is reached
- Test edge cases with minimum and maximum bundle sizes
Example test case structure:
```go
func TestBundleSizeLimits(t *testing.T) {
	testCases := []struct {
		name                     string
		batchNumPerBundle        int
		expectedBundles          int
		expectedBatchesPerBundle int
	}{
		{"single_batch_bundle", 1, 3, 1},
		{"small_bundle", 2, 2, 2},
		{"max_size_bundle", 3, 1, 3},
	}
	// ... test implementation
}
```

Also applies to: 282-282
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (6)
- common/version/version.go (1 hunks)
- rollup/conf/config.json (1 hunks)
- rollup/internal/config/l2.go (1 hunks)
- rollup/internal/controller/watcher/bundle_proposer.go (4 hunks)
- rollup/internal/controller/watcher/bundle_proposer_test.go (3 hunks)
- rollup/tests/rollup_test.go (2 hunks)
✅ Files skipped from review due to trivial changes (1)
- common/version/version.go
🔇 Additional comments (8)
rollup/internal/config/l2.go (1)
54-54: Potential breaking change in configuration schema

The renaming from MaxBatchNumPerBundle to BatchNumPerBundle suggests a semantic change from a maximum limit to a fixed value. This change:
- Will break existing configuration files
- Indicates a fundamental shift in batch handling from variable to static sizing
Let's verify the impact:
Consider the following:
- If this is intentionally a breaking change, the PR should be labeled with breaking-change
- Document the rationale for switching from variable to static batch sizes
- Provide migration guidelines for users to update their configs (a backward-compatibility sketch follows below)
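For illustration only, a rough sketch of one backward-compatible option, assuming the loader keeps accepting the deprecated key for a transition period (the fallback logic below is an assumption, not part of this PR):

```go
package config

// BundleProposerConfig sketch: accepts both the new and the deprecated key so
// existing config files keep loading. Field and key names mirror the review
// discussion; the fallback logic itself is an assumption, not code from this PR.
type BundleProposerConfig struct {
	BatchNumPerBundle uint64 `json:"batch_num_per_bundle"`
	// Deprecated: use batch_num_per_bundle instead.
	MaxBatchNumPerBundle uint64 `json:"max_batch_num_per_bundle,omitempty"`
}

// Normalize resolves the deprecated key into the new one after unmarshalling.
func (c *BundleProposerConfig) Normalize() {
	if c.BatchNumPerBundle == 0 && c.MaxBatchNumPerBundle != 0 {
		c.BatchNumPerBundle = c.MaxBatchNumPerBundle
	}
}
```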
rollup/conf/config.json (1)
106-106: Verify the implications of fixed batch size.

The rename from max_batch_num_per_bundle to batch_num_per_bundle suggests a change from dynamic sizing with an upper limit to a fixed size. This could impact system flexibility and performance.
Run the following to verify the impact:
✅ Verification successful
Parameter rename from max_batch_num_per_bundle to batch_num_per_bundle is safe

The codebase already uses batch_num_per_bundle consistently in the configuration and implementation:
- Configuration is defined in rollup/internal/config/l2.go using BatchNumPerBundle
- Implementation in rollup/internal/controller/watcher/bundle_proposer.go uses this as a fixed batch size
- No dynamic batch sizing logic was found that would be impacted by this change
The rename appears to be aligning the configuration name with its actual implementation, rather than introducing behavioral changes.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Check for any dynamic batch size logic that might be affected

# Search for related batch size logic
rg -A 5 "batch.*bundle" --type go

# Look for performance-related comments or documentation
rg -A 5 "batch.*performance|bundle.*performance" --type md
```

Length of output: 207
Script:
```bash
#!/bin/bash
# Let's try a different approach to find batch size related code

# Search for files containing batch or bundle related code
fd -e go -e rs -e ts -e js | xargs rg "batch.*bundle|bundle.*batch" -l

# Search specifically for the old parameter name
rg "max_batch_num_per_bundle"

# Search for any batch size related configuration usage
rg "batch.*per.*bundle" -A 5

# Look for any test files that might verify batch behavior
fd test -e go -e rs -e ts -e js | xargs rg "batch.*bundle"
```

Length of output: 2827
rollup/internal/controller/watcher/bundle_proposer.go (2)
42-42: LGTM: Constructor changes are well-structured.
The initialization is correct, and the logging statement provides good visibility into the configuration.
Also applies to: 45-51
27-27: Verify the impact of removing bundle timeout mechanism.

The removal of bundleTimeoutSec in favor of a static batchNumPerBundle simplifies the configuration but might affect the system's ability to handle slow or stalled batch processing.
Let's analyze the usage of the removed timeout mechanism:
✅ Verification successful
The previous results show that bundle timeouts were used in tests and metrics, but we need to understand the actual implementation changes to verify the impact. These new commands will help us analyze the core bundle proposal logic and error handling.
The previous results show extensive error handling and database operations related to bundles, but we need to understand the core bundle proposer logic to verify if the timeout removal is safe. These new commands will help us analyze the actual implementation of bundle proposal and time-based conditions.
The previous commands didn't yield results, possibly due to pattern matching issues. Let's examine the complete implementation files to understand how the bundle proposal logic works and how the timeout removal affects it.
Based on the complete implementation files, I can now provide the final response:
Removal of bundle timeout is safe and improves the design
The removal of bundleTimeoutSec in favor of a fixed batchNumPerBundle is a safe simplification because:
- The bundle proposal logic now relies solely on batch count, which is more deterministic and easier to reason about.
- The test cases show that timeout-based bundling was primarily used as a fallback mechanism, and the main control was always the batch count.
- The implementation in proposeBundle() shows that bundle creation is driven by:
  - Reaching the configured batch count (batchNumPerBundle)
  - Hardfork boundaries
  - Codec version compatibility

The timeout metric bundleFirstBlockTimeoutReached remains but is now used to track when a bundle is created due to reaching the batch count limit, rather than a time-based condition.
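To make the described trigger conditions concrete, here is a minimal hypothetical sketch of the decision shape (the function and parameter names are placeholders for illustration, not the actual proposeBundle() code):

```go
package watcher

// shouldSealBundle sketches the trigger conditions described above: a bundle
// is proposed once the configured batch count is reached, or earlier when a
// hardfork boundary or a codec-version mismatch forces a cut.
func shouldSealBundle(batchCount, batchNumPerBundle uint64, hardforkBoundary, codecMismatch bool) bool {
	if hardforkBoundary || codecMismatch {
		return batchCount > 0 // seal whatever accumulated before the boundary
	}
	return batchCount >= batchNumPerBundle
}
```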
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Search for references to bundle timeouts in tests and related files
rg -A 5 "bundleTimeout|bundle.*timeout" --type go

# Look for potential timeout-related error handling
rg -A 5 "context.*deadline|context.*timeout" --type go

# Check for any timing-related metrics that might be affected
rg "prometheus.*Counter.*bundle.*time|prometheus.*Gauge.*bundle.*time" --type go
```

Length of output: 5840
Script:
```bash
#!/bin/bash
# Let's analyze the bundle proposal logic changes
rg -A 10 "func \(bp \*BundleProposer\) proposeBundle" --type go

# Check for any bundle timeout related logic in the implementation
rg -A 10 "shouldProposeBundle|shouldPropose" --type go

# Look for any bundle-related error handling or conditions
rg -A 5 "ErrBundle|bundle.*err" --type go
```

Length of output: 40463
Script:
```bash
#!/bin/bash
# Let's analyze the bundle proposer implementation to understand the timeout removal
ast-grep --pattern 'func (bp *BundleProposer) proposeBundle($$$)'

# Check for any time-based conditions in the bundle proposer
rg -A 10 "time\." rollup/internal/controller/watcher/bundle_proposer.go

# Look for the shouldPropose implementation
ast-grep --pattern 'func (bp *BundleProposer) shouldPropose($$$)'
```

Length of output: 209
Script:
```bash
#!/bin/bash
# Let's look at the entire bundle proposer file to understand the changes
cat rollup/internal/controller/watcher/bundle_proposer.go

# Check for any tests that verify bundle proposal behavior
cat rollup/internal/controller/watcher/bundle_proposer_test.go
```

Length of output: 14023
rollup/internal/controller/watcher/bundle_proposer_test.go (3)
118-118: LGTM: Configuration update is consistent.

The BundleProposer configuration correctly uses the renamed field BatchNumPerBundle.
207-207: LGTM: Configuration update is consistent.

The BundleProposer configuration correctly uses the renamed field BatchNumPerBundle in the hardfork test.
26-26: Verify consistent renaming across the codebase.

Let's verify that all occurrences of maxBatchNumPerBundle have been updated to batchNumPerBundle.
✅ Verification successful
Based on the search results, I can now generate the final response:
Field renaming appears consistent across the codebase
The field has been consistently renamed and used as batchNumPerBundle in all relevant files:
- Configuration: rollup/internal/config/l2.go
- Implementation: rollup/internal/controller/watcher/bundle_proposer.go
- Tests: rollup/internal/controller/watcher/bundle_proposer_test.go and rollup/tests/rollup_test.go

The only occurrences of "maxBatchNumPerBundle" are in test case names, which is acceptable as they are just descriptive strings.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Check for any remaining instances of the old field name
echo "Checking for any remaining instances of 'maxBatchNumPerBundle'..."
rg -i "maxBatchNumPerBundle"

echo "Verifying all instances of 'batchNumPerBundle' in config and related files..."
rg -i "batchNumPerBundle"
```

Length of output: 2117
rollup/tests/rollup_test.go (1)
116-116: LGTM! Parameter rename is consistent.

The rename from MaxBatchNumPerBundle to BatchNumPerBundle aligns with the changes in the bundle proposer implementation.
Also applies to: 282-282
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (3)
rollup/internal/controller/watcher/bundle_proposer.go (1)
161-162: Improve variable naming for clarity.

The variable name batchesThisBundle suggests a maximum block count, but it's being used as a counter. Consider renaming for clarity:

```diff
- batchesThisBundle = uint64(i) // update maxBlocksThisChunk to trigger chunking, because these blocks are the last blocks before the hardfork
+ actualBatchCount = uint64(i) // truncate at hardfork boundary
```

rollup/internal/controller/watcher/bundle_proposer_test.go (2)
45-45: Update test case name to match new terminology

The test case name "maxBatchNumPerBundleIs0" still references the old field name. Consider updating it to "batchNumPerBundleIs0" for consistency with the new terminology.
Line range hint 1-230: Consider documenting the semantic change in batch handling

The transition from maxBatchNumPerBundle to batchNumPerBundle represents a shift from a maximum limit to a fixed value in batch handling. This architectural change might affect:
- How upstream systems interact with the bundle proposer
- The flexibility of bundle size optimization
- Resource allocation and performance characteristics
Consider adding documentation to explain the rationale behind this change and its implications for system behavior.
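As one possible shape for that documentation, a hedged sketch of a doc comment on the renamed field (the wording is a proposal, not text from the repository, and only the renamed field is shown):

```go
package config

// BundleProposerConfig (sketch, showing only the renamed field) illustrates
// the kind of documentation the review suggests adding.
type BundleProposerConfig struct {
	// BatchNumPerBundle is the fixed number of batches placed in each bundle.
	// Unlike the former MaxBatchNumPerBundle, it is not an upper bound: a
	// bundle is proposed once exactly this many batches have accumulated,
	// except when a hardfork boundary or codec-version change forces an
	// earlier cut.
	BatchNumPerBundle uint64 `json:"batch_num_per_bundle"`
}
```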
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (6)
- common/version/version.go (1 hunks)
- rollup/conf/config.json (1 hunks)
- rollup/internal/config/l2.go (1 hunks)
- rollup/internal/controller/watcher/bundle_proposer.go (4 hunks)
- rollup/internal/controller/watcher/bundle_proposer_test.go (3 hunks)
- rollup/tests/rollup_test.go (2 hunks)
✅ Files skipped from review due to trivial changes (1)
- common/version/version.go
🔇 Additional comments (10)
rollup/internal/config/l2.go (2)
54-54: Architectural Concern: Inconsistent Timeout Handling

While ChunkProposerConfig and BatchProposerConfig maintain timeout parameters (chunk_timeout_sec and batch_timeout_sec respectively), the BundleProposerConfig has removed its timeout parameter. This creates an inconsistency in the error handling and retry mechanisms across different proposer types.
Please clarify:
- How will timeouts be handled for bundle proposals?
- Why is this proposer treated differently from chunks and batches?
Let's analyze the usage:
```bash
#!/bin/bash
# Search for timeout handling in proposer implementations
ast-grep --pattern 'func ($x *BundleProposer) $_ {
$$$
timeout
$$$
}'
```
54-54: Breaking Change: Configuration Schema Update

The renaming of MaxBatchNumPerBundle to BatchNumPerBundle and removal of BundleTimeoutSec represent breaking changes that will affect existing deployments. This contradicts the PR objectives, which state "this PR does not introduce any breaking changes".
Consider:
- Adding deprecation notices before removal
- Maintaining backward compatibility
- Adding the breaking-change label to the PR
Let's verify the impact:
rollup/conf/config.json (2)
106-106: Ensure configuration consistency

The bundle_proposer_config section is now significantly simpler compared to related sections like chunk_proposer_config and batch_proposer_config, which still maintain timeout and dynamic sizing configurations.
Let's verify the consistency across proposer configurations:
```bash
#!/bin/bash
# Description: Compare configuration structures across different proposer types

# Search for all proposer configurations
echo "Checking proposer configurations:"
rg -A 10 "proposer_config.*{" --type go

# Look for common patterns in proposer implementations
echo "Checking proposer implementations:"
ast-grep --pattern 'type $proposer struct {
$$$
}'
```
106-106: Verify the implications of configuration changes

The changes to bundle_proposer_config raise some important considerations:
- The removal of bundle_timeout_sec could impact system safety/liveness if timeouts were previously used as a fallback mechanism.
- The renaming from max_batch_num_per_bundle to batch_num_per_bundle suggests a shift from dynamic to static batch sizing.
Let's verify the impact and rationale of these changes:
Consider documenting:
- Why the timeout mechanism was removed and how bundle finalization is now guaranteed
- The rationale for using a static batch size and its implications on system throughput
- How this aligns with the new contract design mentioned in the PR objectives
rollup/internal/controller/watcher/bundle_proposer.go (1)
42-51: LGTM! Constructor properly initialized.

The constructor correctly initializes the new batchNumPerBundle field and includes appropriate logging.
rollup/internal/controller/watcher/bundle_proposer_test.go (3)
118-118: LGTM: Configuration change is consistent
The BundleProposer configuration update correctly uses the new field name and maintains the test's original intent.
207-207: LGTM: Hardfork test configuration is properly updated
The configuration change in the hardfork test is consistent with the new field name while maintaining the original test coverage for cross-hardfork behavior.
26-26: Verify semantic change from maximum limit to fixed value

The rename from maxBatchNumPerBundle to batchNumPerBundle suggests a change in behavior from a maximum limit to a fixed value. This could impact how the bundle proposer handles batch allocation.
✅ Verification successful
Let me gather more information about the actual usage and behavior of this field.
Based on the search results, I can now generate the final response:
The rename from maxBatchNumPerBundle to batchNumPerBundle is consistent with its actual usage

The field is used as a fixed configuration value rather than a maximum limit in the implementation:
- In bundle_proposer.go, it's directly used to determine the number of batches to fetch: batchesThisBundle := p.batchNumPerBundle
- Test cases show it being used with specific values (0, 1, MaxUint64) to control exact bundle sizes
- There are no additional checks against it as a maximum limit
The rename makes the code more accurate by removing the misleading "max" prefix since it represents the actual number of batches per bundle, not an upper bound.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Search for any remaining references to maxBatchNumPerBundle and related comments
# that might need updating to reflect the new semantic meaning
rg -i "maxbatchnumbundle|max.*batch.*bundle" --type go
```

Length of output: 15369
Script:
```bash
#!/bin/bash
# Search for the implementation and usage of batchNumPerBundle
rg -A 5 "batchNumPerBundle" --type go
```

Length of output: 4778
rollup/tests/rollup_test.go (2)
282-282: Use the same constant here for consistency.
This is the same magic number as used in the previous test. Use the suggested constant here as well for consistency.
116-116: Consider extracting the magic number into a constant.

The value 1000000 appears to be a significant configuration parameter that's hardcoded in multiple places. Consider extracting it into a named constant to improve maintainability and make its purpose more explicit.

```diff
+const defaultBatchNumPerBundle = 1000000
+
 func testCommitBatchAndFinalizeBatchOrBundleWithAllCodecVersions(t *testing.T) {
```
Also, please verify if this large value is intentional as it effectively allows a very high number of batches per bundle.
Codecov Report

Attention: Patch coverage is

Additional details and impacted files

```
@@           Coverage Diff            @@
##           develop    #1566   +/-   ##
===========================================
- Coverage    52.39%   52.34%   -0.05%
===========================================
  Files          157      157
  Lines        12647    12634      -13
===========================================
- Hits          6626     6613      -13
  Misses        5447     5447
  Partials       574      574
```

Flags with carried forward coverage won't be shown. Click here to find out more. ☔ View full report in Codecov by Sentry.
Due to requirement changes, this PR is no longer needed, thus closed.
Purpose or design rationale of this PR
This PR makes the bundle-proposer compatible with the new contract design by:
PR title
Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:
Deployment tag versioning
Has tag in common/version.go been updated or have you added the bump-version label to this PR?

Breaking change label

Does this PR have the breaking-change label?

Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Tests