
feat(bundle-proposer): static batch per bundle #1566

Closed

Conversation

colinlyguo
Member

@colinlyguo colinlyguo commented Nov 18, 2024

Purpose or design rationale of this PR

This PR makes bundle-proposer compatible with new contract design by:

  • deleting the timeout feature.
  • making the bundle size static and configurable.

PR title

Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:

  • feat: A new feature

Deployment tag versioning

Has tag in common/version.go been updated or have you added bump-version label to this PR?

  • Yes

Breaking change label

Does this PR have the breaking-change label?

  • No, this PR is not a breaking change

Summary by CodeRabbit

  • New Features

    • Updated version to v4.4.77, reflecting the latest release.
    • Simplified configuration for batch proposals by renaming and consolidating parameters.
  • Bug Fixes

    • Ensured consistent naming across tests and implementation for batch processing parameters.
  • Documentation

    • Improved clarity in configuration settings related to bundle proposals.
  • Tests

    • Updated tests to align with the new naming convention for batch proposal parameters.

@colinlyguo colinlyguo added the bump-version Bump the version tag for deployment label Nov 18, 2024

coderabbitai bot commented Nov 18, 2024

Walkthrough

The pull request includes updates to several files, primarily focusing on versioning and configuration changes. The version.go file is updated to reflect a new version tag, changing from "v4.4.76" to "v4.4.77". Additionally, modifications to the configuration files involve renaming a property and removing another, streamlining the configuration schema. Changes in the BundleProposer struct and related tests reflect these configuration updates, ensuring consistency across the codebase. The overall modifications aim to enhance clarity and maintainability without altering existing functionalities.

Changes

File | Change Summary
common/version/version.go | Updated tag variable from "v4.4.76" to "v4.4.77".
rollup/conf/config.json | Renamed property max_batch_num_per_bundle to batch_num_per_bundle and removed bundle_timeout_sec from bundle_proposer_config.
rollup/internal/config/l2.go | Renamed field MaxBatchNumPerBundle to BatchNumPerBundle and removed BundleTimeoutSec in the BundleProposerConfig struct.
rollup/internal/controller/watcher/bundle_proposer.go | Removed fields maxBatchNumPerBundle and bundleTimeoutSec from the BundleProposer struct; added batchNumPerBundle. Updated related methods accordingly.
rollup/internal/controller/watcher/bundle_proposer_test.go | Renamed maxBatchNumPerBundle to batchNumPerBundle in test functions and updated assertions to match the new naming.
rollup/tests/rollup_test.go | Updated parameter name from MaxBatchNumPerBundle to BatchNumPerBundle in test functions, with value changes from 1000000 to 2 and 1.

Suggested reviewers

  • georgehao
  • 0xmountaintop
  • Thegaram
  • colinlyguo



📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 15ff972 and 3f684ed.

📒 Files selected for processing (2)
  • common/version/version.go (1 hunks)
  • rollup/conf/config.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • common/version/version.go
  • rollup/conf/config.json




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (5)
rollup/conf/config.json (2)

Line range hint 1-109: Security: Avoid hardcoding private keys in configuration files.

Multiple private keys are hardcoded in the configuration:

  • gas_oracle_sender_signer_config.private_key
  • commit_sender_signer_config.private_key
  • finalize_sender_signer_config.private_key

While these appear to be test values (repeating patterns like "13131..."), it's crucial to:

  1. Use environment variables or secure key management systems
  2. Document the expected format and requirements
  3. Add validation for production deployments

Consider refactoring to use environment variables:

 "private_key_signer_config": {
-  "private_key": "1313131313131313131313131313131313131313131313131313131313131313"
+  "private_key": "${ORACLE_SIGNER_KEY}"
 }

Chain monitor configuration needs documentation and production values

Based on the codebase analysis, the chain monitor is a critical component for batch status verification with the following findings:

  1. The configuration in rollup/internal/config/relayer.go lacks inline documentation for its fields:

    • Enabled: Controls chain monitor activation
    • TimeOut: HTTP request timeout in seconds
    • TryTimes: Number of retry attempts
    • BaseURL: Chain monitor API endpoint
  2. Current configuration in rollup/conf/config.json uses development values:

    • Base URL points to localhost:8750
    • Retry and timeout values may need adjustment for production
  3. The component is used for critical operations:

    • Verifies batch status before finalization
    • Affects bundle processing
    • Has metrics for monitoring failures

Please:

  1. Add documentation for each configuration field in rollup/internal/config/relayer.go
  2. Update the base URL in config.json to use production endpoint
  3. Review timeout and retry values based on production requirements
🔗 Analysis chain

Line range hint 64-69: Document chain monitor configuration requirements.

The new chain monitor configuration needs:

  1. Documentation for the purpose and requirements of each setting
  2. Production-ready base URL (currently set to localhost)
  3. Guidance on appropriate timeout and retry values

Run the following to check for related documentation:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Look for chain monitor documentation

# Search for chain monitor documentation
rg -A 10 "chain.*monitor" --type md

# Check for related configuration documentation
fd -e md | xargs rg "Configuration.*monitor|Monitor.*configuration" -A 10

Length of output: 207


Script:

#!/bin/bash
# Let's try a broader search for configuration documentation and chain monitor references

# Search for any chain monitor related files
fd -t f | rg -i "chain.*monitor|monitor.*chain"

# Search for configuration documentation in markdown and source files
rg -i "chain.*monitor|monitor.*chain" -A 5

# Look for configuration examples or templates
fd "config" | rg "json|yaml|yml|example|template"

Length of output: 12688

rollup/internal/controller/watcher/bundle_proposer.go (1)

27-27: Consider adding alternative batch processing guarantees.

With the removal of the timeout-based bundle proposal mechanism, consider implementing alternative guarantees to ensure batches don't get stuck in the system:

  1. Add monitoring alerts for batch processing delays
  2. Implement a background health check for stalled batches
  3. Consider adding a max age limit for unbundled batches

This would help maintain system reliability while keeping the simplified configuration.

Also applies to: 129-130, 166-167

rollup/internal/controller/watcher/bundle_proposer_test.go (1)

45-48: Update test case names to match the new field name.

The test case names still reference the old field name maxBatchNumPerBundle while the field has been renamed to batchNumPerBundle. For consistency, consider renaming these test cases:

  • "maxBatchNumPerBundleIs0" → "batchNumPerBundleIs0"
  • "maxBatchNumPerBundleIs1" → "batchNumPerBundleIs1"

Also applies to: 52-55

rollup/tests/rollup_test.go (1)

116-116: Consider adding test cases for bundle size limits.

The current value of 1000000 for BatchNumPerBundle is extremely high and might not effectively test the bundle size limiting behavior. Consider adding test cases that:

  1. Verify behavior when the bundle size limit is reached
  2. Test edge cases with minimum and maximum bundle sizes

Example test case structure:

func TestBundleSizeLimits(t *testing.T) {
    testCases := []struct {
        name            string
        batchNumPerBundle int
        expectedBundles int
        expectedBatchesPerBundle int
    }{
        {"single_batch_bundle", 1, 3, 1},
        {"small_bundle", 2, 2, 2},
        {"max_size_bundle", 3, 1, 3},
    }
    // ... test implementation
}

Also applies to: 282-282

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 54d8236 and 0b33b81.

📒 Files selected for processing (6)
  • common/version/version.go (1 hunks)
  • rollup/conf/config.json (1 hunks)
  • rollup/internal/config/l2.go (1 hunks)
  • rollup/internal/controller/watcher/bundle_proposer.go (4 hunks)
  • rollup/internal/controller/watcher/bundle_proposer_test.go (3 hunks)
  • rollup/tests/rollup_test.go (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • common/version/version.go
🔇 Additional comments (8)
rollup/internal/config/l2.go (1)

54-54: ⚠️ Potential issue

Potential breaking change in configuration schema

The renaming from MaxBatchNumPerBundle to BatchNumPerBundle suggests a semantic change from a maximum limit to a fixed value. This change:

  1. Will break existing configuration files
  2. Indicates a fundamental shift in batch handling from variable to static sizing

Let's verify the impact:

Consider the following:

  1. If this is intentionally a breaking change, the PR should be labeled with breaking-change
  2. Document the rationale for switching from variable to static batch sizes
  3. Provide migration guidelines for users to update their configs
rollup/conf/config.json (1)

106-106: Verify the implications of fixed batch size.

The rename from max_batch_num_per_bundle to batch_num_per_bundle suggests a change from dynamic sizing with an upper limit to a fixed size. This could impact system flexibility and performance.

Run the following to verify the impact:

✅ Verification successful

Parameter rename from max_batch_num_per_bundle to batch_num_per_bundle is safe

The codebase already uses batch_num_per_bundle consistently in the configuration and implementation:

  • Configuration is defined in rollup/internal/config/l2.go using BatchNumPerBundle
  • Implementation in rollup/internal/controller/watcher/bundle_proposer.go uses this as a fixed batch size
  • No dynamic batch sizing logic was found that would be impacted by this change

The rename appears to be aligning the configuration name with its actual implementation, rather than introducing behavioral changes.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for any dynamic batch size logic that might be affected

# Search for related batch size logic
rg -A 5 "batch.*bundle" --type go

# Look for performance-related comments or documentation
rg -A 5 "batch.*performance|bundle.*performance" --type md

Length of output: 207


Script:

#!/bin/bash
# Let's try a different approach to find batch size related code

# Search for files containing batch or bundle related code
fd -e go -e rs -e ts -e js | xargs rg "batch.*bundle|bundle.*batch" -l

# Search specifically for the old parameter name
rg "max_batch_num_per_bundle"

# Search for any batch size related configuration usage
rg "batch.*per.*bundle" -A 5

# Look for any test files that might verify batch behavior
fd test -e go -e rs -e ts -e js | xargs rg "batch.*bundle"

Length of output: 2827

rollup/internal/controller/watcher/bundle_proposer.go (2)

42-42: LGTM: Constructor changes are well-structured.

The initialization is correct, and the logging statement provides good visibility into the configuration.

Also applies to: 45-51


27-27: Verify the impact of removing bundle timeout mechanism.

The removal of bundleTimeoutSec in favor of a static batchNumPerBundle simplifies the configuration but might affect the system's ability to handle slow or stalled batch processing.

Let's analyze the usage of the removed timeout mechanism:

✅ Verification successful

The previous results show that bundle timeouts were used in tests and metrics, but we need to understand the actual implementation changes to verify the impact. These new commands will help us analyze the core bundle proposal logic and error handling.


The previous results show extensive error handling and database operations related to bundles, but we need to understand the core bundle proposer logic to verify if the timeout removal is safe. These new commands will help us analyze the actual implementation of bundle proposal and time-based conditions.


The previous commands didn't yield results, possibly due to pattern matching issues. Let's examine the complete implementation files to understand how the bundle proposal logic works and how the timeout removal affects it.


Based on the complete implementation files, I can now provide the final response:

Removal of bundle timeout is safe and improves the design

The removal of bundleTimeoutSec in favor of a fixed batchNumPerBundle is a safe simplification because:

  1. The bundle proposal logic now relies solely on batch count, which is more deterministic and easier to reason about.
  2. The test cases show that timeout-based bundling was primarily used as a fallback mechanism, and the main control was always the batch count.
  3. The implementation in proposeBundle() shows that bundle creation is driven by:
    • Reaching the configured batch count (batchNumPerBundle)
    • Hardfork boundaries
    • Codec version compatibility

The timeout metric bundleFirstBlockTimeoutReached remains but is now used to track when a bundle is created due to reaching the batch count limit, rather than a time-based condition.
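The batch-count-driven logic described above can be sketched as follows (a hypothetical simplification: bundleSize and the codec-version slice are illustrative, not the actual proposeBundle implementation):

```go
package main

import "fmt"

// bundleSize decides how many pending batches go into the next bundle:
// at most batchNumPerBundle, truncated earlier at the first codec-version
// change (a hardfork boundary), mirroring the decision drivers listed above.
func bundleSize(codecVersions []int, batchNumPerBundle int) int {
	n := len(codecVersions)
	if n > batchNumPerBundle {
		n = batchNumPerBundle
	}
	for i := 1; i < n; i++ {
		if codecVersions[i] != codecVersions[0] {
			return i // cut the bundle before the hardfork boundary
		}
	}
	return n
}

func main() {
	// Five pending batches; a hardfork switches the codec after batch 3.
	fmt.Println(bundleSize([]int{4, 4, 4, 5, 5}, 10)) // prints 3
	// The static limit caps the bundle even with a uniform codec.
	fmt.Println(bundleSize([]int{4, 4, 4, 4, 4}, 2)) // prints 2
}
```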

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for references to bundle timeouts in tests and related files
rg -A 5 "bundleTimeout|bundle.*timeout" --type go

# Look for potential timeout-related error handling
rg -A 5 "context.*deadline|context.*timeout" --type go

# Check for any timing-related metrics that might be affected
rg "prometheus.*Counter.*bundle.*time|prometheus.*Gauge.*bundle.*time" --type go

Length of output: 5840


Script:

#!/bin/bash
# Let's analyze the bundle proposal logic changes
rg -A 10 "func \(bp \*BundleProposer\) proposeBundle" --type go

# Check for any bundle timeout related logic in the implementation
rg -A 10 "shouldProposeBundle|shouldPropose" --type go

# Look for any bundle-related error handling or conditions
rg -A 5 "ErrBundle|bundle.*err" --type go

Length of output: 40463


Script:

#!/bin/bash
# Let's analyze the bundle proposer implementation to understand the timeout removal
ast-grep --pattern 'func (bp *BundleProposer) proposeBundle($$$)'

# Check for any time-based conditions in the bundle proposer
rg -A 10 "time\." rollup/internal/controller/watcher/bundle_proposer.go

# Look for the shouldPropose implementation
ast-grep --pattern 'func (bp *BundleProposer) shouldPropose($$$)'

Length of output: 209


Script:

#!/bin/bash
# Let's look at the entire bundle proposer file to understand the changes
cat rollup/internal/controller/watcher/bundle_proposer.go

# Check for any tests that verify bundle proposal behavior
cat rollup/internal/controller/watcher/bundle_proposer_test.go

Length of output: 14023

rollup/internal/controller/watcher/bundle_proposer_test.go (3)

118-118: LGTM: Configuration update is consistent.

The BundleProposer configuration correctly uses the renamed field BatchNumPerBundle.


207-207: LGTM: Configuration update is consistent.

The BundleProposer configuration correctly uses the renamed field BatchNumPerBundle in the hardfork test.


26-26: Verify consistent renaming across the codebase.

Let's verify that all occurrences of maxBatchNumPerBundle have been updated to batchNumPerBundle.

✅ Verification successful

Based on the search results, I can now generate the final response:

Field renaming appears consistent across the codebase

The field has been consistently renamed and used as batchNumPerBundle in all relevant files:

  • Configuration: rollup/internal/config/l2.go
  • Implementation: rollup/internal/controller/watcher/bundle_proposer.go
  • Tests: rollup/internal/controller/watcher/bundle_proposer_test.go and rollup/tests/rollup_test.go

The only occurrences of "maxBatchNumPerBundle" are in test case names which is acceptable as they are just descriptive strings.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for any remaining instances of the old field name

echo "Checking for any remaining instances of 'maxBatchNumPerBundle'..."
rg -i "maxBatchNumPerBundle"

echo "Verifying all instances of 'batchNumPerBundle' in config and related files..."
rg -i "batchNumPerBundle"

Length of output: 2117

rollup/tests/rollup_test.go (1)

116-116: LGTM! Parameter rename is consistent.

The rename from MaxBatchNumPerBundle to BatchNumPerBundle aligns with the changes in the bundle proposer implementation.

Also applies to: 282-282


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (3)
rollup/internal/controller/watcher/bundle_proposer.go (1)

161-162: Improve variable naming for clarity.

The variable name batchesThisBundle suggests a maximum block count, but it's being used as a counter. Consider renaming for clarity:

-           batchesThisBundle = uint64(i) // update maxBlocksThisChunk to trigger chunking, because these blocks are the last blocks before the hardfork
+           actualBatchCount = uint64(i) // truncate at hardfork boundary
rollup/internal/controller/watcher/bundle_proposer_test.go (2)

45-45: Update test case name to match new terminology

The test case name "maxBatchNumPerBundleIs0" still references the old field name. Consider updating it to "batchNumPerBundleIs0" for consistency with the new terminology.


Line range hint 1-230: Consider documenting the semantic change in batch handling

The transition from maxBatchNumPerBundle to batchNumPerBundle represents a shift from a maximum limit to a fixed value in batch handling. This architectural change might affect:

  1. How upstream systems interact with the bundle proposer
  2. The flexibility of bundle size optimization
  3. Resource allocation and performance characteristics

Consider adding documentation to explain the rationale behind this change and its implications for system behavior.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 54d8236 and 0b33b81.

📒 Files selected for processing (6)
  • common/version/version.go (1 hunks)
  • rollup/conf/config.json (1 hunks)
  • rollup/internal/config/l2.go (1 hunks)
  • rollup/internal/controller/watcher/bundle_proposer.go (4 hunks)
  • rollup/internal/controller/watcher/bundle_proposer_test.go (3 hunks)
  • rollup/tests/rollup_test.go (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • common/version/version.go
🔇 Additional comments (10)
rollup/internal/config/l2.go (2)

54-54: Architectural Concern: Inconsistent Timeout Handling

While ChunkProposerConfig and BatchProposerConfig maintain timeout parameters (chunk_timeout_sec and batch_timeout_sec respectively), the BundleProposerConfig has removed its timeout parameter. This creates an inconsistency in the error handling and retry mechanisms across different proposer types.

Please clarify:

  1. How will timeouts be handled for bundle proposals?
  2. Why is this proposer treated differently from chunks and batches?

Let's analyze the usage:

#!/bin/bash
# Search for timeout handling in proposer implementations
ast-grep --pattern 'func ($X *BundleProposer) $_ {
  $$$
  timeout
  $$$
}'

54-54: ⚠️ Potential issue

Breaking Change: Configuration Schema Update

The renaming of MaxBatchNumPerBundle to BatchNumPerBundle and removal of BundleTimeoutSec represent breaking changes that will affect existing deployments. This contradicts the PR objectives which state "this PR does not introduce any breaking changes".

Consider:

  1. Adding deprecation notices before removal
  2. Maintaining backward compatibility
  3. Adding the breaking-change label to the PR

Let's verify the impact:

rollup/conf/config.json (2)

106-106: Ensure configuration consistency

The bundle_proposer_config section is now significantly simpler compared to related sections like chunk_proposer_config and batch_proposer_config which still maintain timeout and dynamic sizing configurations.

Let's verify the consistency across proposer configurations:

#!/bin/bash
# Description: Compare configuration structures across different proposer types

# Search for all proposer configurations (escape the brace for rg's regex engine)
echo "Checking proposer configurations:"
rg -A 10 'proposer_config.*\{' --type go

# Look for common patterns in proposer implementations
echo "Checking proposer implementations:"
ast-grep --pattern 'type $PROPOSER struct {
  $$$
}'

106-106: Verify the implications of configuration changes

The changes to bundle_proposer_config raise some important considerations:

  1. The removal of bundle_timeout_sec could impact system safety/liveness if timeouts were previously used as a fallback mechanism.
  2. The renaming from max_batch_num_per_bundle to batch_num_per_bundle suggests a shift from dynamic to static batch sizing.

Let's verify the impact and rationale of these changes:

Consider documenting:

  1. Why the timeout mechanism was removed and how bundle finalization is now guaranteed
  2. The rationale for using a static batch size and its implications on system throughput
  3. How this aligns with the new contract design mentioned in the PR objectives
rollup/internal/controller/watcher/bundle_proposer.go (1)

42-51: LGTM! Constructor properly initialized.

The constructor correctly initializes the new batchNumPerBundle field and includes appropriate logging.

rollup/internal/controller/watcher/bundle_proposer_test.go (3)

118-118: LGTM: Configuration change is consistent

The BundleProposer configuration update correctly uses the new field name and maintains the test's original intent.


207-207: LGTM: Hardfork test configuration is properly updated

The configuration change in the hardfork test is consistent with the new field name while maintaining the original test coverage for cross-hardfork behavior.


26-26: Verify semantic change from maximum limit to fixed value

The rename from maxBatchNumPerBundle to batchNumPerBundle suggests a change in behavior from a maximum limit to a fixed value. This could impact how the bundle proposer handles batch allocation.

✅ Verification successful

Let me gather more information about the actual usage and behavior of this field.


Based on the search results, I can now generate the final response:

The rename from maxBatchNumPerBundle to batchNumPerBundle is consistent with its actual usage

The field is used as a fixed configuration value rather than a maximum limit in the implementation:

  • In bundle_proposer.go, it's directly used to determine the number of batches to fetch: batchesThisBundle := p.batchNumPerBundle
  • Test cases show it being used with specific values (0, 1, MaxUint64) to control exact bundle sizes
  • There are no additional checks against it as a maximum limit

The rename makes the code more accurate by removing the misleading "max" prefix since it represents the actual number of batches per bundle, not an upper bound.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for any remaining references to maxBatchNumPerBundle and related comments
# that might need updating to reflect the new semantic meaning

rg -i "maxbatchnumbundle|max.*batch.*bundle" --type go

Length of output: 15369


Script:

#!/bin/bash
# Search for the implementation and usage of batchNumPerBundle
rg -A 5 "batchNumPerBundle" --type go

Length of output: 4778

rollup/tests/rollup_test.go (2)

282-282: Use the same constant here for consistency.

This is the same magic number as used in the previous test. Use the suggested constant here as well for consistency.


116-116: Consider extracting the magic number into a constant.

The value 1000000 appears to be a significant configuration parameter that's hardcoded in multiple places. Consider extracting it into a named constant to improve maintainability and make its purpose more explicit.

+const defaultBatchNumPerBundle = 1000000
+
 func testCommitBatchAndFinalizeBatchOrBundleWithAllCodecVersions(t *testing.T) {

Also, please verify if this large value is intentional as it effectively allows a very high number of batches per bundle.

@codecov-commenter

Codecov Report

Attention: Patch coverage is 92.30769% with 1 line in your changes missing coverage. Please review.

Project coverage is 52.34%. Comparing base (54d8236) to head (15ff972).

Files with missing lines Patch % Lines
...lup/internal/controller/watcher/bundle_proposer.go 92.30% 1 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #1566      +/-   ##
===========================================
- Coverage    52.39%   52.34%   -0.05%     
===========================================
  Files          157      157              
  Lines        12647    12634      -13     
===========================================
- Hits          6626     6613      -13     
  Misses        5447     5447              
  Partials       574      574              
Flag Coverage Δ
rollup 57.77% <92.30%> (-0.13%) ⬇️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

@colinlyguo
Copy link
Member Author

Due to requirement changes, this PR is no longer needed, so it has been closed.

@colinlyguo colinlyguo closed this Nov 28, 2024
Labels
bump-version Bump the version tag for deployment