jito patch
only reroute if relayer connected (jito-foundation#123)
feat: add client tls config (jito-foundation#121)
remove extra val (jito-foundation#129)
fix clippy (jito-foundation#130)
copy all binaries to docker-output (jito-foundation#131)
Ledger tool halts at slot passed to create-snapshot (jito-foundation#118)
update program submodule (jito-foundation#133)
quick fix for tips and clearing old bundles (jito-foundation#135)
update submodule to new program (jito-foundation#136)
Improve stake-meta-generator usability (jito-foundation#134)
pinning submodule head (jito-foundation#140)
Use BundleAccountLocker when handling tip txs (jito-foundation#147)
Add metrics for relayer + block engine proxy (jito-foundation#149)
Build claim-mev in docker (jito-foundation#141)
Rework bundle receiving and add metrics (jito-foundation#152) (jito-foundation#154)
update submodule + dev files (jito-foundation#158)
Deterministically find tip amounts, add meta to stake info, and cleanup pubkey/strings in MEV tips (jito-foundation#159)
update jito-programs submodule (jito-foundation#160)
Separate MEV tip related workflow (jito-foundation#161)
Add block builder fee protos (jito-foundation#162)
fix jito programs (jito-foundation#163)
update submodule so autosnapshot exits out of ledger tool early (jito-foundation#164)
Pipe through block builder fee (jito-foundation#167)
pull in new snapshot code (jito-foundation#171)
block builder bug (jito-foundation#172)

Pull in new slack autosnapshot submodule (jito-foundation#174)

sort stake meta json and use int math (jito-foundation#176)

add accountsdb conn submod (jito-foundation#169)

Update tip distribution parameters (jito-foundation#177)

new submodules (jito-foundation#180)

Add buildkite link for jito CI (jito-foundation#183)

Fixed broken links to repositories (jito-foundation#184)

Changed from ssh to https transfer for clone

Seg/update submods (jito-foundation#187)

fix tests (jito-foundation#190)

rm geyser submod (jito-foundation#192)

rm dangling geyser references (jito-foundation#193)

fix syntax err (jito-foundation#195)

use deterministic req ids in batch calls (jito-foundation#199)

update jito-programs

revert cargo

update Cargo lock

update with path fix

fix cargo

update autosnapshot with block lookback (jito-foundation#201)

[JIT-460] When claiming mev tips, skip accounts that won't have min rent exempt amount after claiming (jito-foundation#203)

Add logging for sol balance desired (jito-foundation#205)

* add logging

* add logging

* update msg

* tweak vars

update submodule (jito-foundation#204)

use efficient data structures when calling batch_simulate_bundles (jito-foundation#206)

[JIT-504] Add low balance check in uploading merkle roots (jito-foundation#209)

add config to simulate on top of working bank (jito-foundation#211)

rm frozen bank check

simulate_bundle rpc bugfixes (jito-foundation#214)

rm frozen bank check in simulate_bundle rpc method

[JIT-519] Store ClaimStatus address in merkle-root-json (jito-foundation#210)

* add files

* switch to include bump

update submodule (jito-foundation#217)

add amount filter (jito-foundation#218)

update autosnapshot (jito-foundation#222)

Print TX error in Bundles (jito-foundation#223)

add new args to support single relayer and block-engine endpoints (jito-foundation#224)

point to new jito-programs submod and invoke updated init tda instruction (jito-foundation#228)

fix clippy errors (jito-foundation#230)

fix validator start scripts (jito-foundation#232)

Point README to gitbook (jito-foundation#237)

use packaged cargo bin to build (jito-foundation#239)

Add validator identity pubkey to StakeMeta (jito-foundation#226)

The vote account associated with a validator is not a permanent link, so log the validator identity as well.

bugfix: conditionally compile with debug flags (jito-foundation#240)

Seg/tip distributor master (jito-foundation#242)

* validate tree nodes

* fix unit tests

* pr feedback

* bump jito-programs submod

Simplify bootstrapping (jito-foundation#241)

* startup without precompile

* update spacing

* use release mode

* spacing

fix validation

rm validation skip

Account for block builder fee when generating excess tip balance (jito-foundation#247)

Improve docker caching

delay constructing claim mev txs (jito-foundation#253)

fix stake meta tests from bb fee (jito-foundation#254)

fix tests

Buffer bundles that exceed cost model (jito-foundation#225)

* buffer bundles that exceed cost model

clear qos failed bundles buffer if not leader soon (jito-foundation#260)

update Cargo.lock to correct solana versions in jito-programs submodule (jito-foundation#265)

fix simulate_bundle client and better error handling (jito-foundation#267)

update submod (jito-foundation#272)

Preallocate Bundle Cost (jito-foundation#238)

fix Dockerfile (jito-foundation#278)

Fix Tests (jito-foundation#279)

Fix Tests (jito-foundation#281)

* fix tests

update jito-programs submod (jito-foundation#282)

add reclaim rent workflow (jito-foundation#283)

update jito-programs submod

fix clippy errs

rm wrong assertion and swap out file write fn call (jito-foundation#292)

Remove security.md (jito-foundation#293)

demote frequent relayer_stage-stream_error to warn (jito-foundation#275)

account for case where TDA exists but not allocated (jito-foundation#295)

implement better retries for tip-distributor workflows (jito-foundation#297)

limit number of concurrent rpc calls (jito-foundation#298)

Discard Empty Packet Batches (jito-foundation#299)

Identity Hotswap (jito-foundation#290)

small fixes (jito-foundation#305)

Set backend config from admin rpc (jito-foundation#304)

Admin Shred Receiver Change (jito-foundation#306)

Seg/rm bundle UUID (jito-foundation#309)

Fix github workflow to recursively clone (jito-foundation#327)

Add recursive checkout for downstream-project-spl.yaml (jito-foundation#341)

Use cluster info functions for tpu (jito-foundation#345)

Use git rev-parse for git sha

Remove blacklisted tx from message_hash_to_transaction (jito-foundation#374)

Updates bootstrap and start scripts needed for local dev. (jito-foundation#384)

Remove Deprecated Cli Args (jito-foundation#387)

Master Rebase

improve simulate_bundle errors and response (jito-foundation#404)

derive Clone on accountoverrides (jito-foundation#416)

Add upsert to AccountOverrides (jito-foundation#419)

update jito-programs (jito-foundation#430)

[JIT-1661] Faster Autosnapshot (jito-foundation#436)

Reverts simulate_transaction result calls to upstream (jito-foundation#446)

Don't unlock accounts in TransactionBatches used during simulation (jito-foundation#449)

first pass at wiring up jito-plugin (jito-foundation#428)

[JIT-1713] Fix bundle's blockspace preallocation (jito-foundation#489)

[JIT-1708] Fix TOC TOU condition for relayer and block engine config (jito-foundation#491)

[JIT-1710] - Optimize Bundle Consumer Checks (jito-foundation#490)

Add Blockhash Metrics to Bundle Committer (jito-foundation#500)

add priority fee ix to mev-claim (jito-foundation#520)

Update Autosnapshot (jito-foundation#548)

Run MEV claims + reclaiming rent-exempt amounts in parallel. (jito-foundation#582)

Update CI (jito-foundation#584)
- Add recursive submodule checkouts.
- Re-add solana-secondary step

Add more release fixes (jito-foundation#585)

Fix more release urls (jito-foundation#588)

[JIT-1812] Fix blocking mutexes (jito-foundation#495)

[JIT-1711] Compare the unprocessed transaction storage BundleStorage against a constant instead of VecDeque::capacity() (jito-foundation#587)

Automatically rebase Jito-Solana on a periodic basis. Send message on slack during any failures or success.

Fix periodic rebase jito-foundation#594

Fixes the following bugs in the periodic rebase:
- Sends multiple messages on failure instead of one
- Cancels the entire job if one branch fails

Ignore buildkite curl errors for rebasing and try to keep curling until job times out (jito-foundation#597)

Sleep longer waiting for buildkite to start (jito-foundation#598)

correctly initialize account overrides (jito-foundation#595)

Fix: Ensure set contact info to UDP port instead of QUIC (jito-foundation#603)

Add fast replay branch to daily rebase (jito-foundation#607)

take a snapshot of all bundle accounts before sim (jito-foundation#13) (jito-foundation#615)

update jito-programs submodule

Export agave binaries during docker build (BP jito-foundation#627) (jito-foundation#628)

Backport jito-foundation#611  (jito-foundation#631)

Publish releases to S3 and GCS (jito-foundation#633) (jito-foundation#634)

Add packet flag for from staked sender (jito-foundation#655)

Co-authored-by: Jed <4679729+jedleggett@users.noreply.github.com>

Add bundle storage to new unprocessed transaction storage method

Loosen tip requirement [v2.0] (jito-foundation#685)

Add comments around ignoring the slot returned from ImmutableDeserializedPacket::build_sanitized_transaction
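One recurring theme in the commit list above is batched RPC work, e.g. "use deterministic req ids in batch calls" (jito-foundation#199). A minimal sketch of that idea — deriving each JSON-RPC id from the request's position in the batch so responses can be matched back to requests — with illustrative method names, not the actual client code:

```python
# Hypothetical sketch (not the repo's implementation): deterministic batch ids.
import json

def build_batch(methods):
    """Build a JSON-RPC 2.0 batch whose ids are the 0-based request indexes."""
    return [
        {"jsonrpc": "2.0", "id": i, "method": m, "params": []}
        for i, m in enumerate(methods)
    ]

def match_responses(responses):
    """Reorder responses by their deterministic id, regardless of arrival order."""
    return sorted(responses, key=lambda r: r["id"])

batch = build_batch(["getSlot", "getBlockHeight"])
print(json.dumps(batch))
```

Because the ids are positional rather than random, a batch can be retried or its responses reordered without any id-to-request bookkeeping.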
buffalu committed Nov 7, 2024
1 parent f8f3fe3 commit ffc0054
Showing 182 changed files with 17,811 additions and 870 deletions.
9 changes: 9 additions & 0 deletions .dockerignore
Original file line number Diff line number Diff line change
@@ -0,0 +1,9 @@
.dockerignore
.git/
.github/
.gitignore
.idea/
README.md
Dockerfile
f
target/
2 changes: 2 additions & 0 deletions .github/workflows/cargo.yml
@@ -35,6 +35,8 @@ jobs:
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v4
with:
submodules: 'recursive'

- uses: mozilla-actions/sccache-action@v0.0.4
with:
1 change: 1 addition & 0 deletions .github/workflows/changelog-label.yml
@@ -13,6 +13,7 @@ jobs:
- uses: actions/checkout@v4
with:
fetch-depth: 0
submodules: 'recursive'
- name: Check if changes to CHANGELOG.md
shell: bash
env:
4 changes: 4 additions & 0 deletions .github/workflows/client-targets.yml
@@ -32,6 +32,8 @@ jobs:
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v4
with:
submodules: 'recursive'

- run: cargo install cargo-ndk@2.12.2

@@ -56,6 +58,8 @@ jobs:
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v4
with:
submodules: 'recursive'

- name: Setup Rust
run: |
1 change: 1 addition & 0 deletions .github/workflows/crate-check.yml
@@ -19,6 +19,7 @@ jobs:
- uses: actions/checkout@v4
with:
fetch-depth: 0
submodules: 'recursive'

- name: Get commit range (push)
if: ${{ github.event_name == 'push' }}
3 changes: 3 additions & 0 deletions .github/workflows/docs.yml
@@ -22,6 +22,7 @@ jobs:
uses: actions/checkout@v4
with:
fetch-depth: 0
submodules: 'recursive'

- name: Get commit range (push)
if: ${{ github.event_name == 'push' }}
@@ -77,6 +78,8 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v4
with:
submodules: 'recursive'

- name: Setup Node
uses: actions/setup-node@v4
4 changes: 3 additions & 1 deletion .github/workflows/downstream-project-anchor.yml
@@ -43,10 +43,12 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
version: ["v0.29.0", "v0.30.0"]
version: [ "v0.29.0", "v0.30.0" ]
if: false # Re-enable once new major versions for spl-token-2022 and spl-pod are out
steps:
- uses: actions/checkout@v4
with:
submodules: 'recursive'

- shell: bash
run: |
42 changes: 24 additions & 18 deletions .github/workflows/downstream-project-spl.yml
@@ -42,6 +42,8 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
submodules: 'recursive'

- shell: bash
run: |
@@ -68,7 +70,7 @@ jobs:
arrays:
[
{
test_paths: ["token/cli"],
test_paths: [ "token/cli" ],
required_programs:
[
"token/program",
@@ -78,14 +80,14 @@ jobs:
],
},
{
test_paths: ["single-pool/cli"],
test_paths: [ "single-pool/cli" ],
required_programs:
[
"single-pool/program",
],
},
{
test_paths: ["token-upgrade/cli"],
test_paths: [ "token-upgrade/cli" ],
required_programs:
[
"token-upgrade/program",
@@ -94,6 +96,8 @@ jobs:
]
steps:
- uses: actions/checkout@v4
with:
submodules: 'recursive'

- shell: bash
run: |
@@ -128,26 +132,28 @@ jobs:
strategy:
matrix:
programs:
- [token/program]
- [ token/program ]
- [
instruction-padding/program,
token/program-2022,
token/program-2022-test,
]
instruction-padding/program,
token/program-2022,
token/program-2022-test,
]
- [
associated-token-account/program,
associated-token-account/program-test,
]
- [token-upgrade/program]
- [feature-proposal/program]
- [governance/addin-mock/program, governance/program]
- [memo/program]
- [name-service/program]
- [stake-pool/program]
- [single-pool/program]
associated-token-account/program,
associated-token-account/program-test,
]
- [ token-upgrade/program ]
- [ feature-proposal/program ]
- [ governance/addin-mock/program, governance/program ]
- [ memo/program ]
- [ name-service/program ]
- [ stake-pool/program ]
- [ single-pool/program ]

steps:
- uses: actions/checkout@v4
with:
submodules: 'recursive'

- shell: bash
run: |
181 changes: 181 additions & 0 deletions .github/workflows/rebase.yaml
@@ -0,0 +1,181 @@
# This workflow runs a periodic rebase process, pulling in updates from an upstream repository
# The workflow for rebasing a jito-solana branch to a solana labs branch locally is typically:
# $ git checkout v1.17
# $ git pull --rebase # --rebase needed locally
# $ git branch -D lb/v1.17_rebase # deletes branch from last v1.17 rebase
# $ git checkout -b lb/v1.17_rebase
# $ git fetch upstream
# $ git rebase upstream/v1.17 # rebase + fix merge conflicts
# $ git rebase --continue
# $ git push origin +lb/v1.17_rebase # force needed to overwrite remote. wait for CI, fix if any issues
# $ git checkout v1.17
# $ git reset --hard lb/v1.17_rebase
# $ git push origin +v1.17
#
# This workflow automates this process, with periodic status updates over slack.
# It will also run CI and wait for it to pass before performing the force push to v1.17.
# In the event there's a failure in the process, it's reported to slack and the job stops.

name: "Rebase jito-solana from upstream anza-xyz/agave"

on:
# push:
schedule:
- cron: "30 18 * * 1-5"

jobs:
rebase:
runs-on: ubuntu-latest
strategy:
matrix:
include:
- branch: master
rebase: upstream/master
- branch: v1.18
rebase: upstream/v1.18
- branch: v1.17
rebase: upstream/v1.17
# note: this will always be a day behind because we're rebasing from the previous day's rebase
# and NOT upstream
- branch: v1.17-fast-replay
rebase: origin/v1.17
fail-fast: false
steps:
- uses: actions/checkout@v4
with:
ref: ${{ matrix.branch }}
submodules: recursive
fetch-depth: 0
token: ${{ secrets.JITO_SOLANA_RELEASE_TOKEN }}
- name: Add upstream
run: git remote add upstream https://github.com/anza-xyz/agave.git
- name: Fetch upstream
run: git fetch upstream
- name: Fetch origin
run: git fetch origin
- name: Set REBASE_BRANCH
run: echo "REBASE_BRANCH=ci/nightly/${{ matrix.branch }}/$(date +'%Y-%m-%d-%H-%M')" >> $GITHUB_ENV
- name: echo $REBASE_BRANCH
run: echo $REBASE_BRANCH
- name: Create rebase branch
run: git checkout -b $REBASE_BRANCH
- name: Setup email
run: |
git config --global user.email "infra@jito.wtf"
git config --global user.name "Jito Infrastructure"
- name: Rebase
id: rebase
run: git rebase ${{ matrix.rebase }}
- name: Send warning for rebase error
if: failure() && steps.rebase.outcome == 'failure'
uses: slackapi/slack-github-action@v1.25.0
with:
payload: |
{
"text": "Nightly rebase on branch ${{ matrix.branch }}\nStatus: Rebase failed to apply cleanly"
}
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
- name: Check if rebase applied
id: check_rebase_applied
run: |
PRE_REBASE_SHA=$(git rev-parse ${{ matrix.branch }})
POST_REBASE_SHA=$(git rev-parse HEAD)
if [ "$PRE_REBASE_SHA" = "$POST_REBASE_SHA" ]; then
echo "No rebase was applied, exiting..."
exit 1
else
echo "Rebase applied successfully."
fi
- name: Send warning for rebase error
if: failure() && steps.check_rebase_applied.outcome == 'failure'
uses: slackapi/slack-github-action@v1.25.0
with:
payload: |
{
"text": "Nightly rebase on branch ${{ matrix.branch }}\nStatus: Rebase not needed"
}
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
- name: Set REBASE_SHA
run: echo "REBASE_SHA=$(git rev-parse HEAD)" >> $GITHUB_ENV
- name: Push changes
uses: ad-m/github-push-action@master
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
branch: ${{ env.REBASE_BRANCH }}
- name: Wait for buildkite to start build
run: sleep 300
- name: Wait for buildkite to finish
id: wait_for_buildkite
timeout-minutes: 300
run: |
while true; do
response=$(curl -s -f -H "Authorization: Bearer ${{ secrets.BUILDKITE_TOKEN }}" "https://api.buildkite.com/v2/organizations/jito/pipelines/jito-solana/builds?commit=${{ env.REBASE_SHA }}")
if [ $? -ne 0 ]; then
echo "Curl request failed."
exit 1
fi
state=$(echo $response | jq --exit-status -r '.[0].state')
echo "Current build state: $state"
# Check if the state is one of the finished states
case $state in
"passed"|"finished")
echo "Build finished successfully."
exit 0
;;
"canceled"|"canceling"|"not_run")
# ignoring "failing"|"failed" because flaky CI, can restart and hope it finishes or times out
echo "Build failed or was cancelled."
exit 2
;;
esac
sleep 30
done
- name: Send failure update
uses: slackapi/slack-github-action@v1.25.0
if: failure() && steps.wait_for_buildkite.outcome == 'failure'
with:
payload: |
{
"text": "Nightly rebase on branch ${{ matrix.branch }}\nStatus: CI failed\nBranch: ${{ env.REBASE_BRANCH}}\nBuild: https://buildkite.com/jito/jito-solana/builds?commit=${{ env.REBASE_SHA }}"
}
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
# check to see if different branch since CI build can take awhile and these steps are not atomic
- name: Fetch the latest remote changes
run: git fetch origin ${{ matrix.branch }}
- name: Check if origin HEAD has changed from the beginning of the workflow
run: |
LOCAL_SHA=$(git rev-parse ${{ matrix.branch }})
ORIGIN_SHA=$(git rev-parse origin/${{ matrix.branch }})
if [ "$ORIGIN_SHA" != "$LOCAL_SHA" ]; then
echo "The remote HEAD of ${{ matrix.branch }} does not match the local HEAD of ${{ matrix.branch }} at the beginning of CI."
echo "origin sha: $ORIGIN_SHA"
echo "local sha: $LOCAL_SHA"
exit 1
else
echo "The remote HEAD matches the local REBASE_SHA at the beginning of CI. Proceeding."
fi
- name: Reset ${{ matrix.branch }} to ${{ env.REBASE_BRANCH }}
run: |
git checkout ${{ matrix.branch }}
git reset --hard ${{ env.REBASE_BRANCH }}
- name: Push rebased ${{ matrix.branch }}
uses: ad-m/github-push-action@master
with:
github_token: ${{ secrets.JITO_SOLANA_RELEASE_TOKEN }}
branch: ${{ matrix.branch }}
force: true
- name: Send success update
uses: slackapi/slack-github-action@v1.25.0
with:
payload: |
{
"text": "Nightly rebase on branch ${{ matrix.branch }}\nStatus: CI success, rebased, and pushed\nBranch: ${{ env.REBASE_BRANCH}}\nBuild: https://buildkite.com/jito/jito-solana/builds?commit=${{ env.REBASE_SHA }}"
}
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
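The "Wait for buildkite to finish" step above encodes a small state machine in shell. A hedged restatement in Python to make the branches explicit — the state names come from the workflow; everything else is illustrative:

```python
# Restates the build-state handling from the polling loop above: terminal
# success states stop the wait, cancellation states fail it, and
# "failing"/"failed" are deliberately NOT terminal, so a flaky build can be
# restarted while the step keeps polling until its timeout.

def classify_build_state(state: str) -> str:
    if state in ("passed", "finished"):
        return "success"        # exit 0 in the workflow
    if state in ("canceled", "canceling", "not_run"):
        return "failure"        # exit 2 in the workflow
    return "keep_polling"       # sleep 30 and poll the API again

for s in ("passed", "failed", "canceled"):
    print(s, "->", classify_build_state(s))
```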
1 change: 1 addition & 0 deletions .github/workflows/release-artifacts.yml
@@ -22,6 +22,7 @@ jobs:
with:
ref: master
fetch-depth: 0
submodules: 'recursive'

- name: Setup Rust
shell: bash
5 changes: 5 additions & 0 deletions .gitignore
@@ -4,6 +4,7 @@ target/
/solana-release.tar.bz2
/solana-metrics/
/solana-metrics.tar.bz2
**/target/
/test-ledger/

**/*.rs.bk
@@ -27,7 +28,11 @@ log-*/
# fetch-spl.sh artifacts
/spl-genesis-args.sh
/spl_*.so
/jito_*.so

.DS_Store
# scripts that may be generated by cargo *-bpf commands
**/cargo-*-bpf-child-script-*.sh

.env
docker-output/
9 changes: 9 additions & 0 deletions .gitmodules
@@ -0,0 +1,9 @@
[submodule "anchor"]
path = anchor
url = https://github.com/jito-foundation/anchor.git
[submodule "jito-programs"]
path = jito-programs
url = https://github.com/jito-foundation/jito-programs.git
[submodule "jito-protos/protos"]
path = jito-protos/protos
url = https://github.com/jito-labs/mev-protos.git
781 changes: 624 additions & 157 deletions Cargo.lock

Large diffs are not rendered by default.

18 changes: 17 additions & 1 deletion Cargo.toml
@@ -19,6 +19,7 @@ members = [
"bench-tps",
"bloom",
"bucket_map",
"bundle",
"cargo-registry",
"clap-utils",
"clap-v3-utils",
@@ -45,6 +46,7 @@ members = [
"gossip",
"inline-spl",
"install",
"jito-protos",
"keygen",
"ledger",
"ledger-tool",
@@ -92,6 +94,7 @@ members = [
"rpc-client-nonce-utils",
"rpc-test",
"runtime",
"runtime-plugin",
"runtime-transaction",
"sdk",
"sdk/cargo-build-bpf",
@@ -111,6 +114,7 @@ members = [
"svm",
"test-validator",
"thin-client",
"tip-distributor",
"tokens",
"tps-client",
"tpu-client",
@@ -133,7 +137,12 @@ members = [
"zk-token-sdk",
]

exclude = ["programs/sbf", "svm/tests/example-programs"]
exclude = [
"anchor",
"jito-programs",
"programs/sbf",
"svm/tests/example-programs",
]

resolver = "2"

@@ -150,6 +159,7 @@ Inflector = "0.11.4"
aquamarine = "0.3.3"
aes-gcm-siv = "0.11.1"
ahash = "0.8.10"
anchor-lang = { path = "anchor/lang" }
anyhow = "1.0.82"
arbitrary = "1.3.2"
ark-bn254 = "0.4.0"
@@ -238,13 +248,17 @@ jemallocator = { package = "tikv-jemallocator", version = "0.4.1", features = [
"unprefixed_malloc_on_supported_platforms",
] }
js-sys = "0.3.69"
jito-protos = { path = "jito-protos", version = "=2.0.15" }
jito-tip-distribution = { path = "jito-programs/mev-programs/programs/tip-distribution", features = ["no-entrypoint"] }
jito-tip-payment = { path = "jito-programs/mev-programs/programs/tip-payment", features = ["no-entrypoint"] }
json5 = "0.4.1"
jsonrpc-core = "18.0.0"
jsonrpc-core-client = "18.0.0"
jsonrpc-derive = "18.0.0"
jsonrpc-http-server = "18.0.0"
jsonrpc-ipc-server = "18.0.0"
jsonrpc-pubsub = "18.0.0"
jsonrpc-server-utils = "18.0.0"
lazy-lru = "0.1.2"
lazy_static = "1.4.0"
libc = "0.2.155"
@@ -330,6 +344,7 @@ solana-bench-tps = { path = "bench-tps", version = "=2.0.15" }
solana-bloom = { path = "bloom", version = "=2.0.15" }
solana-bpf-loader-program = { path = "programs/bpf_loader", version = "=2.0.15" }
solana-bucket-map = { path = "bucket_map", version = "=2.0.15" }
solana-bundle = { path = "bundle", version = "=2.0.15" }
agave-cargo-registry = { path = "cargo-registry", version = "=2.0.15" }
solana-clap-utils = { path = "clap-utils", version = "=2.0.15" }
solana-clap-v3-utils = { path = "clap-v3-utils", version = "=2.0.15" }
@@ -384,6 +399,7 @@ solana-rpc-client = { path = "rpc-client", version = "=2.0.15", default-features
solana-rpc-client-api = { path = "rpc-client-api", version = "=2.0.15" }
solana-rpc-client-nonce-utils = { path = "rpc-client-nonce-utils", version = "=2.0.15" }
solana-runtime = { path = "runtime", version = "=2.0.15" }
solana-runtime-plugin = { path = "runtime-plugin", version = "=2.0.15" }
solana-runtime-transaction = { path = "runtime-transaction", version = "=2.0.15" }
solana-sdk = { path = "sdk", version = "=2.0.15" }
solana-sdk-macro = { path = "sdk/macro", version = "=2.0.15" }
33 changes: 22 additions & 11 deletions README.md
@@ -4,12 +4,16 @@
</a>
</p>

[![Solana crate](https://img.shields.io/crates/v/solana-core.svg)](https://crates.io/crates/solana-core)
[![Solana documentation](https://docs.rs/solana-core/badge.svg)](https://docs.rs/solana-core)
[![Build status](https://badge.buildkite.com/8cc350de251d61483db98bdfc895b9ea0ac8ffa4a32ee850ed.svg?branch=master)](https://buildkite.com/solana-labs/solana/builds?branch=master)
[![codecov](https://codecov.io/gh/solana-labs/solana/branch/master/graph/badge.svg)](https://codecov.io/gh/solana-labs/solana)
[![Build status](https://badge.buildkite.com/3a7c88c0f777e1a0fddacc190823565271ae4c251ef78d83a8.svg)](https://buildkite.com/jito/jito-solana)

# Building
# About

This repository contains Jito's fork of the Solana validator.

We recommend checking out our [Gitbook](https://jito-foundation.gitbook.io/mev/jito-solana/building-the-software) for
more detailed instructions on building and running Jito-Solana.

---

## **1. Install rustc, cargo and rustfmt.**

@@ -25,30 +29,36 @@ When building the master branch, please make sure you are using the latest stabl
$ rustup update
```

When building a specific release branch, you should check the rust version in `ci/rust-version.sh` and if necessary, install that version by running:
When building a specific release branch, you should check the rust version in `ci/rust-version.sh` and if necessary,
install that version by running:

```bash
$ rustup install VERSION
```
Note that if this is not the latest rust version on your machine, cargo commands may require an [override](https://rust-lang.github.io/rustup/overrides.html) in order to use the correct version.

Note that if this is not the latest rust version on your machine, cargo commands may require
an [override](https://rust-lang.github.io/rustup/overrides.html) in order to use the correct version.

On Linux systems you may need to install libssl-dev, pkg-config, zlib1g-dev, protobuf etc.

On Ubuntu:

```bash
$ sudo apt-get update
$ sudo apt-get install libssl-dev libudev-dev pkg-config zlib1g-dev llvm clang cmake make libprotobuf-dev protobuf-compiler
```

On Fedora:

```bash
$ sudo dnf install openssl-devel systemd-devel pkg-config zlib-devel llvm clang cmake make protobuf-devel protobuf-compiler perl-core
```

## **2. Download the source code.**

```bash
$ git clone https://github.com/anza-xyz/agave.git
$ cd agave
$ git clone https://github.com/jito-foundation/jito-solana.git
$ cd jito-solana
```

## **3. Build.**
@@ -72,7 +82,7 @@ Start your own testnet locally, instructions are in the [online docs](https://do
### Accessing the remote development cluster

* `devnet` - stable public cluster for development accessible via
devnet.solana.com. Runs 24/7. Learn more about the [public clusters](https://docs.solanalabs.com/clusters)
devnet.solana.com. Runs 24/7. Learn more about the [public clusters](https://docs.solanalabs.com/clusters)

# Benchmarking

@@ -104,7 +114,7 @@ $ open target/cov/lcov-local/index.html

Why coverage? While most see coverage as a code quality metric, we see it primarily as a developer
productivity metric. When a developer makes a change to the codebase, presumably it's a *solution* to
some problem. Our unit-test suite is how we encode the set of *problems* the codebase solves. Running
some problem. Our unit-test suite is how we encode the set of *problems* the codebase solves. Running
the test suite should indicate that your change didn't *infringe* on anyone else's solutions. Adding a
test *protects* your solution from future changes. Say you don't understand why a line of code exists,
try deleting it and running the unit-tests. The nearest test failure should tell you what problem
@@ -113,3 +123,4 @@ problem is solved by this code?" On the other hand, if a test does fail and you
better way to solve the same problem, a Pull Request with your solution would most certainly be
welcome! Likewise, if rewriting a test can better communicate what code it's protecting, please
send us that patch!

85 changes: 57 additions & 28 deletions RELEASE.md
@@ -17,9 +17,10 @@
```

### master branch

All new development occurs on the `master` branch.

Bug fixes that affect a `vX.Y` branch are first made on `master`. This is to
Bug fixes that affect a `vX.Y` branch are first made on `master`. This is to
allow a fix some soak time on `master` before it is applied to one or more
stabilization branches.

@@ -29,7 +30,7 @@ release blocker in a branch causes you to forget to propagate back to
`master`!)"

Once the bug fix lands on `master` it is cherry-picked into the `vX.Y` branch
and potentially the `vX.Y-1` branch. The exception to this rule is when a bug
and potentially the `vX.Y-1` branch. The exception to this rule is when a bug
fix for `vX.Y` doesn't apply to `master` or `vX.Y-1`.

Immediately after a new stabilization branch is forged, the `Cargo.toml` minor
@@ -38,10 +39,12 @@ Incrementing the major version of the `master` branch is outside the scope of
this document.

### v*X.Y* stabilization branches

These are stabilization branches. They are created from the `master` branch approximately
every 13 weeks.

### v*X.Y.Z* release tag

The release tags are created as desired by the owner of the given stabilization
branch, and cause that *X.Y.Z* release to be shipped to https://crates.io

@@ -50,25 +53,28 @@ patch version number (*Z*) of the stabilization branch is incremented by the
release engineer.

## Channels

Channels are used by end-users (humans and bots) to consume the branches
described in the previous section, so they may automatically update to the most
recent version matching their desired stability.

There are three release channels that map to branches as follows:

* edge - tracks the `master` branch, least stable.
* beta - tracks the largest (and latest) `vX.Y` stabilization branch, more stable.
* stable - tracks the second largest `vX.Y` stabilization branch, most stable.

## Steps to Create a Branch

### Create the new branch

1. Check out the latest commit on `master` branch:
```
git fetch --all
git checkout upstream/master
```
1. Determine the new branch name. The name should be "v" + the first 2 version fields
from Cargo.toml. For example, a Cargo.toml with version = "0.9.0" implies
1. Determine the new branch name. The name should be "v" + the first 2 version fields
from Cargo.toml. For example, a Cargo.toml with version = "0.9.0" implies
the next branch name is "v0.9".
1. Create the new branch and push this branch to the `agave` repository:
```
@@ -80,7 +86,8 @@ Alternatively use the Github UI.
### Update master branch to the next release minor version
1. After the new branch has been created and pushed, update the Cargo.toml files on **master** to the next semantic version (e.g. 0.9.0 -> 0.10.0) with:
1. After the new branch has been created and pushed, update the Cargo.toml files on **master** to the next semantic
version (e.g. 0.9.0 -> 0.10.0) with:
```
$ scripts/increment-cargo-version.sh minor
```
@@ -91,60 +98,82 @@ Alternatively use the Github UI.
git commit -m 'Bump version to X.Y+1.0'
git push -u origin version_update
```
1. Confirm that your freshly cut release branch is shown as `BETA_CHANNEL` and the previous release branch as `STABLE_CHANNEL`:
1. Confirm that your freshly cut release branch is shown as `BETA_CHANNEL` and the previous release branch
as `STABLE_CHANNEL`:
```
ci/channel-info.sh
```
### Miscellaneous Clean up
1. Pin the spl-token-cli version in the newly promoted stable branch by setting `splTokenCliVersion` in scripts/spl-token-cli-version.sh to the latest release that depends on the stable branch (usually this will be the latest spl-token-cli release).
1. Update [mergify.yml](https://github.com/jito-foundation/jito-solana/blob/master/.mergify.yml) to add backport actions for the new branch and remove actions for the obsolete branch.
1. Adjust the [Github backport labels](https://github.com/jito-foundation/jito-solana/labels) to add the new branch label and remove the label for the obsolete branch.
1. Announce on Discord #development that the release branch exists so people know to use the new backport labels.
## Steps to Create a Release
### Create the Release Tag on GitHub
1. Go to [GitHub Releases](https://github.com/jito-foundation/jito-solana/releases) for tagging a release.
1. Click "Draft new release". The release tag must exactly match the `version`
field in `/Cargo.toml` prefixed by `v`.
1. If the Cargo.toml version field is **0.12.3**, then the release tag must be **v0.12.3**
1. Make sure the Target Branch field matches the branch you want to make a release on.
1. If you want to release v0.12.0, the target branch must be v0.12
1. Fill the release notes.
1. If this is the first release on the branch (e.g. v0.13.**0**), paste in [this template](https://raw.githubusercontent.com/jito-foundation/jito-solana/master/.github/RELEASE_TEMPLATE.md). Engineering Lead can provide summary contents for release notes if needed.
1. If this is a patch release, review all the commits since the previous release on this branch and add details as needed.
1. Click "Save Draft", then confirm the release notes look good and the tag name and branch are correct.
1. Ensure all desired commits (usually backports) are landed on the branch by now.
1. Ensure the release is marked **"This is a pre-release"**. This flag will need to be removed manually after confirming the Linux binary artifacts appear at a later step.
1. Go back in to edit the release and click "Publish release" while it is still marked as a pre-release.
1. Confirm there is a new git tag with the intended version number at the intended revision after running `git fetch` locally.
### Update release branch with the next patch version
[This action](https://github.com/jito-foundation/jito-solana/blob/master/.github/workflows/increment-cargo-version-on-release.yml) ensures that publishing a release will trigger the creation of a PR to update the Cargo.toml files on **release branch** to the next semantic version (e.g. 0.9.0 -> 0.9.1). Ensure that the created PR makes it through CI and gets submitted.
Note: As of 2024-03-26 the above action is failing, so version bumps are done manually. The version bump script incorrectly updates the hashbrown and proc-macro2 versions, which should be reverted.
### Prepare for the next release
1. Go to [GitHub Releases](https://github.com/jito-foundation/jito-solana/releases) and create a new draft release for `X.Y.Z+1` with empty release notes. This allows people to incrementally add new release notes until it's time for the next release.
1. Also, point the branch field to the same branch and mark the release as **"This is a pre-release"**.
### Verify release automation success
Go to [Agave Releases](https://github.com/jito-foundation/jito-solana/releases) and click on the latest release that you just published.
Verify that all of the build artifacts are present (15 assets), then uncheck **"This is a pre-release"** for the release.
Build artifacts can take up to 60 minutes after creating the tag before appearing. To check for progress:
* The `agave-secondary` Buildkite pipeline handles creating the Linux and macOS release artifacts and updated crates. Look for a job under the tag name of the release: https://buildkite.com/jito-foundation/jito-solana-secondary.
* The Windows release artifacts are produced by GitHub Actions. Look for a job under the tag name of the release: https://github.com/jito-foundation/jito-solana/actions.

[Crates.io agave-validator](https://crates.io/crates/agave-validator) should have an updated agave-validator version. This can take 2-3 hours, and sometimes fails in the `agave-secondary` job.
If this happens and the error is non-fatal, click "Retry" on the "publish crate" job.
### Update software on testnet.solana.com
See the documentation at https://github.com/solana-labs/cluster-ops/. devnet.solana.com and mainnet-beta.solana.com run stable releases that have been tested on testnet. Do not update devnet or mainnet-beta with a beta release.
108 changes: 91 additions & 17 deletions accounts-db/src/accounts.rs
Original file line number Diff line number Diff line change
@@ -559,19 +559,32 @@ impl Accounts {
}

fn lock_account(
&self,
account_locks: &mut AccountLocks,
writable_keys: Vec<&Pubkey>,
readonly_keys: Vec<&Pubkey>,
additional_read_locks: Option<&HashSet<Pubkey>>,
additional_write_locks: Option<&HashSet<Pubkey>>,
) -> Result<()> {
for k in writable_keys.iter() {
if account_locks.is_locked_write(k)
|| account_locks.is_locked_readonly(k)
|| additional_write_locks
.map(|additional_write_locks| additional_write_locks.contains(k))
.unwrap_or(false)
|| additional_read_locks
.map(|additional_read_locks| additional_read_locks.contains(k))
.unwrap_or(false)
{
debug!("Writable account in use: {:?}", k);
return Err(TransactionError::AccountInUse);
}
}
for k in readonly_keys.iter() {
if account_locks.is_locked_write(k)
|| additional_write_locks
.map(|additional_write_locks| additional_write_locks.contains(k))
.unwrap_or(false)
{
debug!("Read-only account in use: {:?}", k);
return Err(TransactionError::AccountInUse);
}
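The patched lock check above can be summarized as: a write lock conflicts with any existing read or write lock, including the two optional "additional" sets, while a read lock conflicts only with write locks. The sketch below is a standalone, simplified model of that rule (it uses `String` keys instead of `Pubkey` and hypothetical helper names, not jito-solana's actual API):

```rust
use std::collections::HashSet;

// Simplified model of `lock_account`: the additional sets act as already-held locks.
fn can_lock_write(
    key: &str,
    write_locked: &HashSet<String>,
    read_locked: &HashSet<String>,
    additional_read: Option<&HashSet<String>>,
    additional_write: Option<&HashSet<String>>,
) -> bool {
    !(write_locked.contains(key)
        || read_locked.contains(key)
        || additional_write.map_or(false, |s| s.contains(key))
        || additional_read.map_or(false, |s| s.contains(key)))
}

fn can_lock_read(
    key: &str,
    write_locked: &HashSet<String>,
    additional_write: Option<&HashSet<String>>,
) -> bool {
    // Read locks are compatible with other read locks, so additional_read is ignored.
    !(write_locked.contains(key) || additional_write.map_or(false, |s| s.contains(key)))
}

fn main() {
    let write_locked = HashSet::new();
    let read_locked = HashSet::new();
    let extra_write: HashSet<String> = ["tip_account".to_string()].into_iter().collect();

    // An additional write lock on `tip_account` blocks both kinds of lock.
    assert!(!can_lock_write("tip_account", &write_locked, &read_locked, None, Some(&extra_write)));
    assert!(!can_lock_read("tip_account", &write_locked, Some(&extra_write)));
    // Unrelated accounts are unaffected.
    assert!(can_lock_write("other", &write_locked, &read_locked, None, Some(&extra_write)));
}
```

This mirrors why bundle-held tip accounts can block ordinary transactions from taking conflicting locks while the bundle stage works.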
@@ -615,7 +628,7 @@ impl Accounts {
let tx_account_locks_results: Vec<Result<_>> = txs
.map(|tx| tx.get_account_locks(tx_account_lock_limit))
.collect();
self.lock_accounts_inner(tx_account_locks_results, None, None)
}

#[must_use]
@@ -624,6 +637,8 @@ impl Accounts {
txs: impl Iterator<Item = &'a SanitizedTransaction>,
results: impl Iterator<Item = Result<()>>,
tx_account_lock_limit: usize,
additional_read_locks: Option<&HashSet<Pubkey>>,
additional_write_locks: Option<&HashSet<Pubkey>>,
) -> Vec<Result<()>> {
let tx_account_locks_results: Vec<Result<_>> = txs
.zip(results)
@@ -632,22 +647,30 @@ impl Accounts {
Err(err) => Err(err),
})
.collect();
self.lock_accounts_inner(
tx_account_locks_results,
additional_read_locks,
additional_write_locks,
)
}

#[must_use]
fn lock_accounts_inner(
&self,
tx_account_locks_results: Vec<Result<TransactionAccountLocks>>,
additional_read_locks: Option<&HashSet<Pubkey>>,
additional_write_locks: Option<&HashSet<Pubkey>>,
) -> Vec<Result<()>> {
let account_locks = &mut self.account_locks.lock().unwrap();
tx_account_locks_results
.into_iter()
.map(|tx_account_locks_result| match tx_account_locks_result {
Ok(tx_account_locks) => Self::lock_account(
account_locks,
tx_account_locks.writable,
tx_account_locks.readonly,
additional_read_locks,
additional_write_locks,
),
Err(err) => Err(err),
})
@@ -686,8 +709,13 @@ impl Accounts {
durable_nonce: &DurableNonce,
lamports_per_signature: u64,
) {
let (accounts_to_store, transactions) = Self::collect_accounts_to_store(
txs,
res,
loaded,
durable_nonce,
lamports_per_signature,
);
self.accounts_db
.store_cached_inline_update_index((slot, &accounts_to_store[..]), Some(&transactions));
}
@@ -702,8 +730,7 @@ impl Accounts {
}

#[allow(clippy::too_many_arguments)]
pub fn collect_accounts_to_store<'a>(
txs: &'a [SanitizedTransaction],
execution_results: &'a [TransactionExecutionResult],
load_results: &'a mut [TransactionLoadResult],
@@ -780,6 +807,55 @@ impl Accounts {
}
(accounts, transactions)
}

pub fn lock_accounts_sequential_with_results<'a>(
&self,
txs: impl Iterator<Item = &'a SanitizedTransaction>,
tx_account_lock_limit: usize,
) -> Vec<Result<()>> {
let tx_account_locks_results: Vec<Result<_>> = txs
.map(|tx| tx.get_account_locks(tx_account_lock_limit))
.collect();
self.lock_accounts_sequential_inner(tx_account_locks_results)
}

#[must_use]
fn lock_accounts_sequential_inner(
&self,
tx_account_locks_results: Vec<Result<TransactionAccountLocks>>,
) -> Vec<Result<()>> {
let mut l_account_locks = self.account_locks.lock().unwrap();
Self::lock_accounts_sequential(&mut l_account_locks, tx_account_locks_results)
}

pub fn lock_accounts_sequential(
account_locks: &mut AccountLocks,
tx_account_locks_results: Vec<Result<TransactionAccountLocks>>,
) -> Vec<Result<()>> {
let mut account_in_use_set = false;
tx_account_locks_results
.into_iter()
.map(|tx_account_locks_result| match tx_account_locks_result {
Ok(tx_account_locks) => match account_in_use_set {
true => Err(TransactionError::AccountInUse),
false => {
let locked = Self::lock_account(
account_locks,
tx_account_locks.writable,
tx_account_locks.readonly,
None,
None,
);
if matches!(locked, Err(TransactionError::AccountInUse)) {
account_in_use_set = true;
}
locked
}
},
Err(err) => Err(err),
})
.collect()
}
}
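The `lock_accounts_sequential` path above fails fast: once one transaction in the batch hits `AccountInUse`, every later transaction is also failed with `AccountInUse` without even attempting its locks, preserving the batch's sequential order. A standalone sketch of that behavior (simplified types, not the real `AccountLocks`):

```rust
use std::collections::HashSet;

#[derive(Debug, PartialEq)]
enum LockResult {
    Ok,
    AccountInUse,
}

// Simplified model: each "transaction" is a single account key it wants to write-lock.
fn lock_sequential(txs: &[&str], locked: &mut HashSet<String>) -> Vec<LockResult> {
    let mut account_in_use_set = false;
    txs.iter()
        .map(|key| {
            if account_in_use_set {
                // A prior failure poisons the rest of the batch.
                LockResult::AccountInUse
            } else if locked.contains(*key) {
                account_in_use_set = true;
                LockResult::AccountInUse
            } else {
                locked.insert((*key).to_string());
                LockResult::Ok
            }
        })
        .collect()
}

fn main() {
    let mut locked: HashSet<String> = ["b".to_string()].into_iter().collect();
    let results = lock_sequential(&["a", "b", "c"], &mut locked);
    // "c" is rejected even though it is free, because "b" failed first.
    assert_eq!(
        results,
        vec![LockResult::Ok, LockResult::AccountInUse, LockResult::AccountInUse]
    );
}
```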

fn post_process_failed_tx(
@@ -1484,6 +1560,8 @@ mod tests {
txs.iter(),
qos_results.into_iter(),
MAX_TX_ACCOUNT_LOCKS,
None,
None,
);

assert_eq!(
@@ -1605,7 +1683,7 @@ mod tests {
}
let txs = vec![tx0.clone(), tx1.clone()];
let execution_results = vec![new_execution_result(Ok(())); 2];
let (collected_accounts, transactions) = Accounts::collect_accounts_to_store(
&txs,
&execution_results,
loaded.as_mut_slice(),
@@ -1849,13 +1927,11 @@ mod tests {
let mut loaded = vec![loaded];

let durable_nonce = DurableNonce::from_blockhash(&Hash::new_unique());
let accounts_db = AccountsDb::new_single_for_tests();
let accounts = Accounts::new(Arc::new(accounts_db));
let txs = vec![tx];
let execution_results = vec![new_execution_result(Err(
TransactionError::InstructionError(1, InstructionError::InvalidArgument),
))];
let (collected_accounts, _) = Accounts::collect_accounts_to_store(
&txs,
&execution_results,
loaded.as_mut_slice(),
@@ -1949,13 +2025,11 @@ mod tests {
let mut loaded = vec![loaded];

let durable_nonce = DurableNonce::from_blockhash(&Hash::new_unique());
let accounts_db = AccountsDb::new_single_for_tests();
let accounts = Accounts::new(Arc::new(accounts_db));
let txs = vec![tx];
let execution_results = vec![new_execution_result(Err(
TransactionError::InstructionError(1, InstructionError::InvalidArgument),
))];
let (collected_accounts, _) = Accounts::collect_accounts_to_store(
&txs,
&execution_results,
loaded.as_mut_slice(),
1 change: 1 addition & 0 deletions anchor
Submodule anchor added at 4f52f4
14 changes: 12 additions & 2 deletions banking-bench/src/main.rs
@@ -9,6 +9,7 @@ use {
solana_core::{
banking_stage::BankingStage,
banking_trace::{BankingPacketBatch, BankingTracer, BANKING_TRACE_DIR_DEFAULT_BYTE_LIMIT},
bundle_stage::bundle_account_locker::BundleAccountLocker,
validator::BlockProductionMethod,
},
solana_gossip::cluster_info::{ClusterInfo, Node},
@@ -37,6 +38,7 @@ use {
solana_streamer::socket::SocketAddrSpace,
solana_tpu_client::tpu_client::DEFAULT_TPU_CONNECTION_POOL_SIZE,
std::{
collections::HashSet,
sync::{atomic::Ordering, Arc, RwLock},
thread::sleep,
time::{Duration, Instant},
@@ -58,9 +60,15 @@ fn check_txs(
let now = Instant::now();
let mut no_bank = false;
loop {
if let Ok(WorkingBankEntry {
bank: _,
entries_ticks,
}) = receiver.recv_timeout(Duration::from_millis(10))
{
total += entries_ticks
.iter()
.map(|e| e.0.transactions.len())
.sum::<usize>();
}
if total >= ref_tx_count {
break;
@@ -475,6 +483,8 @@ fn main() {
bank_forks.clone(),
&Arc::new(PrioritizationFeeCache::new(0u64)),
false,
HashSet::default(),
BundleAccountLocker::default(),
);

// This is so that the signal_receiver does not go out of scope after the closure.
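The `check_txs` change above reflects `WorkingBankEntry` now carrying a vector of (entry, tick height) pairs instead of a single entry, so counting transactions means summing across all entries. A minimal standalone mirror of that shape (field and type names simplified; the real types live in solana-poh and solana-entry):

```rust
// Simplified stand-ins for the real Entry and WorkingBankEntry types.
struct Entry {
    transactions: Vec<u8>, // payload type simplified; only the length matters here
}

struct WorkingBankEntry {
    entries_ticks: Vec<(Entry, u64)>, // (entry, tick_height) pairs
}

fn tx_count(wbe: &WorkingBankEntry) -> usize {
    wbe.entries_ticks
        .iter()
        .map(|(entry, _tick_height)| entry.transactions.len())
        .sum()
}

fn main() {
    let wbe = WorkingBankEntry {
        entries_ticks: vec![
            (Entry { transactions: vec![0; 3] }, 1),
            (Entry { transactions: vec![0; 2] }, 2),
        ],
    };
    assert_eq!(tx_count(&wbe), 5);
}
```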
1 change: 1 addition & 0 deletions banks-server/Cargo.toml
@@ -15,6 +15,7 @@ crossbeam-channel = { workspace = true }
futures = { workspace = true }
solana-banks-interface = { workspace = true }
solana-client = { workspace = true }
solana-gossip = { workspace = true }
solana-runtime = { workspace = true }
solana-sdk = { workspace = true }
solana-send-transaction-service = { workspace = true }
5 changes: 3 additions & 2 deletions banks-server/src/banks_server.rs
@@ -8,6 +8,7 @@ use {
TransactionSimulationDetails, TransactionStatus,
},
solana_client::connection_cache::ConnectionCache,
solana_gossip::cluster_info::ClusterInfo,
solana_runtime::{
bank::{Bank, TransactionSimulationResult},
bank_forks::BankForks,
@@ -425,7 +426,7 @@ pub async fn start_local_server(

pub async fn start_tcp_server(
listen_addr: SocketAddr,
cluster_info: Arc<ClusterInfo>,
bank_forks: Arc<RwLock<BankForks>>,
block_commitment_cache: Arc<RwLock<BlockCommitmentCache>>,
connection_cache: Arc<ConnectionCache>,
@@ -450,7 +451,7 @@ pub async fn start_tcp_server(
let (sender, receiver) = unbounded();

SendTransactionService::new::<NullTpuInfo>(
cluster_info.clone(),
&bank_forks,
None,
receiver,
26 changes: 26 additions & 0 deletions bootstrap
@@ -0,0 +1,26 @@
#!/usr/bin/env bash
set -eu

BANK_HASH=$(cargo run --release --bin solana-ledger-tool -- -l config/bootstrap-validator bank-hash)

# increase max file handle limit
ulimit -Hn 1000000

# if above fails, run:
# sudo bash -c 'echo "* hard nofile 1000000" >> /etc/security/limits.conf'

# NOTE: make sure tip-payment and tip-distribution program are deployed using the correct pubkeys
RUST_LOG=INFO,solana_core::bundle_stage=DEBUG \
NDEBUG=1 ./multinode-demo/bootstrap-validator.sh \
--wait-for-supermajority 0 \
--expected-bank-hash "$BANK_HASH" \
--block-engine-url http://127.0.0.1 \
--relayer-url http://127.0.0.1:11226 \
--rpc-pubsub-enable-block-subscription \
--enable-rpc-transaction-history \
--tip-payment-program-pubkey T1pyyaTNZsKv2WcRAB8oVnk93mLJw2XzjtVYqCsaHqt \
--tip-distribution-program-pubkey 4R3gSG8BpU4t19KYj8CfnbtRpnT8gtk4dvTHxVRwc2r7 \
--commission-bps 0 \
--shred-receiver-address 127.0.0.1:1002 \
--trust-relayer-packets \
--trust-block-engine-packets
37 changes: 37 additions & 0 deletions bundle/Cargo.toml
@@ -0,0 +1,37 @@
[package]
name = "solana-bundle"
description = "Library related to handling bundles"
documentation = "https://docs.rs/solana-bundle"
readme = "../README.md"
version = { workspace = true }
authors = { workspace = true }
repository = { workspace = true }
homepage = { workspace = true }
license = { workspace = true }
edition = { workspace = true }

[dependencies]
anchor-lang = { workspace = true }
itertools = { workspace = true }
log = { workspace = true }
serde = { workspace = true }
solana-accounts-db = { workspace = true }
solana-ledger = { workspace = true }
solana-logger = { workspace = true }
solana-measure = { workspace = true }
solana-poh = { workspace = true }
solana-program-runtime = { workspace = true }
solana-runtime = { workspace = true }
solana-sdk = { workspace = true }
solana-svm = { workspace = true }
solana-transaction-status = { workspace = true }
thiserror = { workspace = true }

[dev-dependencies]
assert_matches = { workspace = true }
solana-logger = { workspace = true }
solana-runtime = { workspace = true, features = ["dev-context-only-utils"] }

[lib]
crate-type = ["lib"]
name = "solana_bundle"
1,216 changes: 1,216 additions & 0 deletions bundle/src/bundle_execution.rs

Large diffs are not rendered by default.

60 changes: 60 additions & 0 deletions bundle/src/lib.rs
@@ -0,0 +1,60 @@
use {
crate::bundle_execution::LoadAndExecuteBundleError,
anchor_lang::error::Error,
serde::{Deserialize, Serialize},
solana_poh::poh_recorder::PohRecorderError,
solana_sdk::pubkey::Pubkey,
thiserror::Error,
};

pub mod bundle_execution;

#[derive(Error, Debug, Clone, Serialize, Deserialize, PartialEq)]
pub enum TipError {
#[error("account is missing from bank: {0}")]
AccountMissing(Pubkey),

#[error("Anchor error: {0}")]
AnchorError(String),

#[error("Lock error")]
LockError,

#[error("Error executing initialize programs")]
InitializeProgramsError,

#[error("Error cranking tip programs")]
CrankTipError,
}

impl From<anchor_lang::error::Error> for TipError {
fn from(anchor_err: Error) -> Self {
match anchor_err {
Error::AnchorError(e) => Self::AnchorError(e.error_msg),
Error::ProgramError(e) => Self::AnchorError(e.to_string()),
}
}
}

pub type BundleExecutionResult<T> = Result<T, BundleExecutionError>;

#[derive(Error, Debug, Clone)]
pub enum BundleExecutionError {
#[error("The bank has hit the max allotted time for processing transactions")]
BankProcessingTimeLimitReached,

#[error("The bundle exceeds the cost model")]
ExceedsCostModel,

#[error("Runtime error while executing the bundle: {0}")]
TransactionFailure(#[from] LoadAndExecuteBundleError),

#[error("Error locking bundle because a transaction is malformed")]
LockError,

#[error("PoH record error: {0}")]
PohRecordError(#[from] PohRecorderError),

#[error("Tip payment error {0}")]
TipError(#[from] TipError),
}
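A caller of the bundle stage might want to classify the `BundleExecutionError` variants above into errors worth retrying in a later slot versus errors where the bundle should be dropped. The sketch below uses a simplified copy of the enum (variant payloads reduced to `String`), and the retry policy itself is an assumption for illustration, not jito-solana's actual logic:

```rust
// Simplified copy of BundleExecutionError; payload types reduced to String.
#[derive(Debug)]
enum BundleExecutionError {
    BankProcessingTimeLimitReached,
    ExceedsCostModel,
    TransactionFailure(String),
    LockError,
    PohRecordError(String),
    TipError(String),
}

// Hypothetical policy: transient, bank/slot-related failures are retryable;
// malformed or failing bundles are dropped.
fn is_retryable(err: &BundleExecutionError) -> bool {
    matches!(
        err,
        BundleExecutionError::BankProcessingTimeLimitReached
            | BundleExecutionError::ExceedsCostModel
            | BundleExecutionError::PohRecordError(_)
    )
}

fn main() {
    assert!(is_retryable(&BundleExecutionError::ExceedsCostModel));
    assert!(!is_retryable(&BundleExecutionError::TransactionFailure(
        "runtime error".to_string()
    )));
    assert!(!is_retryable(&BundleExecutionError::LockError));
}
```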
4 changes: 2 additions & 2 deletions ci/buildkite-pipeline-in-disk.sh
@@ -289,7 +289,7 @@ if [[ -n $BUILDKITE_TAG ]]; then
start_pipeline "Tag pipeline for $BUILDKITE_TAG"

annotate --style info --context release-tag \
"https://github.com/anza-xyz/agave/releases/$BUILDKITE_TAG"
"https://github.com/jito-foundation/jito-solana/releases/$BUILDKITE_TAG"

# Jump directly to the secondary build to publish release artifacts quickly
trigger_secondary_step
@@ -307,7 +307,7 @@ if [[ $BUILDKITE_BRANCH =~ ^pull ]]; then

# Add helpful link back to the corresponding Github Pull Request
annotate --style info --context pr-backlink \
"Github Pull Request: https://github.com/anza-xyz/agave/$BUILDKITE_BRANCH"
"Github Pull Request: https://github.com/jito-foundation/jito-solana/$BUILDKITE_BRANCH"

pull_or_push_steps
exit 0
4 changes: 2 additions & 2 deletions ci/buildkite-pipeline.sh
@@ -316,7 +316,7 @@ if [[ -n $BUILDKITE_TAG ]]; then
start_pipeline "Tag pipeline for $BUILDKITE_TAG"

annotate --style info --context release-tag \
"https://github.com/anza-xyz/agave/releases/$BUILDKITE_TAG"
"https://github.com/jito-foundation/jito-solana/releases/$BUILDKITE_TAG"

# Jump directly to the secondary build to publish release artifacts quickly
trigger_secondary_step
@@ -334,7 +334,7 @@ if [[ $BUILDKITE_BRANCH =~ ^pull ]]; then

# Add helpful link back to the corresponding Github Pull Request
annotate --style info --context pr-backlink \
"Github Pull Request: https://github.com/anza-xyz/agave/$BUILDKITE_BRANCH"
"Github Pull Request: https://github.com/jito-foundation/jito-solana/$BUILDKITE_BRANCH"

pull_or_push_steps
exit 0
62 changes: 31 additions & 31 deletions ci/buildkite-secondary.yml
@@ -18,34 +18,34 @@ steps:
agents:
queue: "release-build"
timeout_in_minutes: 5
# - wait
# - name: "publish docker"
# command: "sdk/docker-solana/build.sh"
# agents:
# queue: "release-build"
# timeout_in_minutes: 60
# - name: "publish crate"
# command: "ci/publish-crate.sh"
# agents:
# queue: "release-build"
# retry:
# manual:
# permit_on_passed: true
# timeout_in_minutes: 240
# branches: "!master"
# - name: "publish tarball (aarch64-apple-darwin)"
# command: "ci/publish-tarball.sh"
# agents:
# queue: "release-build-aarch64-apple-darwin"
# retry:
# manual:
# permit_on_passed: true
# timeout_in_minutes: 60
# - name: "publish tarball (x86_64-apple-darwin)"
# command: "ci/publish-tarball.sh"
# agents:
# queue: "release-build-x86_64-apple-darwin"
# retry:
# manual:
# permit_on_passed: true
# timeout_in_minutes: 60
4 changes: 2 additions & 2 deletions ci/buildkite-solana-private.sh
@@ -269,7 +269,7 @@ pull_or_push_steps() {
# start_pipeline "Tag pipeline for $BUILDKITE_TAG"

# annotate --style info --context release-tag \
# "https://github.com/solana-labs/solana/releases/$BUILDKITE_TAG"
# "https://github.com/jito-foundation/jito-solana/releases/$BUILDKITE_TAG"

# # Jump directly to the secondary build to publish release artifacts quickly
# trigger_secondary_step
@@ -287,7 +287,7 @@ if [[ $BUILDKITE_BRANCH =~ ^pull ]]; then

# Add helpful link back to the corresponding Github Pull Request
annotate --style info --context pr-backlink \
"Github Pull Request: https://github.com/anza-xyz/agave/$BUILDKITE_BRANCH"
"Github Pull Request: https://github.com/jito-foundation/jito-solana/$BUILDKITE_BRANCH"

pull_or_push_steps
exit 0
2 changes: 1 addition & 1 deletion ci/channel-info.sh
@@ -11,7 +11,7 @@ here="$(dirname "$0")"
# shellcheck source=ci/semver_bash/semver.sh
source "$here"/semver_bash/semver.sh

remote=https://github.com/jito-foundation/jito-solana.git

# Fetch all vX.Y.Z tags
#
3 changes: 3 additions & 0 deletions ci/check-crates.sh
@@ -31,6 +31,9 @@ printf "%s\n" "${files[@]}"
error_count=0
for file in "${files[@]}"; do
read -r crate_name package_publish workspace < <(toml get "$file" . | jq -r '(.package.name | tostring)+" "+(.package.publish | tostring)+" "+(.workspace | tostring)')
if [ "$crate_name" == "solana-bundle" ]; then
continue
fi
echo "=== $crate_name ($file) ==="

if [[ $package_publish = 'false' ]]; then
16 changes: 11 additions & 5 deletions ci/publish-installer.sh
@@ -26,14 +26,20 @@ fi
# upload install script
source ci/upload-ci-artifact.sh

cat >release.jito.wtf-install <<EOF
SOLANA_RELEASE=$CHANNEL_OR_TAG
SOLANA_INSTALL_INIT_ARGS=$CHANNEL_OR_TAG
SOLANA_DOWNLOAD_ROOT=https://release.jito.wtf
EOF
cat install/agave-install-init.sh >>release.jito.wtf-install

echo --- GCS: "install"
upload-gcs-artifact "/solana/release.anza.xyz-install" "gs://anza-release/$CHANNEL_OR_TAG/install"
upload-gcs-artifact "/solana/release.jito.wtf-install" "gs://jito-release/$CHANNEL_OR_TAG/install"

# Jito added - releases need to support S3
echo --- AWS S3 Store: "install"
upload-s3-artifact "/solana/release.jito.wtf-install" "s3://release.jito.wtf/$CHANNEL_OR_TAG/install"

echo Published to:
ci/format-url.sh https://release.jito.wtf/"$CHANNEL_OR_TAG"/install

8 changes: 6 additions & 2 deletions ci/publish-tarball.sh
@@ -119,10 +119,14 @@ for file in "${TARBALL_BASENAME}"-$TARGET.tar.bz2 "${TARBALL_BASENAME}"-$TARGET.

if [[ -n $BUILDKITE ]]; then
echo --- GCS Store: "$file"
upload-gcs-artifact "/solana/$file" gs://anza-release/"$CHANNEL_OR_TAG"/"$file"
upload-gcs-artifact "/solana/$file" gs://jito-release/"$CHANNEL_OR_TAG"/"$file"

# Jito added - releases need to support S3
echo --- AWS S3 Store: "$file"
upload-s3-artifact "/solana/$file" s3://release.jito.wtf/"$CHANNEL_OR_TAG"/"$file"

echo Published to:
$DRYRUN ci/format-url.sh https://release.jito.wtf/"$CHANNEL_OR_TAG"/"$file"

if [[ -n $TAG ]]; then
ci/upload-github-release-asset.sh "$file"
2 changes: 1 addition & 1 deletion ci/test-coverage.sh
@@ -40,5 +40,5 @@ else
codecov -t "${CODECOV_TOKEN}" --dir "$here/../target/cov/${SHORT_CI_COMMIT}"

annotate --style success --context codecov.io \
"CodeCov report: https://codecov.io/github/anza-xyz/agave/commit/$CI_COMMIT"
"CodeCov report: https://codecov.io/github/jito-foundation/jito-solana/commit/$CI_COMMIT"
fi
2 changes: 1 addition & 1 deletion ci/upload-github-release-asset.sh
@@ -26,7 +26,7 @@ fi
# Force CI_REPO_SLUG since sometimes
# BUILDKITE_TRIGGERED_FROM_BUILD_PIPELINE_SLUG is not set correctly, causing the
# artifact upload to fail
CI_REPO_SLUG=jito-foundation/jito-solana
#if [[ -z $CI_REPO_SLUG ]]; then
# echo Error: CI_REPO_SLUG not defined
# exit 1
13 changes: 13 additions & 0 deletions core/Cargo.toml
@@ -15,6 +15,7 @@ codecov = { repository = "solana-labs/solana", branch = "master", service = "git

[dependencies]
ahash = { workspace = true }
anchor-lang = { workspace = true }
base64 = { workspace = true }
bincode = { workspace = true }
bs58 = { workspace = true }
@@ -26,12 +27,17 @@ etcd-client = { workspace = true, features = ["tls"] }
futures = { workspace = true }
histogram = { workspace = true }
itertools = { workspace = true }
jito-protos = { workspace = true }
jito-tip-distribution = { workspace = true }
jito-tip-payment = { workspace = true }
lazy_static = { workspace = true }
log = { workspace = true }
lru = { workspace = true }
min-max-heap = { workspace = true }
num_enum = { workspace = true }
prio-graph = { workspace = true }
prost = { workspace = true }
prost-types = { workspace = true }
qualifier_attr = { workspace = true }
quinn = { workspace = true }
rand = { workspace = true }
@@ -44,6 +50,7 @@ serde_bytes = { workspace = true }
serde_derive = { workspace = true }
solana-accounts-db = { workspace = true }
solana-bloom = { workspace = true }
solana-bundle = { workspace = true }
solana-client = { workspace = true }
solana-compute-budget = { workspace = true }
solana-connection-cache = { workspace = true }
@@ -65,6 +72,7 @@ solana-rayon-threadlimit = { workspace = true }
solana-rpc = { workspace = true }
solana-rpc-client-api = { workspace = true }
solana-runtime = { workspace = true }
solana-runtime-plugin = { workspace = true }
solana-sdk = { workspace = true }
solana-send-transaction-service = { workspace = true }
solana-streamer = { workspace = true }
@@ -83,19 +91,23 @@ sys-info = { workspace = true }
tempfile = { workspace = true }
thiserror = { workspace = true }
tokio = { workspace = true, features = ["full"] }
tonic = { workspace = true }
trees = { workspace = true }

[dev-dependencies]
assert_matches = { workspace = true }
fs_extra = { workspace = true }
serde_json = { workspace = true }
serial_test = { workspace = true }
solana-accounts-db = { workspace = true }
# See order-crates-for-publishing.py for using this unusual `path = "."`
solana-bundle = { workspace = true }
solana-core = { path = ".", features = ["dev-context-only-utils"] }
solana-ledger = { workspace = true, features = ["dev-context-only-utils"] }
solana-logger = { workspace = true }
solana-poh = { workspace = true, features = ["dev-context-only-utils"] }
solana-program-runtime = { workspace = true }
solana-program-test = { workspace = true }
solana-runtime = { workspace = true, features = ["dev-context-only-utils"] }
solana-sdk = { workspace = true, features = ["dev-context-only-utils"] }
solana-stake-program = { workspace = true }
@@ -111,6 +123,7 @@ sysctl = { workspace = true }

[build-dependencies]
rustc_version = { workspace = true }
tonic-build = { workspace = true }

[features]
dev-context-only-utils = []
24 changes: 21 additions & 3 deletions core/benches/banking_stage.rs
@@ -25,6 +25,7 @@ use {
BankingStage, BankingStageStats,
},
banking_trace::{BankingPacketBatch, BankingTracer},
bundle_stage::bundle_account_locker::BundleAccountLocker,
},
solana_entry::entry::{next_hash, Entry},
solana_gossip::cluster_info::{ClusterInfo, Node},
@@ -54,6 +55,7 @@ use {
},
solana_streamer::socket::SocketAddrSpace,
std::{
collections::HashSet,
iter::repeat_with,
sync::{atomic::Ordering, Arc},
time::{Duration, Instant},
@@ -65,8 +67,15 @@ fn check_txs(receiver: &Arc<Receiver<WorkingBankEntry>>, ref_tx_count: usize) {
let mut total = 0;
let now = Instant::now();
loop {
if let Ok(WorkingBankEntry {
bank: _,
entries_ticks,
}) = receiver.recv_timeout(Duration::new(1, 0))
{
total += entries_ticks
.iter()
.map(|e| e.0.transactions.len())
.sum::<usize>();
}
if total >= ref_tx_count {
break;
@@ -110,7 +119,14 @@ fn bench_consume_buffered(bencher: &mut Bencher) {
);
let (s, _r) = unbounded();
let committer = Committer::new(None, s, Arc::new(PrioritizationFeeCache::new(0u64)));
let consumer = Consumer::new(committer, recorder, QosService::new(1), None);
let consumer = Consumer::new(
committer,
recorder,
QosService::new(1),
None,
HashSet::default(),
BundleAccountLocker::default(),
);
// This tests the performance of buffering packets.
// If the packet buffers are copied, performance will be poor.
bencher.iter(move || {
@@ -304,6 +320,8 @@ fn bench_banking(bencher: &mut Bencher, tx_type: TransactionType) {
bank_forks,
&Arc::new(PrioritizationFeeCache::new(0u64)),
false,
HashSet::default(),
BundleAccountLocker::default(),
);

let chunk_len = verified.len() / CHUNKS;
28 changes: 19 additions & 9 deletions core/benches/consumer.rs
@@ -7,16 +7,16 @@ use {
iter::IndexedParallelIterator,
prelude::{IntoParallelIterator, IntoParallelRefIterator, ParallelIterator},
},
solana_core::banking_stage::{
committer::Committer, consumer::Consumer, qos_service::QosService,
solana_core::{
banking_stage::{committer::Committer, consumer::Consumer, qos_service::QosService},
bundle_stage::bundle_account_locker::BundleAccountLocker,
},
solana_entry::entry::Entry,
solana_ledger::{
blockstore::Blockstore,
genesis_utils::{create_genesis_config, GenesisConfigInfo},
},
solana_poh::{
poh_recorder::{create_test_recorder, PohRecorder},
poh_recorder::{create_test_recorder, PohRecorder, WorkingBankEntry},
poh_service::PohService,
},
solana_runtime::{bank::Bank, bank_forks::BankForks},
@@ -28,9 +28,12 @@ use {
system_program, system_transaction,
transaction::SanitizedTransaction,
},
std::sync::{
atomic::{AtomicBool, Ordering},
Arc, RwLock,
std::{
collections::HashSet,
sync::{
atomic::{AtomicBool, Ordering},
Arc, RwLock,
},
},
tempfile::TempDir,
test::Bencher,
@@ -84,7 +87,14 @@ fn create_consumer(poh_recorder: &RwLock<PohRecorder>) -> Consumer {
let (replay_vote_sender, _replay_vote_receiver) = unbounded();
let committer = Committer::new(None, replay_vote_sender, Arc::default());
let transaction_recorder = poh_recorder.read().unwrap().new_recorder();
Consumer::new(committer, transaction_recorder, QosService::new(0), None)
Consumer::new(
committer,
transaction_recorder,
QosService::new(0),
None,
HashSet::default(),
BundleAccountLocker::default(),
)
}

struct BenchFrame {
@@ -94,7 +104,7 @@ struct BenchFrame {
exit: Arc<AtomicBool>,
poh_recorder: Arc<RwLock<PohRecorder>>,
poh_service: PohService,
signal_receiver: Receiver<(Arc<Bank>, (Entry, u64))>,
signal_receiver: Receiver<WorkingBankEntry>,
}

fn setup() -> BenchFrame {
57 changes: 57 additions & 0 deletions core/benches/proto_to_packet.rs
@@ -0,0 +1,57 @@
#![feature(test)]

extern crate test;

use {
jito_protos::proto::packet::{
Meta as PbMeta, Packet as PbPacket, PacketBatch, PacketFlags as PbFlags,
},
solana_core::proto_packet_to_packet,
solana_sdk::packet::{Packet, PACKET_DATA_SIZE},
std::iter::repeat,
test::{black_box, Bencher},
};

fn get_proto_packet(i: u8) -> PbPacket {
PbPacket {
data: repeat(i).take(PACKET_DATA_SIZE).collect(),
meta: Some(PbMeta {
size: PACKET_DATA_SIZE as u64,
addr: "255.255.255.255:65535".to_string(),
port: 65535,
flags: Some(PbFlags {
discard: false,
forwarded: false,
repair: false,
simple_vote_tx: false,
tracer_packet: false,
from_staked_node: false,
}),
sender_stake: 0,
}),
}
}

#[bench]
fn bench_proto_to_packet(bencher: &mut Bencher) {
bencher.iter(|| {
black_box(proto_packet_to_packet(get_proto_packet(1)));
});
}

#[bench]
fn bench_batch_list_to_packets(bencher: &mut Bencher) {
let packet_batch = PacketBatch {
packets: (0..128).map(get_proto_packet).collect(),
};

bencher.iter(|| {
black_box(
packet_batch
.packets
.iter()
.map(|p| proto_packet_to_packet(p.clone()))
.collect::<Vec<Packet>>(),
);
});
}
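The conversion being benchmarked copies a variable-length protobuf payload into Solana's fixed-size packet buffer. A dependency-free sketch of that copy, with simplified stand-in structs rather than the real `solana_core::proto_packet_to_packet` and SDK `Packet` types:

```rust
const PACKET_DATA_SIZE: usize = 1232; // Solana's fixed packet payload size

// Minimal stand-in for the protobuf Packet (real one also carries meta/flags).
struct PbPacket {
    data: Vec<u8>,
}

// Minimal stand-in for the SDK Packet: fixed buffer plus a used-bytes count.
struct Packet {
    buffer: [u8; PACKET_DATA_SIZE],
    size: usize,
}

// Copy the proto payload into the fixed buffer, truncating oversized input.
fn proto_packet_to_packet(p: &PbPacket) -> Packet {
    let mut buffer = [0u8; PACKET_DATA_SIZE];
    let size = p.data.len().min(PACKET_DATA_SIZE);
    buffer[..size].copy_from_slice(&p.data[..size]);
    Packet { buffer, size }
}
```

The benchmark above exercises exactly this memcpy-dominated path, once per packet and once over a 128-packet batch.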
8 changes: 6 additions & 2 deletions core/src/admin_rpc_post_init.rs
@@ -1,15 +1,16 @@
use {
crate::{
cluster_slots_service::cluster_slots::ClusterSlots,
proxy::{block_engine_stage::BlockEngineConfig, relayer_stage::RelayerConfig},
repair::{outstanding_requests::OutstandingRequests, serve_repair::ShredRepairType},
},
solana_gossip::cluster_info::ClusterInfo,
solana_runtime::bank_forks::BankForks,
solana_sdk::{pubkey::Pubkey, quic::NotifyKeyUpdate},
std::{
collections::HashSet,
net::UdpSocket,
sync::{Arc, RwLock},
net::{SocketAddr, UdpSocket},
sync::{Arc, Mutex, RwLock},
},
};

@@ -23,4 +24,7 @@ pub struct AdminRpcRequestMetadataPostInit {
pub repair_socket: Arc<UdpSocket>,
pub outstanding_repair_requests: Arc<RwLock<OutstandingRequests<ShredRepairType>>>,
pub cluster_slots: Arc<ClusterSlots>,
pub block_engine_config: Arc<Mutex<BlockEngineConfig>>,
pub relayer_config: Arc<Mutex<RelayerConfig>>,
pub shred_receiver_address: Arc<RwLock<Option<SocketAddr>>>,
}
85 changes: 72 additions & 13 deletions core/src/banking_stage.rs
@@ -25,6 +25,7 @@ use {
},
},
banking_trace::BankingPacketReceiver,
bundle_stage::bundle_account_locker::BundleAccountLocker,
tracer_packet_stats::TracerPacketStats,
validator::BlockProductionMethod,
},
@@ -40,9 +41,11 @@ use {
bank_forks::BankForks, prioritization_fee_cache::PrioritizationFeeCache,
vote_sender_types::ReplayVoteSender,
},
solana_sdk::timing::AtomicInterval,
solana_sdk::{pubkey::Pubkey, timing::AtomicInterval},
std::{
cmp, env,
cmp,
collections::HashSet,
env,
sync::{
atomic::{AtomicU64, AtomicUsize, Ordering},
Arc, RwLock,
@@ -62,12 +65,12 @@ pub mod unprocessed_packet_batches;
pub mod unprocessed_transaction_storage;

mod consume_worker;
mod decision_maker;
pub(crate) mod decision_maker;
mod forward_packet_batches_by_accounts;
mod forward_worker;
mod immutable_deserialized_packet;
pub(crate) mod immutable_deserialized_packet;
mod latest_unprocessed_votes;
mod leader_slot_timing_metrics;
pub(crate) mod leader_slot_timing_metrics;
mod multi_iterator_scanner;
mod packet_deserializer;
mod packet_filter;
@@ -340,6 +343,8 @@ impl BankingStage {
bank_forks: Arc<RwLock<BankForks>>,
prioritization_fee_cache: &Arc<PrioritizationFeeCache>,
enable_forwarding: bool,
blacklisted_accounts: HashSet<Pubkey>,
bundle_account_locker: BundleAccountLocker,
) -> Self {
Self::new_num_threads(
block_production_method,
@@ -356,6 +361,8 @@ impl BankingStage {
bank_forks,
prioritization_fee_cache,
enable_forwarding,
blacklisted_accounts,
bundle_account_locker,
)
}

@@ -375,6 +382,8 @@ impl BankingStage {
bank_forks: Arc<RwLock<BankForks>>,
prioritization_fee_cache: &Arc<PrioritizationFeeCache>,
enable_forwarding: bool,
blacklisted_accounts: HashSet<Pubkey>,
bundle_account_locker: BundleAccountLocker,
) -> Self {
match block_production_method {
BlockProductionMethod::ThreadLocalMultiIterator => {
@@ -391,6 +400,8 @@ impl BankingStage {
connection_cache,
bank_forks,
prioritization_fee_cache,
blacklisted_accounts,
bundle_account_locker,
)
}
BlockProductionMethod::CentralScheduler => Self::new_central_scheduler(
@@ -407,6 +418,8 @@ impl BankingStage {
bank_forks,
prioritization_fee_cache,
enable_forwarding,
blacklisted_accounts,
bundle_account_locker,
),
}
}
@@ -425,6 +438,8 @@ impl BankingStage {
connection_cache: Arc<ConnectionCache>,
bank_forks: Arc<RwLock<BankForks>>,
prioritization_fee_cache: &Arc<PrioritizationFeeCache>,
blacklisted_accounts: HashSet<Pubkey>,
bundle_account_locker: BundleAccountLocker,
) -> Self {
assert!(num_threads >= MIN_TOTAL_THREADS);
// Single thread to generate entries from many banks.
@@ -492,6 +507,8 @@ impl BankingStage {
log_messages_bytes_limit,
forwarder,
unprocessed_transaction_storage,
blacklisted_accounts.clone(),
bundle_account_locker.clone(),
)
})
.collect();
@@ -513,6 +530,8 @@ impl BankingStage {
bank_forks: Arc<RwLock<BankForks>>,
prioritization_fee_cache: &Arc<PrioritizationFeeCache>,
enable_forwarding: bool,
blacklisted_accounts: HashSet<Pubkey>,
bundle_account_locker: BundleAccountLocker,
) -> Self {
assert!(num_threads >= MIN_TOTAL_THREADS);
// Single thread to generate entries from many banks.
@@ -560,6 +579,8 @@ impl BankingStage {
latest_unprocessed_votes.clone(),
vote_source,
),
blacklisted_accounts.clone(),
bundle_account_locker.clone(),
));
}

@@ -581,6 +602,8 @@ impl BankingStage {
poh_recorder.read().unwrap().new_recorder(),
QosService::new(id),
log_messages_bytes_limit,
blacklisted_accounts.clone(),
bundle_account_locker.clone(),
),
finished_work_sender.clone(),
poh_recorder.read().unwrap().new_leader_bank_notifier(),
@@ -635,6 +658,7 @@ impl BankingStage {
Self { bank_thread_hdls }
}

#[allow(clippy::too_many_arguments)]
fn spawn_thread_local_multi_iterator_thread(
id: u32,
packet_receiver: BankingPacketReceiver,
@@ -645,13 +669,18 @@ impl BankingStage {
log_messages_bytes_limit: Option<usize>,
mut forwarder: Forwarder,
unprocessed_transaction_storage: UnprocessedTransactionStorage,
blacklisted_accounts: HashSet<Pubkey>,
bundle_account_locker: BundleAccountLocker,
) -> JoinHandle<()> {
let mut packet_receiver = PacketReceiver::new(id, packet_receiver, bank_forks);

let consumer = Consumer::new(
committer,
transaction_recorder,
QosService::new(id),
log_messages_bytes_limit,
blacklisted_accounts.clone(),
bundle_account_locker.clone(),
);

Builder::new()
@@ -812,7 +841,7 @@ mod tests {
crate::banking_trace::{BankingPacketBatch, BankingTracer},
crossbeam_channel::{unbounded, Receiver},
itertools::Itertools,
solana_entry::entry::{self, Entry, EntrySlice},
solana_entry::entry::{self, EntrySlice},
solana_gossip::cluster_info::Node,
solana_ledger::{
blockstore::Blockstore,
@@ -826,6 +855,7 @@ mod tests {
solana_poh::{
poh_recorder::{
create_test_recorder, PohRecorderError, Record, RecordTransactionsSummary,
WorkingBankEntry,
},
poh_service::PohService,
},
@@ -897,6 +927,8 @@ mod tests {
bank_forks,
&Arc::new(PrioritizationFeeCache::new(0u64)),
false,
HashSet::default(),
BundleAccountLocker::default(),
);
drop(non_vote_sender);
drop(tpu_vote_sender);
@@ -953,6 +985,8 @@ mod tests {
bank_forks,
&Arc::new(PrioritizationFeeCache::new(0u64)),
false,
HashSet::default(),
BundleAccountLocker::default(),
);
trace!("sending bank");
drop(non_vote_sender);
@@ -965,7 +999,12 @@ mod tests {
trace!("getting entries");
let entries: Vec<_> = entry_receiver
.iter()
.map(|(_bank, (entry, _tick_height))| entry)
.flat_map(
|WorkingBankEntry {
bank: _,
entries_ticks,
}| entries_ticks.into_iter().map(|(e, _)| e),
)
.collect();
trace!("done");
assert_eq!(entries.len(), genesis_config.ticks_per_slot as usize);
@@ -1033,6 +1072,8 @@ mod tests {
bank_forks.clone(), // keep a local-copy of bank-forks so worker threads do not lose weak access to bank-forks
&Arc::new(PrioritizationFeeCache::new(0u64)),
false,
HashSet::default(),
BundleAccountLocker::default(),
);

// fund another account so we can send 2 good transactions in a single batch.
@@ -1084,9 +1125,14 @@ mod tests {
bank.process_transaction(&fund_tx).unwrap();
//receive entries + ticks
loop {
let entries: Vec<Entry> = entry_receiver
let entries: Vec<_> = entry_receiver
.iter()
.map(|(_bank, (entry, _tick_height))| entry)
.flat_map(
|WorkingBankEntry {
bank: _,
entries_ticks,
}| entries_ticks.into_iter().map(|(e, _)| e),
)
.collect();

assert!(entries.verify(&blockhash, &entry::thread_pool_for_tests()));
@@ -1203,6 +1249,8 @@ mod tests {
Arc::new(ConnectionCache::new("connection_cache_test")),
bank_forks,
&Arc::new(PrioritizationFeeCache::new(0u64)),
HashSet::default(),
BundleAccountLocker::default(),
);

// wait for banking_stage to eat the packets
@@ -1221,7 +1269,12 @@ mod tests {
// check that the balance is what we expect.
let entries: Vec<_> = entry_receiver
.iter()
.map(|(_bank, (entry, _tick_height))| entry)
.flat_map(
|WorkingBankEntry {
bank: _,
entries_ticks,
}| entries_ticks.into_iter().map(|(e, _)| e),
)
.collect();

let (bank, _bank_forks) = Bank::new_no_wallclock_throttle_for_tests(&genesis_config);
@@ -1284,15 +1337,19 @@ mod tests {
system_transaction::transfer(&keypair2, &pubkey2, 1, genesis_config.hash()).into(),
];

let _ = recorder.record_transactions(bank.slot(), txs.clone());
let (_bank, (entry, _tick_height)) = entry_receiver.recv().unwrap();
let _ = recorder.record_transactions(bank.slot(), vec![txs.clone()]);
let WorkingBankEntry {
bank,
entries_ticks,
} = entry_receiver.recv().unwrap();
let entry = &entries_ticks.first().unwrap().0;
assert_eq!(entry.transactions, txs);

// Once bank is set to a new bank (setting bank.slot() + 1 in record_transactions),
// record_transactions should throw MaxHeightReached
let next_slot = bank.slot() + 1;
let RecordTransactionsSummary { result, .. } =
recorder.record_transactions(next_slot, txs);
recorder.record_transactions(next_slot, vec![txs]);
assert_matches!(result, Err(PohRecorderError::MaxHeightReached));
// Should receive nothing from PohRecorder b/c record failed
assert!(entry_receiver.try_recv().is_err());
@@ -1395,6 +1452,8 @@ mod tests {
bank_forks,
&Arc::new(PrioritizationFeeCache::new(0u64)),
false,
HashSet::default(),
BundleAccountLocker::default(),
);

let keypairs = (0..100).map(|_| Keypair::new()).collect_vec();
17 changes: 4 additions & 13 deletions core/src/banking_stage/committer.rs
@@ -12,15 +12,13 @@ use {
transaction_batch::TransactionBatch,
vote_sender_types::ReplayVoteSender,
},
solana_sdk::{hash::Hash, pubkey::Pubkey, saturating_add_assign},
solana_sdk::{hash::Hash, saturating_add_assign},
solana_svm::{
account_loader::TransactionLoadResult,
transaction_results::{TransactionExecutionResult, TransactionResults},
},
solana_transaction_status::{
token_balances::TransactionTokenBalancesSet, TransactionTokenBalance,
},
std::{collections::HashMap, sync::Arc},
solana_transaction_status::{token_balances::TransactionTokenBalancesSet, PreBalanceInfo},
std::sync::Arc,
};

#[derive(Clone, Debug, PartialEq, Eq)]
@@ -32,13 +30,6 @@ pub enum CommitTransactionDetails {
NotCommitted,
}

#[derive(Default)]
pub(super) struct PreBalanceInfo {
pub native: Vec<Vec<u64>>,
pub token: Vec<Vec<TransactionTokenBalance>>,
pub mint_decimals: HashMap<Pubkey, u8>,
}

#[derive(Clone)]
pub struct Committer {
transaction_status_sender: Option<TransactionStatusSender>,
@@ -156,7 +147,7 @@ impl Committer {
let txs = batch.sanitized_transactions().to_vec();
let post_balances = bank.collect_balances(batch);
let post_token_balances =
collect_token_balances(bank, batch, &mut pre_balance_info.mint_decimals);
collect_token_balances(bank, batch, &mut pre_balance_info.mint_decimals, None);
let mut transaction_index = starting_transaction_index.unwrap_or_default();
let batch_transaction_indexes: Vec<_> = tx_results
.execution_results
22 changes: 16 additions & 6 deletions core/src/banking_stage/consume_worker.rs
@@ -697,11 +697,14 @@ impl ConsumeWorkerTransactionErrorMetrics {
mod tests {
use {
super::*,
crate::banking_stage::{
committer::Committer,
qos_service::QosService,
scheduler_messages::{MaxAge, TransactionBatchId, TransactionId},
tests::{create_slow_genesis_config, sanitize_transactions, simulate_poh},
crate::{
banking_stage::{
committer::Committer,
qos_service::QosService,
scheduler_messages::{MaxAge, TransactionBatchId, TransactionId},
tests::{create_slow_genesis_config, sanitize_transactions, simulate_poh},
},
bundle_stage::bundle_account_locker::BundleAccountLocker,
},
crossbeam_channel::unbounded,
solana_ledger::{
@@ -793,7 +796,14 @@ mod tests {
replay_vote_sender,
Arc::new(PrioritizationFeeCache::new(0u64)),
);
let consumer = Consumer::new(committer, recorder, QosService::new(1), None);
let consumer = Consumer::new(
committer,
recorder,
QosService::new(1),
None,
HashSet::default(),
BundleAccountLocker::default(),
);

let (consume_sender, consume_receiver) = unbounded();
let (consumed_sender, consumed_receiver) = unbounded();
195 changes: 149 additions & 46 deletions core/src/banking_stage/consumer.rs

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion core/src/banking_stage/latest_unprocessed_votes.rs
@@ -134,7 +134,7 @@ impl LatestValidatorVotePacket {
}

#[derive(Default, Debug)]
pub(crate) struct VoteBatchInsertionMetrics {
pub struct VoteBatchInsertionMetrics {
pub(crate) num_dropped_gossip: usize,
pub(crate) num_dropped_tpu: usize,
}
48 changes: 34 additions & 14 deletions core/src/banking_stage/qos_service.rs
@@ -6,7 +6,9 @@
use {
super::{committer::CommitTransactionDetails, BatchedTransactionDetails},
solana_cost_model::{
cost_model::CostModel, cost_tracker::UpdatedCosts, transaction_cost::TransactionCost,
cost_model::CostModel,
cost_tracker::{CostTracker, UpdatedCosts},
transaction_cost::TransactionCost,
},
solana_measure::measure::Measure,
solana_runtime::bank::Bank,
@@ -42,6 +44,7 @@ impl QosService {
pub fn select_and_accumulate_transaction_costs(
&self,
bank: &Bank,
cost_tracker: &mut CostTracker, // caller should pass in &mut bank.write_cost_tracker().unwrap()
transactions: &[SanitizedTransaction],
pre_results: impl Iterator<Item = transaction::Result<()>>,
) -> (Vec<transaction::Result<TransactionCost>>, usize) {
@@ -50,7 +53,8 @@ impl QosService {
let (transactions_qos_cost_results, num_included) = self.select_transactions_per_cost(
transactions.iter(),
transaction_costs.into_iter(),
bank,
bank.slot(),
cost_tracker,
);
self.accumulate_estimated_transaction_costs(&Self::accumulate_batched_transaction_costs(
transactions_qos_cost_results.iter(),
@@ -96,24 +100,24 @@ impl QosService {
&self,
transactions: impl Iterator<Item = &'a SanitizedTransaction>,
transactions_costs: impl Iterator<Item = transaction::Result<TransactionCost>>,
bank: &Bank,
slot: Slot,
cost_tracker: &mut CostTracker,
) -> (Vec<transaction::Result<TransactionCost>>, usize) {
let mut cost_tracking_time = Measure::start("cost_tracking_time");
let mut cost_tracker = bank.write_cost_tracker().unwrap();
let mut num_included = 0;
let select_results = transactions.zip(transactions_costs)
.map(|(tx, cost)| {
match cost {
Ok(cost) => {
match cost_tracker.try_add(&cost) {
Ok(UpdatedCosts{updated_block_cost, updated_costliest_account_cost}) => {
debug!("slot {:?}, transaction {:?}, cost {:?}, fit into current block, current block cost {}, updated costliest account cost {}", bank.slot(), tx, cost, updated_block_cost, updated_costliest_account_cost);
debug!("slot {:?}, transaction {:?}, cost {:?}, fit into current block, current block cost {}, updated costliest account cost {}", slot, tx, cost, updated_block_cost, updated_costliest_account_cost);
self.metrics.stats.selected_txs_count.fetch_add(1, Ordering::Relaxed);
num_included += 1;
Ok(cost)
},
Err(e) => {
debug!("slot {:?}, transaction {:?}, cost {:?}, not fit into current block, '{:?}'", bank.slot(), tx, cost, e);
debug!("slot {:?}, transaction {:?}, cost {:?}, not fit into current block, '{:?}'", slot, tx, cost, e);
Err(TransactionError::from(e))
}
}
@@ -685,8 +689,12 @@ mod tests {
bank.write_cost_tracker()
.unwrap()
.set_limits(cost_limit, cost_limit, cost_limit);
let (results, num_selected) =
qos_service.select_transactions_per_cost(txs.iter(), txs_costs.into_iter(), &bank);
let (results, num_selected) = qos_service.select_transactions_per_cost(
txs.iter(),
txs_costs.into_iter(),
bank.slot(),
&mut bank.write_cost_tracker().unwrap(),
);
assert_eq!(num_selected, 2);

// verify that first transfer tx and first vote are allowed
@@ -739,8 +747,12 @@ mod tests {
.iter()
.map(|cost| cost.as_ref().unwrap().sum())
.sum();
let (qos_cost_results, _num_included) =
qos_service.select_transactions_per_cost(txs.iter(), txs_costs.into_iter(), &bank);
let (qos_cost_results, _num_included) = qos_service.select_transactions_per_cost(
txs.iter(),
txs_costs.into_iter(),
bank.slot(),
&mut bank.write_cost_tracker().unwrap(),
);
assert_eq!(
total_txs_cost,
bank.read_cost_tracker().unwrap().block_cost()
@@ -804,8 +816,12 @@ mod tests {
.iter()
.map(|cost| cost.as_ref().unwrap().sum())
.sum();
let (qos_cost_results, _num_included) =
qos_service.select_transactions_per_cost(txs.iter(), txs_costs.into_iter(), &bank);
let (qos_cost_results, _num_included) = qos_service.select_transactions_per_cost(
txs.iter(),
txs_costs.into_iter(),
bank.slot(),
&mut bank.write_cost_tracker().unwrap(),
);
assert_eq!(
total_txs_cost,
bank.read_cost_tracker().unwrap().block_cost()
@@ -859,8 +875,12 @@ mod tests {
.iter()
.map(|cost| cost.as_ref().unwrap().sum())
.sum();
let (qos_cost_results, _num_included) =
qos_service.select_transactions_per_cost(txs.iter(), txs_costs.into_iter(), &bank);
let (qos_cost_results, _num_included) = qos_service.select_transactions_per_cost(
txs.iter(),
txs_costs.into_iter(),
bank.slot(),
&mut bank.write_cost_tracker().unwrap(),
);
assert_eq!(
total_txs_cost,
bank.read_cost_tracker().unwrap().block_cost()
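The qos_service change above moves lock acquisition out of `select_transactions_per_cost`: instead of taking a `&Bank` and calling `bank.write_cost_tracker()` internally, the function now receives the already-held `&mut CostTracker`, so one guard can be shared across calls. A simplified sketch of that pattern (the `CostTracker` here is a stand-in, not the real `solana_cost_model` API):

```rust
use std::sync::{Mutex, MutexGuard};

// Simplified stand-in for the bank's cost tracker.
struct CostTracker {
    block_cost: u64,
    limit: u64,
}

impl CostTracker {
    // Try to add a transaction's cost to the running block cost.
    fn try_add(&mut self, cost: u64) -> Result<u64, ()> {
        let new_cost = self.block_cost.saturating_add(cost);
        if new_cost > self.limit {
            return Err(());
        }
        self.block_cost = new_cost;
        Ok(new_cost)
    }
}

// The caller holds the lock once and passes the tracker in, rather than
// having the selection function re-acquire the bank's lock internally.
fn select_per_cost(costs: &[u64], tracker: &mut CostTracker) -> usize {
    costs.iter().filter(|&&c| tracker.try_add(c).is_ok()).count()
}

fn demo() -> usize {
    let bank_tracker = Mutex::new(CostTracker { block_cost: 0, limit: 10 });
    let mut guard: MutexGuard<CostTracker> = bank_tracker.lock().unwrap();
    select_per_cost(&[4, 4, 4], &mut guard) // third tx would exceed the limit
}
```

Passing the guard in also lets BundleStage pre-reserve cost under the same lock before BankingStage selection runs, which is why the tests above switch to `&mut bank.write_cost_tracker().unwrap()`.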
470 changes: 464 additions & 6 deletions core/src/banking_stage/unprocessed_transaction_storage.rs

Large diffs are not rendered by default.

1 change: 1 addition & 0 deletions core/src/banking_trace.rs
@@ -321,6 +321,7 @@ impl BankingTracer {
}
}

#[derive(Clone)]
pub struct TracedSender {
label: ChannelLabel,
sender: Sender<BankingPacketBatch>,
434 changes: 434 additions & 0 deletions core/src/bundle_stage.rs

Large diffs are not rendered by default.

326 changes: 326 additions & 0 deletions core/src/bundle_stage/bundle_account_locker.rs
@@ -0,0 +1,326 @@
//! Handles pre-locking bundle accounts so that the accounts a bundle touches can be reserved
//! ahead of time for execution. Also ensures that ALL accounts mentioned across a bundle are
//! locked, to avoid race conditions between BundleStage and BankingStage.
//!
//! For instance, imagine a bundle with three transactions whose account sets are
//! {{A, B}, {B, C}, {C, D}}. We need to lock A, B, and C even though only one transaction is
//! executed at a time. Imagine BundleStage is in the middle of processing {C, D} while holding
//! no locks on accounts {A, B, C}. BankingStage could then process a transaction containing A
//! or B and commit the results before the bundle completes. By the time the bundle commits the
//! new account state for {A, B, C}, A and B would be stale, the entries containing the bundle
//! would replay incorrectly, and that leader would have produced an invalid block.
use {
solana_runtime::bank::Bank,
solana_sdk::{bundle::SanitizedBundle, pubkey::Pubkey, transaction::TransactionAccountLocks},
std::{
collections::{hash_map::Entry, HashMap, HashSet},
sync::{Arc, Mutex, MutexGuard},
},
thiserror::Error,
};

#[derive(Clone, Error, Debug)]
pub enum BundleAccountLockerError {
#[error("locking error")]
LockingError,
}

pub type BundleAccountLockerResult<T> = Result<T, BundleAccountLockerError>;

pub struct LockedBundle<'a, 'b> {
bundle_account_locker: &'a BundleAccountLocker,
sanitized_bundle: &'b SanitizedBundle,
bank: Arc<Bank>,
}

impl<'a, 'b> LockedBundle<'a, 'b> {
pub fn new(
bundle_account_locker: &'a BundleAccountLocker,
sanitized_bundle: &'b SanitizedBundle,
bank: &Arc<Bank>,
) -> Self {
Self {
bundle_account_locker,
sanitized_bundle,
bank: bank.clone(),
}
}

pub fn sanitized_bundle(&self) -> &SanitizedBundle {
self.sanitized_bundle
}
}

// Automatically unlock bundle accounts when the LockedBundle is dropped
impl<'a, 'b> Drop for LockedBundle<'a, 'b> {
fn drop(&mut self) {
let _ = self
.bundle_account_locker
.unlock_bundle_accounts(self.sanitized_bundle, &self.bank);
}
}

#[derive(Default, Clone)]
pub struct BundleAccountLocks {
read_locks: HashMap<Pubkey, u64>,
write_locks: HashMap<Pubkey, u64>,
}

impl BundleAccountLocks {
pub fn read_locks(&self) -> HashSet<Pubkey> {
self.read_locks.keys().cloned().collect()
}

pub fn write_locks(&self) -> HashSet<Pubkey> {
self.write_locks.keys().cloned().collect()
}

pub fn lock_accounts(
&mut self,
read_locks: HashMap<Pubkey, u64>,
write_locks: HashMap<Pubkey, u64>,
) {
for (acc, count) in read_locks {
*self.read_locks.entry(acc).or_insert(0) += count;
}
for (acc, count) in write_locks {
*self.write_locks.entry(acc).or_insert(0) += count;
}
}

pub fn unlock_accounts(
&mut self,
read_locks: HashMap<Pubkey, u64>,
write_locks: HashMap<Pubkey, u64>,
) {
for (acc, count) in read_locks {
if let Entry::Occupied(mut entry) = self.read_locks.entry(acc) {
let val = entry.get_mut();
*val = val.saturating_sub(count);
if entry.get() == &0 {
let _ = entry.remove();
}
} else {
warn!("error unlocking read-locked account, account: {:?}", acc);
}
}
for (acc, count) in write_locks {
if let Entry::Occupied(mut entry) = self.write_locks.entry(acc) {
let val = entry.get_mut();
*val = val.saturating_sub(count);
if entry.get() == &0 {
let _ = entry.remove();
}
} else {
warn!("error unlocking write-locked account, account: {:?}", acc);
}
}
}
}

#[derive(Clone, Default)]
pub struct BundleAccountLocker {
account_locks: Arc<Mutex<BundleAccountLocks>>,
}

impl BundleAccountLocker {
/// used in BankingStage during TransactionBatch construction to ensure that BankingStage
/// doesn't lock anything currently locked in the BundleAccountLocker
pub fn read_locks(&self) -> HashSet<Pubkey> {
self.account_locks.lock().unwrap().read_locks()
}

/// used in BankingStage during TransactionBatch construction to ensure that BankingStage
/// doesn't lock anything currently locked in the BundleAccountLocker
pub fn write_locks(&self) -> HashSet<Pubkey> {
self.account_locks.lock().unwrap().write_locks()
}

/// used in BankingStage during TransactionBatch construction to ensure that BankingStage
/// doesn't lock anything currently locked in the BundleAccountLocker
pub fn account_locks(&self) -> MutexGuard<BundleAccountLocks> {
self.account_locks.lock().unwrap()
}

/// Prepares a locked bundle and returns a LockedBundle containing locked accounts.
/// When a LockedBundle is dropped, the accounts are automatically unlocked
pub fn prepare_locked_bundle<'a, 'b>(
&'a self,
sanitized_bundle: &'b SanitizedBundle,
bank: &Arc<Bank>,
) -> BundleAccountLockerResult<LockedBundle<'a, 'b>> {
let (read_locks, write_locks) = Self::get_read_write_locks(sanitized_bundle, bank)?;

self.account_locks
.lock()
.unwrap()
.lock_accounts(read_locks, write_locks);
Ok(LockedBundle::new(self, sanitized_bundle, bank))
}

/// Unlocks bundle accounts. Note that LockedBundle::drop will auto-drop the bundle account locks
fn unlock_bundle_accounts(
&self,
sanitized_bundle: &SanitizedBundle,
bank: &Bank,
) -> BundleAccountLockerResult<()> {
let (read_locks, write_locks) = Self::get_read_write_locks(sanitized_bundle, bank)?;

self.account_locks
.lock()
.unwrap()
.unlock_accounts(read_locks, write_locks);
Ok(())
}

/// Returns the read and write locks for this bundle
/// Each lock type contains a HashMap which maps Pubkey to number of locks held
fn get_read_write_locks(
bundle: &SanitizedBundle,
bank: &Bank,
) -> BundleAccountLockerResult<(HashMap<Pubkey, u64>, HashMap<Pubkey, u64>)> {
let transaction_locks: Vec<TransactionAccountLocks> = bundle
.transactions
.iter()
.filter_map(|tx| {
tx.get_account_locks(bank.get_transaction_account_lock_limit())
.ok()
})
.collect();

if transaction_locks.len() != bundle.transactions.len() {
return Err(BundleAccountLockerError::LockingError);
}

let bundle_read_locks = transaction_locks
.iter()
.flat_map(|tx| tx.readonly.iter().map(|a| **a));
let bundle_read_locks =
bundle_read_locks
.into_iter()
.fold(HashMap::new(), |mut map, acc| {
*map.entry(acc).or_insert(0) += 1;
map
});

let bundle_write_locks = transaction_locks
.iter()
.flat_map(|tx| tx.writable.iter().map(|a| **a));
let bundle_write_locks =
bundle_write_locks
.into_iter()
.fold(HashMap::new(), |mut map, acc| {
*map.entry(acc).or_insert(0) += 1;
map
});

Ok((bundle_read_locks, bundle_write_locks))
}
}

#[cfg(test)]
mod tests {
use {
crate::{
bundle_stage::bundle_account_locker::BundleAccountLocker,
immutable_deserialized_bundle::ImmutableDeserializedBundle,
packet_bundle::PacketBundle,
},
solana_ledger::genesis_utils::create_genesis_config,
solana_perf::packet::PacketBatch,
solana_runtime::{bank::Bank, genesis_utils::GenesisConfigInfo},
solana_sdk::{
packet::Packet, signature::Signer, signer::keypair::Keypair, system_program,
system_transaction::transfer, transaction::VersionedTransaction,
},
solana_svm::transaction_error_metrics::TransactionErrorMetrics,
std::collections::HashSet,
};

#[test]
fn test_simple_lock_bundles() {
let GenesisConfigInfo {
genesis_config,
mint_keypair,
..
} = create_genesis_config(2);
let (bank, _) = Bank::new_no_wallclock_throttle_for_tests(&genesis_config);

let bundle_account_locker = BundleAccountLocker::default();

let kp0 = Keypair::new();
let kp1 = Keypair::new();

let tx0 = VersionedTransaction::from(transfer(
&mint_keypair,
&kp0.pubkey(),
1,
genesis_config.hash(),
));
let tx1 = VersionedTransaction::from(transfer(
&mint_keypair,
&kp1.pubkey(),
1,
genesis_config.hash(),
));

let mut packet_bundle0 = PacketBundle {
batch: PacketBatch::new(vec![Packet::from_data(None, &tx0).unwrap()]),
bundle_id: tx0.signatures[0].to_string(),
};
let mut packet_bundle1 = PacketBundle {
batch: PacketBatch::new(vec![Packet::from_data(None, &tx1).unwrap()]),
bundle_id: tx1.signatures[0].to_string(),
};

let mut transaction_errors = TransactionErrorMetrics::default();

let sanitized_bundle0 = ImmutableDeserializedBundle::new(&mut packet_bundle0, None)
.unwrap()
.build_sanitized_bundle(&bank, &HashSet::default(), &mut transaction_errors)
.expect("sanitize bundle 0");
let sanitized_bundle1 = ImmutableDeserializedBundle::new(&mut packet_bundle1, None)
.unwrap()
.build_sanitized_bundle(&bank, &HashSet::default(), &mut transaction_errors)
.expect("sanitize bundle 1");

let locked_bundle0 = bundle_account_locker
.prepare_locked_bundle(&sanitized_bundle0, &bank)
.unwrap();

assert_eq!(
bundle_account_locker.write_locks(),
HashSet::from_iter([mint_keypair.pubkey(), kp0.pubkey()])
);
assert_eq!(
bundle_account_locker.read_locks(),
HashSet::from_iter([system_program::id()])
);

let locked_bundle1 = bundle_account_locker
.prepare_locked_bundle(&sanitized_bundle1, &bank)
.unwrap();
assert_eq!(
bundle_account_locker.write_locks(),
HashSet::from_iter([mint_keypair.pubkey(), kp0.pubkey(), kp1.pubkey()])
);
assert_eq!(
bundle_account_locker.read_locks(),
HashSet::from_iter([system_program::id()])
);

drop(locked_bundle0);
assert_eq!(
bundle_account_locker.write_locks(),
HashSet::from_iter([mint_keypair.pubkey(), kp1.pubkey()])
);
assert_eq!(
bundle_account_locker.read_locks(),
HashSet::from_iter([system_program::id()])
);

drop(locked_bundle1);
assert!(bundle_account_locker.write_locks().is_empty());
assert!(bundle_account_locker.read_locks().is_empty());
}
}
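The `lock_accounts`/`unlock_accounts` bookkeeping above is a reference-counted multiset: an account stays held while its count is non-zero and is removed once the count returns to zero, which is what lets two overlapping bundles hold the same account concurrently. A dependency-free sketch of that pattern, using string keys in place of `Pubkey` (illustrative only):

```rust
use std::collections::{HashMap, HashSet};

#[derive(Default)]
struct RefCountedLocks {
    counts: HashMap<String, u64>,
}

impl RefCountedLocks {
    // Increment the hold count for each account.
    fn lock(&mut self, accounts: &[&str]) {
        for acc in accounts {
            *self.counts.entry(acc.to_string()).or_insert(0) += 1;
        }
    }

    // Decrement counts, removing entries that drop to zero.
    fn unlock(&mut self, accounts: &[&str]) {
        for acc in accounts {
            if let Some(count) = self.counts.get_mut(*acc) {
                *count = count.saturating_sub(1);
                if *count == 0 {
                    self.counts.remove(*acc);
                }
            }
        }
    }

    // Accounts currently held by at least one bundle.
    fn held(&self) -> HashSet<String> {
        self.counts.keys().cloned().collect()
    }
}
```

This mirrors the shape of `test_simple_lock_bundles` above: the mint account is write-locked by both bundles, so dropping the first locked bundle leaves it held until the second is dropped too.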
1,584 changes: 1,584 additions & 0 deletions core/src/bundle_stage/bundle_consumer.rs

Large diffs are not rendered by default.

271 changes: 271 additions & 0 deletions core/src/bundle_stage/bundle_packet_deserializer.rs
@@ -0,0 +1,271 @@
//! Deserializes PacketBundles
use {
crate::{
immutable_deserialized_bundle::{DeserializedBundleError, ImmutableDeserializedBundle},
packet_bundle::PacketBundle,
},
crossbeam_channel::{Receiver, RecvTimeoutError},
solana_runtime::bank_forks::BankForks,
solana_sdk::saturating_add_assign,
std::{
sync::{Arc, RwLock},
time::{Duration, Instant},
},
};

/// Results from deserializing packet bundles.
#[derive(Debug)]
pub struct ReceiveBundleResults {
/// Deserialized bundles from all received bundle packets
pub deserialized_bundles: Vec<ImmutableDeserializedBundle>,
/// Number of dropped bundles
pub num_dropped_bundles: usize,
}

pub struct BundlePacketDeserializer {
/// Receiver for bundle packets
bundle_packet_receiver: Receiver<Vec<PacketBundle>>,
/// Provides working bank for deserializer to check feature activation
bank_forks: Arc<RwLock<BankForks>>,
/// Max packets per bundle
max_packets_per_bundle: Option<usize>,
}

impl BundlePacketDeserializer {
pub fn new(
bundle_packet_receiver: Receiver<Vec<PacketBundle>>,
bank_forks: Arc<RwLock<BankForks>>,
max_packets_per_bundle: Option<usize>,
) -> Self {
Self {
bundle_packet_receiver,
bank_forks,
max_packets_per_bundle,
}
}

/// Handles receiving bundles and deserializing them
pub fn receive_bundles(
&self,
recv_timeout: Duration,
capacity: usize,
) -> Result<ReceiveBundleResults, RecvTimeoutError> {
let (bundle_count, _packet_count, mut bundles) =
self.receive_until(recv_timeout, capacity)?;

// Note: this can be removed after feature `round_compute_unit_price` is activated in
// mainnet-beta
let _working_bank = self.bank_forks.read().unwrap().working_bank();
let round_compute_unit_price_enabled = false; // TODO get from working_bank.feature_set

Ok(Self::deserialize_and_collect_bundles(
bundle_count,
&mut bundles,
round_compute_unit_price_enabled,
self.max_packets_per_bundle,
))
}

/// Deserializes packet bundles and collects them into ReceiveBundleResults,
/// counting any bundle that fails to deserialize as dropped
fn deserialize_and_collect_bundles(
bundle_count: usize,
bundles: &mut [PacketBundle],
round_compute_unit_price_enabled: bool,
max_packets_per_bundle: Option<usize>,
) -> ReceiveBundleResults {
let mut deserialized_bundles = Vec::with_capacity(bundle_count);
let mut num_dropped_bundles: usize = 0;

for bundle in bundles.iter_mut() {
match Self::deserialize_bundle(
bundle,
round_compute_unit_price_enabled,
max_packets_per_bundle,
) {
Ok(deserialized_bundle) => {
deserialized_bundles.push(deserialized_bundle);
}
Err(_) => {
saturating_add_assign!(num_dropped_bundles, 1);
}
}
}

ReceiveBundleResults {
deserialized_bundles,
num_dropped_bundles,
}
}

/// Receives bundle packets
fn receive_until(
&self,
recv_timeout: Duration,
bundle_count_upperbound: usize,
) -> Result<(usize, usize, Vec<PacketBundle>), RecvTimeoutError> {
let start = Instant::now();

let mut bundles = self.bundle_packet_receiver.recv_timeout(recv_timeout)?;
let mut num_packets_received: usize = bundles.iter().map(|pb| pb.batch.len()).sum();
let mut num_bundles_received: usize = bundles.len();

if num_bundles_received <= bundle_count_upperbound {
while let Ok(bundle_packets) = self.bundle_packet_receiver.try_recv() {
trace!("got more packet batches in bundle packet deserializer");

saturating_add_assign!(
num_packets_received,
bundle_packets
.iter()
.map(|pb| pb.batch.len())
.sum::<usize>()
);
saturating_add_assign!(num_bundles_received, bundle_packets.len());

bundles.extend(bundle_packets);

if start.elapsed() >= recv_timeout
|| num_bundles_received >= bundle_count_upperbound
{
break;
}
}
}

Ok((num_bundles_received, num_packets_received, bundles))
}

/// Deserializes the PacketBundle into an ImmutableDeserializedBundle, returning an error if any
/// packet in the bundle fails to deserialize
pub fn deserialize_bundle(
bundle: &mut PacketBundle,
round_compute_unit_price_enabled: bool,
max_packets_per_bundle: Option<usize>,
) -> Result<ImmutableDeserializedBundle, DeserializedBundleError> {
bundle.batch.iter_mut().for_each(|p| {
p.meta_mut()
.set_round_compute_unit_price(round_compute_unit_price_enabled);
});

ImmutableDeserializedBundle::new(bundle, max_packets_per_bundle)
}
}

#[cfg(test)]
mod tests {
use {
super::*,
crossbeam_channel::unbounded,
solana_ledger::genesis_utils::create_genesis_config,
solana_perf::packet::PacketBatch,
solana_runtime::{bank::Bank, genesis_utils::GenesisConfigInfo},
solana_sdk::{packet::Packet, signature::Signer, system_transaction::transfer},
};

#[test]
fn test_deserialize_and_collect_bundles_empty() {
let results =
BundlePacketDeserializer::deserialize_and_collect_bundles(0, &mut [], false, Some(5));
assert_eq!(results.deserialized_bundles.len(), 0);
assert_eq!(results.num_dropped_bundles, 0);
}

#[test]
fn test_receive_bundles_capacity() {
solana_logger::setup();

let GenesisConfigInfo {
genesis_config,
mint_keypair,
..
} = create_genesis_config(10_000);
let (_, bank_forks) = Bank::new_no_wallclock_throttle_for_tests(&genesis_config);

let (sender, receiver) = unbounded();

let deserializer = BundlePacketDeserializer::new(receiver, bank_forks, Some(10));

let packet_bundles: Vec<_> = (0..10)
.map(|_| PacketBundle {
batch: PacketBatch::new(vec![Packet::from_data(
None,
transfer(
&mint_keypair,
&mint_keypair.pubkey(),
100,
genesis_config.hash(),
),
)
.unwrap()]),
bundle_id: String::default(),
})
.collect();

sender.send(packet_bundles.clone()).unwrap();

let bundles = deserializer
.receive_bundles(Duration::from_millis(100), 5)
.unwrap();
// all 10 bundles arrive as a single channel message, so the returned count exceeds the capacity of 5
assert_eq!(bundles.deserialized_bundles.len(), 10);
assert_eq!(bundles.num_dropped_bundles, 0);

// make sure empty
assert_matches!(
deserializer.receive_bundles(Duration::from_millis(100), 5),
Err(RecvTimeoutError::Timeout)
);

// send two 10-bundle batches. capacity is 5, but each receive returns 10 since batches are drained whole
sender.send(packet_bundles.clone()).unwrap();
sender.send(packet_bundles).unwrap();
let bundles = deserializer
.receive_bundles(Duration::from_millis(100), 5)
.unwrap();
assert_eq!(bundles.deserialized_bundles.len(), 10);
assert_eq!(bundles.num_dropped_bundles, 0);

let bundles = deserializer
.receive_bundles(Duration::from_millis(100), 5)
.unwrap();
assert_eq!(bundles.deserialized_bundles.len(), 10);
assert_eq!(bundles.num_dropped_bundles, 0);

assert_matches!(
deserializer.receive_bundles(Duration::from_millis(100), 5),
Err(RecvTimeoutError::Timeout)
);
}

#[test]
fn test_receive_bundles_bad_bundles() {
solana_logger::setup();

let GenesisConfigInfo {
genesis_config,
mint_keypair: _,
..
} = create_genesis_config(10_000);
let (_, bank_forks) = Bank::new_no_wallclock_throttle_for_tests(&genesis_config);

let (sender, receiver) = unbounded();

let deserializer = BundlePacketDeserializer::new(receiver, bank_forks, Some(10));

let packet_bundles: Vec<_> = (0..10)
.map(|_| PacketBundle {
batch: PacketBatch::new(vec![]),
bundle_id: String::default(),
})
.collect();
sender.send(packet_bundles).unwrap();

let bundles = deserializer
.receive_bundles(Duration::from_millis(100), 5)
.unwrap();
// all 10 empty bundles arrive as a single channel message; each fails deserialization
assert_eq!(bundles.deserialized_bundles.len(), 0);
assert_eq!(bundles.num_dropped_bundles, 10);
}
}
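The receive loop above blocks on the first `recv_timeout`, then drains the channel until either the timeout elapses or the bundle count reaches the upper bound — which is why a single oversized batch can exceed the capacity argument, as the tests note. A minimal sketch of that pattern, using `std::sync::mpsc` and `Vec<u32>` as stand-ins for the crossbeam channel and `PacketBundle` batches in the real code:

```rust
use std::{
    sync::mpsc,
    time::{Duration, Instant},
};

/// Drain `rx` until `timeout` elapses or at least `upper_bound` items arrive.
/// Mirrors BundlePacketDeserializer::receive_until: the bound is checked only
/// after a whole batch is taken, so the result can overshoot it.
fn receive_until(
    rx: &mpsc::Receiver<Vec<u32>>,
    timeout: Duration,
    upper_bound: usize,
) -> Result<Vec<u32>, mpsc::RecvTimeoutError> {
    let start = Instant::now();
    // The first receive blocks for up to the full timeout.
    let mut items = rx.recv_timeout(timeout)?;
    if items.len() <= upper_bound {
        // Opportunistically drain whatever else is already queued.
        while let Ok(more) = rx.try_recv() {
            items.extend(more);
            if start.elapsed() >= timeout || items.len() >= upper_bound {
                break;
            }
        }
    }
    Ok(items)
}

fn main() {
    let (tx, rx) = mpsc::channel();
    tx.send(vec![1, 2, 3]).unwrap();
    tx.send(vec![4, 5]).unwrap();
    // Both batches drain: 3 <= 4 lets the try_recv loop run, and the second
    // batch pushes the total past the bound before the loop exits.
    let got = receive_until(&rx, Duration::from_millis(100), 4).unwrap();
    assert_eq!(got.len(), 5);
}
```

The overshoot is a deliberate trade-off: batches are moved whole rather than split, so the caller treats the capacity as a soft target.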
825 changes: 825 additions & 0 deletions core/src/bundle_stage/bundle_packet_receiver.rs

Large diffs are not rendered by default.

237 changes: 237 additions & 0 deletions core/src/bundle_stage/bundle_reserved_space_manager.rs
@@ -0,0 +1,237 @@
use {solana_runtime::bank::Bank, solana_sdk::clock::Slot, std::sync::Arc};

/// Manager responsible for reserving `bundle_reserved_cost` during the first `reserved_ticks` of a bank
/// and resetting the block cost limit to `block_cost_limit` after the reserved tick period is over
pub struct BundleReservedSpaceManager {
// the bank's cost limit
block_cost_limit: u64,
// bundles get this much reserved space for the first reserved_ticks
bundle_reserved_cost: u64,
// the reduced block_cost_limit applies for this many ticks; afterwards the full limit is restored
reserved_ticks: u64,
last_slot_updated: Slot,
}

impl BundleReservedSpaceManager {
pub fn new(block_cost_limit: u64, bundle_reserved_cost: u64, reserved_ticks: u64) -> Self {
Self {
block_cost_limit,
bundle_reserved_cost,
reserved_ticks,
last_slot_updated: u64::MAX,
}
}

/// Call this on creation of a new bank and periodically during bundle processing
/// to manage the block_cost_limits
pub fn tick(&mut self, bank: &Arc<Bank>) {
if self.last_slot_updated == bank.slot() && !self.is_in_reserved_tick_period(bank) {
// new slot logic already ran, need to revert the block cost limit to original if
// ticks are past the reserved tick mark
debug!(
"slot: {} ticks: {}, resetting block_cost_limit to {}",
bank.slot(),
bank.tick_height(),
self.block_cost_limit
);
bank.write_cost_tracker()
.unwrap()
.set_block_cost_limit(self.block_cost_limit);
} else if self.last_slot_updated != bank.slot() && self.is_in_reserved_tick_period(bank) {
// new slot, if in the first max_tick - tick_height slots reserve space
// otherwise can leave the current block limit as is
let new_block_cost_limit = self.reduced_block_cost_limit();
debug!(
"slot: {} ticks: {}, reserving block_cost_limit with block_cost_limit of {}",
bank.slot(),
bank.tick_height(),
new_block_cost_limit
);
bank.write_cost_tracker()
.unwrap()
.set_block_cost_limit(new_block_cost_limit);
self.last_slot_updated = bank.slot();
}
}

/// return true if the bank is still in the period where block_cost_limits is reduced
pub fn is_in_reserved_tick_period(&self, bank: &Bank) -> bool {
bank.tick_height() % bank.ticks_per_slot() < self.reserved_ticks
}

/// return the block_cost_limits as determined by the tick height of the bank
pub fn expected_block_cost_limits(&self, bank: &Bank) -> u64 {
if self.is_in_reserved_tick_period(bank) {
self.reduced_block_cost_limit()
} else {
self.block_cost_limit()
}
}

pub fn reduced_block_cost_limit(&self) -> u64 {
self.block_cost_limit
.saturating_sub(self.bundle_reserved_cost)
}

pub fn block_cost_limit(&self) -> u64 {
self.block_cost_limit
}
}

#[cfg(test)]
mod tests {
use {
crate::bundle_stage::bundle_reserved_space_manager::BundleReservedSpaceManager,
solana_ledger::genesis_utils::create_genesis_config, solana_runtime::bank::Bank,
solana_sdk::pubkey::Pubkey, std::sync::Arc,
};

#[test]
fn test_reserve_block_cost_limits_during_reserved_ticks() {
const BUNDLE_BLOCK_COST_LIMITS_RESERVATION: u64 = 100;

let genesis_config_info = create_genesis_config(100);
let bank = Arc::new(Bank::new_for_tests(&genesis_config_info.genesis_config));

let block_cost_limits = bank.read_cost_tracker().unwrap().block_cost_limit();

let mut reserved_space = BundleReservedSpaceManager::new(
block_cost_limits,
BUNDLE_BLOCK_COST_LIMITS_RESERVATION,
5,
);
reserved_space.tick(&bank);

assert_eq!(
bank.read_cost_tracker().unwrap().block_cost_limit(),
block_cost_limits - BUNDLE_BLOCK_COST_LIMITS_RESERVATION
);
}

#[test]
fn test_dont_reserve_block_cost_limits_after_reserved_ticks() {
const BUNDLE_BLOCK_COST_LIMITS_RESERVATION: u64 = 100;

let genesis_config_info = create_genesis_config(100);
let bank = Arc::new(Bank::new_for_tests(&genesis_config_info.genesis_config));

let block_cost_limits = bank.read_cost_tracker().unwrap().block_cost_limit();

for _ in 0..5 {
bank.register_default_tick_for_test();
}

let mut reserved_space = BundleReservedSpaceManager::new(
block_cost_limits,
BUNDLE_BLOCK_COST_LIMITS_RESERVATION,
5,
);
reserved_space.tick(&bank);

assert_eq!(
bank.read_cost_tracker().unwrap().block_cost_limit(),
block_cost_limits
);
}

#[test]
fn test_dont_reset_block_cost_limits_during_reserved_ticks() {
const BUNDLE_BLOCK_COST_LIMITS_RESERVATION: u64 = 100;

let genesis_config_info = create_genesis_config(100);
let bank = Arc::new(Bank::new_for_tests(&genesis_config_info.genesis_config));

let block_cost_limits = bank.read_cost_tracker().unwrap().block_cost_limit();

let mut reserved_space = BundleReservedSpaceManager::new(
block_cost_limits,
BUNDLE_BLOCK_COST_LIMITS_RESERVATION,
5,
);

reserved_space.tick(&bank);
bank.register_default_tick_for_test();
reserved_space.tick(&bank);

assert_eq!(
bank.read_cost_tracker().unwrap().block_cost_limit(),
block_cost_limits - BUNDLE_BLOCK_COST_LIMITS_RESERVATION
);
}

#[test]
fn test_reset_block_cost_limits_after_reserved_ticks() {
const BUNDLE_BLOCK_COST_LIMITS_RESERVATION: u64 = 100;

let genesis_config_info = create_genesis_config(100);
let bank = Arc::new(Bank::new_for_tests(&genesis_config_info.genesis_config));

let block_cost_limits = bank.read_cost_tracker().unwrap().block_cost_limit();

let mut reserved_space = BundleReservedSpaceManager::new(
block_cost_limits,
BUNDLE_BLOCK_COST_LIMITS_RESERVATION,
5,
);

reserved_space.tick(&bank);

for _ in 0..5 {
bank.register_default_tick_for_test();
}
reserved_space.tick(&bank);

assert_eq!(
bank.read_cost_tracker().unwrap().block_cost_limit(),
block_cost_limits
);
}

#[test]
fn test_block_limits_after_first_slot() {
const BUNDLE_BLOCK_COST_LIMITS_RESERVATION: u64 = 100;
const RESERVED_TICKS: u64 = 5;
let genesis_config_info = create_genesis_config(100);
let bank = Arc::new(Bank::new_for_tests(&genesis_config_info.genesis_config));

for _ in 0..genesis_config_info.genesis_config.ticks_per_slot {
bank.register_default_tick_for_test();
}
assert!(bank.is_complete());
bank.freeze();
assert_eq!(
bank.read_cost_tracker().unwrap().block_cost_limit(),
solana_cost_model::block_cost_limits::MAX_BLOCK_UNITS,
);

let bank1 = Arc::new(Bank::new_from_parent(bank.clone(), &Pubkey::default(), 1));
assert_eq!(bank1.slot(), 1);
assert_eq!(bank1.tick_height(), 64);
assert_eq!(bank1.max_tick_height(), 128);

// reserve space
let block_cost_limits = bank1.read_cost_tracker().unwrap().block_cost_limit();
let mut reserved_space = BundleReservedSpaceManager::new(
block_cost_limits,
BUNDLE_BLOCK_COST_LIMITS_RESERVATION,
RESERVED_TICKS,
);
reserved_space.tick(&bank1);

// wait for reservation to be over
(0..RESERVED_TICKS).for_each(|_| {
bank1.register_default_tick_for_test();
assert_eq!(
bank1.read_cost_tracker().unwrap().block_cost_limit(),
block_cost_limits - BUNDLE_BLOCK_COST_LIMITS_RESERVATION
);
});
reserved_space.tick(&bank1);

// after reservation, revert back to normal limit
assert_eq!(
bank1.read_cost_tracker().unwrap().block_cost_limit(),
solana_cost_model::block_cost_limits::MAX_BLOCK_UNITS,
);
}
}
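Stripped of the `Bank` and cost tracker plumbing, the manager above is a small state machine: the effective block cost limit is reduced for the first `reserved_ticks` ticks of each slot, then restored. A simplified model with a plain struct, where the tick-within-slot is passed in directly as a hypothetical stand-in for `bank.tick_height() % bank.ticks_per_slot()`:

```rust
/// Simplified model of BundleReservedSpaceManager: the effective block cost
/// limit leaves `bundle_reserved_cost` of headroom during the first
/// `reserved_ticks` ticks of a slot.
struct ReservedSpace {
    block_cost_limit: u64,
    bundle_reserved_cost: u64,
    reserved_ticks: u64,
}

impl ReservedSpace {
    fn effective_limit(&self, tick_in_slot: u64) -> u64 {
        if tick_in_slot < self.reserved_ticks {
            // Reserved period: regular transactions see a reduced limit so
            // bundles have guaranteed space.
            self.block_cost_limit.saturating_sub(self.bundle_reserved_cost)
        } else {
            self.block_cost_limit
        }
    }
}

fn main() {
    // Illustrative numbers only, not the real mainnet limits.
    let rs = ReservedSpace {
        block_cost_limit: 48_000_000,
        bundle_reserved_cost: 3_000_000,
        reserved_ticks: 5,
    };
    assert_eq!(rs.effective_limit(0), 45_000_000); // during reservation
    assert_eq!(rs.effective_limit(4), 45_000_000); // last reserved tick
    assert_eq!(rs.effective_limit(5), 48_000_000); // restored afterwards
}
```

The real `tick` method additionally tracks `last_slot_updated` so the cost tracker is only written when the state actually changes.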
506 changes: 506 additions & 0 deletions core/src/bundle_stage/bundle_stage_leader_metrics.rs

Large diffs are not rendered by default.

227 changes: 227 additions & 0 deletions core/src/bundle_stage/committer.rs
@@ -0,0 +1,227 @@
use {
crate::banking_stage::{
committer::CommitTransactionDetails,
leader_slot_timing_metrics::LeaderExecuteAndCommitTimings,
},
solana_bundle::bundle_execution::LoadAndExecuteBundleOutput,
solana_ledger::blockstore_processor::TransactionStatusSender,
solana_measure::measure_us,
solana_runtime::{
bank::{Bank, ExecutedTransactionCounts, TransactionBalances, TransactionBalancesSet},
bank_utils,
prioritization_fee_cache::PrioritizationFeeCache,
vote_sender_types::ReplayVoteSender,
},
solana_sdk::{hash::Hash, saturating_add_assign, transaction::SanitizedTransaction},
solana_svm::transaction_results::TransactionResults,
solana_transaction_status::{
token_balances::{TransactionTokenBalances, TransactionTokenBalancesSet},
PreBalanceInfo,
},
std::sync::Arc,
};

#[derive(Clone, Debug, Default, PartialEq, Eq)]
pub struct CommitBundleDetails {
pub commit_transaction_details: Vec<Vec<CommitTransactionDetails>>,
}

pub struct Committer {
transaction_status_sender: Option<TransactionStatusSender>,
replay_vote_sender: ReplayVoteSender,
prioritization_fee_cache: Arc<PrioritizationFeeCache>,
}

impl Committer {
pub fn new(
transaction_status_sender: Option<TransactionStatusSender>,
replay_vote_sender: ReplayVoteSender,
prioritization_fee_cache: Arc<PrioritizationFeeCache>,
) -> Self {
Self {
transaction_status_sender,
replay_vote_sender,
prioritization_fee_cache,
}
}

pub(crate) fn transaction_status_sender_enabled(&self) -> bool {
self.transaction_status_sender.is_some()
}

/// Very similar to Committer::commit_transactions, but works with bundles.
/// The main difference is that there are multiple non-parallelizable transaction vectors to commit,
/// and post-balances are collected after execution instead of from the bank in Self::collect_balances_and_send_status_batch.
#[allow(clippy::too_many_arguments)]
pub(crate) fn commit_bundle<'a>(
&self,
bundle_execution_output: &'a mut LoadAndExecuteBundleOutput<'a>,
last_blockhash: Hash,
lamports_per_signature: u64,
mut starting_transaction_index: Option<usize>,
bank: &Arc<Bank>,
execute_and_commit_timings: &mut LeaderExecuteAndCommitTimings,
) -> (u64, CommitBundleDetails) {
let transaction_output = bundle_execution_output.bundle_transaction_results_mut();

let (commit_transaction_details, commit_times): (Vec<_>, Vec<_>) = transaction_output
.iter_mut()
.map(|bundle_results| {
let executed_transactions_count = bundle_results
.load_and_execute_transactions_output()
.executed_transactions_count
as u64;

let executed_non_vote_transactions_count = bundle_results
.load_and_execute_transactions_output()
.executed_non_vote_transactions_count
as u64;

let executed_with_failure_result_count = bundle_results
.load_and_execute_transactions_output()
.executed_transactions_count
.saturating_sub(
bundle_results
.load_and_execute_transactions_output()
.executed_with_successful_result_count,
) as u64;

let signature_count = bundle_results
.load_and_execute_transactions_output()
.signature_count;

let sanitized_transactions = bundle_results.transactions().to_vec();
let execution_results = bundle_results.execution_results().to_vec();

let loaded_transactions = bundle_results.loaded_transactions_mut();
debug!("loaded_transactions: {:?}", loaded_transactions);

let (tx_results, commit_time_us) = measure_us!(bank.commit_transactions(
&sanitized_transactions,
loaded_transactions,
execution_results,
last_blockhash,
lamports_per_signature,
ExecutedTransactionCounts {
executed_transactions_count,
executed_non_vote_transactions_count,
executed_with_failure_result_count,
signature_count,
},
&mut execute_and_commit_timings.execute_timings,
));

let commit_transaction_statuses: Vec<_> = tx_results
.execution_results
.iter()
.zip(tx_results.loaded_accounts_stats.iter())
.map(|(execution_result, loaded_accounts_stats)| {
match execution_result.details() {
// reports the actual execution CUs and loaded-accounts size for each
// transaction committed to the block. qos_service uses this information to adjust
// reserved block space.
Some(details) => CommitTransactionDetails::Committed {
compute_units: details.executed_units,
loaded_accounts_data_size: loaded_accounts_stats
.as_ref()
.map_or(0, |stats| stats.loaded_accounts_data_size),
},
None => CommitTransactionDetails::NotCommitted,
}
})
.collect();

let ((), find_and_send_votes_us) = measure_us!({
bank_utils::find_and_send_votes(
&sanitized_transactions,
&tx_results,
Some(&self.replay_vote_sender),
);

let post_balance_info = bundle_results.post_balance_info().clone();
let pre_balance_info = bundle_results.pre_balance_info();

let num_committed = tx_results
.execution_results
.iter()
.filter(|r| r.was_executed())
.count();

self.collect_balances_and_send_status_batch(
tx_results,
bank,
sanitized_transactions,
pre_balance_info,
post_balance_info,
starting_transaction_index,
);

// NOTE: we're doing batched records, so we need to increment the poh starting_transaction_index
// by number committed so the next batch will have the correct starting_transaction_index
starting_transaction_index =
starting_transaction_index.map(|starting_transaction_index| {
starting_transaction_index.saturating_add(num_committed)
});

self.prioritization_fee_cache
.update(bank, bundle_results.executed_transactions().into_iter());
});
saturating_add_assign!(
execute_and_commit_timings.find_and_send_votes_us,
find_and_send_votes_us
);

(commit_transaction_statuses, commit_time_us)
})
.unzip();

(
commit_times.iter().sum(),
CommitBundleDetails {
commit_transaction_details,
},
)
}

fn collect_balances_and_send_status_batch(
&self,
tx_results: TransactionResults,
bank: &Arc<Bank>,
sanitized_transactions: Vec<SanitizedTransaction>,
pre_balance_info: &mut PreBalanceInfo,
(post_balances, post_token_balances): (TransactionBalances, TransactionTokenBalances),
starting_transaction_index: Option<usize>,
) {
if let Some(transaction_status_sender) = &self.transaction_status_sender {
let mut transaction_index = starting_transaction_index.unwrap_or_default();
let batch_transaction_indexes: Vec<_> = tx_results
.execution_results
.iter()
.map(|result| {
if result.was_executed() {
let this_transaction_index = transaction_index;
saturating_add_assign!(transaction_index, 1);
this_transaction_index
} else {
0
}
})
.collect();
transaction_status_sender.send_transaction_status_batch(
bank.clone(),
sanitized_transactions,
tx_results.execution_results,
TransactionBalancesSet::new(
std::mem::take(&mut pre_balance_info.native),
post_balances,
),
TransactionTokenBalancesSet::new(
std::mem::take(&mut pre_balance_info.token),
post_token_balances,
),
tx_results.rent_debits,
batch_transaction_indexes,
);
}
}
}
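The batched PoH indexing in `commit_bundle` and `collect_balances_and_send_status_batch` above can be isolated: executed transactions receive consecutive indexes starting from `starting_transaction_index`, non-executed ones are recorded as 0, and the starting index advances by the number committed so the next batch continues where this one left off. A sketch of just that bookkeeping, with `bool` standing in for "was executed":

```rust
/// Assigns per-block transaction indexes as Committer does: each executed
/// transaction gets the next index, failed ones are recorded as 0, and the
/// returned cursor is the starting index for the next batch.
fn assign_indexes(executed: &[bool], start: usize) -> (Vec<usize>, usize) {
    let mut next = start;
    let indexes = executed
        .iter()
        .map(|&ok| {
            if ok {
                let i = next;
                next += 1;
                i
            } else {
                0
            }
        })
        .collect();
    (indexes, next)
}

fn main() {
    let (idx, next) = assign_indexes(&[true, false, true], 5);
    assert_eq!(idx, vec![5, 0, 6]); // failed tx reported as index 0
    assert_eq!(next, 7); // next bundle batch starts at 7
}
```

Note the quirk carried over from the original: a failed transaction and a committed transaction at index 0 are indistinguishable in the index vector alone; consumers must also check the execution result.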
41 changes: 41 additions & 0 deletions core/src/bundle_stage/result.rs
@@ -0,0 +1,41 @@
use {
crate::{
bundle_stage::bundle_account_locker::BundleAccountLockerError, tip_manager::TipPaymentError,
},
anchor_lang::error::Error,
solana_bundle::bundle_execution::LoadAndExecuteBundleError,
solana_poh::poh_recorder::PohRecorderError,
thiserror::Error,
};

pub type BundleExecutionResult<T> = Result<T, BundleExecutionError>;

#[derive(Error, Debug, Clone)]
pub enum BundleExecutionError {
#[error("PoH record error: {0}")]
PohRecordError(#[from] PohRecorderError),

#[error("Bank is done processing")]
BankProcessingDone,

#[error("Execution error: {0}")]
ExecutionError(#[from] LoadAndExecuteBundleError),

#[error("The bundle exceeds the cost model")]
ExceedsCostModel,

#[error("Tip error {0}")]
TipError(#[from] TipPaymentError),

#[error("Error locking bundle")]
LockError(#[from] BundleAccountLockerError),
}

impl From<anchor_lang::error::Error> for TipPaymentError {
fn from(anchor_err: Error) -> Self {
match anchor_err {
Error::AnchorError(e) => Self::AnchorError(e.error_msg),
Error::ProgramError(e) => Self::AnchorError(e.to_string()),
}
}
}
52 changes: 52 additions & 0 deletions core/src/consensus_cache_updater.rs
@@ -0,0 +1,52 @@
use {
solana_runtime::bank::Bank,
solana_sdk::{clock::Epoch, pubkey::Pubkey},
std::collections::HashSet,
};

#[derive(Default)]
pub(crate) struct ConsensusCacheUpdater {
last_epoch_updated: Epoch,
consensus_accounts_cache: HashSet<Pubkey>,
}

impl ConsensusCacheUpdater {
pub(crate) fn consensus_accounts_cache(&self) -> &HashSet<Pubkey> {
&self.consensus_accounts_cache
}

/// Builds a HashSet of all consensus-related accounts for the Bank's epoch
fn get_consensus_accounts(bank: &Bank) -> HashSet<Pubkey> {
let mut consensus_accounts: HashSet<Pubkey> = HashSet::new();
if let Some(epoch_stakes) = bank.epoch_stakes(bank.epoch()) {
// votes use the following accounts:
// - vote_account pubkey: writeable
// - authorized_voter_pubkey: read-only
// - node_keypair pubkey: payer (writeable)
let node_id_vote_accounts = epoch_stakes.node_id_to_vote_accounts();

let vote_accounts = node_id_vote_accounts
.values()
.flat_map(|v| v.vote_accounts.clone());

// vote_account
consensus_accounts.extend(vote_accounts);
// authorized_voter_pubkey
consensus_accounts.extend(epoch_stakes.epoch_authorized_voters().keys());
// node_keypair
consensus_accounts.extend(epoch_stakes.node_id_to_vote_accounts().keys());
}
consensus_accounts
}

/// Updates consensus-related accounts on epoch boundaries
pub(crate) fn maybe_update(&mut self, bank: &Bank) -> bool {
if bank.epoch() > self.last_epoch_updated {
self.consensus_accounts_cache = Self::get_consensus_accounts(bank);
self.last_epoch_updated = bank.epoch();
true
} else {
false
}
}
}
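The epoch-gating in `maybe_update` above is worth isolating: the account set is rebuilt only when the observed epoch advances past `last_epoch_updated`, so repeated calls within an epoch are cheap. A simplified sketch with `u64` epochs and account IDs standing in for `Epoch` and `Pubkey`, and a closure standing in for `get_consensus_accounts`:

```rust
use std::collections::HashSet;

/// Simplified ConsensusCacheUpdater: rebuild the cached set only on an
/// epoch boundary, returning whether a rebuild happened.
#[derive(Default)]
struct CacheUpdater {
    last_epoch_updated: u64,
    cache: HashSet<u64>,
}

impl CacheUpdater {
    fn maybe_update(&mut self, epoch: u64, accounts: impl Fn() -> HashSet<u64>) -> bool {
        if epoch > self.last_epoch_updated {
            self.cache = accounts();
            self.last_epoch_updated = epoch;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut u = CacheUpdater::default();
    assert!(u.maybe_update(1, || HashSet::from([42])));
    assert!(!u.maybe_update(1, || HashSet::new())); // same epoch: no rebuild
    assert_eq!(u.cache, HashSet::from([42]));
}
```

As in the original, `Default` starts `last_epoch_updated` at 0, so the strict `>` comparison means epoch 0 itself never triggers a rebuild.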
490 changes: 490 additions & 0 deletions core/src/immutable_deserialized_bundle.rs

Large diffs are not rendered by default.

50 changes: 50 additions & 0 deletions core/src/lib.rs
@@ -12,20 +12,25 @@ pub mod accounts_hash_verifier;
pub mod admin_rpc_post_init;
pub mod banking_stage;
pub mod banking_trace;
pub mod bundle_stage;
pub mod cache_block_meta_service;
pub mod cluster_info_vote_listener;
pub mod cluster_slots_service;
pub mod commitment_service;
pub mod completed_data_sets_service;
pub mod consensus;
pub mod consensus_cache_updater;
pub mod cost_update_service;
pub mod drop_bank_service;
pub mod fetch_stage;
pub mod gen_keys;
pub mod immutable_deserialized_bundle;
pub mod next_leader;
pub mod optimistic_confirmation_verifier;
pub mod packet_bundle;
pub mod poh_timing_report_service;
pub mod poh_timing_reporter;
pub mod proxy;
pub mod repair;
pub mod replay_stage;
mod result;
@@ -38,6 +43,7 @@ pub mod snapshot_packager_service;
pub mod staked_nodes_updater_service;
pub mod stats_reporter_service;
pub mod system_monitor_service;
pub mod tip_manager;
pub mod tpu;
mod tpu_entry_notifier;
pub mod tracer_packet_stats;
@@ -66,3 +72,47 @@ extern crate solana_frozen_abi_macro;
#[cfg(test)]
#[macro_use]
extern crate assert_matches;

use {
solana_sdk::packet::{Meta, Packet, PacketFlags, PACKET_DATA_SIZE},
std::{
cmp::min,
net::{IpAddr, Ipv4Addr},
},
};

const UNKNOWN_IP: IpAddr = IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0));

// NOTE: last profiled at around 180ns
pub fn proto_packet_to_packet(p: jito_protos::proto::packet::Packet) -> Packet {
let mut data = [0; PACKET_DATA_SIZE];
let copy_len = min(data.len(), p.data.len());
data[..copy_len].copy_from_slice(&p.data[..copy_len]);
let mut packet = Packet::new(data, Meta::default());
if let Some(meta) = p.meta {
packet.meta_mut().size = meta.size as usize;
packet.meta_mut().addr = meta.addr.parse().unwrap_or(UNKNOWN_IP);
packet.meta_mut().port = meta.port as u16;
if let Some(flags) = meta.flags {
if flags.simple_vote_tx {
packet.meta_mut().flags.insert(PacketFlags::SIMPLE_VOTE_TX);
}
if flags.forwarded {
packet.meta_mut().flags.insert(PacketFlags::FORWARDED);
}
if flags.tracer_packet {
packet.meta_mut().flags.insert(PacketFlags::TRACER_PACKET);
}
if flags.repair {
packet.meta_mut().flags.insert(PacketFlags::REPAIR);
}
if flags.from_staked_node {
packet
.meta_mut()
.flags
.insert(PacketFlags::FROM_STAKED_NODE)
}
}
}
packet
}
7 changes: 7 additions & 0 deletions core/src/packet_bundle.rs
@@ -0,0 +1,7 @@
use solana_perf::packet::PacketBatch;

#[derive(Clone, Debug)]
pub struct PacketBundle {
pub batch: PacketBatch,
pub bundle_id: String,
}
185 changes: 185 additions & 0 deletions core/src/proxy/auth.rs
@@ -0,0 +1,185 @@
use {
crate::proxy::ProxyError,
chrono::Utc,
jito_protos::proto::auth::{
auth_service_client::AuthServiceClient, GenerateAuthChallengeRequest,
GenerateAuthTokensRequest, RefreshAccessTokenRequest, Role, Token,
},
solana_gossip::cluster_info::ClusterInfo,
solana_sdk::signature::{Keypair, Signer},
std::{
sync::{Arc, Mutex},
time::Duration,
},
tokio::time::timeout,
tonic::{service::Interceptor, transport::Channel, Code, Request, Status},
};

/// Interceptor responsible for adding the access token to request headers.
pub(crate) struct AuthInterceptor {
/// The token added to each request header.
access_token: Arc<Mutex<Token>>,
}

impl AuthInterceptor {
pub(crate) fn new(access_token: Arc<Mutex<Token>>) -> Self {
Self { access_token }
}
}

impl Interceptor for AuthInterceptor {
fn call(&mut self, mut request: Request<()>) -> Result<Request<()>, Status> {
request.metadata_mut().insert(
"authorization",
format!("Bearer {}", self.access_token.lock().unwrap().value)
.parse()
.unwrap(),
);

Ok(request)
}
}

/// Generates an auth challenge, then requests and returns validated auth tokens.
pub async fn generate_auth_tokens(
auth_service_client: &mut AuthServiceClient<Channel>,
// used to sign challenges
keypair: &Keypair,
) -> crate::proxy::Result<(
Token, /* access_token */
Token, /* refresh_token */
)> {
debug!("generate_auth_challenge");
let challenge_response = auth_service_client
.generate_auth_challenge(GenerateAuthChallengeRequest {
role: Role::Validator as i32,
pubkey: keypair.pubkey().as_ref().to_vec(),
})
.await
.map_err(|e: Status| {
if e.code() == Code::PermissionDenied {
ProxyError::AuthenticationPermissionDenied
} else {
ProxyError::AuthenticationError(e.to_string())
}
})?;

let formatted_challenge = format!(
"{}-{}",
keypair.pubkey(),
challenge_response.into_inner().challenge
);

let signed_challenge = keypair
.sign_message(formatted_challenge.as_bytes())
.as_ref()
.to_vec();

debug!(
"formatted_challenge: {} signed_challenge: {:?}",
formatted_challenge, signed_challenge
);

debug!("generate_auth_tokens");
let auth_tokens = auth_service_client
.generate_auth_tokens(GenerateAuthTokensRequest {
challenge: formatted_challenge,
client_pubkey: keypair.pubkey().as_ref().to_vec(),
signed_challenge,
})
.await
.map_err(|e| ProxyError::AuthenticationError(e.to_string()))?;

let inner = auth_tokens.into_inner();
let access_token = get_validated_token(inner.access_token)?;
let refresh_token = get_validated_token(inner.refresh_token)?;

Ok((access_token, refresh_token))
}

/// Tries to refresh the access token, or performs a full re-authentication if the refresh token is also near expiry.
pub async fn maybe_refresh_auth_tokens(
auth_service_client: &mut AuthServiceClient<Channel>,
access_token: &Arc<Mutex<Token>>,
refresh_token: &Token,
cluster_info: &Arc<ClusterInfo>,
connection_timeout: &Duration,
refresh_within_s: u64,
) -> crate::proxy::Result<(
Option<Token>, // access token
Option<Token>, // refresh token
)> {
let access_token_expiry: u64 = access_token
.lock()
.unwrap()
.expires_at_utc
.as_ref()
.map(|ts| ts.seconds as u64)
.unwrap_or_default();
let refresh_token_expiry: u64 = refresh_token
.expires_at_utc
.as_ref()
.map(|ts| ts.seconds as u64)
.unwrap_or_default();

let now = Utc::now().timestamp() as u64;

let should_refresh_access =
access_token_expiry.checked_sub(now).unwrap_or_default() <= refresh_within_s;
let should_generate_new_tokens =
refresh_token_expiry.checked_sub(now).unwrap_or_default() <= refresh_within_s;

if should_generate_new_tokens {
let kp = cluster_info.keypair().clone();

let (new_access_token, new_refresh_token) = timeout(
*connection_timeout,
generate_auth_tokens(auth_service_client, kp.as_ref()),
)
.await
.map_err(|_| ProxyError::MethodTimeout("generate_auth_tokens".to_string()))?
.map_err(|e| ProxyError::MethodError(e.to_string()))?;

return Ok((Some(new_access_token), Some(new_refresh_token)));
} else if should_refresh_access {
let new_access_token = timeout(
*connection_timeout,
refresh_access_token(auth_service_client, refresh_token),
)
.await
.map_err(|_| ProxyError::MethodTimeout("refresh_access_token".to_string()))?
.map_err(|e| ProxyError::MethodError(e.to_string()))?;

return Ok((Some(new_access_token), None));
}

Ok((None, None))
}

pub async fn refresh_access_token(
auth_service_client: &mut AuthServiceClient<Channel>,
refresh_token: &Token,
) -> crate::proxy::Result<Token> {
let response = auth_service_client
.refresh_access_token(RefreshAccessTokenRequest {
refresh_token: refresh_token.value.clone(),
})
.await
.map_err(|e| ProxyError::AuthenticationError(e.to_string()))?;
get_validated_token(response.into_inner().access_token)
}

/// A token is invalid if it is None or any of its fields are None.
/// Performs the necessary validations on the auth tokens before returning,
/// i.e. it is safe to call .unwrap() on the token fields at the call-site.
fn get_validated_token(maybe_token: Option<Token>) -> crate::proxy::Result<Token> {
let token = maybe_token
.ok_or_else(|| ProxyError::BadAuthenticationToken("received a null token".to_string()))?;
if token.expires_at_utc.is_none() {
Err(ProxyError::BadAuthenticationToken(
"expires_at_utc field is null".to_string(),
))
} else {
Ok(token)
}
}
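The refresh decision in `maybe_refresh_auth_tokens` hinges on `checked_sub(now).unwrap_or_default()`: an already-expired token yields `None` from `checked_sub`, which defaults to zero seconds remaining and therefore always triggers a refresh, with no risk of unsigned underflow. That comparison in isolation:

```rust
/// Mirrors the expiry math in maybe_refresh_auth_tokens: seconds remaining
/// saturate at zero, so an already-expired token always needs a refresh.
fn needs_refresh(expires_at: u64, now: u64, refresh_within_s: u64) -> bool {
    expires_at.checked_sub(now).unwrap_or_default() <= refresh_within_s
}

fn main() {
    let now = 1_700_000_000u64;
    assert!(needs_refresh(now - 10, now, 300)); // already expired: 0 <= 300
    assert!(needs_refresh(now + 200, now, 300)); // expiring within the window
    assert!(!needs_refresh(now + 3_600, now, 300)); // plenty of time left
}
```

The same predicate is applied to both tokens: hitting it on the access token alone refreshes, while hitting it on the refresh token forces the full `generate_auth_tokens` flow.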
571 changes: 571 additions & 0 deletions core/src/proxy/block_engine_stage.rs

Large diffs are not rendered by default.

170 changes: 170 additions & 0 deletions core/src/proxy/fetch_stage_manager.rs
@@ -0,0 +1,170 @@
use {
crate::proxy::{HeartbeatEvent, ProxyError},
crossbeam_channel::{select, tick, Receiver, Sender},
solana_client::connection_cache::Protocol,
solana_gossip::{cluster_info::ClusterInfo, contact_info},
solana_perf::packet::PacketBatch,
std::{
net::SocketAddr,
sync::{
atomic::{AtomicBool, Ordering},
Arc,
},
thread::{self, Builder, JoinHandle},
time::{Duration, Instant},
},
};

const HEARTBEAT_TIMEOUT: Duration = Duration::from_millis(1500); // Empirically determined from load testing
const DISCONNECT_DELAY: Duration = Duration::from_secs(60);
const METRICS_CADENCE: Duration = Duration::from_secs(1);

/// Manages switching between the validator's TPU ports and the proxy's.
/// Switch-overs are triggered by late and missed heartbeats.
pub struct FetchStageManager {
t_hdl: JoinHandle<()>,
}

impl FetchStageManager {
pub fn new(
// ClusterInfo is used to switch between advertising the proxy's TPU ports and this validator's.
cluster_info: Arc<ClusterInfo>,
// Channel that heartbeats are received from. Entirely responsible for triggering switch-overs.
heartbeat_rx: Receiver<HeartbeatEvent>,
// Channel that packets from FetchStage are intercepted from.
packet_intercept_rx: Receiver<PacketBatch>,
// Intercepted packets get piped through here.
packet_tx: Sender<PacketBatch>,
exit: Arc<AtomicBool>,
) -> Self {
let t_hdl = Self::start(
cluster_info,
heartbeat_rx,
packet_intercept_rx,
packet_tx,
exit,
);

Self { t_hdl }
}

/// Disconnect fetch behaviour
/// Starts connected
/// When connected and a packet is received, forward it
/// When disconnected, packet is dropped
/// When receiving heartbeat while connected and not pending disconnect
/// Sets pending_disconnect to true and records time
/// When receiving a heartbeat while connected and pending for > DISCONNECT_DELAY
/// Sets fetch_connected to false, pending_disconnect to false
/// Advertises TPU ports sent in heartbeat
/// When tick is received without heartbeat_received
/// Sets fetch_connected to true, pending_disconnect to false
/// Advertises saved contact info
fn start(
cluster_info: Arc<ClusterInfo>,
heartbeat_rx: Receiver<HeartbeatEvent>,
packet_intercept_rx: Receiver<PacketBatch>,
packet_tx: Sender<PacketBatch>,
exit: Arc<AtomicBool>,
) -> JoinHandle<()> {
Builder::new().name("fetch-stage-manager".into()).spawn(move || {
let my_fallback_contact_info = cluster_info.my_contact_info();

let mut fetch_connected = true;
let mut heartbeat_received = false;
let mut pending_disconnect = false;

let mut pending_disconnect_ts = Instant::now();

let heartbeat_tick = tick(HEARTBEAT_TIMEOUT);
let metrics_tick = tick(METRICS_CADENCE);
let mut packets_forwarded = 0;
let mut heartbeats_received = 0;
loop {
select! {
recv(packet_intercept_rx) -> pkt => {
match pkt {
Ok(pkt) => {
if fetch_connected {
if packet_tx.send(pkt).is_err() {
error!("{:?}", ProxyError::PacketForwardError);
return;
}
packets_forwarded += 1;
}
}
Err(_) => {
warn!("packet intercept receiver disconnected, shutting down");
return;
}
}
}
recv(heartbeat_tick) -> _ => {
if exit.load(Ordering::Relaxed) {
break;
}
if !heartbeat_received && (!fetch_connected || pending_disconnect) {
warn!("heartbeat late, reconnecting fetch stage");
fetch_connected = true;
pending_disconnect = false;

// yes, using UDP here is extremely confusing for the validator
// since the entire network is running QUIC. However, it's correct.
if let Err(e) = Self::set_tpu_addresses(&cluster_info, my_fallback_contact_info.tpu(Protocol::UDP).unwrap(), my_fallback_contact_info.tpu_forwards(Protocol::UDP).unwrap()) {
error!("error setting tpu or tpu_fwd to ({:?}, {:?}), error: {:?}", my_fallback_contact_info.tpu(Protocol::UDP).unwrap(), my_fallback_contact_info.tpu_forwards(Protocol::UDP).unwrap(), e);
}
heartbeats_received = 0;
}
heartbeat_received = false;
}
recv(heartbeat_rx) -> tpu_info => {
if let Ok((tpu_addr, tpu_forward_addr)) = tpu_info {
heartbeats_received += 1;
heartbeat_received = true;
if fetch_connected && !pending_disconnect {
info!("received heartbeat while fetch stage connected, pending disconnect after delay");
pending_disconnect_ts = Instant::now();
pending_disconnect = true;
}
if fetch_connected && pending_disconnect && pending_disconnect_ts.elapsed() > DISCONNECT_DELAY {
info!("disconnecting fetch stage");
fetch_connected = false;
pending_disconnect = false;
if let Err(e) = Self::set_tpu_addresses(&cluster_info, tpu_addr, tpu_forward_addr) {
error!("error setting tpu or tpu_fwd to ({:?}, {:?}), error: {:?}", tpu_addr, tpu_forward_addr, e);
}
}
} else {
warn!("relayer heartbeat receiver disconnected, shutting down");
return;
}
}
recv(metrics_tick) -> _ => {
datapoint_info!(
"relayer-heartbeat",
("fetch_stage_packets_forwarded", packets_forwarded, i64),
("heartbeats_received", heartbeats_received, i64),
);

}
}
}
}).unwrap()
}

fn set_tpu_addresses(
cluster_info: &Arc<ClusterInfo>,
tpu_address: SocketAddr,
tpu_forward_address: SocketAddr,
) -> Result<(), contact_info::Error> {
cluster_info.set_tpu(tpu_address)?;
cluster_info.set_tpu_forwards(tpu_forward_address)?;
Ok(())
}

pub fn join(self) -> thread::Result<()> {
self.t_hdl.join()
}
}
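The doc comment on `FetchStageManager` describes a small state machine over `fetch_connected` and `pending_disconnect`. A pure-function sketch of those transition rules (the `FetchState` struct and function names are illustrative, not this patch's API):

```rust
use std::time::Duration;

#[derive(Debug, Clone, Copy, PartialEq)]
struct FetchState {
    fetch_connected: bool,
    pending_disconnect: bool,
}

/// A relayer heartbeat arrived. Returns true when the TPU ports should be
/// switched over to the relayer's (i.e. the fetch stage disconnects).
fn on_heartbeat(s: &mut FetchState, pending_for: Duration, delay: Duration) -> bool {
    if s.fetch_connected && !s.pending_disconnect {
        s.pending_disconnect = true; // arm the disconnect countdown
        false
    } else if s.fetch_connected && pending_for > delay {
        s.fetch_connected = false;
        s.pending_disconnect = false;
        true
    } else {
        false
    }
}

/// The heartbeat tick fired without a heartbeat being seen. Returns true when
/// the validator should fall back to advertising its own TPU ports.
fn on_missed_heartbeat(s: &mut FetchState) -> bool {
    if !s.fetch_connected || s.pending_disconnect {
        s.fetch_connected = true;
        s.pending_disconnect = false;
        true
    } else {
        false
    }
}

fn main() {
    let delay = Duration::from_secs(60);
    let mut s = FetchState { fetch_connected: true, pending_disconnect: false };
    // The first heartbeat only arms the countdown.
    assert!(!on_heartbeat(&mut s, Duration::ZERO, delay));
    // After heartbeats have been pending for more than DISCONNECT_DELAY,
    // hand the TPU ports to the relayer.
    assert!(on_heartbeat(&mut s, Duration::from_secs(61), delay));
    // A missed heartbeat reconnects the fetch stage.
    assert!(on_missed_heartbeat(&mut s));
    println!("ok");
}
```

The real implementation drives these transitions from the `select!` loop above, with `tick(HEARTBEAT_TIMEOUT)` supplying the "missed heartbeat" events.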
100 changes: 100 additions & 0 deletions core/src/proxy/mod.rs
@@ -0,0 +1,100 @@
//! This module contains logic for connecting to an external Relayer and Block Engine.
//! The Relayer acts as an external TPU and TPU Forward socket while the Block Engine
//! is tasked with streaming high value bundles to the validator. The validator can run
//! in one of 3 modes:
//! 1. Connected to Relayer and Block Engine.
//! - This is the ideal mode as it increases the probability of building the most profitable blocks.
//! 2. Connected only to Relayer.
//! - A validator may choose to run in this mode if the main concern is to offload ingress traffic deduplication and sig-verification.
//! 3. Connected only to Block Engine.
//! - Running in this mode means pending transactions are not exposed to external actors. This mode is ideal if the validator wishes
//! to accept bundles while maintaining some level of privacy for in-flight transactions.
mod auth;
pub mod block_engine_stage;
pub mod fetch_stage_manager;
pub mod relayer_stage;

use {
std::{
net::{AddrParseError, SocketAddr},
result,
},
thiserror::Error,
tonic::Status,
};

type Result<T> = result::Result<T, ProxyError>;
type HeartbeatEvent = (SocketAddr, SocketAddr);

#[derive(Error, Debug)]
pub enum ProxyError {
#[error("grpc error: {0}")]
GrpcError(#[from] Status),

#[error("stream disconnected")]
GrpcStreamDisconnected,

#[error("heartbeat error")]
HeartbeatChannelError,

#[error("heartbeat expired")]
HeartbeatExpired,

#[error("error forwarding packet to banking stage")]
PacketForwardError,

#[error("missing tpu config: {0:?}")]
MissingTpuSocket(String),

#[error("invalid socket address: {0:?}")]
InvalidSocketAddress(#[from] AddrParseError),

#[error("invalid gRPC data: {0:?}")]
InvalidData(String),

#[error("timeout: {0:?}")]
ConnectionError(#[from] tonic::transport::Error),

#[error("AuthenticationConnectionTimeout")]
AuthenticationConnectionTimeout,

#[error("AuthenticationTimeout")]
AuthenticationTimeout,

#[error("AuthenticationConnectionError: {0:?}")]
AuthenticationConnectionError(String),

#[error("BlockEngineConnectionTimeout")]
BlockEngineConnectionTimeout,

#[error("BlockEngineTimeout")]
BlockEngineTimeout,

#[error("BlockEngineConnectionError: {0:?}")]
BlockEngineConnectionError(String),

#[error("RelayerConnectionTimeout")]
RelayerConnectionTimeout,

#[error("RelayerTimeout")]
RelayerEngineTimeout,

#[error("RelayerConnectionError: {0:?}")]
RelayerConnectionError(String),

#[error("AuthenticationError: {0:?}")]
AuthenticationError(String),

#[error("AuthenticationPermissionDenied")]
AuthenticationPermissionDenied,

#[error("BadAuthenticationToken: {0:?}")]
BadAuthenticationToken(String),

#[error("MethodTimeout: {0:?}")]
MethodTimeout(String),

#[error("MethodError: {0:?}")]
MethodError(String),
}
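The module docs above list three operating modes plus the implicit vanilla mode when neither service is configured. A hedged sketch of deriving the mode from which endpoints are set (the `ProxyMode` enum and `proxy_mode` function are hypothetical, not types in this patch):

```rust
/// Illustrative only: the operating modes described in the module docs,
/// derived from which proxy endpoints are configured.
#[derive(Debug, PartialEq)]
enum ProxyMode {
    RelayerAndBlockEngine, // ideal: highest probability of the most profitable blocks
    RelayerOnly,           // offload ingress dedup and sig-verification
    BlockEngineOnly,       // accept bundles while keeping pending txs private
    Disconnected,          // plain validator TPU behaviour
}

fn proxy_mode(relayer_url: Option<&str>, block_engine_url: Option<&str>) -> ProxyMode {
    match (relayer_url, block_engine_url) {
        (Some(_), Some(_)) => ProxyMode::RelayerAndBlockEngine,
        (Some(_), None) => ProxyMode::RelayerOnly,
        (None, Some(_)) => ProxyMode::BlockEngineOnly,
        (None, None) => ProxyMode::Disconnected,
    }
}

fn main() {
    assert_eq!(proxy_mode(Some("http://relayer"), None), ProxyMode::RelayerOnly);
    assert_eq!(proxy_mode(None, None), ProxyMode::Disconnected);
    println!("ok");
}
```

In the actual code the relayer and block engine are configured independently via `RelayerConfig` and `BlockEngineConfig`, each behind an `Arc<Mutex<...>>` so they can be toggled at runtime.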
515 changes: 515 additions & 0 deletions core/src/proxy/relayer_stage.rs

Large diffs are not rendered by default.

588 changes: 588 additions & 0 deletions core/src/tip_manager.rs

Large diffs are not rendered by default.

113 changes: 104 additions & 9 deletions core/src/tpu.rs
@@ -6,14 +6,21 @@ use {
crate::{
banking_stage::BankingStage,
banking_trace::{BankingTracer, TracerThread},
bundle_stage::{bundle_account_locker::BundleAccountLocker, BundleStage},
cluster_info_vote_listener::{
ClusterInfoVoteListener, DuplicateConfirmedSlotsSender, GossipVerifiedVoteHashSender,
VerifiedVoteSender, VoteTracker,
},
fetch_stage::FetchStage,
proxy::{
block_engine_stage::{BlockBuilderFeeInfo, BlockEngineConfig, BlockEngineStage},
fetch_stage_manager::FetchStageManager,
relayer_stage::{RelayerConfig, RelayerStage},
},
sigverify::TransactionSigVerifier,
sigverify_stage::SigVerifyStage,
staked_nodes_updater_service::StakedNodesUpdaterService,
tip_manager::{TipManager, TipManagerConfig},
tpu_entry_notifier::TpuEntryNotifier,
validator::{BlockProductionMethod, GeneratorConfig},
},
@@ -35,17 +42,22 @@ use {
prioritization_fee_cache::PrioritizationFeeCache,
vote_sender_types::{ReplayVoteReceiver, ReplayVoteSender},
},
solana_sdk::{clock::Slot, pubkey::Pubkey, quic::NotifyKeyUpdate, signature::Keypair},
solana_sdk::{
clock::Slot,
pubkey::Pubkey,
quic::NotifyKeyUpdate,
signature::{Keypair, Signer},
},
solana_streamer::{
nonblocking::quic::{DEFAULT_MAX_STREAMS_PER_MS, DEFAULT_WAIT_FOR_CHUNK_TIMEOUT},
quic::{spawn_server, SpawnServerResult, MAX_STAKED_CONNECTIONS, MAX_UNSTAKED_CONNECTIONS},
streamer::StakedNodes,
},
solana_turbine::broadcast_stage::{BroadcastStage, BroadcastStageType},
std::{
collections::HashMap,
collections::{HashMap, HashSet},
net::{SocketAddr, UdpSocket},
sync::{atomic::AtomicBool, Arc, RwLock},
sync::{atomic::AtomicBool, Arc, Mutex, RwLock},
thread,
time::Duration,
},
@@ -76,6 +88,10 @@ pub struct Tpu {
tpu_entry_notifier: Option<TpuEntryNotifier>,
staked_nodes_updater_service: StakedNodesUpdaterService,
tracer_thread_hdl: TracerThread,
relayer_stage: RelayerStage,
block_engine_stage: BlockEngineStage,
fetch_stage_manager: FetchStageManager,
bundle_stage: BundleStage,
}

impl Tpu {
@@ -116,6 +132,11 @@ impl Tpu {
block_production_method: BlockProductionMethod,
enable_block_production_forwarding: bool,
_generator_config: Option<GeneratorConfig>, /* vestigial code for replay invalidator */
block_engine_config: Arc<Mutex<BlockEngineConfig>>,
relayer_config: Arc<Mutex<RelayerConfig>>,
tip_manager_config: TipManagerConfig,
shred_receiver_address: Arc<RwLock<Option<SocketAddr>>>,
preallocated_bundle_cost: u64,
) -> (Self, Vec<Arc<dyn NotifyKeyUpdate + Sync + Send>>) {
let TpuSockets {
transactions: transactions_sockets,
@@ -126,15 +147,18 @@ impl Tpu {
transactions_forwards_quic: transactions_forwards_quic_sockets,
} = sockets;

let (packet_sender, packet_receiver) = unbounded();
// Packets from fetch stage and quic server are intercepted and sent through fetch_stage_manager
// If relayer is connected, packets are dropped. If not, packets are forwarded on to packet_sender
let (packet_intercept_sender, packet_intercept_receiver) = unbounded();

let (vote_packet_sender, vote_packet_receiver) = unbounded();
let (forwarded_packet_sender, forwarded_packet_receiver) = unbounded();
let fetch_stage = FetchStage::new_with_sender(
transactions_sockets,
tpu_forwards_sockets,
tpu_vote_sockets,
exit.clone(),
&packet_sender,
&packet_intercept_sender,
&vote_packet_sender,
&forwarded_packet_sender,
forwarded_packet_receiver,
@@ -162,7 +186,7 @@ impl Tpu {
"quic_streamer_tpu",
transactions_quic_sockets,
keypair,
packet_sender,
packet_intercept_sender,
exit.clone(),
MAX_QUIC_CONNECTIONS_PER_PEER,
staked_nodes.clone(),
@@ -197,8 +221,10 @@ impl Tpu {
)
.unwrap();

let (packet_sender, packet_receiver) = unbounded();

let sigverify_stage = {
let verifier = TransactionSigVerifier::new(non_vote_sender);
let verifier = TransactionSigVerifier::new(non_vote_sender.clone());
SigVerifyStage::new(packet_receiver, verifier, "solSigVerTpu", "tpu-verifier")
};

@@ -216,6 +242,41 @@ impl Tpu {

let (gossip_vote_sender, gossip_vote_receiver) =
banking_tracer.create_channel_gossip_vote();

let block_builder_fee_info = Arc::new(Mutex::new(BlockBuilderFeeInfo {
block_builder: cluster_info.keypair().pubkey(),
block_builder_commission: 0,
}));

let (bundle_sender, bundle_receiver) = unbounded();
let block_engine_stage = BlockEngineStage::new(
block_engine_config,
bundle_sender,
cluster_info.clone(),
packet_sender.clone(),
non_vote_sender.clone(),
exit.clone(),
&block_builder_fee_info,
);

let (heartbeat_tx, heartbeat_rx) = unbounded();
let fetch_stage_manager = FetchStageManager::new(
cluster_info.clone(),
heartbeat_rx,
packet_intercept_receiver,
packet_sender.clone(),
exit.clone(),
);

let relayer_stage = RelayerStage::new(
relayer_config,
cluster_info.clone(),
heartbeat_tx,
packet_sender,
non_vote_sender,
exit.clone(),
);

let cluster_info_vote_listener = ClusterInfoVoteListener::new(
exit.clone(),
cluster_info.clone(),
@@ -232,20 +293,45 @@ impl Tpu {
duplicate_confirmed_slot_sender,
);

let tip_manager = TipManager::new(tip_manager_config);

let bundle_account_locker = BundleAccountLocker::default();

// The tip program can't be used in BankingStage to avoid someone from stealing tips mid-slot.
let mut blacklisted_accounts = HashSet::new();
blacklisted_accounts.insert(tip_manager.tip_payment_program_id());
let banking_stage = BankingStage::new(
block_production_method,
cluster_info,
poh_recorder,
non_vote_receiver,
tpu_vote_receiver,
gossip_vote_receiver,
transaction_status_sender,
replay_vote_sender,
transaction_status_sender.clone(),
replay_vote_sender.clone(),
log_messages_bytes_limit,
connection_cache.clone(),
bank_forks.clone(),
prioritization_fee_cache,
enable_block_production_forwarding,
blacklisted_accounts,
bundle_account_locker.clone(),
);

let bundle_stage = BundleStage::new(
cluster_info,
poh_recorder,
bundle_receiver,
transaction_status_sender,
replay_vote_sender,
log_messages_bytes_limit,
exit.clone(),
tip_manager,
bundle_account_locker,
&block_builder_fee_info,
preallocated_bundle_cost,
bank_forks.clone(),
prioritization_fee_cache,
);

let (entry_receiver, tpu_entry_notifier) =
@@ -272,6 +358,7 @@ impl Tpu {
bank_forks,
shred_version,
turbine_quic_endpoint_sender,
shred_receiver_address,
);

(
@@ -287,6 +374,10 @@ impl Tpu {
tpu_entry_notifier,
staked_nodes_updater_service,
tracer_thread_hdl,
block_engine_stage,
relayer_stage,
fetch_stage_manager,
bundle_stage,
},
vec![key_updater, forwards_key_updater],
)
@@ -302,6 +393,10 @@ impl Tpu {
self.staked_nodes_updater_service.join(),
self.tpu_quic_t.join(),
self.tpu_forwards_quic_t.join(),
self.bundle_stage.join(),
self.relayer_stage.join(),
self.block_engine_stage.join(),
self.fetch_stage_manager.join(),
];
let broadcast_result = self.broadcast_stage.join();
for result in results {
66 changes: 40 additions & 26 deletions core/src/tpu_entry_notifier.rs
@@ -61,43 +61,57 @@ impl TpuEntryNotifier {
current_index: &mut usize,
current_transaction_index: &mut usize,
) -> Result<(), RecvTimeoutError> {
let (bank, (entry, tick_height)) = entry_receiver.recv_timeout(Duration::from_secs(1))?;
let WorkingBankEntry {
bank,
entries_ticks,
} = entry_receiver.recv_timeout(Duration::from_secs(1))?;
let slot = bank.slot();
let index = if slot != *current_slot {
*current_index = 0;
*current_transaction_index = 0;
*current_slot = slot;
0
} else {
*current_index += 1;
*current_index
};
let mut indices_sent = vec![];

let entry_summary = EntrySummary {
num_hashes: entry.num_hashes,
hash: entry.hash,
num_transactions: entry.transactions.len() as u64,
};
if let Err(err) = entry_notification_sender.send(EntryNotification {
slot,
index,
entry: entry_summary,
starting_transaction_index: *current_transaction_index,
}) {
warn!(
entries_ticks.iter().for_each(|(entry, _)| {
let index = if slot != *current_slot {
*current_index = 0;
*current_transaction_index = 0;
*current_slot = slot;
0
} else {
*current_index += 1;
*current_index
};

let entry_summary = EntrySummary {
num_hashes: entry.num_hashes,
hash: entry.hash,
num_transactions: entry.transactions.len() as u64,
};
if let Err(err) = entry_notification_sender.send(EntryNotification {
slot,
index,
entry: entry_summary,
starting_transaction_index: *current_transaction_index
}) {
warn!(
"Failed to send slot {slot:?} entry {index:?} from Tpu to EntryNotifierService, error {err:?}",
);
}
*current_transaction_index += entry.transactions.len();
}

if let Err(err) = broadcast_entry_sender.send((bank, (entry, tick_height))) {
*current_transaction_index += entry.transactions.len();

indices_sent.push(index);
});

if let Err(err) = broadcast_entry_sender.send(WorkingBankEntry {
bank,
entries_ticks,
}) {
warn!(
"Failed to send slot {slot:?} entry {index:?} from Tpu to BroadcastStage, error {err:?}",
"Failed to send slot {slot:?} entries {indices_sent:?} from Tpu to BroadcastStage, error {err:?}",
);
// If the BroadcastStage channel is closed, the validator has halted. Try to exit
// gracefully.
exit.store(true, Ordering::Relaxed);
}

Ok(())
}

3 changes: 3 additions & 0 deletions core/src/tvu.rs
Original file line number Diff line number Diff line change
@@ -159,6 +159,7 @@ impl Tvu {
outstanding_repair_requests: Arc<RwLock<OutstandingShredRepairs>>,
cluster_slots: Arc<ClusterSlots>,
wen_restart_repair_slots: Option<Arc<RwLock<Vec<Slot>>>>,
shred_receiver_addr: Arc<RwLock<Option<SocketAddr>>>,
) -> Result<Self, String> {
let TvuSockets {
repair: repair_socket,
@@ -207,6 +208,7 @@ impl Tvu {
retransmit_receiver,
max_slots.clone(),
Some(rpc_subscriptions.clone()),
shred_receiver_addr,
);

let (ancestor_duplicate_slots_sender, ancestor_duplicate_slots_receiver) = unbounded();
@@ -521,6 +523,7 @@ pub mod tests {
outstanding_repair_requests,
cluster_slots,
None,
Arc::new(RwLock::new(None)),
)
.expect("assume success");
exit.store(true, Ordering::Relaxed);
103 changes: 77 additions & 26 deletions core/src/validator.rs
@@ -15,6 +15,7 @@ use {
ExternalRootSource, Tower,
},
poh_timing_report_service::PohTimingReportService,
proxy::{block_engine_stage::BlockEngineConfig, relayer_stage::RelayerConfig},
repair::{self, serve_repair::ServeRepair, serve_repair_service::ServeRepairService},
rewards_recorder_service::{RewardsRecorderSender, RewardsRecorderService},
sample_performance_service::SamplePerformanceService,
@@ -24,6 +25,7 @@ use {
system_monitor_service::{
verify_net_stats_access, SystemMonitorService, SystemMonitorStatsReportConfig,
},
tip_manager::TipManagerConfig,
tpu::{Tpu, TpuSockets, DEFAULT_TPU_COALESCE},
tvu::{Tvu, TvuConfig, TvuSockets},
},
@@ -105,6 +107,10 @@ use {
snapshot_hash::StartingSnapshotHashes,
snapshot_utils::{self, clean_orphaned_account_snapshot_dirs},
},
solana_runtime_plugin::{
runtime_plugin_admin_rpc_service::RuntimePluginManagerRpcRequest,
runtime_plugin_service::RuntimePluginService,
},
solana_sdk::{
clock::Slot,
epoch_schedule::MAX_LEADER_SCHEDULE_EPOCH_OFFSET,
@@ -129,7 +135,7 @@ use {
path::{Path, PathBuf},
sync::{
atomic::{AtomicBool, AtomicU64, Ordering},
Arc, RwLock,
Arc, Mutex, RwLock,
},
thread::{sleep, Builder, JoinHandle},
time::{Duration, Instant},
@@ -215,7 +221,8 @@ pub struct ValidatorConfig {
pub rpc_config: JsonRpcConfig,
/// Specifies which plugins to start up with
pub on_start_geyser_plugin_config_files: Option<Vec<PathBuf>>,
pub rpc_addrs: Option<(SocketAddr, SocketAddr)>, // (JsonRpc, JsonRpcPubSub)
pub rpc_addrs: Option<(SocketAddr, SocketAddr)>,
// (JsonRpc, JsonRpcPubSub)
pub pubsub_config: PubSubConfig,
pub snapshot_config: SnapshotConfig,
pub max_ledger_shreds: Option<u64>,
@@ -225,10 +232,14 @@ pub struct ValidatorConfig {
pub fixed_leader_schedule: Option<FixedSchedule>,
pub wait_for_supermajority: Option<Slot>,
pub new_hard_forks: Option<Vec<Slot>>,
pub known_validators: Option<HashSet<Pubkey>>, // None = trust all
pub repair_validators: Option<HashSet<Pubkey>>, // None = repair from all
pub repair_whitelist: Arc<RwLock<HashSet<Pubkey>>>, // Empty = repair with all
pub gossip_validators: Option<HashSet<Pubkey>>, // None = gossip with all
pub known_validators: Option<HashSet<Pubkey>>,
// None = trust all
pub repair_validators: Option<HashSet<Pubkey>>,
// None = repair from all
pub repair_whitelist: Arc<RwLock<HashSet<Pubkey>>>,
// Empty = repair with all
pub gossip_validators: Option<HashSet<Pubkey>>,
// None = gossip with all
pub accounts_hash_interval_slots: u64,
pub max_genesis_archive_unpacked_size: u64,
pub wal_recovery_mode: Option<BlockstoreRecoveryMode>,
@@ -275,6 +286,12 @@ pub struct ValidatorConfig {
pub replay_forks_threads: NonZeroUsize,
pub replay_transactions_threads: NonZeroUsize,
pub delay_leader_block_for_pending_fork: bool,
pub relayer_config: Arc<Mutex<RelayerConfig>>,
pub block_engine_config: Arc<Mutex<BlockEngineConfig>>,
// Using Option inside RwLock is ugly, but it's the only convenient way to allow toggling on/off
pub shred_receiver_address: Arc<RwLock<Option<SocketAddr>>>,
pub tip_manager_config: TipManagerConfig,
pub preallocated_bundle_cost: u64,
}

impl Default for ValidatorConfig {
@@ -347,6 +364,11 @@ impl Default for ValidatorConfig {
replay_forks_threads: NonZeroUsize::new(1).expect("1 is non-zero"),
replay_transactions_threads: NonZeroUsize::new(1).expect("1 is non-zero"),
delay_leader_block_for_pending_fork: false,
relayer_config: Arc::new(Mutex::new(RelayerConfig::default())),
block_engine_config: Arc::new(Mutex::new(BlockEngineConfig::default())),
shred_receiver_address: Arc::new(RwLock::new(None)),
tip_manager_config: TipManagerConfig::default(),
preallocated_bundle_cost: u64::default(),
}
}
}
@@ -385,7 +407,8 @@ impl ValidatorConfig {
// having to watch log messages.
#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]
pub enum ValidatorStartProgress {
Initializing, // Catch all, default state
Initializing,
// Catch all, default state
SearchingForRpcService,
DownloadingSnapshot {
slot: Slot,
@@ -399,7 +422,8 @@ pub enum ValidatorStartProgress {
max_slot: Slot,
},
StartingServices,
Halted, // Validator halted due to `--dev-halt-at-slot` argument
Halted,
// Validator halted due to `--dev-halt-at-slot` argument
WaitingForSupermajority {
slot: Slot,
gossip_stake_percent: u64,
@@ -516,6 +540,10 @@ impl Validator {
tpu_enable_udp: bool,
tpu_max_connections_per_ipaddr_per_minute: u64,
admin_rpc_service_post_init: Arc<RwLock<Option<AdminRpcRequestMetadataPostInit>>>,
runtime_plugin_configs_and_request_rx: Option<(
Vec<PathBuf>,
Receiver<RuntimePluginManagerRpcRequest>,
)>,
) -> Result<Self, String> {
let start_time = Instant::now();

@@ -929,6 +957,17 @@ impl Validator {
None,
));

if let Some((runtime_plugin_configs, request_rx)) = runtime_plugin_configs_and_request_rx {
RuntimePluginService::start(
&runtime_plugin_configs,
request_rx,
bank_forks.clone(),
block_commitment_cache.clone(),
exit.clone(),
)
.map_err(|e| format!("Failed to start runtime plugin service: {e:?}"))?;
}

let max_slots = Arc::new(MaxSlots::default());

let startup_verification_complete;
@@ -1369,6 +1408,7 @@ impl Validator {
outstanding_repair_requests.clone(),
cluster_slots.clone(),
wen_restart_repair_slots.clone(),
config.shred_receiver_address.clone(),
)?;

if in_wen_restart {
@@ -1437,6 +1477,11 @@ impl Validator {
config.block_production_method.clone(),
config.enable_block_production_forwarding,
config.generator_config.clone(),
config.block_engine_config.clone(),
config.relayer_config.clone(),
config.tip_manager_config.clone(),
config.shred_receiver_address.clone(),
config.preallocated_bundle_cost,
);

datapoint_info!(
@@ -1461,6 +1506,9 @@ impl Validator {
repair_socket: Arc::new(node.sockets.repair),
outstanding_repair_requests,
cluster_slots,
block_engine_config: config.block_engine_config.clone(),
relayer_config: config.relayer_config.clone(),
shred_receiver_address: config.shred_receiver_address.clone(),
});

Ok(Self {
@@ -1927,6 +1975,7 @@ fn load_blockstore(
.map(|service| service.sender()),
accounts_update_notifier,
exit,
true,
)
.map_err(|err| err.to_string())?;

@@ -2618,6 +2667,7 @@ mod tests {
DEFAULT_TPU_ENABLE_UDP,
32, // max connections per IpAddr per minute for test
Arc::new(RwLock::new(None)),
None,
)
.expect("assume successful validator start");
assert_eq!(
@@ -2695,7 +2745,7 @@ mod tests {
Arc::new(RwLock::new(vec![Arc::new(vote_account_keypair)])),
vec![leader_node.info.clone()],
&config,
true, // should_check_duplicate_instance.
true, // should_check_duplicate_instance
None, // rpc_to_plugin_manager_receiver
Arc::new(RwLock::new(ValidatorStartProgress::default())),
SocketAddrSpace::Unspecified,
@@ -2704,6 +2754,7 @@ mod tests {
DEFAULT_TPU_ENABLE_UDP,
32, // max connections per IpAddr per minute for test
Arc::new(RwLock::new(None)),
None,
)
.expect("assume successful validator start")
})
@@ -2820,86 +2871,86 @@ mod tests {

assert!(is_snapshot_config_valid(
&new_snapshot_config(300, 200),
100
100,
));

let default_accounts_hash_interval =
snapshot_bank_utils::DEFAULT_INCREMENTAL_SNAPSHOT_ARCHIVE_INTERVAL_SLOTS;
assert!(is_snapshot_config_valid(
&new_snapshot_config(
snapshot_bank_utils::DEFAULT_FULL_SNAPSHOT_ARCHIVE_INTERVAL_SLOTS,
snapshot_bank_utils::DEFAULT_INCREMENTAL_SNAPSHOT_ARCHIVE_INTERVAL_SLOTS
snapshot_bank_utils::DEFAULT_INCREMENTAL_SNAPSHOT_ARCHIVE_INTERVAL_SLOTS,
),
default_accounts_hash_interval,
));
assert!(is_snapshot_config_valid(
&new_snapshot_config(
snapshot_bank_utils::DEFAULT_FULL_SNAPSHOT_ARCHIVE_INTERVAL_SLOTS,
DISABLED_SNAPSHOT_ARCHIVE_INTERVAL
DISABLED_SNAPSHOT_ARCHIVE_INTERVAL,
),
default_accounts_hash_interval
default_accounts_hash_interval,
));
assert!(is_snapshot_config_valid(
&new_snapshot_config(
snapshot_bank_utils::DEFAULT_INCREMENTAL_SNAPSHOT_ARCHIVE_INTERVAL_SLOTS,
DISABLED_SNAPSHOT_ARCHIVE_INTERVAL
DISABLED_SNAPSHOT_ARCHIVE_INTERVAL,
),
default_accounts_hash_interval
default_accounts_hash_interval,
));
assert!(is_snapshot_config_valid(
&new_snapshot_config(
DISABLED_SNAPSHOT_ARCHIVE_INTERVAL,
DISABLED_SNAPSHOT_ARCHIVE_INTERVAL
DISABLED_SNAPSHOT_ARCHIVE_INTERVAL,
),
Slot::MAX
Slot::MAX,
));

assert!(!is_snapshot_config_valid(&new_snapshot_config(0, 100), 100));
assert!(!is_snapshot_config_valid(&new_snapshot_config(100, 0), 100));
assert!(!is_snapshot_config_valid(
&new_snapshot_config(42, 100),
100
100,
));
assert!(!is_snapshot_config_valid(
&new_snapshot_config(100, 42),
100
100,
));
assert!(!is_snapshot_config_valid(
&new_snapshot_config(100, 100),
100
100,
));
assert!(!is_snapshot_config_valid(
&new_snapshot_config(100, 200),
100
100,
));
assert!(!is_snapshot_config_valid(
&new_snapshot_config(444, 200),
100
100,
));
assert!(!is_snapshot_config_valid(
&new_snapshot_config(400, 222),
100
100,
));

assert!(is_snapshot_config_valid(
&SnapshotConfig::new_load_only(),
100
100,
));
assert!(is_snapshot_config_valid(
&SnapshotConfig {
full_snapshot_archive_interval_slots: 41,
incremental_snapshot_archive_interval_slots: 37,
..SnapshotConfig::new_load_only()
},
100
100,
));
assert!(is_snapshot_config_valid(
&SnapshotConfig {
full_snapshot_archive_interval_slots: DISABLED_SNAPSHOT_ARCHIVE_INTERVAL,
incremental_snapshot_archive_interval_slots: DISABLED_SNAPSHOT_ARCHIVE_INTERVAL,
..SnapshotConfig::new_load_only()
},
100
100,
));
}

2 changes: 2 additions & 0 deletions core/tests/epoch_accounts_hash.rs
@@ -439,6 +439,7 @@ fn test_snapshots_have_expected_epoch_accounts_hash() {
if let Some(full_snapshot_archive_info) =
snapshot_utils::get_highest_full_snapshot_archive_info(
&snapshot_config.full_snapshot_archives_dir,
None,
)
{
if full_snapshot_archive_info.slot() == bank.slot() {
@@ -562,6 +563,7 @@ fn test_background_services_request_handling_for_epoch_accounts_hash() {
info!("Taking full snapshot...");
while snapshot_utils::get_highest_full_snapshot_archive_slot(
&snapshot_config.full_snapshot_archives_dir,
None,
) != Some(bank.slot())
{
trace!("waiting for full snapshot...");
2 changes: 2 additions & 0 deletions core/tests/snapshots.rs
@@ -780,6 +780,7 @@ fn test_snapshots_with_background_services(
&snapshot_test_config
.snapshot_config
.full_snapshot_archives_dir,
None,
) != Some(slot)
{
assert!(
@@ -798,6 +799,7 @@ fn test_snapshots_with_background_services(
.snapshot_config
.incremental_snapshot_archives_dir,
last_full_snapshot_slot.unwrap(),
None,
) != Some(slot)
{
assert!(
8 changes: 8 additions & 0 deletions cost-model/src/cost_tracker.rs
@@ -130,6 +130,10 @@ impl CostTracker {
self.vote_cost_limit = vote_cost_limit;
}

pub fn set_block_cost_limit(&mut self, block_cost_limit: u64) {
self.block_cost_limit = block_cost_limit;
}

pub fn in_flight_transaction_count(&self) -> usize {
self.in_flight_transaction_count
}
@@ -192,6 +196,10 @@ impl CostTracker {
self.block_cost
}

pub fn block_cost_limit(&self) -> u64 {
self.block_cost_limit
}

pub fn transaction_count(&self) -> u64 {
self.transaction_count
}
17 changes: 17 additions & 0 deletions deploy_programs
@@ -0,0 +1,17 @@
#!/usr/bin/env sh
# Deploys the tip payment and tip distribution programs on local validator at predetermined address
set -eux

WALLET_LOCATION=~/.config/solana/id.json

# build this solana binary to ensure we're using a version compatible with the validator
cargo b --release --bin solana

./target/release/solana airdrop -ul 1000 $WALLET_LOCATION

(cd jito-programs/tip-payment && anchor build)

# NOTE: make sure the declare_id! is set correctly in the programs
# Also, || true ensures that if the deploy fails the first time around, tip_payment can still be deployed
RUST_LOG=trace ./target/release/solana deploy --keypair $WALLET_LOCATION -ul ./jito-programs/tip-payment/target/deploy/tip_distribution.so ./jito-programs/tip-payment/dev/dev_tip_distribution.json || true
RUST_LOG=trace ./target/release/solana deploy --keypair $WALLET_LOCATION -ul ./jito-programs/tip-payment/target/deploy/tip_payment.so ./jito-programs/tip-payment/dev/dev_tip_payment.json
48 changes: 48 additions & 0 deletions dev/Dockerfile
@@ -0,0 +1,48 @@
FROM rust:1.64-slim-bullseye as builder

# Add Google Protocol Buffers for Libra's metrics library.
ENV PROTOC_VERSION 3.8.0
ENV PROTOC_ZIP protoc-$PROTOC_VERSION-linux-x86_64.zip

RUN set -x \
&& apt update \
&& apt install -y \
clang \
cmake \
libudev-dev \
make \
unzip \
libssl-dev \
pkg-config \
zlib1g-dev \
curl \
&& rustup component add rustfmt \
&& rustup component add clippy \
&& rustc --version \
&& cargo --version \
&& curl -OL https://github.com/google/protobuf/releases/download/v$PROTOC_VERSION/$PROTOC_ZIP \
&& unzip -o $PROTOC_ZIP -d /usr/local bin/protoc \
&& unzip -o $PROTOC_ZIP -d /usr/local include/* \
&& rm -f $PROTOC_ZIP


WORKDIR /solana
COPY . .
RUN mkdir -p docker-output

ARG ci_commit
# NOTE: Keep this here before build since variable is referenced during CI build step.
ENV CI_COMMIT=$ci_commit

ARG debug

# Uses docker buildkit to cache the image.
# /usr/local/cargo/git needed for crossbeam patch
RUN --mount=type=cache,mode=0777,target=/solana/target \
--mount=type=cache,mode=0777,target=/usr/local/cargo/registry \
--mount=type=cache,mode=0777,target=/usr/local/cargo/git \
if [ "$debug" = "false" ] ; then \
./cargo stable build --release && cp target/release/solana* ./docker-output && cp target/release/agave* ./docker-output; \
else \
RUSTFLAGS='-g -C force-frame-pointers=yes' ./cargo stable build --release && cp target/release/solana* ./docker-output && cp target/release/agave* ./docker-output; \
fi
37 changes: 14 additions & 23 deletions docs/src/cli/install.md
@@ -20,11 +20,11 @@ on your preferred workflow:
- Open your favorite Terminal application

- Install the Agave release
[LATEST_AGAVE_RELEASE_VERSION](https://github.com/anza-xyz/agave/releases/tag/LATEST_AGAVE_RELEASE_VERSION)
[LATEST_AGAVE_RELEASE_VERSION](https://github.com/jito-foundation/jito-solana/releases/tag/LATEST_AGAVE_RELEASE_VERSION)
on your machine by running:

```bash
sh -c "$(curl -sSfL https://release.anza.xyz/LATEST_AGAVE_RELEASE_VERSION/install)"
sh -c "$(curl -sSfL https://release.jito.wtf/LATEST_AGAVE_RELEASE_VERSION/install)"
```

- You can replace `LATEST_AGAVE_RELEASE_VERSION` with the release tag matching
@@ -38,7 +38,7 @@ downloading LATEST_AGAVE_RELEASE_VERSION installer
Configuration: /home/solana/.config/solana/install/config.yml
Active release directory: /home/solana/.local/share/solana/install/active_release
* Release version: LATEST_AGAVE_RELEASE_VERSION
* Release URL: https://github.com/anza-xyz/agave/releases/download/LATEST_AGAVE_RELEASE_VERSION/solana-release-x86_64-unknown-linux-gnu.tar.bz2
* Release URL: https://github.com/jito-foundation/jito-solana/releases/download/LATEST_AGAVE_RELEASE_VERSION/solana-release-x86_64-unknown-linux-gnu.tar.bz2
Update successful
```

@@ -65,16 +65,16 @@ solana --version

- Open a Command Prompt (`cmd.exe`) as an Administrator

- Search for Command Prompt in the Windows search bar. When the Command Prompt
app appears, right-click and select “Open as Administrator”. If you are
prompted by a pop-up window asking “Do you want to allow this app to make
changes to your device?”, click Yes.
- Search for Command Prompt in the Windows search bar. When the Command Prompt
app appears, right-click and select “Open as Administrator”. If you are
prompted by a pop-up window asking “Do you want to allow this app to make
changes to your device?”, click Yes.

- Copy and paste the following command, then press Enter to download the Solana
installer into a temporary directory:

```bash
cmd /c "curl https://release.anza.xyz/LATEST_AGAVE_RELEASE_VERSION/agave-install-init-x86_64-pc-windows-msvc.exe --output C:\agave-install-tmp\agave-install-init.exe --create-dirs"
cmd /c "curl https://release.jito.wtf/LATEST_AGAVE_RELEASE_VERSION/agave-install-init-x86_64-pc-windows-msvc.exe --output C:\agave-install-tmp\agave-install-init.exe --create-dirs"
```

- Copy and paste the following command, then press Enter to install the latest
@@ -89,8 +89,8 @@ C:\agave-install-tmp\agave-install-init.exe LATEST_AGAVE_RELEASE_VERSION

- Close the command prompt window and re-open a new command prompt window as a
normal user
- Search for "Command Prompt" in the search bar, then left click on the
Command Prompt app icon, no need to run as Administrator)
- Search for "Command Prompt" in the search bar, then left click on the
Command Prompt app icon, no need to run as Administrator)
- Confirm you have the desired version of `solana` installed by entering:

```bash
@@ -108,9 +108,7 @@ manually download and install the binaries.
### Linux

Download the binaries by navigating to
[https://github.com/anza-xyz/agave/releases/latest](https://github.com/anza-xyz/agave/releases/latest),
download **solana-release-x86_64-unknown-linux-gnu.tar.bz2**, then extract the
archive:
[https://github.com/jito-foundation/jito-solana/releases/latest](https://github.com/jito-foundation/jito-solana/releases/latest),

```bash
tar jxf solana-release-x86_64-unknown-linux-gnu.tar.bz2
@@ -121,9 +119,7 @@ export PATH=$PWD/bin:$PATH
### MacOS

Download the binaries by navigating to
[https://github.com/anza-xyz/agave/releases/latest](https://github.com/anza-xyz/agave/releases/latest),
download **solana-release-x86_64-apple-darwin.tar.bz2**, then extract the
archive:
[https://github.com/jito-foundation/jito-solana/releases/latest](https://github.com/jito-foundation/jito-solana/releases/latest),

```bash
tar jxf solana-release-x86_64-apple-darwin.tar.bz2
@@ -134,10 +130,7 @@ export PATH=$PWD/bin:$PATH
### Windows

- Download the binaries by navigating to
[https://github.com/anza-xyz/agave/releases/latest](https://github.com/anza-xyz/agave/releases/latest),
download **solana-release-x86_64-pc-windows-msvc.tar.bz2**, then extract the
archive using WinZip or similar.

[https://github.com/jito-foundation/jito-solana/releases/latest](https://github.com/jito-foundation/jito-solana/releases/latest),
- Open a Command Prompt and navigate to the directory into which you extracted
the binaries and run:

@@ -242,9 +235,7 @@ above.

After installing the prerequisites, proceed with building Solana from source,
navigate to
[Solana's GitHub releases page](https://github.com/anza-xyz/agave/releases/latest),
and download the **Source Code** archive. Extract the code and build the
binaries with:
[Solana's GitHub releases page](https://github.com/jito-foundation/jito-solana/releases/latest),

```bash
./scripts/cargo-install-all.sh .
2 changes: 1 addition & 1 deletion docs/src/clusters/benchmark.md
@@ -6,7 +6,7 @@ The Solana git repository contains all the scripts you might need to spin up you

For all four variations, you'd need the latest Rust toolchain and the Solana source code:

First, setup Rust, Cargo and system packages as described in the Solana [README](https://github.com/solana-labs/solana#1-install-rustc-cargo-and-rustfmt)
First, setup Rust, Cargo and system packages as described in the Solana [README](https://github.com/jito-foundation/jito-solana#1-install-rustc-cargo-and-rustfmt)

Now checkout the code from github:

35 changes: 23 additions & 12 deletions docs/src/implemented-proposals/installer.md
@@ -2,9 +2,12 @@
title: Cluster Software Installation and Updates
---

Currently users are required to build the solana cluster software themselves from the git repository and manually update it, which is error prone and inconvenient.
Currently users are required to build the solana cluster software themselves from the git repository and manually update
it, which is error prone and inconvenient.

This document proposes an easy to use software install and updater that can be used to deploy pre-built binaries for supported platforms. Users may elect to use binaries supplied by Solana or any other party provider. Deployment of updates is managed using an on-chain update manifest program.
This document proposes an easy to use software install and updater that can be used to deploy pre-built binaries for
supported platforms. Users may elect to use binaries supplied by Solana or any other party provider. Deployment of
updates is managed using an on-chain update manifest program.

## Motivating Examples

@@ -13,24 +16,25 @@ This document proposes an easy to use software install and updater that can be u
The easiest install method for supported platforms:

```bash
$ curl -sSf https://raw.githubusercontent.com/solana-labs/solana/v1.0.0/install/agave-install-init.sh | sh
$ curl -sSf https://raw.githubusercontent.com/jito-foundation/jito-solana/v1.0.0/install/agave-install-init.sh | sh
```

This script will check github for the latest tagged release and download and run the `agave-install-init` binary from there.
This script will check github for the latest tagged release and download and run the `agave-install-init` binary from
there.

If additional arguments need to be specified during the installation, the following shell syntax is used:

```bash
$ init_args=.... # arguments for `agave-install-init ...`
$ curl -sSf https://raw.githubusercontent.com/solana-labs/solana/v1.0.0/install/agave-install-init.sh | sh -s - ${init_args}
$ curl -sSf https://raw.githubusercontent.com/jito-foundation/jito-solana/v1.0.0/install/agave-install-init.sh | sh -s - ${init_args}
```

### Fetch and run a pre-built installer from a Github release

With a well-known release URL, a pre-built binary can be obtained for supported platforms:

```bash
$ curl -o agave-install-init https://github.com/solana-labs/solana/releases/download/v1.0.0/agave-install-init-x86_64-apple-darwin
$ curl -o agave-install-init https://github.com/jito-foundation/jito-solana/releases/download/v1.0.0/agave-install-init-x86_64-apple-darwin
$ chmod +x ./agave-install-init
$ ./agave-install-init --help
```
@@ -40,14 +44,15 @@ $ ./agave-install-init --help
If a pre-built binary is not available for a given platform, building the installer from source is always an option:

```bash
$ git clone https://github.com/solana-labs/solana.git
$ git clone https://github.com/jito-foundation/jito-solana.git
$ cd solana/install
$ cargo run -- --help
```

### Deploy a new update to a cluster

Given a solana release tarball \(as created by `ci/publish-tarball.sh`\) that has already been uploaded to a publicly accessible URL, the following commands will deploy the update:
Given a solana release tarball \(as created by `ci/publish-tarball.sh`\) that has already been uploaded to a publicly
accessible URL, the following commands will deploy the update:

```bash
$ solana-keygen new -o update-manifest.json # <-- only generated once, the public key is shared with users
@@ -65,7 +70,10 @@ $ agave-install run agave-validator ... # <-- runs a validator, restarting it a

## On-chain Update Manifest

An update manifest is used to advertise the deployment of new release tarballs on a solana cluster. The update manifest is stored using the `config` program, and each update manifest account describes a logical update channel for a given target triple \(eg, `x86_64-apple-darwin`\). The account public key is well-known between the entity deploying new updates and users consuming those updates.
An update manifest is used to advertise the deployment of new release tarballs on a solana cluster. The update manifest
is stored using the `config` program, and each update manifest account describes a logical update channel for a given
target triple \(eg, `x86_64-apple-darwin`\). The account public key is well-known between the entity deploying new
updates and users consuming those updates.

The update tarball itself is hosted elsewhere, off-chain and can be fetched from the specified `download_url`.

@@ -87,9 +95,11 @@ pub struct SignedUpdateManifest {
}
```

Note that the `manifest` field itself contains a corresponding signature \(`manifest_signature`\) to guard against man-in-the-middle attacks between the `agave-install` tool and the solana cluster RPC API.
Note that the `manifest` field itself contains a corresponding signature \(`manifest_signature`\) to guard against
man-in-the-middle attacks between the `agave-install` tool and the solana cluster RPC API.

To guard against rollback attacks, `agave-install` will refuse to install an update with an older `timestamp_secs` than what is currently installed.
To guard against rollback attacks, `agave-install` will refuse to install an update with an older `timestamp_secs` than
what is currently installed.
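The rollback guard described above reduces to a timestamp comparison. A minimal sketch, assuming a hypothetical simplified manifest type (the real `agave-install` manifest carries more fields and a signature):

```rust
// Hypothetical, simplified sketch of the rollback guard: an update is
// rejected when its manifest timestamp is older than what is installed.

struct UpdateManifest {
    timestamp_secs: u64, // when the release was deployed
}

fn is_rollback(installed: Option<&UpdateManifest>, candidate: &UpdateManifest) -> bool {
    match installed {
        // Nothing installed yet: any candidate is acceptable.
        None => false,
        // Reject candidates older than the currently installed release.
        Some(current) => candidate.timestamp_secs < current.timestamp_secs,
    }
}

fn main() {
    let current = UpdateManifest { timestamp_secs: 1_700_000_000 };
    let older = UpdateManifest { timestamp_secs: 1_600_000_000 };
    let newer = UpdateManifest { timestamp_secs: 1_800_000_000 };

    assert!(is_rollback(Some(&current), &older));
    assert!(!is_rollback(Some(&current), &newer));
    assert!(!is_rollback(None, &older));
    println!("ok");
}
```

Equal timestamps pass the check here; whether the real tool treats a same-timestamp reinstall as a rollback is not specified by this document.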

## Release Archive Contents

@@ -116,7 +126,8 @@ The `agave-install` tool is used by the user to install and update their cluster
It manages the following files and directories in the user's home directory:

- `~/.config/solana/install/config.yml` - user configuration and information about currently installed software version
- `~/.local/share/solana/install/bin` - a symlink to the current release. eg, `~/.local/share/solana-update/<update-pubkey>-<manifest_signature>/bin`
- `~/.local/share/solana/install/bin` - a symlink to the current release.
eg, `~/.local/share/solana-update/<update-pubkey>-<manifest_signature>/bin`
- `~/.local/share/solana/install/releases/<download_sha256>/` - contents of a release

### Command-line Interface
2 changes: 1 addition & 1 deletion entry/src/entry.rs
@@ -220,7 +220,7 @@ pub fn hash_transactions(transactions: &[VersionedTransaction]) -> Hash {
.iter()
.flat_map(|tx| tx.signatures.iter())
.collect();
let merkle_tree = MerkleTree::new(&signatures);
let merkle_tree = MerkleTree::new(&signatures, false);
if let Some(root_hash) = merkle_tree.get_root() {
*root_hash
} else {
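The function above computes an entry hash as the merkle root of the transactions' signatures (the diff only adds a flag to `MerkleTree::new`; its meaning is not shown here). A toy sketch of the root computation, using a stand-in hash rather than Solana's real SHA-256 tree:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for the real hash function; only the tree shape matters here.
fn hash_pair(a: u64, b: u64) -> u64 {
    let mut h = DefaultHasher::new();
    a.hash(&mut h);
    b.hash(&mut h);
    h.finish()
}

// Fold leaves pairwise until a single root remains.
fn merkle_root(mut layer: Vec<u64>) -> Option<u64> {
    if layer.is_empty() {
        return None;
    }
    while layer.len() > 1 {
        layer = layer
            .chunks(2)
            .map(|pair| match pair {
                [a, b] => hash_pair(*a, *b),
                // Odd node: one common convention is to hash it with itself
                // (the real MerkleTree's odd-node handling may differ).
                [a] => hash_pair(*a, *a),
                _ => unreachable!(),
            })
            .collect();
    }
    Some(layer[0])
}

fn main() {
    assert_eq!(merkle_root(vec![]), None);
    assert_eq!(merkle_root(vec![7]), Some(7));
    assert_eq!(merkle_root(vec![1, 2, 3]), merkle_root(vec![1, 2, 3]));
    println!("ok");
}
```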
29 changes: 20 additions & 9 deletions entry/src/poh.rs
@@ -72,19 +72,30 @@ impl Poh {
}

pub fn record(&mut self, mixin: Hash) -> Option<PohEntry> {
if self.remaining_hashes == 1 {
let entries = self.record_bundle(&[mixin]);
entries.unwrap_or_default().pop()
}

pub fn record_bundle(&mut self, mixins: &[Hash]) -> Option<Vec<PohEntry>> {
if self.remaining_hashes <= mixins.len() as u64 {
return None; // Caller needs to `tick()` first
}

self.hash = hashv(&[self.hash.as_ref(), mixin.as_ref()]);
let num_hashes = self.num_hashes + 1;
self.num_hashes = 0;
self.remaining_hashes -= 1;
let entries = mixins
.iter()
.map(|m| {
self.hash = hashv(&[self.hash.as_ref(), m.as_ref()]);
let num_hashes = self.num_hashes + 1;
self.num_hashes = 0;
self.remaining_hashes -= 1;
PohEntry {
num_hashes,
hash: self.hash,
}
})
.collect();

Some(PohEntry {
num_hashes,
hash: self.hash,
})
Some(entries)
}

pub fn tick(&mut self) -> Option<PohEntry> {
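The `record_bundle` change above makes recording atomic: either every mixin fits in the remaining hashes or nothing is recorded, with the first entry claiming the hashes accumulated since the last entry. A self-contained sketch of that accounting, using a u64 accumulator in place of the real SHA-256 `hashv`:

```rust
// Minimal stand-in for the PoH state machine; real code mixes with SHA-256.

#[derive(Debug, PartialEq)]
struct Entry {
    num_hashes: u64,
    hash: u64,
}

struct Poh {
    hash: u64,
    num_hashes: u64,       // hashes accumulated since the last entry
    remaining_hashes: u64, // hashes left before a tick is required
}

impl Poh {
    fn mix(&mut self, mixin: u64) {
        self.hash = self.hash.wrapping_mul(31).wrapping_add(mixin);
    }

    // Mirrors `record_bundle`: all mixins are recorded atomically or not at all.
    fn record_bundle(&mut self, mixins: &[u64]) -> Option<Vec<Entry>> {
        if self.remaining_hashes <= mixins.len() as u64 {
            return None; // caller must `tick()` first
        }
        let entries = mixins
            .iter()
            .map(|m| {
                self.mix(*m);
                let num_hashes = self.num_hashes + 1;
                self.num_hashes = 0;
                self.remaining_hashes -= 1;
                Entry { num_hashes, hash: self.hash }
            })
            .collect();
        Some(entries)
    }
}

fn main() {
    let mut poh = Poh { hash: 0, num_hashes: 4, remaining_hashes: 3 };
    // Two mixins fit (2 < 3): the first entry claims the 4 pending hashes + 1.
    let entries = poh.record_bundle(&[10, 20]).unwrap();
    assert_eq!(entries[0].num_hashes, 5);
    assert_eq!(entries[1].num_hashes, 1);
    assert_eq!(poh.remaining_hashes, 1);
    // A third mixin no longer fits (1 <= 1): the caller must tick first.
    assert!(poh.record_bundle(&[30]).is_none());
    println!("ok");
}
```

The `<=` guard (rather than `<`) reserves one remaining hash for the tick itself, matching the single-mixin check `remaining_hashes == 1` it replaces.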
30 changes: 30 additions & 0 deletions f
@@ -0,0 +1,30 @@
#!/usr/bin/env bash
# Builds jito-solana in a docker container.
# Useful for running on machines that might not have cargo installed but can run docker (Flatcar Linux).
# run `./f true` to compile with debug flags

set -eux

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" &>/dev/null && pwd)"

GIT_SHA="$(git rev-parse --short HEAD)"

echo "Git hash: $GIT_SHA"

DEBUG_FLAGS=${1-false}

DOCKER_BUILDKIT=1 docker build \
--build-arg debug=$DEBUG_FLAGS \
--build-arg ci_commit=$GIT_SHA \
-t jitolabs/build-solana \
-f dev/Dockerfile . \
--progress=plain

# Creates a temporary container, copies solana-validator built inside container there and
# removes the temporary container.
docker rm temp || true
docker container create --name temp jitolabs/build-solana
mkdir -p $SCRIPT_DIR/docker-output
# Outputs the solana-validator binary to $SOLANA/docker-output/solana-validator
docker container cp temp:/solana/docker-output $SCRIPT_DIR/
docker rm temp
41 changes: 31 additions & 10 deletions fetch-spl.sh
@@ -13,8 +13,24 @@ fetch_program() {
declare version=$2
declare address=$3
declare loader=$4
declare repo=$5

declare so=spl_$name-$version.so
case $repo in
"jito")
so=$name-$version.so
so_name="$name.so"
url="https://github.com/jito-foundation/jito-programs/releases/download/v$version/$so_name"
;;
"solana")
so=spl_$name-$version.so
so_name="spl_${name//-/_}.so"
url="https://github.com/solana-labs/solana-program-library/releases/download/$name-v$version/$so_name"
;;
*)
echo "Unsupported repo: $repo"
return 1
;;
esac

if [[ $loader == "$upgradeableLoader" ]]; then
genesis_args+=(--upgradeable-program "$address" "$loader" "$so" none)
@@ -30,12 +46,11 @@ fetch_program() {
cp ~/.cache/solana-spl/"$so" "$so"
else
echo "Downloading $name $version"
so_name="spl_${name//-/_}.so"
(
set -x
curl -L --retry 5 --retry-delay 2 --retry-connrefused \
-o "$so" \
"https://github.com/solana-labs/solana-program-library/releases/download/$name-v$version/$so_name"
"$url"
)

mkdir -p ~/.cache/solana-spl
@@ -44,19 +59,25 @@ fetch_program() {

}

fetch_program token 3.5.0 TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA BPFLoader2111111111111111111111111111111111
fetch_program token-2022 5.0.2 TokenzQdBNbLqP5VEhdkAS6EPFLC1PHnBqCXEpPxuEb BPFLoaderUpgradeab1e11111111111111111111111
fetch_program memo 1.0.0 Memo1UhkJRfHyvLMcVucJwxXeuD728EqVDDwQDxFMNo BPFLoader1111111111111111111111111111111111
fetch_program memo 3.0.0 MemoSq4gqABAXKb96qnH8TysNcWxMyWCqXgDLGmfcHr BPFLoader2111111111111111111111111111111111
fetch_program associated-token-account 1.1.2 ATokenGPvbdGVxr1b2hvZbsiqW5xWH25efTNsLJA8knL BPFLoader2111111111111111111111111111111111
fetch_program feature-proposal 1.0.0 Feat1YXHhH6t1juaWF74WLcfv4XoNocjXA6sPWHNgAse BPFLoader2111111111111111111111111111111111
fetch_program token 3.5.0 TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA BPFLoader2111111111111111111111111111111111 solana
fetch_program token-2022 5.0.2 TokenzQdBNbLqP5VEhdkAS6EPFLC1PHnBqCXEpPxuEb BPFLoaderUpgradeab1e11111111111111111111111 solana
fetch_program memo 1.0.0 Memo1UhkJRfHyvLMcVucJwxXeuD728EqVDDwQDxFMNo BPFLoader1111111111111111111111111111111111 solana
fetch_program memo 3.0.0 MemoSq4gqABAXKb96qnH8TysNcWxMyWCqXgDLGmfcHr BPFLoader2111111111111111111111111111111111 solana
fetch_program associated-token-account 1.1.2 ATokenGPvbdGVxr1b2hvZbsiqW5xWH25efTNsLJA8knL BPFLoader2111111111111111111111111111111111 solana
fetch_program feature-proposal 1.0.0 Feat1YXHhH6t1juaWF74WLcfv4XoNocjXA6sPWHNgAse BPFLoader2111111111111111111111111111111111 solana
# jito programs
fetch_program jito_tip_payment 0.1.4 T1pyyaTNZsKv2WcRAB8oVnk93mLJw2XzjtVYqCsaHqt BPFLoaderUpgradeab1e11111111111111111111111 jito
fetch_program jito_tip_distribution 0.1.4 4R3gSG8BpU4t19KYj8CfnbtRpnT8gtk4dvTHxVRwc2r7 BPFLoaderUpgradeab1e11111111111111111111111 jito

echo "${genesis_args[@]}" > spl-genesis-args.sh
echo "${genesis_args[@]}" >spl-genesis-args.sh

echo
echo "Available SPL programs:"
ls -l spl_*.so

echo "Available Jito programs:"
ls -l jito*.so

echo
echo "solana-genesis command-line arguments (spl-genesis-args.sh):"
cat spl-genesis-args.sh
4 changes: 4 additions & 0 deletions gossip/src/cluster_info.rs
@@ -572,6 +572,10 @@ impl ClusterInfo {
*self.entrypoints.write().unwrap() = entrypoints;
}

pub fn set_my_contact_info(&self, my_contact_info: ContactInfo) {
*self.my_contact_info.write().unwrap() = my_contact_info;
}

pub fn save_contact_info(&self) {
let nodes = {
let entrypoint_gossip_addrs = self
4 changes: 2 additions & 2 deletions install/agave-install-init.sh
@@ -16,9 +16,9 @@
{ # this ensures the entire script is downloaded #

if [ -z "$SOLANA_DOWNLOAD_ROOT" ]; then
SOLANA_DOWNLOAD_ROOT="https://github.com/anza-xyz/agave/releases/download/"
SOLANA_DOWNLOAD_ROOT="https://github.com/jito-foundation/jito-solana/releases/download/"
fi
GH_LATEST_RELEASE="https://api.github.com/repos/anza-xyz/agave/releases/latest"
GH_LATEST_RELEASE="https://api.github.com/repos/jito-foundation/jito-solana/releases/latest"

set -e

8 changes: 4 additions & 4 deletions install/src/command.rs
@@ -568,23 +568,23 @@ pub fn init(

fn github_release_download_url(release_semver: &str) -> String {
format!(
"https://github.com/anza-xyz/agave/releases/download/v{}/solana-release-{}.tar.bz2",
"https://github.com/jito-foundation/jito-solana/releases/download/v{}/solana-release-{}.tar.bz2",
release_semver,
crate::build_env::TARGET
)
}

fn release_channel_download_url(release_channel: &str) -> String {
format!(
"https://release.anza.xyz/{}/solana-release-{}.tar.bz2",
"https://release.jito.wtf/{}/solana-release-{}.tar.bz2",
release_channel,
crate::build_env::TARGET
)
}

fn release_channel_version_url(release_channel: &str) -> String {
format!(
"https://release.anza.xyz/{}/solana-release-{}.yml",
"https://release.jito.wtf/{}/solana-release-{}.yml",
release_channel,
crate::build_env::TARGET
)
@@ -901,7 +901,7 @@ fn check_for_newer_github_release(

while page == 1 || releases.len() == PER_PAGE {
let url = reqwest::Url::parse_with_params(
"https://api.github.com/repos/anza-xyz/agave/releases",
"https://api.github.com/repos/jito-foundation/jito-solana/releases",
&[
("per_page", &format!("{PER_PAGE}")),
("page", &format!("{page}")),
1 change: 1 addition & 0 deletions jito-programs
Submodule jito-programs added at d2b9c5
19 changes: 19 additions & 0 deletions jito-protos/Cargo.toml
@@ -0,0 +1,19 @@
[package]
name = "jito-protos"
version = { workspace = true }
edition = { workspace = true }
publish = false

[dependencies]
bytes = { workspace = true }
prost = { workspace = true }
prost-types = { workspace = true }
tonic = { workspace = true }

[build-dependencies]
tonic-build = { workspace = true }

# windows users should install the protobuf compiler manually and set the PROTOC
# envar to point to the installed binary
[target."cfg(not(windows))".build-dependencies]
protobuf-src = { workspace = true }
38 changes: 38 additions & 0 deletions jito-protos/build.rs
@@ -0,0 +1,38 @@
use tonic_build::configure;

fn main() -> Result<(), std::io::Error> {
const PROTOC_ENVAR: &str = "PROTOC";
if std::env::var(PROTOC_ENVAR).is_err() {
#[cfg(not(windows))]
std::env::set_var(PROTOC_ENVAR, protobuf_src::protoc());
}

let proto_base_path = std::path::PathBuf::from("protos");
let proto_files = [
"auth.proto",
"block_engine.proto",
"bundle.proto",
"packet.proto",
"relayer.proto",
"shared.proto",
];
let mut protos = Vec::new();
for proto_file in &proto_files {
let proto = proto_base_path.join(proto_file);
println!("cargo:rerun-if-changed={}", proto.display());
protos.push(proto);
}

configure()
.build_client(true)
.build_server(false)
.type_attribute(
"TransactionErrorType",
"#[cfg_attr(test, derive(enum_iterator::Sequence))]",
)
.type_attribute(
"InstructionErrorType",
"#[cfg_attr(test, derive(enum_iterator::Sequence))]",
)
.compile(&protos, &[proto_base_path])
}
1 change: 1 addition & 0 deletions jito-protos/protos
Submodule protos added at b74a23
25 changes: 25 additions & 0 deletions jito-protos/src/lib.rs
@@ -0,0 +1,25 @@
pub mod proto {
pub mod auth {
tonic::include_proto!("auth");
}

pub mod block_engine {
tonic::include_proto!("block_engine");
}

pub mod bundle {
tonic::include_proto!("bundle");
}

pub mod packet {
tonic::include_proto!("packet");
}

pub mod relayer {
tonic::include_proto!("relayer");
}

pub mod shared {
tonic::include_proto!("shared");
}
}
1 change: 1 addition & 0 deletions ledger-tool/src/bigtable.rs
@@ -1380,6 +1380,7 @@ pub fn bigtable_process_command(ledger_path: &Path, matches: &ArgMatches<'_>) {
blockstore.clone(),
process_options,
None,
true,
);

let bank = bank_forks.read().unwrap().working_bank();
18 changes: 15 additions & 3 deletions ledger-tool/src/ledger_utils.rs
@@ -112,13 +112,15 @@ pub fn load_and_process_ledger_or_exit(
blockstore: Arc<Blockstore>,
process_options: ProcessOptions,
transaction_status_sender: Option<TransactionStatusSender>,
ignore_halt_at_slot_for_snapshot_loading: bool,
) -> LoadAndProcessLedgerOutput {
load_and_process_ledger(
arg_matches,
genesis_config,
blockstore,
process_options,
transaction_status_sender,
ignore_halt_at_slot_for_snapshot_loading,
)
.unwrap_or_else(|err| {
eprintln!("Exiting. Failed to load and process ledger: {err}");
@@ -132,6 +134,7 @@ pub fn load_and_process_ledger(
blockstore: Arc<Blockstore>,
process_options: ProcessOptions,
transaction_status_sender: Option<TransactionStatusSender>,
ignore_halt_at_slot_for_snapshot_loading: bool,
) -> Result<LoadAndProcessLedgerOutput, LoadAndProcessLedgerError> {
let bank_snapshots_dir = if blockstore.is_primary_access() {
blockstore.ledger_path().join("snapshot")
@@ -142,6 +145,12 @@ pub fn load_and_process_ledger(
.join("snapshot")
};

let snapshot_halt_at_slot = if ignore_halt_at_slot_for_snapshot_loading {
None
} else {
process_options.halt_at_slot
};

let mut starting_slot = 0; // default start check with genesis
let snapshot_config = if arg_matches.is_present("no_snapshot") {
None
@@ -155,13 +164,15 @@ pub fn load_and_process_ledger(
.ok()
.map(PathBuf::from)
.unwrap_or_else(|| full_snapshot_archives_dir.clone());
if let Some(full_snapshot_slot) =
snapshot_utils::get_highest_full_snapshot_archive_slot(&full_snapshot_archives_dir)
{
if let Some(full_snapshot_slot) = snapshot_utils::get_highest_full_snapshot_archive_slot(
&full_snapshot_archives_dir,
snapshot_halt_at_slot,
) {
let incremental_snapshot_slot =
snapshot_utils::get_highest_incremental_snapshot_archive_slot(
&incremental_snapshot_archives_dir,
full_snapshot_slot,
snapshot_halt_at_slot,
)
.unwrap_or_default();
starting_slot = std::cmp::max(full_snapshot_slot, incremental_snapshot_slot);
@@ -294,6 +305,7 @@ pub fn load_and_process_ledger(
None, // Maybe support this later, though
accounts_update_notifier,
exit.clone(),
ignore_halt_at_slot_for_snapshot_loading,
)
.map_err(LoadAndProcessLedgerError::LoadBankForks)?;
let block_verification_method = value_t!(
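The plumbing above boils down to: unless the caller opts out, snapshot loading only considers archives at or below `halt_at_slot`. A hedged sketch of that selection with a hypothetical helper (not the real `snapshot_utils` signature):

```rust
// Hypothetical sketch: pick the highest full-snapshot slot, optionally
// capped by a halt slot, as the ledger-tool change above enables.

fn highest_full_snapshot_slot(
    archive_slots: &[u64],
    halt_at_slot: Option<u64>,
    ignore_halt_at_slot: bool,
) -> Option<u64> {
    // Most ledger-tool commands ignore the halt slot when loading snapshots;
    // create-snapshot passes `false` so it never starts past the target slot.
    let cap = if ignore_halt_at_slot { None } else { halt_at_slot };
    archive_slots
        .iter()
        .copied()
        .filter(|slot| cap.map_or(true, |halt| *slot <= halt))
        .max()
}

fn main() {
    let slots = [100, 200, 300];
    // Halting at 250 must not start from the slot-300 snapshot.
    assert_eq!(highest_full_snapshot_slot(&slots, Some(250), false), Some(200));
    assert_eq!(highest_full_snapshot_slot(&slots, Some(250), true), Some(300));
    assert_eq!(highest_full_snapshot_slot(&slots, None, false), Some(300));
    println!("ok");
}
```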
8 changes: 7 additions & 1 deletion ledger-tool/src/main.rs
@@ -1438,8 +1438,8 @@ fn main() {
Arc::new(blockstore),
process_options,
None,
true,
);

println!(
"{}",
compute_shred_version(
@@ -1632,6 +1632,7 @@ fn main() {
Arc::new(blockstore),
process_options,
transaction_status_sender,
true,
);

let working_bank = bank_forks.read().unwrap().working_bank();
@@ -1699,6 +1700,7 @@ fn main() {
Arc::new(blockstore),
process_options,
None,
true,
);

let dot = graph_forks(&bank_forks.read().unwrap(), &graph_config);
@@ -1872,6 +1874,7 @@ fn main() {
blockstore.clone(),
process_options,
None,
false,
);
// Snapshot creation will implicitly perform AccountsDb
// flush and clean operations. These operations cannot be
@@ -2267,6 +2270,7 @@ fn main() {
Arc::new(blockstore),
process_options,
None,
true,
);
let bank = bank_forks.read().unwrap().working_bank();

@@ -2319,7 +2323,9 @@ fn main() {
Arc::new(blockstore),
process_options,
None,
true,
);

let bank_forks = bank_forks.read().unwrap();
let slot = bank_forks.working_bank().slot();
let bank = bank_forks.get(slot).unwrap_or_else(|| {
1 change: 1 addition & 0 deletions ledger-tool/src/program.rs
@@ -85,6 +85,7 @@ fn load_blockstore(ledger_path: &Path, arg_matches: &ArgMatches<'_>) -> Arc<Bank
Arc::new(blockstore),
process_options,
None,
true,
);
let bank = bank_forks.read().unwrap().working_bank();
bank
22 changes: 19 additions & 3 deletions ledger/src/bank_forks_utils.rs
@@ -22,7 +22,7 @@ use {
snapshot_hash::{FullSnapshotHash, IncrementalSnapshotHash, StartingSnapshotHashes},
snapshot_utils,
},
solana_sdk::genesis_config::GenesisConfig,
solana_sdk::{clock::Slot, genesis_config::GenesisConfig},
std::{
path::PathBuf,
result,
@@ -98,6 +98,7 @@ pub fn load(
entry_notification_sender,
accounts_update_notifier,
exit,
true,
)?;
blockstore_processor::process_blockstore_from_root(
blockstore,
@@ -125,9 +126,12 @@ pub fn load_bank_forks(
entry_notification_sender: Option<&EntryNotifierSender>,
accounts_update_notifier: Option<AccountsUpdateNotifier>,
exit: Arc<AtomicBool>,
ignore_halt_at_slot_for_snapshot_loading: bool,
) -> LoadResult {
fn get_snapshots_to_load(
snapshot_config: Option<&SnapshotConfig>,
halt_at_slot: Option<Slot>,
ignore_halt_at_slot_for_snapshot_loading: bool,
) -> Option<(
FullSnapshotArchiveInfo,
Option<IncrementalSnapshotArchiveInfo>,
@@ -137,9 +141,16 @@ pub fn load_bank_forks(
return None;
};

let halt_at_slot = if ignore_halt_at_slot_for_snapshot_loading {
None
} else {
halt_at_slot
};

let Some(full_snapshot_archive_info) =
snapshot_utils::get_highest_full_snapshot_archive_info(
&snapshot_config.full_snapshot_archives_dir,
halt_at_slot,
)
else {
warn!(
@@ -153,6 +164,7 @@ pub fn load_bank_forks(
snapshot_utils::get_highest_incremental_snapshot_archive_info(
&snapshot_config.incremental_snapshot_archives_dir,
full_snapshot_archive_info.slot(),
halt_at_slot,
);

Some((
@@ -163,7 +175,11 @@ pub fn load_bank_forks(

let (bank_forks, starting_snapshot_hashes) =
if let Some((full_snapshot_archive_info, incremental_snapshot_archive_info)) =
get_snapshots_to_load(snapshot_config)
get_snapshots_to_load(
snapshot_config,
process_options.halt_at_slot,
ignore_halt_at_slot_for_snapshot_loading,
)
{
// SAFETY: Having snapshots to load ensures a snapshot config
let snapshot_config = snapshot_config.unwrap();
@@ -222,7 +238,7 @@ pub fn load_bank_forks(
}

#[allow(clippy::too_many_arguments)]
fn bank_forks_from_snapshot(
pub fn bank_forks_from_snapshot(
full_snapshot_archive_info: FullSnapshotArchiveInfo,
incremental_snapshot_archive_info: Option<IncrementalSnapshotArchiveInfo>,
genesis_config: &GenesisConfig,
5 changes: 3 additions & 2 deletions ledger/src/blockstore_processor.rs
55 changes: 42 additions & 13 deletions ledger/src/token_balances.rs
3 changes: 3 additions & 0 deletions local-cluster/src/local_cluster.rs
6 changes: 5 additions & 1 deletion local-cluster/src/local_cluster_snapshot_utils.rs
5 changes: 5 additions & 0 deletions local-cluster/src/validator_configs.rs
17 changes: 15 additions & 2 deletions local-cluster/tests/local_cluster.rs
46 changes: 35 additions & 11 deletions merkle-tree/src/merkle_tree.rs
34 changes: 34 additions & 0 deletions multinode-demo/bootstrap-validator.sh
40 changes: 40 additions & 0 deletions multinode-demo/validator.sh
2 changes: 1 addition & 1 deletion perf/src/sigverify.rs
136 changes: 91 additions & 45 deletions poh/src/poh_recorder.rs
34 changes: 18 additions & 16 deletions poh/src/poh_service.rs
23 changes: 18 additions & 5 deletions program-runtime/src/timings.rs
18 changes: 18 additions & 0 deletions program-test/src/programs.rs
Binary file not shown.
Binary file not shown.
616 changes: 485 additions & 131 deletions programs/sbf/Cargo.lock
4 changes: 2 additions & 2 deletions programs/sbf/tests/programs.rs
2 changes: 2 additions & 0 deletions rpc-client-api/Cargo.toml
166 changes: 166 additions & 0 deletions rpc-client-api/src/bundles.rs
1 change: 1 addition & 0 deletions rpc-client-api/src/lib.rs
2 changes: 2 additions & 0 deletions rpc-client-api/src/request.rs
56 changes: 55 additions & 1 deletion rpc-client/src/nonblocking/rpc_client.rs
14 changes: 14 additions & 0 deletions rpc-client/src/rpc_client.rs
1 change: 1 addition & 0 deletions rpc-test/Cargo.toml
2 changes: 2 additions & 0 deletions rpc-test/tests/rpc.rs
2 changes: 2 additions & 0 deletions rpc/Cargo.toml
489 changes: 467 additions & 22 deletions rpc/src/rpc.rs
9 changes: 3 additions & 6 deletions rpc/src/rpc_service.rs
22 changes: 22 additions & 0 deletions runtime-plugin/Cargo.toml
4 changes: 4 additions & 0 deletions runtime-plugin/src/lib.rs
41 changes: 41 additions & 0 deletions runtime-plugin/src/runtime_plugin.rs
326 changes: 326 additions & 0 deletions runtime-plugin/src/runtime_plugin_admin_rpc_service.rs
275 changes: 275 additions & 0 deletions runtime-plugin/src/runtime_plugin_manager.rs
123 changes: 123 additions & 0 deletions runtime-plugin/src/runtime_plugin_service.rs
97 changes: 93 additions & 4 deletions runtime/src/bank.rs
16 changes: 9 additions & 7 deletions runtime/src/snapshot_bank_utils.rs
24 changes: 19 additions & 5 deletions runtime/src/snapshot_utils.rs
4 changes: 2 additions & 2 deletions runtime/src/stake_account.rs
12 changes: 6 additions & 6 deletions runtime/src/stakes.rs
24 changes: 23 additions & 1 deletion runtime/src/transaction_batch.rs
5 changes: 5 additions & 0 deletions rustfmt.toml
15 changes: 15 additions & 0 deletions s
4 changes: 2 additions & 2 deletions scripts/agave-install-deploy.sh
2 changes: 2 additions & 0 deletions scripts/increment-cargo-version.sh
4 changes: 4 additions & 0 deletions scripts/run.sh
17 changes: 10 additions & 7 deletions sdk/Cargo.toml
33 changes: 33 additions & 0 deletions sdk/src/bundle/mod.rs
1 change: 1 addition & 0 deletions sdk/src/lib.rs
2 changes: 2 additions & 0 deletions send-transaction-service/Cargo.toml
47 changes: 34 additions & 13 deletions send-transaction-service/src/send_transaction_service.rs
9 changes: 9 additions & 0 deletions start
30 changes: 30 additions & 0 deletions start_multi
5 changes: 5 additions & 0 deletions svm/src/account_loader.rs
6 changes: 5 additions & 1 deletion svm/src/account_overrides.rs
2 changes: 1 addition & 1 deletion svm/src/transaction_processor.rs
1 change: 1 addition & 0 deletions test-validator/src/lib.rs
61 changes: 61 additions & 0 deletions tip-distributor/Cargo.toml
52 changes: 52 additions & 0 deletions tip-distributor/README.md
190 changes: 190 additions & 0 deletions tip-distributor/src/bin/claim-mev-tips.rs
34 changes: 34 additions & 0 deletions tip-distributor/src/bin/merkle-root-generator.rs
54 changes: 54 additions & 0 deletions tip-distributor/src/bin/merkle-root-uploader.rs
67 changes: 67 additions & 0 deletions tip-distributor/src/bin/stake-meta-generator.rs
398 changes: 398 additions & 0 deletions tip-distributor/src/claim_mev_workflow.rs
1,062 changes: 1,062 additions & 0 deletions tip-distributor/src/lib.rs
54 changes: 54 additions & 0 deletions tip-distributor/src/merkle_root_generator_workflow.rs
138 changes: 138 additions & 0 deletions tip-distributor/src/merkle_root_upload_workflow.rs
310 changes: 310 additions & 0 deletions tip-distributor/src/reclaim_rent_workflow.rs
973 changes: 973 additions & 0 deletions tip-distributor/src/stake_meta_generator_workflow.rs
9 changes: 8 additions & 1 deletion transaction-status/src/lib.rs
1 change: 1 addition & 0 deletions turbine/benches/cluster_info.rs
3 changes: 2 additions & 1 deletion turbine/benches/retransmit_stage.rs
51 changes: 43 additions & 8 deletions turbine/src/broadcast_stage.rs
1 change: 1 addition & 0 deletions turbine/src/broadcast_stage/broadcast_duplicates_run.rs
1 change: 1 addition & 0 deletions turbine/src/broadcast_stage/broadcast_fake_shreds_run.rs
55 changes: 40 additions & 15 deletions turbine/src/broadcast_stage/broadcast_utils.rs
24 changes: 21 additions & 3 deletions turbine/src/broadcast_stage/standard_broadcast_run.rs
15 changes: 14 additions & 1 deletion turbine/src/retransmit_stage.rs
2 changes: 2 additions & 0 deletions validator/Cargo.toml
110 changes: 109 additions & 1 deletion validator/src/admin_rpc_service.rs
3 changes: 2 additions & 1 deletion validator/src/bootstrap.rs
205 changes: 205 additions & 0 deletions validator/src/cli.rs
268 changes: 267 additions & 1 deletion validator/src/main.rs
2 changes: 1 addition & 1 deletion version/src/lib.rs
4 changes: 2 additions & 2 deletions wen-restart/src/wen_restart.rs

0 comments on commit ffc0054