
Electra spec changes for v1.5.0-beta.0 #6731

Merged: 64 commits (Jan 13, 2025)
64 commits
25feedf
First pass
pawanjay176 Aug 29, 2024
4dc6e65
Add restrictions to RuntimeVariableList api
pawanjay176 Aug 30, 2024
a9cb329
Use empty_uninitialized and fix warnings
pawanjay176 Aug 30, 2024
60100fc
Fix some todos
pawanjay176 Aug 30, 2024
13f9bba
Merge branch 'unstable' into max-blobs-preset
pawanjay176 Sep 3, 2024
e71020e
Fix take impl on RuntimeFixedList
pawanjay176 Sep 4, 2024
52bb581
cleanup
pawanjay176 Sep 4, 2024
d37733b
Fix test compilations
pawanjay176 Sep 4, 2024
12c6ef1
Fix some more tests
pawanjay176 Sep 4, 2024
2fcb293
Fix test from unstable
pawanjay176 Sep 7, 2024
21ecb58
Merge branch 'unstable' into max-blobs-preset
pawanjay176 Oct 21, 2024
9d5a3af
Implement "Bugfix and more withdrawal tests"
michaelsproul Dec 19, 2024
f892849
Implement "Add missed exit checks to consolidation processing"
michaelsproul Dec 19, 2024
f27216f
Implement "Update initial earliest_exit_epoch calculation"
michaelsproul Dec 19, 2024
c31f1cc
Implement "Limit consolidating balance by validator.effective_balance"
michaelsproul Dec 20, 2024
282eefb
Implement "Use 16-bit random value in validator filter"
michaelsproul Dec 20, 2024
52e602d
Implement "Do not change creds type on consolidation"
michaelsproul Dec 20, 2024
293db28
Rename PendingPartialWithdraw index field to validator_index
eserilev Jan 2, 2025
9b8a25f
Skip slots to get test to pass and add TODO
eserilev Jan 2, 2025
23d331b
Implement "Synchronously check all transactions to have non-zero length"
eserilev Jan 2, 2025
0c2c8c4
Merge remote-tracking branch 'origin/unstable' into max-blobs-preset
michaelsproul Jan 6, 2025
de01f92
Remove footgun function
michaelsproul Jan 6, 2025
1095d60
Minor simplifications
michaelsproul Jan 6, 2025
2e86585
Move from preset to config
michaelsproul Jan 6, 2025
32483d3
Fix typo
michaelsproul Jan 6, 2025
88bedf0
Revert "Remove footgun function"
michaelsproul Jan 6, 2025
063b79c
Try fixing tests
michaelsproul Jan 6, 2025
251bca7
Implement "bump minimal preset MAX_BLOB_COMMITMENTS_PER_BLOCK and KZG…
eserilev Jan 6, 2025
e4bfe71
Thread through ChainSpec
michaelsproul Jan 6, 2025
f66e179
Fix release tests
michaelsproul Jan 6, 2025
440e854
Move RuntimeFixedVector into module and rename
michaelsproul Jan 6, 2025
04b3743
Add test
michaelsproul Jan 6, 2025
3d3bc6d
Implement "Remove post-altair `initialize_beacon_state_from_eth1` fro…
eserilev Jan 6, 2025
2dbd3b7
Update preset YAML
michaelsproul Jan 7, 2025
0cd263f
Remove empty RuntimeVarList awefullness
pawanjay176 Jan 7, 2025
3b788bf
Make max_blobs_per_block a config parameter (#6329)
michaelsproul Jan 7, 2025
7c215f8
Fix tests
pawanjay176 Jan 7, 2025
0f26408
Implement max_blobs_per_block_electra
michaelsproul Jan 7, 2025
d65e821
Fix config issues
michaelsproul Jan 7, 2025
26c409c
Simplify BlobSidecarListFromRoot
michaelsproul Jan 7, 2025
eee9218
Disable PeerDAS tests
michaelsproul Jan 7, 2025
07c039c
Merge remote-tracking branch 'origin/unstable' into max-blobs-preset
michaelsproul Jan 9, 2025
d4e152c
Bump quota to account for new target (6)
michaelsproul Jan 9, 2025
a73ecb5
Remove clone
michaelsproul Jan 9, 2025
f13bdfc
Fix issue from review
pawanjay176 Jan 9, 2025
70917f7
Try to remove ugliness
pawanjay176 Jan 10, 2025
22b7fcb
Merge branch 'unstable' into max-blobs-preset
pawanjay176 Jan 10, 2025
9e972b1
Merge remote-tracking branch 'origin/unstable' into electra-alpha10
michaelsproul Jan 10, 2025
bb59e7a
Merge commit '04b3743ec1e0b650269dd8e58b540c02430d1c0d' into electra-…
michaelsproul Jan 10, 2025
1d4dc59
Merge remote-tracking branch 'pawan/max-blobs-preset' into electra-al…
michaelsproul Jan 10, 2025
bba7310
Update tests to v1.5.0-beta.0
michaelsproul Jan 10, 2025
ef13f0f
Resolve merge conflicts
eserilev Jan 10, 2025
72bcc8a
Linting
eserilev Jan 10, 2025
344ba2b
fmt
eserilev Jan 10, 2025
a45c0f0
Fix test and add TODO
eserilev Jan 10, 2025
f4fe1b8
Gracefully handle slashed proposers in fork choice tests
michaelsproul Jan 13, 2025
1205d83
Merge remote-tracking branch 'origin/unstable' into electra-alpha10
michaelsproul Jan 13, 2025
3b17732
Keep latest changes from max_blobs_per_block PR in codec.rs
michaelsproul Jan 13, 2025
e2ff440
Revert a few more regressions and add a comment
michaelsproul Jan 13, 2025
7df6560
Disable more DAS tests
michaelsproul Jan 13, 2025
4fbca37
Improve validator monitor test a little
michaelsproul Jan 13, 2025
e821e62
Make test more robust
michaelsproul Jan 13, 2025
0b36d60
Fix sync test that didn't understand blobs
michaelsproul Jan 13, 2025
4f08ac7
Fill out cropped comment
michaelsproul Jan 13, 2025
82 changes: 33 additions & 49 deletions beacon_node/beacon_chain/tests/validator_monitor.rs
@@ -4,7 +4,7 @@ use beacon_chain::test_utils::{
use beacon_chain::validator_monitor::{ValidatorMonitorConfig, MISSED_BLOCK_LAG_SLOTS};
use logging::test_logger;
use std::sync::LazyLock;
use types::{Epoch, EthSpec, ForkName, Keypair, MainnetEthSpec, PublicKeyBytes, Slot};
use types::{Epoch, EthSpec, Keypair, MainnetEthSpec, PublicKeyBytes, Slot};

// Should ideally be divisible by 3.
pub const VALIDATOR_COUNT: usize = 48;
@@ -117,7 +117,7 @@ async fn missed_blocks_across_epochs() {
}

#[tokio::test]
async fn produces_missed_blocks() {
async fn missed_blocks_basic() {
let validator_count = 16;

let slots_per_epoch = E::slots_per_epoch();
@@ -127,13 +127,10 @@ async fn produces_missed_blocks() {
// Generate 63 slots (2 epochs * 32 slots per epoch - 1)
let initial_blocks = slots_per_epoch * nb_epoch_to_simulate.as_u64() - 1;

// The validator index of the validator that is 'supposed' to miss a block
let validator_index_to_monitor = 1;

// 1st scenario //
//
// Missed block happens when slot and prev_slot are in the same epoch
let harness1 = get_harness(validator_count, vec![validator_index_to_monitor]);
let harness1 = get_harness(validator_count, vec![]);
harness1
.extend_chain(
initial_blocks as usize,
@@ -153,7 +150,7 @@
let mut prev_slot = Slot::new(idx - 1);
let mut duplicate_block_root = *_state.block_roots().get(idx as usize).unwrap();
let mut validator_indexes = _state.get_beacon_proposer_indices(&harness1.spec).unwrap();
let mut validator_index = validator_indexes[slot_in_epoch.as_usize()];
let mut missed_block_proposer = validator_indexes[slot_in_epoch.as_usize()];
let mut proposer_shuffling_decision_root = _state
.proposer_shuffling_decision_root(duplicate_block_root)
.unwrap();
@@ -170,7 +167,7 @@
beacon_proposer_cache.lock().insert(
epoch,
proposer_shuffling_decision_root,
validator_indexes.into_iter().collect::<Vec<usize>>(),
validator_indexes,
_state.fork()
),
Ok(())
@@ -187,12 +184,15 @@
// Let's validate the state which will call the function responsible for
// adding the missed blocks to the validator monitor
let mut validator_monitor = harness1.chain.validator_monitor.write();

validator_monitor.add_validator_pubkey(KEYPAIRS[missed_block_proposer].pk.compress());
validator_monitor.process_valid_state(nb_epoch_to_simulate, _state, &harness1.chain.spec);

// We should have one entry in the missed blocks map
assert_eq!(
validator_monitor.get_monitored_validator_missed_block_count(validator_index as u64),
1
validator_monitor
.get_monitored_validator_missed_block_count(missed_block_proposer as u64),
1,
);
}

@@ -201,23 +201,7 @@
// Missed block happens when slot and prev_slot are not in the same epoch
// making sure that the cache reloads when the epoch changes
// in that scenario the slot that missed a block is the first slot of the epoch
// We are adding other validators to monitor as these ones will miss a block depending on
// the fork name specified when running the test as the proposer cache differs depending on
// the fork name (cf. seed)
//
// If you are adding a new fork and seeing errors, print
// `validator_indexes[slot_in_epoch.as_usize()]` and add it below.
let validator_index_to_monitor = match harness1.spec.fork_name_at_slot::<E>(Slot::new(0)) {
ForkName::Base => 7,
ForkName::Altair => 2,
ForkName::Bellatrix => 4,
ForkName::Capella => 11,
ForkName::Deneb => 3,
ForkName::Electra => 1,
ForkName::Fulu => 6,
};

let harness2 = get_harness(validator_count, vec![validator_index_to_monitor]);
let harness2 = get_harness(validator_count, vec![]);
let advance_slot_by = 9;
harness2
.extend_chain(
@@ -238,11 +222,7 @@
slot_in_epoch = slot % slots_per_epoch;
duplicate_block_root = *_state2.block_roots().get(idx as usize).unwrap();
validator_indexes = _state2.get_beacon_proposer_indices(&harness2.spec).unwrap();
validator_index = validator_indexes[slot_in_epoch.as_usize()];
// If you are adding a new fork and seeing errors, it means the fork seed has changed the
// validator_index. Uncomment this line, run the test again and add the resulting index to the
// list above.
//eprintln!("new index which needs to be added => {:?}", validator_index);
missed_block_proposer = validator_indexes[slot_in_epoch.as_usize()];

let beacon_proposer_cache = harness2
.chain
@@ -256,7 +236,7 @@
beacon_proposer_cache.lock().insert(
epoch,
duplicate_block_root,
validator_indexes.into_iter().collect::<Vec<usize>>(),
validator_indexes.clone(),
_state2.fork()
),
Ok(())
@@ -271,30 +251,33 @@
// Let's validate the state which will call the function responsible for
// adding the missed blocks to the validator monitor
let mut validator_monitor2 = harness2.chain.validator_monitor.write();
validator_monitor2.add_validator_pubkey(KEYPAIRS[missed_block_proposer].pk.compress());
validator_monitor2.process_valid_state(epoch, _state2, &harness2.chain.spec);
// We should have one entry in the missed blocks map
assert_eq!(
validator_monitor2.get_monitored_validator_missed_block_count(validator_index as u64),
validator_monitor2
.get_monitored_validator_missed_block_count(missed_block_proposer as u64),
1
);

// 3rd scenario //
//
// A missed block happens but the validator is not monitored
// it should not be flagged as a missed block
idx = initial_blocks + (advance_slot_by) - 7;
while validator_indexes[(idx % slots_per_epoch) as usize] == missed_block_proposer
&& idx / slots_per_epoch == epoch.as_u64()
{
idx += 1;
}
slot = Slot::new(idx);
prev_slot = Slot::new(idx - 1);
slot_in_epoch = slot % slots_per_epoch;
duplicate_block_root = *_state2.block_roots().get(idx as usize).unwrap();
validator_indexes = _state2.get_beacon_proposer_indices(&harness2.spec).unwrap();
let not_monitored_validator_index = validator_indexes[slot_in_epoch.as_usize()];
// This could do with a refactor: https://github.com/sigp/lighthouse/issues/6293
assert_ne!(
not_monitored_validator_index,
validator_index_to_monitor,
"this test has a fragile dependency on hardcoded indices. you need to tweak some settings or rewrite this"
);
let second_missed_block_proposer = validator_indexes[slot_in_epoch.as_usize()];

// This test may fail if we can't find another distinct proposer in the same epoch.
// However, this should be vanishingly unlikely: P ~= (1/16)^32 = 2e-39.
assert_ne!(missed_block_proposer, second_missed_block_proposer);

assert_eq!(
_state2.set_block_root(prev_slot, duplicate_block_root),
@@ -306,10 +289,9 @@
validator_monitor2.process_valid_state(epoch, _state2, &harness2.chain.spec);

// We shouldn't have any entry in the missed blocks map
assert_ne!(validator_index, not_monitored_validator_index);
assert_eq!(
validator_monitor2
.get_monitored_validator_missed_block_count(not_monitored_validator_index as u64),
.get_monitored_validator_missed_block_count(second_missed_block_proposer as u64),
0
);
}
@@ -318,7 +300,7 @@
//
// A missed block happens at state.slot - LOG_SLOTS_PER_EPOCH
// it shouldn't be flagged as a missed block
let harness3 = get_harness(validator_count, vec![validator_index_to_monitor]);
let harness3 = get_harness(validator_count, vec![]);
harness3
.extend_chain(
slots_per_epoch as usize,
@@ -338,7 +320,7 @@
prev_slot = Slot::new(idx - 1);
duplicate_block_root = *_state3.block_roots().get(idx as usize).unwrap();
validator_indexes = _state3.get_beacon_proposer_indices(&harness3.spec).unwrap();
validator_index = validator_indexes[slot_in_epoch.as_usize()];
missed_block_proposer = validator_indexes[slot_in_epoch.as_usize()];
proposer_shuffling_decision_root = _state3
.proposer_shuffling_decision_root_at_epoch(epoch, duplicate_block_root)
.unwrap();
@@ -355,7 +337,7 @@
beacon_proposer_cache.lock().insert(
epoch,
proposer_shuffling_decision_root,
validator_indexes.into_iter().collect::<Vec<usize>>(),
validator_indexes,
_state3.fork()
),
Ok(())
@@ -372,11 +354,13 @@
// Let's validate the state which will call the function responsible for
// adding the missed blocks to the validator monitor
let mut validator_monitor3 = harness3.chain.validator_monitor.write();
validator_monitor3.add_validator_pubkey(KEYPAIRS[missed_block_proposer].pk.compress());
validator_monitor3.process_valid_state(epoch, _state3, &harness3.chain.spec);

// We shouldn't have one entry in the missed blocks map
assert_eq!(
validator_monitor3.get_monitored_validator_missed_block_count(validator_index as u64),
validator_monitor3
.get_monitored_validator_missed_block_count(missed_block_proposer as u64),
0
);
}
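
The refactor above removes the fragile per-fork `validator_index_to_monitor` table: each scenario now looks up the proposer that actually misses the block from the state's proposer shuffling and registers that validator with the monitor at runtime. The probability estimate in the third scenario's comment treats each of the 32 minimal-spec slots as an independent uniform draw over the 16 validators, giving (1/16)^32 = 2^-128, on the order of 10^-39. A condensed sketch of the new pattern, using names from the diff above (harness, state, and slot variables are assumed to be set up as in the test):

```rust
// Sketch only: `harness`, `state`, `epoch` and `slot_in_epoch` come from the test
// setup above; KEYPAIRS is the test-wide keypair list.
let proposer_indices = state.get_beacon_proposer_indices(&harness.spec).unwrap();
let missed_block_proposer = proposer_indices[slot_in_epoch.as_usize()];

// Register the proposer with the monitor, then re-process the state so the
// missed block is attributed to it.
let mut validator_monitor = harness.chain.validator_monitor.write();
validator_monitor.add_validator_pubkey(KEYPAIRS[missed_block_proposer].pk.compress());
validator_monitor.process_valid_state(epoch, state, &harness.chain.spec);

assert_eq!(
    validator_monitor.get_monitored_validator_missed_block_count(missed_block_proposer as u64),
    1,
);
```
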
@@ -128,6 +128,11 @@ impl<'block, E: EthSpec> NewPayloadRequest<'block, E> {

let _timer = metrics::start_timer(&metrics::EXECUTION_LAYER_VERIFY_BLOCK_HASH);

// Check that no transactions in the payload are zero length
if payload.transactions().iter().any(|slice| slice.is_empty()) {
return Err(Error::ZeroLengthTransaction);
}

let (header_hash, rlp_transactions_root) = calculate_execution_block_hash(
payload,
parent_beacon_block_root,
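
This hunk implements the Electra change "Synchronously check all transactions to have non-zero length" from the commit list above: block-hash verification now rejects any execution payload whose transaction list contains an empty byte string, returning the `ZeroLengthTransaction` error variant added in the next file. A self-contained sketch of the check itself, with transactions modelled as plain byte vectors rather than the real SSZ types:

```rust
// Minimal sketch, not the actual Lighthouse API: the real check runs over
// `payload.transactions()` and returns `Error::ZeroLengthTransaction`.
fn check_transactions_non_empty(transactions: &[Vec<u8>]) -> Result<(), &'static str> {
    if transactions.iter().any(|tx| tx.is_empty()) {
        return Err("zero-length transaction");
    }
    Ok(())
}

fn main() {
    // An empty transaction anywhere in the list invalidates the payload.
    assert!(check_transactions_non_empty(&[vec![0x02, 0xff], vec![]]).is_err());
    assert!(check_transactions_non_empty(&[vec![0x02, 0xff]]).is_ok());
}
```
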
1 change: 1 addition & 0 deletions beacon_node/execution_layer/src/lib.rs
@@ -157,6 +157,7 @@ pub enum Error {
payload: ExecutionBlockHash,
transactions_root: Hash256,
},
ZeroLengthTransaction,
PayloadBodiesByRangeNotSupported,
InvalidJWTSecret(String),
InvalidForkForPayload,
2 changes: 1 addition & 1 deletion beacon_node/lighthouse_network/src/rpc/protocol.rs
@@ -710,7 +710,7 @@ pub fn rpc_blob_limits<E: EthSpec>() -> RpcLimits {
}
}

// TODO(peerdas): fix hardcoded max here
// TODO(das): fix hardcoded max here
pub fn rpc_data_column_limits<E: EthSpec>(fork_name: ForkName) -> RpcLimits {
RpcLimits::new(
DataColumnSidecar::<E>::empty().as_ssz_bytes().len(),
53 changes: 39 additions & 14 deletions beacon_node/network/src/sync/tests/range.rs
@@ -4,12 +4,15 @@ use crate::sync::manager::SLOT_IMPORT_TOLERANCE;
use crate::sync::range_sync::RangeSyncType;
use crate::sync::SyncMessage;
use beacon_chain::test_utils::{AttestationStrategy, BlockStrategy};
use beacon_chain::EngineState;
use beacon_chain::{block_verification_types::RpcBlock, EngineState, NotifyExecutionLayer};
use lighthouse_network::rpc::{RequestType, StatusMessage};
use lighthouse_network::service::api_types::{AppRequestId, Id, SyncRequestId};
use lighthouse_network::{PeerId, SyncInfo};
use std::time::Duration;
use types::{EthSpec, Hash256, MinimalEthSpec as E, SignedBeaconBlock, Slot};
use types::{
BlobSidecarList, BlockImportSource, EthSpec, Hash256, MinimalEthSpec as E, SignedBeaconBlock,
SignedBeaconBlockHash, Slot,
};

const D: Duration = Duration::new(0, 0);

@@ -154,7 +157,9 @@ impl TestRig {
}
}

async fn create_canonical_block(&mut self) -> SignedBeaconBlock<E> {
async fn create_canonical_block(
&mut self,
) -> (SignedBeaconBlock<E>, Option<BlobSidecarList<E>>) {
self.harness.advance_slot();

let block_root = self
@@ -165,19 +170,39 @@
AttestationStrategy::AllValidators,
)
.await;
self.harness
.chain
.store
.get_full_block(&block_root)
.unwrap()
.unwrap()
// TODO(das): this does not handle data columns yet
let store = &self.harness.chain.store;
let block = store.get_full_block(&block_root).unwrap().unwrap();
let blobs = if block.fork_name_unchecked().deneb_enabled() {
store.get_blobs(&block_root).unwrap().blobs()
} else {
None
};
(block, blobs)
}

async fn remember_block(&mut self, block: SignedBeaconBlock<E>) {
self.harness
.process_block(block.slot(), block.canonical_root(), (block.into(), None))
async fn remember_block(
&mut self,
(block, blob_sidecars): (SignedBeaconBlock<E>, Option<BlobSidecarList<E>>),
) {
// This code is kind of duplicated from Harness::process_block, but takes sidecars directly.
let block_root = block.canonical_root();
self.harness.set_current_slot(block.slot());
let _: SignedBeaconBlockHash = self
.harness
.chain
.process_block(
block_root,
RpcBlock::new(Some(block_root), block.into(), blob_sidecars).unwrap(),
NotifyExecutionLayer::Yes,
BlockImportSource::RangeSync,
|| Ok(()),
)
.await
.unwrap()
.try_into()
.unwrap();
self.harness.chain.recompute_head_at_current_slot().await;
}
}

@@ -217,9 +242,9 @@ async fn state_update_while_purging() {
// Need to create blocks that can be inserted into the fork-choice and fit the "known
// conditions" below.
let head_peer_block = rig_2.create_canonical_block().await;
let head_peer_root = head_peer_block.canonical_root();
let head_peer_root = head_peer_block.0.canonical_root();
let finalized_peer_block = rig_2.create_canonical_block().await;
let finalized_peer_root = finalized_peer_block.canonical_root();
let finalized_peer_root = finalized_peer_block.0.canonical_root();

// Get a peer with an advanced head
let head_peer = rig.add_head_peer_with_root(head_peer_root);
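
Since `create_canonical_block` now returns the blob sidecars alongside the block (a post-Deneb block cannot be re-imported over RPC without them), callers either destructure the pair or index it with `.0`, as `state_update_while_purging` does above, and hand both halves to `remember_block`. A hedged sketch of the intended call site, assuming a `TestRig` built as elsewhere in this module:

```rust
// Sketch only: `rig` is a TestRig; this mirrors the tuple-based API in the diff above.
let (block, blobs) = rig.create_canonical_block().await;
let block_root = block.canonical_root();
rig.remember_block((block, blobs)).await;
// `block_root` can then be used when adding peers with an advanced head or finalized root.
```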