Beacon sync update header and reorg blocks processing #3306
Merged: mjfh merged 7 commits into `master` from `Beacon-sync-update-header-and-reorg-blocks-processing` on May 19, 2025
Conversation
why
Beware of outliers (remember the law of the iterated logarithm.)

also
There is no need to reorg the header queue anymore. This was a pre-PR #3125 feature, needed to curb the queue when it grew too large. This cannot happen anymore, as there is always a deterministic fetch that can close any immediate gap that would otherwise prevent the queue from serialising headers (see the sketch below.)
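A minimal sketch of that gap-fill property, with a hypothetical helper and block numbers simplified to ascending order (beacon sync actually collects headers in reverse): the staged queue only serialises once it connects to the already-stashed chain, and the connecting gap is always a well-defined fetch target, so the queue cannot grow unboundedly waiting for headers that nobody is assigned to fetch.

```nim
# Minimal sketch, hypothetical helper: the gap between the stashed
# chain and the lowest staged record is always a concrete interval,
# so a deterministic fetch -- not a queue reorg -- closes it.
proc gapToFetch(stashedEdge, lowestStaged: uint64): tuple[minNum, maxNum: uint64] =
  (stashedEdge + 1, lowestStaged - 1)

when isMainModule:
  let gap = gapToFetch(999, 1200)   # stashed up to 999, staged from 1200
  doAssert gap.minNum == 1000 and gap.maxNum == 1199
```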
reason for crash
The syncer stops trying to download headers after failing on 30 different sync peers. The state machine then advances to `cancelHeaders`, causing all sync peers to stop as soon as they can, without updating the bookkeeping for unprocessed headers, which may leave the *books* in an open, non-finalised state. Unfortunately, when synchronising all simultaneously running sync peers, the *books* were checked for being finalised already before the clean-up (aka finalisation) had taken place. A sketch of the fix follows.
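A minimal sketch of the fix, assuming the books are a list of unprocessed block number ranges (hypothetical names, not the actual nimbus-eth1 types): a cancelled peer always pushes its undelivered range back, rather than first asserting that the books already look finalised.

```nim
# Minimal sketch, hypothetical names: finalise the books for a sync
# peer that is told to stop.
type BnRange = tuple[minPt, maxPt: uint64]

proc cancelFetch(unprocessed: var seq[BnRange]; pending: BnRange) =
  ## Push back the headers this peer had reserved but not delivered,
  ## so another peer (or a later run) can fetch them -- with no prior
  ## "books are finalised" check that could fail mid-cancellation.
  unprocessed.add pending

when isMainModule:
  var books: seq[BnRange]
  books.add (100u64, 199u64)             # one open range in the books
  cancelFetch(books, (200u64, 263u64))   # peer stopped mid-request
  doAssert books.len == 2
```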
why
This is not needed anymore, as the block queue now runs on a smaller memory footprint anyway.
why
The last merged PRs seem to have made a change, presumably in the `FC` module running `async`. This allows blocks to be imported/executed while new ones are fetched at the same time, without depleting sync peers. Previously, all sync peers were gone after a while when doing this.
why
Block download and import is now modelled after how it is done for headers:
+ if a sync peer can import right at the top of the `FC` module, it downloads a list of blocks and imports it right away
+ otherwise, if a sync peer cannot import directly, it downloads and queues a list of blocks, provided there is space on the queue

As a separate pseudo task, a list of blocks is fetched from the queue whenever it can be imported right at the top of the `FC` module (see the sketch below.)
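The two paths and the drain pseudo task can be sketched as follows. All names (`canImportNow`, `queueMax`, the types) are hypothetical, and the real code is async and peer-driven; this only shows the control flow under those assumptions.

```nim
import std/deques

type
  Blk = object
    number: uint64
  BlockQueue = Deque[seq[Blk]]

const queueMax = 8                      # assumed bound, not the real value

proc canImportNow(fcTop: uint64; batch: seq[Blk]): bool =
  ## A batch imports directly only if it continues right at the top
  ## of the `FC` module.
  batch.len > 0 and batch[0].number == fcTop + 1

proc runPeer(fcTop: var uint64; queue: var BlockQueue; fetched: seq[Blk]) =
  if canImportNow(fcTop, fetched):
    fcTop = fetched[^1].number          # import right away
  elif queue.len < queueMax:
    queue.addLast fetched               # stage for the drain task
  # else: no space on the queue -- the peer idles and retries later

proc drainQueue(fcTop: var uint64; queue: var BlockQueue) =
  ## Separate pseudo task: import staged batches once they fit right
  ## at the top of the `FC` module.
  while queue.len > 0 and canImportNow(fcTop, queue.peekFirst):
    fcTop = queue.popFirst()[^1].number
```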
mjfh added a commit that referenced this pull request on Jun 2, 2025:
why
Header download and stashing on the header chain cache is now re-modelled after how it is done for blocks (which in turn was originally modelled after the header download in PR #3306.) Apart from a code clean-up, the main change is that each queued record now holds only a single sync peer response (previously it was a list of several concatenated responses; see the sketch below.)
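The staging change can be sketched like this (hypothetical names; headers are assumed to arrive in reverse order, as beacon sync collects them top-down):

```nim
type
  Hdr = object
    number: uint64
  Staged = object
    revHdrs: seq[Hdr]        # headers in reverse order, as delivered

proc stageOld(queue: var seq[Staged]; response: seq[Hdr]) =
  # before: concatenate several responses into the last queued record
  if queue.len > 0: queue[^1].revHdrs.add response
  else: queue.add Staged(revHdrs: response)

proc stageNew(queue: var seq[Staged]; response: seq[Hdr]) =
  # after: each queued record holds exactly one sync peer response
  queue.add Staged(revHdrs: response)
```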
mjfh added a commit that referenced this pull request on Jun 2, 2025:
* Update/generalise last-slow-peer management

why
With slow-peer management, the last remaining sync peer is never zombified if it is *slow* but still delivers some data. This was implemented for the block download only and is now extended to the header download.

* Reorg headers download and stashing on header chain cache

why
Header download and stashing on the header chain cache is now re-modelled after how it is done for blocks (which in turn was originally modelled after the header download in PR #3306.) Apart from a code clean-up, the main change is that each queued record now holds only a single sync peer response (previously a list of several concatenated responses.)

* Remove restriction on the number of sync peers

why
This restriction is a legacy construct which was used for
+ allowing a run on a single peer for testing
+ implicitly restricting the header and block queues back when their size was bounded by a high-water mark rather than a strict upper bound

* Reduce number of headers requested at a time via ethXX to 800

why
This reduces the need for in-memory cache space. When downloading 22.6m headers from `mainnet` with a download request size of 1024, one can make it in just under an hour on a well exposed site (so that enough peers are available.) Reducing the request size to 800, one gets just a few minutes over an hour (see the check below.)

* Update copyright year
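The timing claim is roughly consistent with a back-of-the-envelope request count (numbers taken from the commit message above):

```nim
# 22.6m headers at 1024 vs. 800 per request means about 28% more
# round trips; the observed slowdown (just under an hour -> a few
# minutes over an hour) is smaller than that, so per-request latency
# is clearly only part of the total download time.
const totalHeaders = 22_600_000
echo totalHeaders div 1024   # ~22_070 requests at the old size
echo totalHeaders div 800    # 28_250 requests at the new size
```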