
Beacon sync update header and reorg blocks processing #3306


Merged
merged 7 commits into master from Beacon-sync-update-header-and-reorg-blocks-processing
May 19, 2025

Conversation

mjfh
Contributor

@mjfh commented May 19, 2025

No description provided.

mjfh added 7 commits May 19, 2025 15:59
why
  Beware of outliers (remember the law of the iterated logarithm.)

also
  No need to reorg the header queue anymore. This was a pre-PR #3125
  feature which was needed to curb the queue when it grew too large.

  This cannot happen anymore as there is always a deterministic fetch
  that can close any immediate gap preventing the queue from
  serialising headers.
reason for crash
  The syncer will stop trying to download headers after failing on 30
  different sync peers.

  The state machine will advance to `cancelHeaders`, causing all sync
  peers to stop as soon as they can without updating the bookkeeping
  for unprocessed headers. This might leave the `books` in an open or
  non-finalised state.

  Unfortunately, when synchronising all simultaneously running sync
  peers, the *books* were checked for being finalised before the
  clean-up (i.e. the finalising) had actually taken place.
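
  A minimal sketch of the corrected clean-up order, under hypothetical
  names (`HeaderBooks`, `cancelWork` and `finalised` are illustrative
  stand-ins, not the actual syncer types):

  ```nim
  # Hedged sketch with hypothetical names, not the actual nimbus-eth1 code.
  import std/sets

  type
    HeaderBooks = object
      unprocessed: HashSet[uint64]  # header numbers still to be fetched
      borrowed: HashSet[uint64]     # handed out to sync peers, not stashed yet

  proc cancelWork(books: var HeaderBooks; peerRange: HashSet[uint64]) =
    ## A peer stopping early on `cancelHeaders` must return its borrowed
    ## range, otherwise the books stay in an open, non-finalised state.
    books.borrowed.excl peerRange
    books.unprocessed.incl peerRange

  proc finalised(books: HeaderBooks): bool =
    ## Only meaningful after every cancelled peer has called `cancelWork()`.
    books.borrowed.len == 0
  ```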
why
  Not needed anymore as the block queue now runs on a smaller memory
  footprint anyway.
why
  The last PRs merged seem to have made a change, presumably in the
  `FC` module running `async`. This allows for importing/executing
  blocks while fetching new ones at the same time without depleting
  sync peers.

  Previously, all sync peers were gone after a while when doing this.
why
  Block download and import are now modelled after how it is done for
  the headers (see the sketch below):
    + If a sync peer can import right at the top of the `FC` module,
      download a list of blocks and import them right away.
    + Otherwise, if a sync peer cannot directly import, download and
      queue a list of blocks if there is space on the queue.
  As a separate pseudo task, fetch a list of blocks from the queue if
  they can be imported right at the top of the `FC` module.
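
  A minimal sketch of this two-path scheme, assuming hypothetical names
  (`FcStub`, `onPeerDelivery`, `drainQueue` and the bound `queueMax` are
  stand-ins, not the actual `FC` module API):

  ```nim
  # Hedged sketch with hypothetical names, not the actual nimbus-eth1 code.
  import std/tables

  type
    Block = object
      number: uint64

    FcStub = object
      top: uint64                  # next block number `FC` can import

  proc importBlock(fc: var FcStub; blk: Block) =
    doAssert blk.number == fc.top  # only ever import right at the top
    inc fc.top

  const queueMax = 1024            # strict upper bound on queued blocks

  proc onPeerDelivery(fc: var FcStub;
                      queue: var Table[uint64, Block];
                      blks: seq[Block]) =
    ## Per-peer path: import right away when a block sits at the top of
    ## `FC`, otherwise stash it on the queue if there is space left.
    for blk in blks:
      if blk.number == fc.top:
        fc.importBlock(blk)
      elif blk.number > fc.top and queue.len < queueMax:
        queue[blk.number] = blk
      # blocks below `fc.top` are already imported and can be dropped

  proc drainQueue(fc: var FcStub; queue: var Table[uint64, Block]) =
    ## Separate pseudo task: import queued blocks as long as the one
    ## needed next (`fc.top`) is available on the queue.
    while fc.top in queue:
      let blk = queue[fc.top]
      queue.del(blk.number)
      fc.importBlock(blk)
  ```

  Keying the queue by block number keeps the drain task simple: it only
  ever looks for the single block that `FC` can accept next, so the
  queue itself never needs to be reorganised.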
@mjfh merged commit 05eaffb into master on May 19, 2025
5 checks passed
@mjfh deleted the Beacon-sync-update-header-and-reorg-blocks-processing branch on May 19, 2025 16:25
mjfh added a commit that referenced this pull request Jun 2, 2025
why
  Headers download and stashing on the header chain cache is now
  re-modelled after how it is done for blocks (which in turn was
  originally modelled, to some degree, after the headers download in
  PR #3306.)

  Apart from a code cleanup, the main change is that each queued
  record will now hold only a single sync peer response (previously
  this was a list of several concatenated responses.)
mjfh added a commit that referenced this pull request Jun 2, 2025
* Update/generalise last-slow-peer management

why
  With slow peer management, the last remaining sync peer is never
  zombified if it is *slow* but delivers some data.

  This was previously implemented for the blocks download only; it is
  now extended to the headers download.
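
A minimal sketch of the generalised rule, with a hypothetical helper name (`shouldZombify` is illustrative only, not the actual nimbus-eth1 helper):

```nim
# Hedged sketch, hypothetical name: a slow peer is normally zombified,
# but never the last remaining one as long as it still delivers data.
proc shouldZombify(nSyncPeers, nDelivered: int; tooSlow: bool): bool =
  if not tooSlow:
    return false
  nSyncPeers > 1 or nDelivered == 0
```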

* Reorg headers download and stashing on header chain cache

why
  Headers download and stashing on the header chain cache is now
  re-modelled after how it is done for blocks (which in turn was
  originally modelled, to some degree, after the headers download in
  PR #3306.)

  Apart from a code cleanup, the main change is that each queued
  record will now hold only a single sync peer response (previously
  this was a list of several concatenated responses.)
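
A hedged sketch of what this change to the queued record amounts to (type and field names are hypothetical, not the actual header chain cache types):

```nim
# Hedged sketch, hypothetical types, not the actual nimbus-eth1 code.
type
  Header = object
    number: uint64

  StagedHeadersOld = object
    ## before: several concatenated peer responses in one queued record
    revHeaders: seq[seq[Header]]

  StagedHeaders = object
    ## after: exactly one sync peer response per queued record
    revHeaders: seq[Header]
```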

* Remove restriction on the number of sync peers

why
  This restriction is a legacy construct which was used to
  + allow running on a single peer for testing
  + implicitly restrict the header and block queues back when their
    size was capped by a high-water mark rather than a strict upper
    bound.

* Reduce number of headers requested at a time via ethXX to 800

why
  This reduces the need for in-memory cache space.

  When downloading 22.6m headers from `mainnet` with a download request
  size of 1024, one can make it in just under an hour on a well-exposed
  site (so that enough peers are available.)

  With the request size reduced to 800, the same download takes just a
  few minutes over an hour.
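
  As a rough back-of-the-envelope cross-check (assuming the round-trip
  count dominates the run time): 22.6m headers take about
  22.6e6 / 1024 ≈ 22,100 requests at the old size versus
  22.6e6 / 800 ≈ 28,250 requests at the new one, i.e. roughly 28% more
  round trips, which puts the observed slowdown in the expected
  ballpark.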

* Update copyright year