
core/filtermaps: properly handle history cutoff while rendering index head (WIP) #31447


Closed
wants to merge 2 commits into from

Conversation

zsfelfoldi
Contributor

This PR ensures safe behavior of the log indexer when head indexing cannot be performed because of the history cutoff. Previously, only the tail indexing logic was aware of the cutoff point.
Now, if a receipt is missing, the renderer returns errHistoryCutoff, in which case the indexer main loop does not retry the same operation but instead resets the database, triggering a new checkpoint initialization attempt (the database may be old while the client has since been updated and ships more recent checkpoints). If the latest checkpoint is also older than the cutoff point, the init function likewise returns errHistoryCutoff and the indexer goes into a disabled state.
Another issue fixed here: on unexpected errors, waitForEvent was called, which was incorrect. What it actually did was wait for a new target head; if an unindexed target head was already present, it did not block at all, letting the indexer loop spin at full speed on the same error without even noticing node shutdown.
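The corrected waiting behavior amounts to blocking until the head actually changes or shutdown is observed. A minimal sketch, where waitForNewHead, headCh, and quit are illustrative names rather than the PR's actual identifiers:

```go
package main

// waitForNewHead blocks until a head different from the current one
// arrives, or until the quit channel signals node shutdown. This avoids
// the bug described above, where an already-known unindexed head made
// the wait return immediately and the loop spin on the same error.
func waitForNewHead(current uint64, headCh <-chan uint64, quit <-chan struct{}) (uint64, bool) {
	for {
		select {
		case head := <-headCh:
			if head != current {
				return head, true // a genuinely new target head arrived
			}
			// same head as before: keep blocking instead of spinning
		case <-quit:
			return 0, false // node shutdown observed
		}
	}
}
```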

This PR is based on top of #31081 because it also touches the database reset logic, which was refactored in that PR.

// deletePrefixRange deletes everything with the given prefix from the database.
func deletePrefixRange(db ethdb.KeyValueRangeDeleter, prefix []byte) error {
	end := bytes.Clone(prefix)
	end[len(end)-1]++
	return db.DeleteRange(prefix, end)
}
Member


It's not 100% correct, but probably good enough for the normal cases.

The issue is that if the last byte of prefix is 0xff, then the range will be invalid.

// increaseKey increases the input key by one bit. Returns nil if the entire
// addition operation overflows.
func increaseKey(key []byte) []byte {
	for i := len(key) - 1; i >= 0; i-- {
		key[i]++
		if key[i] != 0x0 {
			return key
		}
	}
	return nil
}

Contributor Author


I know, but it is never 0xff; it is a constant we specify in schema.go.

@zsfelfoldi
Contributor Author

Fixing the indexing with cutoff sync will take more time and is not essential for today's release, so I'll mark this PR as WIP and not target it for this release.

@zsfelfoldi zsfelfoldi changed the title core/filtermaps: properly handle history cutoff while rendering index head core/filtermaps: properly handle history cutoff while rendering index head (WIP) Mar 21, 2025
@zsfelfoldi
Contributor Author

Closed in favor of #31455

@zsfelfoldi zsfelfoldi closed this Mar 23, 2025