add SWIP-39: balanced neighbourhood registry aka smart neighbourhood management #74
Conversation
Pull Request Overview
This PR introduces SWIP-39, a smart neighbourhood management system for decentralized service networks. The proposal aims to solve the "one operator, one node in a neighbourhood" problem through a balanced assignment mechanism that ensures fair load distribution and prevents Sybil attacks.
Key changes include:
- A comprehensive specification for balanced neighbourhood registry with random assignment
- Smart contract implementation for managing node registration and neighbourhood assignments
- Mathematical formulations for neighbourhood depth calculation and overlay address validation (see the sketch below)
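As background for the last bullet: in Swarm, a neighbourhood at depth $d$ is the set of overlay addresses sharing a $d$-bit prefix, so validating an overlay against an assigned neighbourhood reduces to a prefix comparison. A minimal sketch of what that could look like (a hypothetical helper, not the contract code from this PR):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical illustration, not the PR's contract code: a neighbourhood
// at depth `depth` is identified by the top `depth` bits of an overlay
// address, so membership is a prefix comparison.
library Neighbourhood {
    function sameNeighbourhood(bytes32 overlay, bytes32 anchor, uint8 depth)
        internal
        pure
        returns (bool)
    {
        // depth 0: one neighbourhood spans the whole address space
        if (depth == 0) return true;
        uint256 shift = 256 - uint256(depth);
        return (uint256(overlay) >> shift) == (uint256(anchor) >> shift);
    }
}
```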
Co-authored-by: Copilot <[email protected]>
Thanks for the well-thought-out SWIP — the design is elegant and clearly addresses Sybil resistance and balanced assignment. I had a few questions and points I’d like to discuss for further clarity and robustness:
Could _upgradeDepth() become too expensive to execute as the number of assigned nodes grows? The _upgradeDepth() function doubles the assignment and remaining lists, copies all existing nodes to their new positions using bitwise logic, and clears/rebuilds state — all in a single call. If the number of nodes reaches high volumes (e.g. 1,000+ or 10,000+), this could approach or exceed the block gas limit, making the function fail or stall the system. Proposed solutions:
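One mitigation in this direction (a sketch of my own with hypothetical names, not taken from the SWIP) is to amortise the upgrade over several bounded transactions, so no single call has to copy the entire list:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustrative sketch only (all names hypothetical, not from the SWIP):
// amortising the depth upgrade over many transactions.
contract PaginatedDepthUpgrade {
    bytes32[] internal oldAssignment; // slots indexed by depth-d prefix
    bytes32[] internal newAssignment; // slots indexed by depth-(d+1) prefix
    uint8 public depth;               // current neighbourhood depth d
    uint256 public migrated;          // next old slot to migrate
    bool public upgrading;

    function beginUpgrade() external {
        require(!upgrading, "upgrade already in progress");
        newAssignment = new bytes32[](oldAssignment.length * 2);
        migrated = 0;
        upgrading = true;
    }

    // Anyone may push the migration forward by a bounded number of steps,
    // keeping each transaction safely below the block gas limit.
    function continueUpgrade(uint256 maxSteps) external {
        require(upgrading, "no upgrade in progress");
        uint256 end = migrated + maxSteps;
        if (end > oldAssignment.length) end = oldAssignment.length;
        for (uint256 i = migrated; i < end; i++) {
            bytes32 overlay = oldAssignment[i];
            // bit number `depth` (counted from the most significant bit)
            // decides which of slot i's two children the node occupies
            uint256 bit = (uint256(overlay) >> (255 - depth)) & 1;
            newAssignment[2 * i + bit] = overlay;
        }
        migrated = end;
        if (migrated == oldAssignment.length) {
            oldAssignment = newAssignment;
            depth += 1;
            upgrading = false;
        }
    }
}
```

Registrations would presumably need to be paused while `upgrading` is true, which is part of the added complexity this trades for bounded gas.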
Can the committers array become inefficient with a large number of registrants (e.g. 20k nodes)? The committers[] array is iterated over in _expire(), _findEntryFor(), and _removeCommitter() using for loops. If a large number of nodes register, or if expired entries are not promptly cleared, the gas cost of these operations can grow linearly and become prohibitively expensive. Proposed solutions:
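For the linear scans specifically, a standard pattern is an index mapping plus swap-and-pop, which makes _findEntryFor()-style lookup and _removeCommitter()-style removal O(1). A sketch, under the assumption that iteration order is not significant (the queue discussion below suggests it may be):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical sketch, not the PR's code: O(1) membership and removal
// for the committers list via an index mapping and swap-and-pop.
// Caveat: swap-and-pop destroys insertion order, so this only fits if
// the list is a set rather than a queue.
contract CommitterSet {
    address[] internal committers;
    mapping(address => uint256) internal indexPlusOne; // 0 means absent

    function _addCommitter(address who) internal {
        require(indexPlusOne[who] == 0, "already registered");
        committers.push(who);
        indexPlusOne[who] = committers.length;
    }

    function _removeCommitter(address who) internal {
        uint256 idx = indexPlusOne[who];
        require(idx != 0, "not registered");
        address last = committers[committers.length - 1];
        committers[idx - 1] = last; // move last entry into the vacated slot
        indexPlusOne[last] = idx;   // repoint the moved entry's index
        committers.pop();
        delete indexPlusOne[who];
    }
}
```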
I did not consider it realistic: each registrant entry expires in at most 256 blocks, that is, within fewer than 4 game rounds, and they lose their deposit if they refuse to pay, so all the potential players can likely just wait it out organically.
but they need to be removed at some point... and I am not sure how a mapping that needs to be reindexed after every entry removed will solve this.
well, maybe. To be honest, there is also another way. We do not need to allow just any length for the committer list. The length represents the queue, and the valid entries are the ones in the queue you can skip. This effectively quantifies the tries you get (effectively mining), but also the realistic probability that someone will come in and change the neighbourhood you (thought you were) assigned to. If this probability is high (there are a lot of nodes that can submit mined overlays), then it can easily happen that whenever an assigned neighbourhood is read off, nodes will front-run. So it would make sense to limit this skip queue to a fixed constant size, which means the committers list should effectively have a limited length. Now, if we simply reject registrations beyond this limit, then the shorter this length, the harder it is for the same number of currently aspiring nodes to commit. So, to avoid the registration transaction having to be continuously retried (since it would most likely be front-run by competing registrants), we should introduce another proper FIFO queue (unlimited, but not requiring iteration); see the sketch below. In this case the validity period starts when you enter the limited queue.
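A minimal sketch of the unlimited, iteration-free FIFO queue suggested above (the names are mine; the actual design is open):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Sketch of the proposed unlimited registration queue: a mapping with
// head/tail counters gives O(1) enqueue and dequeue with no iteration,
// however long the queue grows. Names here are hypothetical.
contract RegistrationQueue {
    mapping(uint256 => address) internal slot;
    uint256 public head; // next entry to move into the limited skip queue
    uint256 public tail; // next free position

    function _enqueue(address registrant) internal {
        slot[tail] = registrant;
        tail++;
    }

    function _dequeue() internal returns (address registrant) {
        require(head < tail, "queue empty");
        registrant = slot[head];
        delete slot[head]; // clear the slot for a gas refund
        head++;
    }
}
```

When a slot in the fixed-length skip queue frees up, the next registrant would be moved over via `_dequeue()`, and, per the comment above, its validity period would start at that point.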
Not sure I get how these structures would be useful: an index needs reindexing or keeps inactive entries; a head pointer just delays the problem, and so does an inactive flag.
i. the mining step is just offloading computation rather than PoW; strategic placement is prevented by random allocation; economic disincentives to be quantified forthwith
agree with this; some discussion around implementing a binary trie or similar data structure which would ensure uniform gas usage while providing the necessary functionality (see the sketch below)
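One concrete shape the binary trie idea could take (a sketch of my own, not an agreed design): per-prefix occupancy counters, so every registration costs exactly depth + 1 storage writes regardless of population size, and a least-loaded neighbourhood can be found by walking down from the root:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical sketch of a binary-trie-style structure: per-prefix
// occupancy counters. Updates touch exactly depth+1 storage slots, so
// gas usage is uniform no matter how many nodes are registered.
contract PrefixCounters {
    // count[level][prefix] = number of registered overlays whose first
    // `level` bits equal `prefix`
    mapping(uint256 => mapping(uint256 => uint256)) internal count;

    function _register(bytes32 overlay, uint256 depth) internal {
        for (uint256 level = 0; level <= depth; level++) {
            // shifting a uint256 by 256 or more yields 0 in Solidity,
            // so level 0 maps every overlay to the root prefix 0
            count[level][uint256(overlay) >> (256 - level)] += 1;
        }
    }

    // Walk from the root towards the emptier child at each level; the
    // result is a maximally underpopulated depth-`depth` neighbourhood.
    function _leastLoaded(uint256 depth) internal view returns (uint256 prefix) {
        for (uint256 level = 0; level < depth; level++) {
            uint256 left = prefix << 1;
            prefix = count[level + 1][left] <= count[level + 1][left | 1]
                ? left
                : left | 1;
        }
    }
}
```

Note the walk here is deterministic; the SWIP's random assignment would need a randomised tie-break (or random descent among underpopulated subtrees) layered on top.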
very good SWIP, a few thoughts for discussion and expansion in the doc:
Andrew from Shtuka here. This proposal looks like it will have considerable impact on staking dynamics, so it's important for our work on "Swarmonomics" to understand what the economic impact would be and give feedback if it gives cause for concern, or if the proposal lacks details that we would need to make that determination. I came into this assuming that the goal of this SWIP was to help the node population achieve a uniform distribution across neighbourhoods at the depth reported in the Redistribution game. Let's call this number $d$. I am having difficulty understanding how deregistrations are handled, especially when a deregistration would result in a decrease in $d$.
Suppose we start at depth $d$ …
Here it seems to me that allowing … What about this one?
In this case, the scheme of assigning only empty depth-$d$ neighbourhoods …

Migration

Upon migration, should nodes get to keep their existing overlay addresses, or do they need to mine new ones according to the new assignment scheme? If the latter, what happens to all the data currently stored in the reserves of staking nodes?

Gameability

Clearly, if the automatic randomised assignment of neighbourhoods is meant to control how many nodes end up in each neighbourhood, we need to consider whether it might be gamed by operators who have preferences about their address that differ from what they would be assigned randomly. There are a few reasons why mining a prefix may be desirable: …
If mining prefixes is valuable, it may be necessary to make it more costly by adding more financial or time costs to the entry or exit procedure. If stake is instantly withdrawable, the cost of "rerolling" one's neighbourhood assignment as described in this proposal is a couple of transactions and a 1-block wait (for obtaining randomness). By itself, this is not very costly. For example, rerolling until a desired 10-bit prefix is achieved, provided it is actually in the pool of assignable neighbourhoods, would most likely take just a few hours and cost a negligible amount in gas (see the rough estimate below). If stake is not withdrawable, the minimum stake deposit is added to the cost of rerolling.

Allocating addresses from a restricted range may lead to unexpected dynamics. For example, the set of assignable neighbourhoods is smaller when …

Relation to other proposals

In the Migration section of the proposal it is mentioned that some simplifications to the staking contract should occur before implementing this SWIP. I'm not sure which simplifications you mean other than fixed stake, but I'll comment on the latter.
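A back-of-the-envelope version of the rerolling estimate above; the blocks-per-reroll count $k$ and block time $t_{\mathrm{block}}$ are my assumptions, purely illustrative:

$$\mathbb{E}[\#\text{rerolls}] = 2^{10} = 1024, \qquad T \approx 1024 \cdot k \cdot t_{\mathrm{block}}$$

With, say, $k = 3$ blocks per reroll cycle and $t_{\mathrm{block}} = 5\,\mathrm{s}$, this gives $T \approx 15{,}360\,\mathrm{s} \approx 4.3$ hours, consistent with "a few hours".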
Recommendations
hi @awmacpherson, thanks for this, very detailed and insightful. Not comprehensive, but some responses for you:
If this is a response to whether "depth" and "level" are interchangeable, then I'm afraid I don't understand. To what extent are they interchangeable? What does "has various levels" mean?
Here "the tree" means the tree of all bitstrings (of length <= 256)? What does it mean for a position in the tree to become available? The way the proposal is written suggests that only bitstrings of length Here is my attempt to make sense of this: given a set Note that assigning addresses uniformly at random already has a weak version of this property, which is that if |
Some semantics: the term "Ether address" is mentioned in the SWIP multiple times, but it is technically incorrect; it needs to be "Ethereum address", as Ether is the currency and the address belongs not to Ether but to the network.
Here is my attempt to make sense of this: given a set $S$ of overlay addresses, each address $a$ has a shortest prefix $p(a)$ not shared by any other address in the set. Take the subtree $T(S)$ of the tree of all bitstrings spanned by the set of prefixes $p(a)$. Then it makes sense to ask if $T(S)$ is balanced as a binary tree. It sounds as though this is the type of "balancing" you might be after. One can then cook up a metric measuring how far $T(S)$ is from being balanced and always prefer to assign addresses that reduce this distance.
this is correct i believe. for the second part: yes, but i think it is too weak and that we must pursue an onboarding/off-boarding queue approach cc: @zelig 👁️
UPDATE: draft version ready
Solidity code is generated and appended to the SWIP
still seriously a work in progress
a better solution for the "one operator, one node in a neighbourhood" problem than variable stakes