Use 16-bit random value in validator filter #4039
base: dev
Conversation
The change looks good and makes sense to me! 👍
Sounds logical... good write-up.
It shouldn't be necessary to add extra tests; there should already be several sanity tests that cover this. For example, …
The PR looks good to me, and great work on the analysis! 👍
Looking good to me
See: https://docs.google.com/spreadsheets/d/18VeMVLO1jbP8xHkT4faGEb5SyrWQhQLcU-BtsL9pTss
This fixes an oversight in `compute_proposer_index` and `get_next_sync_committee_indices`. Currently, we use an 8-bit random byte to perform an effective balance filter. This was originally intended to penalize validators with effective balances below 32 ETH, but now that the maximum effective balance has been raised, the mechanism plays a much more important role in validator selection.
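For context, here is a sketch of the current 8-bit filter in `compute_proposer_index`, paraphrased from the pyspec (helpers such as `compute_shuffled_index`, `uint_to_bytes` and `MAX_EFFECTIVE_BALANCE_ELECTRA` are the existing spec ones); `get_next_sync_committee_indices` applies the same check:

```python
def compute_proposer_index(state: BeaconState, indices: Sequence[ValidatorIndex], seed: Bytes32) -> ValidatorIndex:
    # Return from ``indices`` a random index sampled by effective balance.
    assert len(indices) > 0
    MAX_RANDOM_BYTE = 2**8 - 1
    i = uint64(0)
    total = uint64(len(indices))
    while True:
        candidate_index = indices[compute_shuffled_index(i % total, total, seed)]
        # One random byte per candidate: each 32-byte hash covers 32 candidates.
        random_byte = hash(seed + uint_to_bytes(i // 32))[i % 32]
        effective_balance = state.validators[candidate_index].effective_balance
        # Accept with probability roughly proportional to effective balance,
        # but quantized to only 256 "buckets" by the single random byte.
        if effective_balance * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE_ELECTRA * random_byte:
            return candidate_index
        i += 1
```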
One might expect a validator with a greater effective balance to have a proportionally greater chance of being selected as a proposer or sync committee participant, but that is not always the case: the random byte can only represent 256 "buckets" of effective balance. For example, a validator with 33 ETH currently has the same likelihood of being selected as a validator with 40 ETH, because both fall into the same bucket.
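To put numbers on this, here is a back-of-the-envelope calculation (illustrative only, not spec code): the per-trial acceptance probability of the filter for a candidate with effective balance `eb` is `(eb * max_random // max_eb + 1) / (max_random + 1)`.

```python
# Illustration of the bucket effect; balances in Gwei.
MAX_EB = 2048 * 10**9  # MAX_EFFECTIVE_BALANCE_ELECTRA

def acceptance_probability(eb_eth: int, random_bits: int) -> float:
    """Per-trial probability that the balance filter accepts a candidate."""
    max_random = 2**random_bits - 1
    eb = eb_eth * 10**9
    return (eb * max_random // MAX_EB + 1) / (max_random + 1)

for eb_eth in (33, 40):
    print(eb_eth, acceptance_probability(eb_eth, 8), acceptance_probability(eb_eth, 16))

# 8-bit:  33 ETH -> 5/256 ≈ 0.0195,       40 ETH -> 5/256 ≈ 0.0195  (same bucket)
# 16-bit: 33 ETH -> 1056/65536 ≈ 0.0161,  40 ETH -> 1280/65536 ≈ 0.0195
```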
The solution is to use a larger random value to allow more precision. I propose a 16-bit random value, which provides enough precision (~65k buckets). Originally I wanted to use 32-bit values, but that would require more complex changes because multiplying the maximum values together would overflow a 64-bit integer.
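Concretely, the 16-bit variant could draw two bytes per candidate instead of one, so each 32-byte hash output yields 16 random values instead of 32. A sketch along these lines (illustrative, not the exact diff; `bytes_to_uint64` is the existing spec helper):

```python
def compute_proposer_index(state: BeaconState, indices: Sequence[ValidatorIndex], seed: Bytes32) -> ValidatorIndex:
    # Sketch of the 16-bit filter: two random bytes per candidate (~65k buckets).
    assert len(indices) > 0
    MAX_RANDOM_VALUE = 2**16 - 1
    i = uint64(0)
    total = uint64(len(indices))
    while True:
        candidate_index = indices[compute_shuffled_index(i % total, total, seed)]
        # Each 32-byte hash now yields 16 two-byte random values.
        random_bytes = hash(seed + uint_to_bytes(i // 16))
        offset = i % 16 * 2
        random_value = bytes_to_uint64(random_bytes[offset:offset + 2])
        effective_balance = state.validators[candidate_index].effective_balance
        # Still fits in uint64: ~2e12 Gwei * 65535 ≈ 1.3e17 < 2**64,
        # whereas a 32-bit random value would overflow the multiplication.
        if effective_balance * MAX_RANDOM_VALUE >= MAX_EFFECTIVE_BALANCE_ELECTRA * random_value:
            return candidate_index
        i += 1
```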