implement a mempool for the sequencer #2341
base: main
Codecov Report. Attention: Patch coverage is
Additional details and impacted files@@ Coverage Diff @@
## main #2341 +/- ##
==========================================
- Coverage 74.62% 74.60% -0.03%
==========================================
Files 110 112 +2
Lines 12024 12252 +228
==========================================
+ Hits 8973 9140 +167
- Misses 2354 2403 +49
- Partials 697 709 +12
☔ View full report in Codecov by Sentry.
Half review! Will continue in January!
From my understanding, the current implementation seems to work as such:
This architecture is simple and straightforward, but I'm not sure how effective it would be in real life. Here are a few things we need to consider:
There should also be a separation between pending txs and ready txs. A pending tx is one whose nonce is higher than the account's next immediate nonce; a ready tx is one with the next immediate nonce, so it is ready to be executed. Mempool design and architecture can get complex, as it can be exploited for MEV. I'm not sure how far we want to take it at the current stage, but I'd suggest looking into existing architectures like Erigon's mempool design and go-ethereum's to see how they do certain things.
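The pending/ready split suggested above can be sketched in Go. This is a hedged illustration, not the PR's implementation: the `Tx` type, its fields, and the `splitReady` helper are all hypothetical, and per-account transaction lists are assumed to be sorted by nonce.

```go
package main

import "fmt"

// Tx is a hypothetical transaction carrying only the fields this sketch needs.
type Tx struct {
	Sender string
	Nonce  uint64
}

// splitReady partitions txs into "ready" (nonce equals the account's next
// expected nonce, including consecutive chains) and "pending" (a nonce gap
// exists before them). Assumes each sender's list is sorted by nonce.
func splitReady(txs map[string][]Tx, nextNonce map[string]uint64) (ready, pending []Tx) {
	for sender, list := range txs {
		expected := nextNonce[sender]
		for _, tx := range list {
			if tx.Nonce == expected {
				ready = append(ready, tx)
				expected++ // the following nonce becomes ready in turn
			} else {
				pending = append(pending, tx)
			}
		}
	}
	return ready, pending
}

func main() {
	txs := map[string][]Tx{
		"0xabc": {{"0xabc", 0}, {"0xabc", 1}, {"0xabc", 3}},
	}
	ready, pending := splitReady(txs, map[string]uint64{"0xabc": 0})
	// Nonces 0 and 1 are ready; nonce 3 stays pending until 2 arrives.
	fmt.Println(len(ready), len(pending))
}
```

A pending tx is promoted to ready as soon as the transaction filling the gap arrives, which is the part that makes real mempool bookkeeping (and its MEV-resistance) nontrivial.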
Haven't checked functionality; will do that on a second round. For now I'm sharing a few stylistic changes which I think will improve the code. Let me know what you think.
LGTM! Adding some final comments
mempool/mempool_test.go (outdated)
// push multiple to non empty (push 4,5. now have 3,4,5)
for i := uint64(4); i < 6; i++ {
	senderAddress := new(felt.Felt).SetUint64(i)
	state.EXPECT().ContractNonce(senderAddress).Return(new(felt.Felt).SetUint64(0), nil)
I believe SetUint64(0) is not necessary. If you want to be explicit, I think returning &felt.Zero is a better option.
I've updated it to &felt.Zero, but just out of curiosity, why do you think new(felt.Felt).SetUint64(0) is suboptimal? Just because it might get allocated on the heap?
Oh, in this instance it was from a legibility point of view. It is easier to read that you're returning zero than creating a new felt and setting its value to zero.
LGTM!
This PR implements a mempool required for the sequencer.