
Add chunked concurrency to avoid thread signalling #2637

Closed
prasannavl wants to merge 10 commits

Conversation

prasannavl
Member

Summary

  • Chunks the work into per-thread batches to avoid per-task thread signalling (a rough sketch of the idea follows this list).
  • Performance difference not yet measured.
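
A minimal sketch of the chunking idea, for illustration only: the worker count, `WorkItem` type, and helper names below are placeholders rather than code from this PR. The point is that work items are bucketed up front (mirroring `evmTxMsgsPools` in the diff) and the pool of workers is signalled once per chunk instead of once per item.

```cpp
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct WorkItem { /* stand-in for a single EVM tx pre-cache task */ };

// Process one chunk on a single worker; no further signalling per item.
void ProcessChunk(const std::vector<WorkItem> &chunk) {
    for (const auto &item : chunk) {
        // per-item work (e.g. ECC pre-cache) would run here
        (void)item;
    }
}

void RunChunked(const std::vector<WorkItem> &items, size_t numWorkers) {
    // Split items into numWorkers buckets up front.
    std::vector<std::vector<WorkItem>> pools(numWorkers);
    for (size_t i = 0; i < items.size(); ++i) {
        pools[i % numWorkers].push_back(items[i]);
    }

    // One thread launch (one "signal") per chunk instead of per item.
    std::vector<std::thread> workers;
    workers.reserve(numWorkers);
    for (auto &pool : pools) {
        workers.emplace_back(ProcessChunk, std::cref(pool));
    }
    for (auto &t : workers) {
        t.join();
    }
}
```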

Implications

  • Storage

    • Database reindex required
    • Database reindex optional
    • Database reindex not required
    • None
  • Consensus

    • Network upgrade required
    • Includes backward compatible changes
    • Includes consensus workarounds
    • Includes consensus refactors
    • None

Bushstar previously approved these changes Oct 30, 2023
@prasannavl
Member Author

Sample runs on testnet, up to block 1176419:

Current master, fresh sync: 13:24:47 - 15:03:45 = 99 mins

Rewound sync over blocks 1149999 - 1176419:
Master: 17:47:31 - 18:13:17 = 27 mins
Master, eager (without WaitForCompletion): TODO
Chunked: 00:27:25 - 01:07:14 = 37 mins
Chunked, eager (without WaitForCompletion): running

@@ -2991,6 +2992,10 @@ bool CChainState::ConnectBlock(const CBlock &block,
auto isEvmEnabledForBlock = blockCtx.GetEVMEnabledForBlock();
auto &evmTemplate = blockCtx.GetEVMTemplate();

std::vector<CEvmTxMessage> evmTxMsgs;
std::vector<std::vector<CEvmTxMessage>> evmTxMsgsPools;
TaskGroup evmEccPreCacheTaskPool;
Member

Could we define this in ConnectTip so we can call MarkCancelAndWaitForCompletion there, instead of having to call it on every failure or success in ConnectBlock?
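
A rough sketch of that suggestion, under stated assumptions: `TaskGroup` below is a stub standing in for the repository's class (only `MarkCancelAndWaitForCompletion` is referenced in this thread), and the guard and wrapper names are hypothetical. The idea is an RAII guard owned by ConnectTip whose destructor runs on every exit path, so ConnectBlock would not need the explicit call on each success or failure return.

```cpp
// Stub standing in for the repository's TaskGroup; only the method referenced
// above is modelled here.
struct TaskGroup {
    void MarkCancelAndWaitForCompletion() { /* cancel and join pending tasks */ }
};

// Hypothetical RAII guard: its destructor runs on every exit path of the
// enclosing scope, so individual return statements need no explicit call.
class TaskGroupGuard {
public:
    explicit TaskGroupGuard(TaskGroup &group) : group_(group) {}
    ~TaskGroupGuard() { group_.MarkCancelAndWaitForCompletion(); }
    TaskGroupGuard(const TaskGroupGuard &) = delete;
    TaskGroupGuard &operator=(const TaskGroupGuard &) = delete;

private:
    TaskGroup &group_;
};

// Illustrative shape of ConnectTip owning the group, not the real signature.
bool ConnectTipSketch() {
    TaskGroup evmEccPreCacheTaskPool;
    TaskGroupGuard guard(evmEccPreCacheTaskPool);
    // ConnectBlock(..., evmEccPreCacheTaskPool) would run here; whichever
    // path it returns through, the guard waits for outstanding tasks.
    return true;
}
```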

@prasannavl
Member Author

prasannavl commented Nov 2, 2023

#2639 provided better performance for now.

Closing, as more granular per-task optimisation at a later stage is a better direction than this workaround.

prasannavl closed this Nov 2, 2023