
Probabilistic queue priorization: do not starve bulk requests on very fast connections #995

Merged (9 commits) on Nov 29, 2024

Conversation

@ArneBab (Contributor) commented Nov 12, 2024

No description provided.

This is intended to avoid starving the bulk queue when realtime
requests are received faster than it takes to resolve one.
@ArneBab force-pushed the probabilistic-queue-priorization branch from fe30c65 to 0e4d506 on November 12, 2024 21:40
@bertm (Contributor) left a comment

If the choice of 90% is backed by data or measurements, please add that justification to the Javadoc.

Otherwise fine, just a few code style issues.

Review threads (outdated, resolved):
- src/freenet/node/PeerMessageQueue.java (3 threads)
- test/freenet/node/NewPacketFormatTest.java (1 thread)
@ArneBab (Contributor, Author) commented Nov 13, 2024

@bertm there isn’t an empirical basis for the 10% — it’s just some small probability to break out of possible starvation. It may be sufficient to go down to 1%.
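The idea discussed above can be sketched as follows. This is not the actual `PeerMessageQueue` implementation, just a minimal illustration of the technique: with probability 0.9 the realtime queue is preferred, otherwise the bulk queue is, so bulk traffic still gets roughly 10% of the picks even when realtime messages arrive continuously. All class and method names here are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.Random;

// Hypothetical sketch of probabilistic queue prioritization:
// prefer the realtime queue 90% of the time, the bulk queue 10% of the time,
// and fall back to the other queue when the preferred one is empty.
public class ProbabilisticQueuePicker {
    // The 90% figure is the value discussed in this PR; it is a tunable, not measured.
    private static final double REALTIME_PROBABILITY = 0.9;

    private final Queue<String> realtime = new ArrayDeque<>();
    private final Queue<String> bulk = new ArrayDeque<>();
    private final Random random;

    public ProbabilisticQueuePicker(Random random) {
        this.random = random;
    }

    public void enqueueRealtime(String message) { realtime.add(message); }
    public void enqueueBulk(String message) { bulk.add(message); }

    /** Picks the next message to send, or null if both queues are empty. */
    public String poll() {
        boolean preferRealtime = random.nextDouble() < REALTIME_PROBABILITY;
        Queue<String> preferred = preferRealtime ? realtime : bulk;
        Queue<String> fallback = preferRealtime ? bulk : realtime;
        String message = preferred.poll();
        return message != null ? message : fallback.poll();
    }

    public static void main(String[] args) {
        ProbabilisticQueuePicker picker = new ProbabilisticQueuePicker(new Random(42));
        // Keep both queues permanently full, simulating a very fast connection
        // where realtime requests arrive faster than they can be resolved.
        for (int i = 0; i < 1000; i++) {
            picker.enqueueRealtime("rt");
            picker.enqueueBulk("bulk");
        }
        int bulkServed = 0;
        for (int i = 0; i < 1000; i++) {
            if ("bulk".equals(picker.poll())) bulkServed++;
        }
        // Bulk is served in roughly 10% of picks despite realtime never draining.
        System.out.println("bulk served out of 1000 picks: " + bulkServed);
    }
}
```

With a strict priority scheme, `bulkServed` would be 0 here; the probabilistic pick guarantees a bounded share for the bulk queue instead.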

@ArneBab ArneBab merged commit c873c45 into hyphanet:next Nov 29, 2024
1 check passed
@ArneBab (Contributor, Author) commented Nov 29, 2024

Merged — thank you for the reviews!

3 participants