
upd/HRW library #2629

Merged
merged 2 commits into master from upd/hrw-library
Nov 15, 2023

Conversation

carpawell (Member):

No description provided.

@@ -96,11 +96,6 @@ func (e *StorageEngine) Evacuate(prm EvacuateShardPrm) (EvacuateShardRes, error)
}
e.mtx.RUnlock()

weights := make([]float64, 0, len(shards))
for i := range shards {
weights = append(weights, e.shardWeight(shards[i].Shard))
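For context, the quoted (and later removed) lines collected one weight per shard before HRW sorting. Below is a minimal, self-contained sketch of that pattern; `Shard`, `shardWeight`, and the field names here are illustrative stand-ins, not the real storage engine API:

```go
package main

import "fmt"

// Shard is an illustrative stand-in for the engine's shard wrapper.
type Shard struct{ FreeSpaceKB uint64 }

// shardWeight mirrors the removed accessor: in the engine the backing
// field was documented but never set, so it always returned zero.
func shardWeight(s Shard) float64 { return float64(s.FreeSpaceKB) }

func main() {
	shards := []Shard{{}, {}, {}} // zero values, as in practice

	// the removed pattern: preallocate and collect one weight per shard
	weights := make([]float64, 0, len(shards))
	for i := range shards {
		weights = append(weights, shardWeight(shards[i]))
	}
	fmt.Println(weights) // every entry is 0, so HRW gets no signal
}
```

Since every collected weight is zero, the slice carries no information and the subsequent weighted sort cannot distinguish shards.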
Member:

What if we have different capacity for every shard?

Member Author:

We could add a weight to sort shards more "honestly", but then we would not know which weight to use when a Get request arrives.

Member:

We'd better keep the functionality as is, then, and create an issue to solve it in the future. We will need this: a shard can be put into/retrieved from the metabase.

Member Author:

> shard can be put into/retrieved from the metabase.

Only if the meta is not per shard, which is not the current case.

roman-khimov (Member), Nov 2, 2023:

Sure, sure. But the weighting itself should be kept.

Member Author:

> But the weighting itself should be kept.

You mean in this PR? This code is currently broken for sure: it always uses 0, which is confusing. I suggest bringing weighting back (reworking it, in fact; the current version can't be used at all) when we need it and when we are ready for it.

Member:

Why do you think it needs a complete rework?

carpawell (Member Author), Nov 3, 2023:

  1. It is not used and not usable, totally. The field is commented as "Amount of free disk space. Measured in kilobytes.", yet it is never changed (always zero) and never has been since the beginning.
  2. A shard currently does not know how much it stores; no code uses this, and nobody has ever asked for it. There are estimation metrics per container, calculations inside bbolt's transactions, internal changes by background workers, and so on, and none of them use this field (see 1).
  3. If we want it to be used as a weight, that would be its first real use case, so the per-shard meta should be reworked first.

I do not mean that weighting is hard to implement, but some things should be done first; currently it just does the wrong thing.
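The degeneracy being discussed can be illustrated with a small weighted rendezvous (HRW) sketch. This is not the actual nspcc-dev/hrw implementation; `score`, `sortByHRW`, and the `shard` type are hypothetical, but the arithmetic shows why an always-zero weight field makes the weighting a no-op:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// shard is a hypothetical descriptor; weight would be something like
// free disk space, which in the broken code was always zero.
type shard struct {
	id     string
	weight float64
}

// score computes a weighted rendezvous (HRW) score for a key/shard pair:
// a per-pair hash normalized into [0, 1], scaled by the shard's weight.
func score(key string, s shard) float64 {
	h := fnv.New64a()
	h.Write([]byte(key + s.id))
	norm := float64(h.Sum64()) / float64(^uint64(0))
	return norm * s.weight
}

// sortByHRW orders shards by descending score for the given key.
func sortByHRW(key string, shards []shard) {
	sort.SliceStable(shards, func(i, j int) bool {
		return score(key, shards[i]) > score(key, shards[j])
	})
}

func main() {
	shards := []shard{{"s1", 0}, {"s2", 0}, {"s3", 0}}
	sortByHRW("object-key", shards)
	// with all-zero weights every score is 0, every comparison ties,
	// and the stable sort leaves the original order untouched
	fmt.Println(shards[0].id, shards[1].id, shards[2].id)
}
```

With all-zero weights every comparison ties, so the "weighted" sort degenerates into keeping whatever order the shards already had; only a non-zero, capacity-derived weight would actually bias placement.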

@carpawell carpawell force-pushed the upd/hrw-library branch 2 times, most recently from 088f469 to 3c49f54, on November 14, 2023 15:59
@carpawell carpawell marked this pull request as ready for review November 14, 2023 15:59
Signed-off-by: Pavel Karpy <[email protected]>
Every shard has a zero weight. It is useless and worked only because of the
fall-back to simple sorting in the HRW library.

Signed-off-by: Pavel Karpy <[email protected]>

codecov bot commented Nov 15, 2023

Codecov Report

Attention: 37 lines in your changes are missing coverage. Please review.

Comparison is base (59a8198) 28.67% compared to head (9f01e39) 28.67%.
Report is 5 commits behind head on master.

❗ Current head 9f01e39 differs from pull request most recent head a1cbaac. Consider uploading reports for the commit a1cbaac to get more accurate results

Files Patch % Lines
pkg/local_object_storage/blobstor/fstree/fstree.go 51.51% 14 Missing and 2 partials ⚠️
...ct_storage/blobstor/fstree/fstree_write_generic.go 0.00% 14 Missing ⚠️
...ject_storage/blobstor/fstree/fstree_write_linux.go 16.66% 4 Missing and 1 partial ⚠️
...kg/local_object_storage/blobstor/fstree/control.go 0.00% 1 Missing ⚠️
pkg/local_object_storage/blobstor/put.go 87.50% 1 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##           master    #2629   +/-   ##
=======================================
  Coverage   28.67%   28.67%           
=======================================
  Files         415      414    -1     
  Lines       32234    32251   +17     
=======================================
+ Hits         9244     9249    +5     
- Misses      22195    22205   +10     
- Partials      795      797    +2     


@roman-khimov roman-khimov merged commit f865d17 into master Nov 15, 2023
7 of 8 checks passed
@roman-khimov roman-khimov deleted the upd/hrw-library branch November 15, 2023 18:46