feat: add basic sp selection score modelling #298
Draft
Putting this work up as a Draft for now so I can pause and think about something else for a little while.
This is an attempt to simulate a set of retrievals in a way that lets us tweak the scoring mechanism (weights etc.) and see how, or whether, those tweaks make a difference.
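As a rough illustration of the kind of scoring mechanism being tweaked, here is a minimal Go sketch; the struct shapes, metric names, and weights are all invented for illustration and are not the actual code in this PR:

```go
package model

// RetrievalStats holds hypothetical observed metrics for one simulated
// retrieval from a provider (fields invented for illustration).
type RetrievalStats struct {
	TTFBMs    float64 // time to first byte, in milliseconds
	SpeedMBps float64 // average transfer speed
	Success   bool    // whether the retrieval completed
}

// Weights are the tweakable knobs of the scoring mechanism; adjusting
// these is the point of the simulation.
type Weights struct {
	TTFB    float64
	Speed   float64
	Success float64
}

// Score collapses one retrieval's stats into a single comparable value:
// faster transfers and successes raise it, a slow first byte lowers it.
func Score(s RetrievalStats, w Weights) float64 {
	score := w.Speed*s.SpeedMBps - w.TTFB*s.TTFBMs
	if s.Success {
		score += w.Success
	}
	return score
}
```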
This works (run `main` in `pkg/session/model/cmd`), but the averaging of provider behaviour across runs means any tweaking gets lost in the wash. I need to set up a static list of providers with fixed(ish) characteristics up front so they can be sorted in more meaningful ways and their score comparisons have more impact.
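One possible shape for that static provider list, building on the hypothetical `RetrievalStats` type from the sketch above (again, all names and values are made-up placeholders, not measurements):

```go
package model

import "math/rand"

// Provider is a simulated storage provider with fixed(ish) baseline
// characteristics; all values below are invented placeholders.
type Provider struct {
	Name          string
	BaseTTFBMs    float64 // typical time to first byte
	BaseSpeedMBps float64 // typical transfer speed
	FailureRate   float64 // probability a retrieval fails
	Jitter        float64 // fractional random variation around the baselines
}

// StaticProviders is a fixed, deliberately varied set so that runs are
// comparable and scoring tweaks don't get averaged away.
var StaticProviders = []Provider{
	{Name: "fast-reliable", BaseTTFBMs: 50, BaseSpeedMBps: 40, FailureRate: 0.02, Jitter: 0.1},
	{Name: "fast-flaky", BaseTTFBMs: 60, BaseSpeedMBps: 35, FailureRate: 0.30, Jitter: 0.2},
	{Name: "slow-reliable", BaseTTFBMs: 400, BaseSpeedMBps: 5, FailureRate: 0.01, Jitter: 0.1},
	{Name: "slow-flaky", BaseTTFBMs: 500, BaseSpeedMBps: 4, FailureRate: 0.40, Jitter: 0.3},
}

// Simulate produces one retrieval's stats by jittering the fixed baselines,
// so each provider stays recognisably itself across runs.
func (p Provider) Simulate(rng *rand.Rand) RetrievalStats {
	jitter := func(base float64) float64 {
		return base * (1 + p.Jitter*(2*rng.Float64()-1))
	}
	return RetrievalStats{
		TTFBMs:    jitter(p.BaseTTFBMs),
		SpeedMBps: jitter(p.BaseSpeedMBps),
		Success:   rng.Float64() > p.FailureRate,
	}
}
```

With a fixed set like this, sorting the providers by `Score` under different `Weights` should make it obvious whether a tweak actually reorders them, rather than the change disappearing into per-run averages.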