Add benchmark for high pin_subpackage count recipe #5246
Conversation
Force-pushed from 83d87f3 to 13ce323
CodSpeed Performance Report
Merging #5246 will not alter performance. (Falling back to comparing.)
Summary
Benchmarks breakdown
Force-pushed from 74d9e5a to 3d79e09
@mbargull this looks promising! Is this ready for review?
In principle, yes.
for version_count, package_count in [(1, 4), (4, 3), (4, 3)]:
    zipped = []
    for package, versions in islice(packages, package_count):
        zipped.append(package)
        variant[package] = list(islice(versions, version_count))
    variant["zip_keys"].append(zipped)
# for version_count, package_count in [(3, 1), (2, 4), (1, 327)]:
for version_count, package_count in [(3, 1), (2, 4), (1, 33)]:
    for package, versions in islice(packages, package_count):
        variant[package] = list(islice(versions, version_count))
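The hunk above builds a synthetic variant dictionary in two passes: the first loop creates groups of packages whose versions vary together (recorded in `zip_keys`), and the second adds independently varying single-entry keys. A self-contained sketch of the same pattern follows; `fake_packages` is a hypothetical stand-in for the benchmark's real package/version source, assumed to be an iterator of `(name, versions)` pairs:

```python
from itertools import islice


def fake_packages():
    """Endless stand-in source: yields (package_name, versions_iterator) pairs."""
    n = 0
    while True:
        n += 1
        yield f"pkg{n}", (f"{v}.0" for v in range(1, 10))


variant = {"zip_keys": []}
packages = fake_packages()

# Pass 1: zipped keys -- groups of packages whose versions vary together.
for version_count, package_count in [(1, 4), (4, 3), (4, 3)]:
    zipped = []
    for package, versions in islice(packages, package_count):
        zipped.append(package)
        variant[package] = list(islice(versions, version_count))
    variant["zip_keys"].append(zipped)

# Pass 2: single-entry keys -- each package varies independently.
for version_count, package_count in [(3, 1), (2, 4), (1, 33)]:
    for package, versions in islice(packages, package_count):
        variant[package] = list(islice(versions, version_count))
```

Because `islice` consumes from the same shared iterator, each loop picks up where the previous one left off, so the sketch ends up with 48 distinct package keys plus `zip_keys` (three groups of 4, 3, and 3 packages).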
To simplify things, it would probably suffice to only add the single-entry keys, since their number is far greater than that of the other entries (and zip_keys).
As is, these numbers just closely recreate those from conda-forge's cbc.yaml.
(Overall, the variant size mostly affects the run-time impact of .config.Config.copy, I'd assume.)
I say we leave this for now, we can always simplify it later once we've achieved some speedup.
This is a performance regression benchmark for conda#5224.
Signed-off-by: Marcel Bargull <[email protected]>
Signed-off-by: Marcel Bargull <[email protected]>
As long as the entire benchmark job completes in under 30 min, I'm OK with proceeding.
I think there's value in getting to see the speed delta for this benchmark in the other PRs (we can always disable this once we've made the improvements).
Signed-off-by: Marcel Bargull <[email protected]>
Force-pushed from 3d79e09 to 60cce81
Thanks for the review/merge 🙇!
Oh, one thing I kind of forgot: by stripping the recipe down this much, I got rid of all of the selectors 🤭.
Description
This is a performance regression benchmark for gh-5224.
Checklist - did you ...
- [ ] Add a file to the news directory (using the template) for the next release's release notes?
- [ ] Add / update outdated documentation?