Add wheel tests to CI #1416
Conversation
LGTM, thanks @gforsyth !
/ok to test

@gforsyth I've triggered CI manually now, but FYI: going forward you'll need to set up GitHub commit signing to ensure CI runs automatically.
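For reference, commit signing can be done with either GPG or SSH keys; below is a minimal sketch of the SSH-based route. The key path is just an example, not a requirement of this repo.

```bash
# Minimal sketch of SSH-based commit signing (GPG works too).
# The key path below is an example; use whichever key you prefer.
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub
git config --global commit.gpgsign true
# The corresponding public key also needs to be added to your GitHub
# account as a "signing key" so GitHub marks commits as verified.
```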
Force-pushed from 3ef9326 to 76f1886

Force-pushed from 9ab6a88 to 4ab1ed0
We don't currently use tags/keywords in PR titles, so as a nit I changed the title to contain only the message. It's really just a nit; hope you don't mind, Gil. 🙂
Thanks for working on this! Left some suggestions.
Can you please also add a similar entry in https://github.com/rapidsai/dask-cuda/blob/branch-25.02/.github/workflows/test.yaml? You can follow the patterns from other repos, like https://github.com/rapidsai/cudf/blob/branch-25.02/.github/workflows/test.yaml
Those `.github/workflows/test.yaml` workflow configs define which tests run in 3 scenarios:

- on the target branch (e.g. `branch-25.02`) after merges
- nightly
- manually triggered in the UI via workflow dispatch

@ me if you want some help resolving the CI failures.
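For concreteness, a wheel-test entry in `.github/workflows/test.yaml` might look roughly like the sketch below. The shared-workflows path, branch, and inputs are assumptions modeled on other RAPIDS repos, not necessarily what this PR ends up adding.

```yaml
# Hypothetical sketch -- the workflow path, branch, and inputs are assumptions
# modeled on other RAPIDS repos (e.g. cudf), not the exact entry from this PR.
name: test

on:
  workflow_dispatch:
    inputs:
      branch: {required: true, type: string}
      date: {required: true, type: string}
      sha: {required: true, type: string}

jobs:
  wheel-tests:
    secrets: inherit
    uses: rapidsai/shared-workflows/.github/workflows/wheels-test.yaml@branch-25.02
    with:
      build_type: nightly
      branch: ${{ inputs.branch }}
      date: ${{ inputs.date }}
      sha: ${{ inputs.sha }}
      script: ci/test_wheel.sh
```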
- ucx-py-cu12==0.42.*,>=0.0.0a0
- matrix:
    cuda: "11.*"
    cuda_suffixed: "true"
  packages:
    - cudf-cu11==25.2.*,>=0.0.0a0
    - dask-cudf-cu11==25.2.*,>=0.0.0a0
    - kvikio-cu11==25.2.*,>=0.0.0a0
oh nice, thank you!
Hey @jameslamb -- back to this after a brief interlude. Could you help me understand the dependency miss in the wheel tests? pip isn't finding the (as yet unpublished) …
Ah yep! I can help with this:
When we publish …

The problem here is that the …

This is an interesting one ... since we only publish …

I can think of 2 ways to address this: …

I think we should try just directly installing them in CI. That'd mean: …
That's already happening in this job. RAPIDS pre-release packages are published at https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/, and the docker images we run wheel CI in have configuration pointing there: https://github.com/rapidsai/ci-imgs/blob/39aa63d9799dbc9b8268cbbffb2f4d0b39192e67/context/pip.conf#L5
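For illustration, resolving those pre-release wheels amounts to pointing pip at that extra index and allowing pre-releases. In CI this comes from the image's `pip.conf` rather than command-line flags, and the package spec below is just an example, not the exact one from this PR.

```bash
# Sketch only: the CI images bake the extra index into pip.conf, so these flags
# aren't needed there; the package spec is an illustrative example.
python -m pip install \
  --extra-index-url=https://pypi.anaconda.org/rapidsai-wheels-nightly/simple/ \
  --pre \
  "dask-cudf-cu12==25.2.*,>=0.0.0a0"
```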
Thanks! Left one suggestion for your consideration.
ci/test_wheel.sh (outdated)
python -m pip install -v --prefer-binary -r /tmp/requirements-test.txt

# echo to expand wildcard before adding `[extra]` requires for pip
python -m pip install $(echo ./dist/dask_cuda*.whl)
Suggested change:

-python -m pip install -v --prefer-binary -r /tmp/requirements-test.txt
-# echo to expand wildcard before adding `[extra]` requires for pip
-python -m pip install $(echo ./dist/dask_cuda*.whl)
+# echo to expand wildcard before adding `[extra]` requires for pip
+python -m pip install \
+    -r /tmp/requirements-test.txt \
+    ./dist/dask_cuda*.whl
I think we should group these `pip install` calls together. That'll reduce the risk of creating an inconsistent environment, and give us a better chance of catching packaging issues.

For example, imagine if `dask-cuda` had a runtime dependency on `numpy<2` but we had a pin of `numpy>=2` in the testing dependencies. With separate `pip install` calls, pip would happily downgrade `numpy`. With them combined, we'll get an informative, loud error saying "hey, these requirements are incompatible".
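A quick self-contained illustration of that failure mode; the pins here are made up for demonstration, not dask-cuda's actual requirements.

```bash
# Hypothetical pins, for illustration only -- not dask-cuda's real constraints.

# Separate calls: the second install quietly downgrades numpy to satisfy its pin.
python -m pip install "numpy>=2"
python -m pip install "numpy<2"    # succeeds, replacing numpy 2.x with 1.x

# Combined call: pip's resolver sees both constraints at once and fails with an
# error about conflicting requirements instead of silently downgrading.
python -m pip install "numpy>=2" "numpy<2"
```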
Force-pushed from 6a613ce to af1fc7b
Thanks for all the pointers, @jameslamb! Still waiting on a few conda jobs, but the wheel tests are green, so this is probably ready for another look!
🙌🏻 looking pretty good!
Last suggestion... can you please pull the latest changes from `branch-25.02` in here? I think that's a useful practice in general, but it's especially relevant for this PR because it's missing some changes related to dependency versions (#1419).

If we see everything pass and no issues in the logs after that, I support merging this.
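For anyone following along, pulling in the target branch is roughly the following; this assumes the rapidsai repo is the `upstream` remote, and a rebase instead of a merge works just as well.

```bash
# Assumes the rapidsai/dask-cuda remote is named "upstream"; adjust as needed.
git fetch upstream
git merge upstream/branch-25.02   # or: git rebase upstream/branch-25.02
git push
```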
# echo to expand wildcard
python -m pip install -v --prefer-binary -r /tmp/requirements-test.txt $(echo ./dist/dask_cuda*.whl)

python -m pytest ./dask_cuda/tests -k "not ucxx"
Seems ok to me... looks like the changes from #1406 doing the same thing in the conda Python test jobs haven't been reverted yet.
Adding this now that wheels are available
We want a name like: `dask_cuda_wheel_python_dask_cuda_py312_x86_64.tar.gz` from these environment variables
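As a rough reconstruction of how that artifact name is assembled, the sketch below uses illustrative placeholder variables, not necessarily the real CI environment variables.

```bash
# Illustrative only: variable names are placeholders, not the actual CI env vars.
package_name="dask_cuda"     # repo / wheel name
pkg_type="python"            # wheel flavor
py_version="3.12"
arch="x86_64"

echo "${package_name}_wheel_${pkg_type}_${package_name}_py${py_version//./}_${arch}.tar.gz"
# -> dask_cuda_wheel_python_dask_cuda_py312_x86_64.tar.gz
```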
Co-authored-by: James Lamb <[email protected]>
Force-pushed from 77da80d to f5ff692
Hey @jameslamb -- rebased and green
Looks great, thanks for working through this!
/merge
Adding this now that wheels are available
Resolves #1344