Make tests robust and independent #543
Conversation
* Split up test_close to receive only clean stdout
* Adapt expected values to new scenario names
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@           Coverage Diff           @@
##            main    #543    +/-   ##
=======================================
  Coverage    98.8%   98.8%
=======================================
  Files          44      44
  Lines        4813    4819     +6
=======================================
+ Hits         4756    4765     +9
+ Misses         57      54     -3
I haven't seen …
I'm just noticing: the Windows tests consume much more time than all the others. This is not specific to this PR; it was also true for recent dependabot PRs. Not sure whether that is worth investigating; it's definitely not high priority, I'm afraid.
I'm rerunning these tests a few times now to see if any flakiness comes up. If it doesn't, I'm happy with the current state of the PR. Just to be sure: my current solution for the flakiness of the tutorial tests is to run these tests on the same worker, which might run counter to our desire to distribute the runs using …
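The truncated reference above presumably concerns the test distribution plugin. As a minimal, hedged sketch only (assuming pytest-xdist ≥ 2.5 with its loadgroup scheduler; the test names are placeholders, and group_base_name mirrors the diff further down), pinning tests to one worker could look like this:

```python
# Hedged sketch, not the exact code from this PR: with pytest-xdist's
# "loadgroup" scheduler, every test sharing an xdist_group name is sent to the
# same worker, so the tutorial tests cannot interfere across workers.
import platform

import pytest

# One group per platform/Python combination, mirroring the diff further down.
group_base_name = platform.system() + platform.python_version()


@pytest.mark.xdist_group(name=group_base_name)
def test_tutorial_part_one():  # hypothetical test name
    ...


@pytest.mark.xdist_group(name=group_base_name)
def test_tutorial_part_two():  # runs on the same worker as the test above
    ...
```

Invoked as, e.g., `pytest -n auto --dist loadgroup`, tests sharing a group name are then scheduled together rather than spread across workers.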
The second rerun already revealed something interesting: on windows-py3.11, both of the first tutorial tests timed out when trying to reach the …
Force-pushed from a494ad5 to 918298f.
#467 is also still open, despite there being a workaround in our code base: the tutorial tests are skipped on Ubuntu because they take too long. Not sure if we need to keep this issue open; I'm not working on running the tutorials on Ubuntu in this PR. We could still use it as a tracking issue if we want to remove the …
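For illustration only, a hedged sketch of what such a platform-based skip could look like (the marker placement, test name, and reason text are assumptions, not the repository's actual code):

```python
# Hypothetical sketch: skip the tutorial notebook tests on Linux/Ubuntu runners.
import platform

import pytest


@pytest.mark.skipif(
    platform.system() == "Linux",
    reason="Tutorial notebooks currently take too long on Ubuntu runners (#467)",
)
def test_py_transport_tutorial():  # placeholder test name
    ...
```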
I've run these tests a number of times now, and the only tests I saw failing (twice, unfortunately) were the first parts of the tutorial tests (i.e. the tests not called …
We discussed …
I have now rerun the tests four times and did not observe any flaky behaviour, not even reruns due to flakiness. One time, the …
Thanks for the thorough effort here.
Some comments inline, and I pushed one commit to satisfy mypy in .testing.data. Also:
- R_transport_scenario.ipynb appears to have added cell metadata that are editor-specific. I'd prefer we not commit these unless they are needed for a specific reason.
Aside from those items, good to go.
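As a side note, one hedged way to drop editor-specific cell metadata before committing a notebook, sketched with nbformat (the file path and metadata keys are assumptions, not necessarily what was added here):

```python
# Hypothetical sketch: remove editor-specific keys from notebook cell metadata.
import nbformat

path = "tutorial/R_transport_scenario.ipynb"  # assumed location of the notebook
nb = nbformat.read(path, as_version=4)
for cell in nb.cells:
    for key in ("vscode", "pycharm"):  # assumed editor-specific metadata keys
        cell.metadata.pop(key, None)
nbformat.write(nb, path)
```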
@@ -7,53 +7,73 @@

from ixmp.testing import get_cell_output, run_notebook

group_base_name = platform.system() + platform.python_version()

GHA = "GITHUB_ACTIONS" in os.environ
Typically we'd define this in ixmp.testing. If it's only used here for now, fine to leave it; but if it's to be re-used, we can move it there.
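If the flag does end up being reused, a minimal sketch of how it might live in ixmp.testing and be imported elsewhere (the module layout, marker name, and reason text are illustrative assumptions):

```python
# Hypothetical sketch for ixmp/testing/__init__.py: define the flag once so
# test modules can import it instead of re-reading the environment.
import os

import pytest

#: True when the test suite runs on GitHub Actions.
GHA = "GITHUB_ACTIONS" in os.environ

#: Example of a reusable marker built on the flag (illustrative name).
skip_on_gha = pytest.mark.skipif(GHA, reason="Skipped on GitHub Actions runners")
```

A test module would then use `from ixmp.testing import GHA` (or the marker) rather than redefining the check locally.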
This is a continuation from #521 that puts the branch in the upstream repo rather than my fork, so that the Windows test runners can access the GAMS_LICENSE secret. Please see the original PR for a detailed description, to-dos, etc.

PR checklist
- [ ] Add, expand, or update documentation. (Just CI tests.)
- [ ] Update release notes. (Just CI tests.)