Set up tox-conda/pytest #1
Conversation
- Set build-backend so isolated pip install works
- Add pychop as a testing dependency
- Configure tox-conda (a sketch follows below)
- Create `__init__.py` files under tests/ so that all tests are found by pytest

We need to use conda to install Mantid and check results against Abins functions. Tox-conda is stuck on tox v3; there is a branch to support v4, but it's not clear if/when this will be finished. We might also be able to drop the conda component of these tests once we are satisfied with validation against Mantid: it can be replaced with tests against reference values.

Perhaps we don't need the test dependencies in pyproject.toml at all, now this is all defined in tox.ini. But they could be useful for running tests without tox if desired.
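For context, a minimal sketch of the kind of tox-conda configuration this describes; the env name, version pins and conda package spec here are assumptions, not the actual tox.ini:

```ini
; Hypothetical sketch only; the real tox.ini may differ.
[tox]
; tox-conda only works with tox v3
requires =
    tox < 4
    tox-conda
envlist = py310

[testenv]
; Mantid comes from conda so we can check results against Abins
conda_channels = mantid
conda_deps = mantid
deps =
    pytest
    pychop
commands = pytest {posargs} tests/
```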
This is a decent start; the tests seem to be running correctly. Lots are failing, but we expected that.
- Use reservoir sampling to efficiently grab a subset of Fermi chopper frequency combinations (see the sketch below)
- Simplify error matching with the match= argument
- Get ids on the fly with a function rather than precomputing them
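A sketch of what these might look like in practice; `sample_params`, the frequency ranges and the test body are illustrative, not taken from the actual test suite:

```python
import itertools
import random

import pytest

SEED = 42  # seeded so the sampled subset is stable between runs


def sample_params(iterable, k, seed=SEED):
    """Reservoir-sample k items from an iterable without materialising it."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(iterable):
        if i < k:
            reservoir.append(item)
        else:
            # keep each of the first i+1 items with probability k/(i+1)
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir


# a seeded subset of Fermi chopper frequency combinations (range is made up)
FREQUENCY_PAIRS = sample_params(
    itertools.product(range(50, 601, 50), repeat=2), k=10
)


@pytest.mark.parametrize("freqs", FREQUENCY_PAIRS)
def test_invalid_frequency_raises(freqs):
    # match= keeps the assertion simple: the message only has to contain this
    with pytest.raises(ValueError, match="frequency"):
        raise ValueError(f"invalid frequency setting: {freqs}")  # stand-in call
```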
- Defer string ids to a function run by the test (sketched below)
- Replace explicit list-extending with itertools.chain
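Roughly, the two changes amount to this; the parameter groups and id format are made up for illustration:

```python
import itertools

import pytest


# A callable passed to ids= is only invoked as each test is collected,
# rather than us precomputing a parallel list of id strings.
def param_id(value):
    return "x".join(str(v) for v in value) if isinstance(value, tuple) else str(value)


group_a = [(100, 250), (200, 400)]  # illustrative parameter groups
group_b = [(300, 150)]

# previously: cases = list(group_a); cases.extend(group_b)
cases = itertools.chain(group_a, group_b)  # no intermediate list


@pytest.mark.parametrize("case", cases, ids=param_id)
def test_case(case):
    assert len(case) == 2
```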
The sweep of incident energies is cut down by some more (seeded) random sampling. At the moment we have some cases with invalid chopper frequencies; make sure the error is checked robustly when handling these. Ideally we should have smarter test-case generation and check the errors in a separate test.
These aren't currently being checked in CI, because tox takes care of it, so they can easily become stale. There must be a better way.
There's more work to do on test coverage/legibility/efficiency, but this at least gets some CI up and running without too much complexity or runtime. Would be good to fix the failing tests next!
The MAPS and MARI integration tests against AbINS are failing in CI, but on my local machine they are all passing. The tests seem to be failing due to
Concerning the failing Lagrange tests, those are due to an issue in AbINS where the cut-off values have a unit-conversion bug, resulting in incorrect low-energy values.
Presumably that means there is no LinAlgError first. LinAlgError comes from numpy; perhaps it depends on the NumPy/SciPy version? I get the same errors when running with
I have tried Mantid versions 6.9, 6.10 and 6.11; 6.9 segfaults (probably numpy trouble) and the two recent versions give the same MAPS/MARI errors.
I get this in tests without the except block (i.e.
We are seeing some interesting inconsistencies in test results between Rastislav's local Windows installation and my Linux/tox setup.
Could indeed be a Windows issue (e.g. this numpy problem was caused by a bug in Windows, which should be resolved, but something similar could be happening again?). In either case, we could add a check for NaNs to the
We know some tests are failing regardless, but want to see what happens on Windows!
Whether it is a "bug" or not, on our side it is probably safest not to assume that NaN values will always be treated the same way across math libraries and operating systems.
Yup, on Windows there are only the Lagrange failures, so it is taking different paths depending on platform/libraries 😭
When testing on Linux, Abins returns NaN rather than raising LinAlgError when the chopper settings are out of range. Our test should detect both cases and ensure appropriate behaviour in the corresponding situation (i.e. raise NoTransmissionError).
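A sketch of that dual check; `NoTransmissionError` and the wrapper are illustrative stand-ins, only the NaN/LinAlgError behaviour is from the discussion above:

```python
import numpy as np


class NoTransmissionError(Exception):
    """Chopper settings transmit no flux (illustrative stand-in)."""


def check_transmission(func, *args, **kwargs):
    """Normalise platform-dependent failure modes into one exception."""
    try:
        result = func(*args, **kwargs)
    except np.linalg.LinAlgError as err:
        # Windows path: the linear algebra fails outright
        raise NoTransmissionError("chopper settings out of range") from err
    if np.any(np.isnan(result)):
        # Linux path: Abins returns NaN instead of raising
        raise NoTransmissionError("chopper settings out of range")
    return result
```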
PyChop issues a warning (
The NaN check is working; I'll give the warning a try and see which is nicer.
This is a bit cleaner than having separate checks for different OS/library situations.
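That is, capture whatever warning PyChop emits rather than branching on platform; the warning handling and the `NoTransmissionError` stand-in (as in the earlier sketch) are assumptions:

```python
import warnings


class NoTransmissionError(Exception):
    """Illustrative stand-in, as in the earlier sketch."""


def check_transmission(func, *args, **kwargs):
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        result = func(*args, **kwargs)
    if caught:
        # PyChop warned: treat the settings as giving no transmission,
        # regardless of whether the maths then returns NaN or raises
        raise NoTransmissionError(str(caught[0].message))
    return result
```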
There is a unit-conversion bug in the low-frequency limit, so this doesn't kick in at the right place in Abins. When that is fixed we can de-skip the test and verify it here.
Warning-based logic looks a bit cleaner to me and seems to work across platforms 👍

I have set the low-frequency Abins-Lagrange comparison to skip, as we know that one is suspect. Higher frequencies are ok but I needed to slightly loosen the

I'd still like to see some of these parameter sweeps collapsed into single test cases, but this seems useful enough to merge now?
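For the record, the skip/tolerance pattern amounts to something like this; the reason string, tolerance value, test names and placeholder data are all illustrative:

```python
import numpy as np
import pytest


@pytest.mark.skip(reason="Abins low-energy cut-off has a unit-conversion bug")
def test_lagrange_low_frequency():
    ...


def test_lagrange_high_frequency():
    # placeholders standing in for the real Abins/PyChop outputs
    calculated = np.array([1.0001, 2.0002])
    reference = np.array([1.0, 2.0])
    # slightly loosened relative tolerance so all platforms pass
    np.testing.assert_allclose(calculated, reference, rtol=1e-3)
```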