mock-code: improve error message for missing test data #43

Closed · ltalirz opened this issue Nov 13, 2020 · 8 comments

Labels: topic/mock-code (Concerns the code mocking feature), type/feature request, status/undecided

Comments

ltalirz (Member) commented Nov 13, 2020

Here is an example of a traceback resulting from missing test data on CI.

The error displayed is that a node expected from the calculation was not created, but this can have many causes, which makes it unnecessarily hard to debug.

Of course, the calculation class itself could be more clever about parsing the outputs, but ideally we (= aiida-testing) would be able to communicate to pytest that the mock code encountered an input it does not yet know about.

I'm not quite sure about the best way to accomplish this.
One could perhaps try to monkey-patch the calculation class of the input plugin when setting up the mock code Code instance, such that it performs an extra check after the original _prepare_for_submission, but perhaps there is also a less invasive way?

Any ideas @greschd ?
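
To make the idea concrete, here is a minimal sketch of such a monkey-patch, assuming the check itself is supplied as a callable. All names are illustrative, not existing aiida-testing API:

```python
# Hypothetical sketch only: wrap a CalcJob plugin's _prepare_for_submission
# so that an extra check runs right after the original method.


def patch_prepare_for_submission(calc_class, extra_check):
    """Monkey-patch ``calc_class`` so ``extra_check(calc_job, folder)`` runs
    after the original ``_prepare_for_submission``."""
    original = calc_class._prepare_for_submission

    def wrapped(self, folder):
        calc_info = original(self, folder)
        # e.g. verify that the inputs written to `folder` correspond to known
        # test data, and raise a descriptive exception otherwise
        extra_check(self, folder)
        return calc_info

    calc_class._prepare_for_submission = wrapped
```

The mock-code fixture could then call something like `patch_prepare_for_submission(SomePluginCalculation, check)` when it sets up the Code instance (again, a sketch, not a concrete proposal).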

ltalirz added the topic/mock-code (Concerns the code mocking feature), type/feature request, and status/undecided labels on Nov 13, 2020
greschd (Member) commented Nov 13, 2020

Hmm.. in that example, isn't the main problem that the exception printed by the mock code doesn't make its way into the pytest log?

In principle we can maybe find a way to forward the error up to pytest, but the same problem also occurs when the calculation itself fails, right? As in, you'd also need to dig into the exception that was the root cause of the test failing.

ltalirz (Member, Author) commented Nov 13, 2020

> Hmm.. in that example, isn't the main problem that the exception printed by the mock code doesn't make its way into the pytest log?

Are you saying that you believe this should happen already or are you wondering how to implement this?
It is not clear to me how the exception of the mock executable can be forwarded to pytest.

> In principle we can maybe find a way to forward the error up to pytest, but the same problem also occurs when the calculation itself fails, right? As in, you'd also need to dig into the exception that was the root cause of the test failing.

Well, the difference here is that we can know in advance that mock-code will fail.
mock-code is written in Python anyway - we can make sure (in Python) that the required test data is available before running the executable.
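
For illustration, such a pre-flight check inside the (Python) mock executable could look roughly like this; the hashing scheme and the test_data layout shown here are assumptions, not the actual aiida-testing implementation:

```python
# Hypothetical sketch: before serving (or recording) outputs, verify that test
# data exists for the current input hash and fail with a clear message if not.
import hashlib
import sys
from pathlib import Path

DATA_DIR = Path("test_data")  # directory holding recorded outputs, keyed by hash


def compute_input_hash(workdir: Path) -> str:
    """Hash all input files in the working directory (sorted for determinism)."""
    digest = hashlib.sha256()
    for path in sorted(p for p in workdir.rglob("*") if p.is_file()):
        digest.update(path.name.encode())
        digest.update(path.read_bytes())
    return digest.hexdigest()


def main() -> None:
    input_hash = compute_input_hash(Path.cwd())
    if not (DATA_DIR / input_hash).is_dir():
        sys.exit(
            f"mock-code: no test data found for input hash {input_hash}. "
            "Run the test once against the actual executable to record it."
        )
    # ... copy the recorded outputs into the working directory ...


if __name__ == "__main__":
    main()
```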

greschd (Member) commented Nov 13, 2020

> Are you saying that you believe this should happen already or are you wondering how to implement this?
> It is not clear to me how the exception of the mock executable can be forwarded to pytest.

Neither - I was sort of thinking of a related but somewhat different concern: the test should probably be written in such a way that the report and exit code of the workchain are printed if it fails. Maybe we can add a helper function for that to aiida-testing.
And then the workchain itself needs to make sure that critical errors in a calculation end up in its report. That's probably good practice in any case, but not really something related to the tests directly.
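
Such a helper could look roughly like the following sketch; `assert_finished_ok` is a hypothetical name, and it assumes aiida-core's `get_workchain_report` utility (the one used by `verdi process report`):

```python
# Hypothetical helper: fail a test with the process report and exit code of a
# process node, so the root cause shows up directly in the pytest output.
from aiida.cmdline.utils.common import get_workchain_report


def assert_finished_ok(node):
    """Raise an AssertionError with report and exit code if ``node`` failed."""
    if node.is_finished_ok:
        return
    report = get_workchain_report(node, levelname='REPORT')
    raise AssertionError(
        f'Process <{node.pk}> failed with exit status {node.exit_status} '
        f'({node.exit_message}); report:\n{report}'
    )
```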

> Well, the difference here is that we can know in advance that mock-code will fail.

Hmm, that would involve adding a piece of code "in front of" the actual execution of the code, I guess? Right now the mock executable is a "stand-alone" thing and it's not quite so easy to pass that knowledge back and forth.
But I'm not sure; maybe a more integrated solution is the better design anyway. I could imagine putting the whole mock code execution into a monkeypatched-in step after _prepare_for_submission.

ltalirz (Member, Author) commented Nov 13, 2020

> The test should probably be written in such a way that the report and exit code of the workchain are printed if it fails. Maybe we can add a helper function for that to aiida-testing.
> And then the workchain itself needs to make sure that critical errors in a calculation end up in its report. That's probably good practice in any case, but not really something related to the tests directly.

I agree that this is good practice, but it would be nice if aiida-testing did not have to rely on people following it.

> Hmm, that would involve adding a piece of code "in front of" the actual execution of the code, I guess? Right now the mock executable is a "stand-alone" thing and it's not quite so easy to pass that knowledge back and forth.

I was indeed thinking simply of running a piece of code after _prepare_for_submission and before running the executable - it would check the hash of the inputs (and raise an exception if the hash isn't found in the test database and no "fallback" executable is specified).
The mock executable could in principle remain unchanged (resulting in some duplication; the shared logic could also be moved to a central place where mock-code can reuse it), but there would be no need to communicate to/from the executable.
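
A sketch of what that check could look like, reusing the compute_input_hash helper from the earlier sketch; the data directory layout, MissingTestDataError, and the fallback-executable handling are all hypothetical:

```python
# Hypothetical check that would run after _prepare_for_submission and before
# the executable is launched; it reuses the same hashing as the mock executable.
from pathlib import Path
from typing import Optional


class MissingTestDataError(Exception):
    """Raised when the mock code has no recorded output for the given inputs."""


def check_test_data(submit_folder: Path, data_dir: Path,
                    fallback_executable: Optional[str]) -> None:
    input_hash = compute_input_hash(submit_folder)  # same hashing as the mock executable
    if (data_dir / input_hash).is_dir():
        return  # recorded outputs exist; the mock executable can serve them
    if fallback_executable is not None:
        return  # the real code will be run and its outputs recorded
    raise MissingTestDataError(
        f'No test data for input hash {input_hash} in {data_dir}, '
        'and no fallback executable is configured.'
    )
```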

> But I'm not sure; maybe a more integrated solution is the better design anyway. I could imagine putting the whole mock code execution into a monkeypatched-in step after _prepare_for_submission.

If that works, it would probably be more elegant.

greschd (Member) commented Nov 14, 2020

Yeah, I think if we're already hooked in before the executable is actually run, it would make sense to do everything there.

Can you try setting up just the monkeypatch code (not necessarily implementing any functionality, just a way to inject code)? I think that'll help figure out what is possible.

ltalirz (Member, Author) commented Nov 20, 2020

After looking into the code a bit, it seems to me that this might be a place to hook in:
https://github.com/aiidateam/aiida-core/blob/02248cf3686a0ab89faf1625e0da24d9e33d8cde/aiida/schedulers/scheduler.py#L384-L391
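
For reference, a rough sketch of what hooking in at the scheduler level might look like using pytest's monkeypatch fixture. It wraps Scheduler.get_submit_script purely as an illustration and is not necessarily tied to the exact lines linked above:

```python
# Rough illustration only: wrap the scheduler's submit-script generation so
# that an extra check can run while the job is being prepared.
import pytest
from aiida.schedulers import Scheduler


@pytest.fixture
def patched_scheduler(monkeypatch):
    original = Scheduler.get_submit_script

    def wrapped(self, job_tmpl):
        # a check on the inputs / test data could be injected here before the
        # submit script is handed back to the engine
        return original(self, job_tmpl)

    monkeypatch.setattr(Scheduler, 'get_submit_script', wrapped)
```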

I'll give it a quick try and will report back here.

ltalirz (Member, Author) commented Nov 20, 2020

I opened #49 (which so far just illustrates the concept).
I would appreciate help on any of the open points, as I cannot spend much more time on this.

ltalirz (Member, Author) commented Nov 24, 2022

This has been addressed by @chrisjsewell in a different way (by surfacing the content of a log file in the test output) in #63.

ltalirz closed this as completed on Nov 24, 2022