mock-code: improve error message for missing test data #43
Comments
Hmm.. in that example, isn't the main problem that the exception printed by the mock code doesn't make its way into the pytest log? In principle we can maybe find a way to forward the error up to pytest, but the same problem also occurs when the calculation itself fails, right? As in, you'd also need to dig into the exception that was the root cause of the test failing.
Are you saying that you believe this should happen already or are you wondering how to implement this?
Well, the difference here is that we can know in advance that …
Neither - I was sort of thinking of a related but somewhat different concern: the test should probably be written in such a way that the report and exit code of the workchain are printed if it fails. Maybe we can add a helper function for that to …
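A minimal sketch of what such a helper could look like, assuming it can reuse `get_workchain_report` from aiida-core; the name `assert_finished_ok` and where it would live are hypothetical:

```python
from aiida.cmdline.utils.common import get_workchain_report

def assert_finished_ok(node):
    """Fail the test with a readable message if the process did not finish ok.

    ``node`` is the ProcessNode returned by e.g. ``run_get_node``.
    (Sketch only; not part of any existing aiida-testing API.)
    """
    if not node.is_finished_ok:
        report = get_workchain_report(node, levelname='REPORT')
        raise AssertionError(
            f'Process<{node.pk}> failed with exit status {node.exit_status} '
            f'({node.exit_message}).\nReport:\n{report}'
        )
```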
Hmm, that would involve adding a piece of code "in front of" the actual execution of the code, I guess? Right now the mock executable is a "stand-alone" thing and it's not quite so easy to pass that knowledge back and forth.
I agree that this is good practice, but it would be nice if …
I was indeed thinking simply of running a piece of code after the …
If that works, it would probably be more elegant.
Yeah, I think if we're already hooked in before the executable is actually run, it would make sense to do everything there. Can you try setting up just the monkeypatch code (not necessarily implement any functionality, just a way to inject code)? I think that'll help figure out what is possible.
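As a sketch of the "just a way to inject code" idea, a pytest fixture along these lines could wrap an arbitrary attribute so that a hook runs on the result of the original call. All names here are hypothetical; only the built-in `monkeypatch` fixture is an existing pytest API, and it undoes the patch automatically when the test ends:

```python
import pytest

@pytest.fixture
def inject_after(monkeypatch):
    """Return a helper that wraps ``owner.attr`` so ``hook`` sees its result."""
    def _inject(owner, attr, hook):
        original = getattr(owner, attr)

        def wrapper(*args, **kwargs):
            result = original(*args, **kwargs)
            hook(result)  # e.g. check for missing test data, print a report, ...
            return result

        monkeypatch.setattr(owner, attr, wrapper)

    return _inject
```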
After looking a bit into the code, it seems to me that this might be a place to hook in: … I'll give it a quick try and will report back here.
I opened #49 (which so far just illustrates the concept)
This has been addressed by @chrisjsewell in a different way (via surfacing the content of a log file in the test output) in #63 |
Here is an example of a traceback resulting from missing test data on CI.
The error displayed is that some node that was expected from the calculation was not created, but this could have many reasons, which makes it unnecessarily hard to debug.
Of course, the calculation class itself could be more clever about parsing the outputs, but ideally we (= `aiida-testing`) would be able to communicate to `pytest` that the mock code did encounter an input it did not yet know. I'm not quite sure about the best way to accomplish this.
One could perhaps try to monkey-patch the calculation class of the input plugin when setting up the mock code `Code` instance, such that it does some extra check after the original `prepare_for_submission`, but perhaps there is also a less invasive way? Any ideas @greschd?
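To make the monkey-patching idea a bit more concrete, here is a rough sketch. It assumes the standard `prepare_for_submission(self, folder)` interface of aiida-core `CalcJob` plugins; the function name and the `extra_check` callable (e.g. something that verifies the mock code knows the given inputs) are hypothetical:

```python
import functools

def wrap_prepare_for_submission(monkeypatch, calc_class, extra_check):
    """Patch ``calc_class.prepare_for_submission`` so that ``extra_check``
    runs after the original implementation (sketch only)."""
    original = calc_class.prepare_for_submission

    @functools.wraps(original)
    def wrapper(self, folder):
        calc_info = original(self, folder)
        extra_check(self, folder, calc_info)
        return calc_info

    monkeypatch.setattr(calc_class, 'prepare_for_submission', wrapper)
```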