There are basic unit tests for various notebooks that are executed in our CI (prow) for each PR. At the moment, the CI happily continues and finishes as passed even though some tests fail.
Test execution output is saved into the output ipynb file, but its content isn't checked, so a failing test goes unnoticed. The design assumed that a failed assertion returns a non-zero exit code and therefore interrupts the execution of the ipynb file; this doesn't seem to be the case.
There are also some other smaller issues we should address as part of this improvement.
Current situation:
Example of the prow configuration for the minimal notebook: here and here
Main issues:
CI doesn't present the results of the tests in a structured per-test manner (e.g. JUnit or XUnit format), so the current situation isn't scalable or clear for test debugging and overview
Unittest can report in JUnit/XUnit format (via the unittest-xml-reporting package): python -m xmlrunner discover -o junit-xml-output .
We just need to find out how to incorporate these reports into prow; see the sketch below
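A minimal sketch of how such a report could be produced, assuming a plain unittest test module picked up by the discover command above; the test class and the package it imports are made-up placeholders:

```python
# Hypothetical test module (e.g. test_smoke.py) discovered by
# `python -m xmlrunner discover -o junit-xml-output .`
import unittest

import xmlrunner  # provided by the unittest-xml-reporting package


class TestNotebookImage(unittest.TestCase):
    def test_numpy_imports(self):
        # Trivial smoke test; a failure here must fail the whole CI job.
        import numpy  # noqa: F401


if __name__ == "__main__":
    # Writes one JUnit-style XML report per test class into junit-xml-output/,
    # which prow (or any other CI) can then present per test.
    unittest.main(testRunner=xmlrunner.XMLTestRunner(output="junit-xml-output"))
```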
Possible other improvements:
Allow users to easily test their changes - at the moment, URLs to particular test resources are hardcoded, so when someone works in their own fork and branch, the changes there aren't reflected by our CI, e.g.:
We may want to use a simple Python script for simple tests, like package version assertions, and use the ipynb format only for functional tests like framework compute tasks; see the version-check sketch after this list
Regarding the *.ipynb script format, we need to run nbformat.normalize and nbformat.validate (docs) on these files during the code validation check (PR CI) so they are formatted properly; see the nbformat sketch after this list
We should unify the version of the *.ipynb file format across all tests - I suppose the format version should match the installed Jupyter notebook framework
Let's review the shell script in the Makefile that executes the tests so that it's fail-fast, or else check the result at the end - the content of the output files - as in the output-check sketch after this list (may not be necessary if JUnit reporting is applied, see main issues)
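For the simple-script idea above, a package version assertion can be an ordinary unittest module with no notebook involved; the package names and version pins below are purely illustrative:

```python
# Hypothetical plain-Python test for package version assertions.
import unittest
from importlib.metadata import version


class TestPinnedVersions(unittest.TestCase):
    # Illustrative pins only; real expected versions would come from the image definition.
    EXPECTED = {
        "numpy": "1.24",
        "pandas": "1.5",
    }

    def test_pinned_versions(self):
        for package, expected_prefix in self.EXPECTED.items():
            installed = version(package)
            self.assertTrue(
                installed.startswith(expected_prefix),
                f"{package}: expected {expected_prefix}.*, got {installed}",
            )


if __name__ == "__main__":
    unittest.main()
```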
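A sketch of the nbformat check that could run in the PR CI; the notebook path is a placeholder, and normalize is assumed to be importable from nbformat.validator (it is a fairly recent addition to nbformat):

```python
# Sketch of a PR-CI check that a notebook is valid and normalized.
import sys

import nbformat
from nbformat.validator import normalize  # assumed available in recent nbformat releases

NOTEBOOK = "test_notebook.ipynb"  # placeholder path

nb = nbformat.read(NOTEBOOK, as_version=nbformat.NO_CONVERT)

# normalize() returns the number of changes it had to make plus the fixed notebook;
# any change means the committed file wasn't formatted properly.
changes, nb = normalize(nb)
if changes:
    print(f"{NOTEBOOK}: not normalized ({changes} change(s) needed)")
    sys.exit(1)

# validate() raises nbformat.ValidationError if the notebook doesn't match its schema.
nbformat.validate(nb)
```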
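If we keep checking the executed output files rather than (or in addition to) JUnit reports, a small script scanning the executed notebook for error outputs lets the Makefile target fail reliably; the path below is a placeholder:

```python
# Sketch: fail if an executed notebook contains any error output.
import sys

import nbformat

EXECUTED_NOTEBOOK = "test_notebook_output.ipynb"  # placeholder path

nb = nbformat.read(EXECUTED_NOTEBOOK, as_version=nbformat.NO_CONVERT)

# Collect all error outputs from executed code cells (nbformat v4 structure).
errors = [
    output
    for cell in nb.cells
    if cell.cell_type == "code"
    for output in cell.get("outputs", [])
    if output.get("output_type") == "error"
]

if errors:
    for err in errors:
        print(f"{err.get('ename')}: {err.get('evalue')}", file=sys.stderr)
    sys.exit(1)
```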
Ideas for test coverage (discussion):
Should we have control over which RPM packages are installed, and in what versions? (See the inventory sketch after this list.)
Should we have control over which pip packages are installed, and in what versions?
A set of scripts that exercise the installed packages and libraries in some basic functional manner - ideally hardware agnostic (not requiring special hardware such as GPUs or HPUs at this stage)
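A possible starting point for the package-inventory idea above: dump the installed pip and RPM packages and compare them against a checked-in manifest. The manifest file name and its one "name==version" per line format are assumptions for illustration:

```python
# Sketch: compare installed pip and RPM packages against a checked-in manifest.
import subprocess
import sys
from importlib.metadata import distributions

MANIFEST = "expected-packages.txt"  # hypothetical file with one "name==version" per line

expected = set()
with open(MANIFEST) as fh:
    for line in fh:
        line = line.strip()
        if line and not line.startswith("#"):
            expected.add(line)

# Installed pip packages.
installed = {f"{dist.metadata['Name']}=={dist.version}" for dist in distributions()}

# Installed RPM packages, if rpm is available in the image.
rpm = subprocess.run(
    ["rpm", "-qa", "--queryformat", "%{NAME}==%{VERSION}\\n"],
    capture_output=True, text=True, check=False,
)
installed.update(line for line in rpm.stdout.splitlines() if line)

missing = expected - installed
if missing:
    print("Missing or mismatched packages:", *sorted(missing), sep="\n  ")
    sys.exit(1)
```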