The goal of this ticket is to implement an annotation scheme for each entry which, in the end, makes sure that the examples we want to test actually work.
So far, there is a primitive `text_examples` function here in the main file and an associated attribute for testing here. This won't work out, because running all examples takes far too much time. It would be better to wrap this in a persistent session and reset the environment before each example. Maybe this can be done with the usual doctest machinery, or we have to use pexpect or Jupyter kernels. My idea is to start a "main session" and, for each test, fork off from that process and drive it with pexpect; I don't know whether doctest already does this.
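A minimal sketch of the fork-off idea, assuming examples are plain Python snippets. `run_example` and its error handling are invented for illustration, and `os.fork` restricts this to POSIX platforms:

```python
import os
import traceback

def run_example(source: str) -> bool:
    """Run one example in a forked child, so the warmed-up state of the
    "main session" is reused but never mutated between examples."""
    pid = os.fork()  # POSIX only; copy-on-write snapshot of this process
    if pid == 0:
        # Child: execute the example in a fresh namespace and report
        # success/failure through the exit status.
        try:
            exec(compile(source, "<example>", "exec"), {})
        except BaseException:
            traceback.print_exc()
            os._exit(1)
        os._exit(0)
    # Parent: wait for the child; a zero exit status means the example ran.
    _, status = os.waitpid(pid, 0)
    return os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0

if __name__ == "__main__":
    assert run_example("print(1 + 1)")        # succeeds
    assert not run_example("undefined_name")  # NameError in the child
```

Forking gives the reset for free: each child starts from an identical snapshot of the main session and its state is thrown away on exit.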
Besides that, it also needs to take the "setup:..." code part into account.
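For instance, assuming each entry is a mapping with an optional `setup` and a mandatory `example` source string (the key names are my guess at the format), the runner would have to execute both in one shared namespace:

```python
def run_entry(entry: dict) -> None:
    """Execute an entry's optional setup code before its example, sharing
    one namespace so the example can see names defined by the setup.
    The "setup"/"example" keys are assumptions about the entry format."""
    namespace: dict = {}
    if "setup" in entry:
        exec(entry["setup"], namespace)
    exec(entry["example"], namespace)

run_entry({
    "setup": "import math",
    "example": "print(math.sqrt(2))",
})
```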
Finally, this is currently only an "opt-in" mechanism. Maybe we want to switch it to "opt-out", such that all examples are run and checked for errors -- unless an entry carries a marker saying that the example is expected not to work.
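Under an opt-out scheme the selection logic could be as simple as the following sketch, where the `expect-failure` marker name is hypothetical:

```python
entries = [
    {"example": "print(1 + 1)"},                                # tested by default
    {"example": "this is not python", "expect-failure": True},  # opted out
]

# Opt-out: run everything unless the entry is marked as expected to fail.
to_test = [e for e in entries if not e.get("expect-failure", False)]
```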