Re-Design of Validation Feature Frontend #148
Comments
Some more thoughts on implementation:

Stage runs should become a separate model class. This makes the new API implementation trivial, since wrappers such as Django REST can directly expose this model part to the validator library. Stage run entries formulate both the job and store the results. They link to their validation script entry and their submission file, which allows further traversing to anything else the validator library might be interested in. If the assignment / course disappears, they are still reachable from the validation script point of view. Submission states are now (in part) computed from the set of stage run states. To make the implementation more sound, all other submission states, such as "grading not finished", could also become pseudo-stages from the implementation point of view.

Running a validation stage really sounds like a Celery task. We would get all the nice things from Celery, such as remote management of workers (which we call executors), automated result serialization, a predefined job protocol, and so on. The downside is that Celery wants a dedicated broker running somewhere.
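The stage-run idea can be sketched in plain Python (a sketch with hypothetical names; the real implementation would presumably be a Django model with foreign keys):

```python
from dataclasses import dataclass
from enum import Enum


class StageState(Enum):
    PENDING = "pending"
    RUNNING = "running"
    PASSED = "passed"
    FAILED = "failed"


@dataclass
class StageRun:
    """One execution of a validation stage for one submission.

    Links back to the validation script and the submission file, so the
    validator library can traverse to related objects even if the
    assignment / course is gone. Field names are assumptions.
    """
    script_id: int          # hypothetical reference to the validation script
    submission_id: int      # hypothetical reference to the submission file
    state: StageState = StageState.PENDING
    result_text: str = ""   # textual result reported by the script


def submission_state(runs: list[StageRun]) -> str:
    """Derive the submission state from the set of stage run states."""
    states = {run.state for run in runs}
    if StageState.FAILED in states:
        return "validation failed"
    if states & {StageState.PENDING, StageState.RUNNING}:
        return "validation in progress"
    return "validation passed"
```

Deriving the submission state from the set of stage run states, as in `submission_state` above, is what makes pseudo-stages like "grading not finished" fit the same mechanism.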
I would prefer the stages approach, since it sounds more flexible to me. (Plus, I can see how proposal 1 could be implemented on top of the stages idea.) Regarding the (performance) results: do you see how this can be implemented so that old setups can be migrated to the new approach?
I actually wanted to do both things - stages and prepared validation scripts as a separate asset. If I get you right, the idea is to support more than screen output as a stage result. For me, this sounds like the future validator library gets some functionality to store string (?) results. You can then view these "result records" in the teacher backend and download them as a text file.
Okay, so a script (= stage) produces text on stdout and may or may not create a result.txt file. The stdout will be shown in the teacher frontend, and the result.txt (if present) can be downloaded (or expanded with an expander button in the UI). Furthermore, there is a "download all result.txt files merged" button in the teacher backend that creates a single text file containing the contents of all the individual text files.
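The "download all result.txt files merged" action could be as simple as concatenation with a per-stage header line (a sketch; the stage names and interface are made up):

```python
def merge_results(stage_results: dict[str, str]) -> str:
    """Concatenate per-stage result.txt contents into one text file,
    with a header line naming the stage each block came from."""
    parts = []
    for stage_name, text in stage_results.items():
        parts.append(f"===== {stage_name} =====")
        parts.append(text.rstrip("\n"))
    return "\n".join(parts) + "\n"
```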
More generally: the result.txt of one stage may even be processed by a later stage, e.g. transformed.
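Such chaining could pass each stage the result text of its predecessor (a hypothetical interface, not the actual validator library API):

```python
from typing import Callable

# A stage takes the previous stage's result text and returns its own.
Stage = Callable[[str], str]


def run_pipeline(stages: list[Stage]) -> str:
    """Run stages in order; each stage may read and transform the
    result.txt content produced by the stage before it."""
    result = ""  # the first stage starts with an empty result
    for stage in stages:
        result = stage(result)
    return result
```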
Regarding the pseudo-stage called "deadline" (stages before "deadline" are visible to students, all afterwards are not):
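Visibility relative to the "deadline" pseudo-stage could be derived purely from position in the ordered stage list (a sketch under that assumption; names are made up):

```python
def student_visible_stages(stages: list[str], marker: str = "deadline") -> list[str]:
    """Stages before the "deadline" pseudo-stage are visible to students;
    the marker itself and everything after it are not."""
    if marker in stages:
        return stages[:stages.index(marker)]
    return list(stages)  # no marker: everything stays visible
```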
Stages topic: Everybody seems to agree that stages are a good idea. The visibility combobox idea is a natural evolution of the current approach and sounds ok to me. There will be additional coding effort to make the email notifications and the status indicators generic, but that is ok. The dynamic determination of user roles (e.g. students being tutors and non-tutors at the same time) already exists in the code, but will be centralized for this.

Validation results topic: For the validation result management, I would not stretch it too much:
All in all, I would like to keep complex cross-validation logic in the validation scripts. The web application steers the validation process blindly, without any assumption about what is done per stage.

Migration topic: Result migration should be possible, if it is really worth the effort. It boils down to some movement of data inside the database on update. The real problem is that the old validators will be non-functional, since they rely on the STDOUT model for reporting their results.
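Steering "blindly" means the web application only launches each stage's script and records the exit code and captured output, with no per-stage logic. A minimal sketch using `subprocess` (the command and timeout are assumptions):

```python
import subprocess


def run_stage(command: list[str], timeout: int = 60) -> tuple[bool, str]:
    """Run one validation stage command; record only success/failure and
    the captured output, without any assumption about what the stage
    actually does internally."""
    proc = subprocess.run(
        command, capture_output=True, text=True, timeout=timeout
    )
    return proc.returncode == 0, proc.stdout
```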
I still don't get what the new result interface for the validation script will look like.
Yes. And yes. Currently, the result reporting mechanisms are fixed inside the executor implementation. The general concept of the validator library (#124) is to move such things into the validation script. If you want the student's screen output as the result, write a validator that captures it. If you want output files of the student as the result, write a validator that checks them and produces the according result text. The latter points towards the question of whether we want binary student results to be transportable to the web frontend. This is currently not supported, and I see tons of reasons why we don't want to add that. Restricting everything to textual results generated by the validation script (!) makes the functionality clear and straightforward.
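A validator in this model does its own checking and reports a purely textual result, along these lines (hypothetical helper, not the actual validator library API):

```python
def validate_output_file(student_output: str, expected: str) -> str:
    """A validation script checks the student's output itself and
    produces the result text; no binary results ever travel to the
    web frontend."""
    if student_output.strip() == expected.strip():
        return "PASS: output matches the expected result"
    return ("FAIL: expected %r, got %r"
            % (expected.strip(), student_output.strip()))
```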
The stages solution should consider #152. |
OpenSubmit went
This is a mess; the end-user documentation now reads like a Lisp programmer's guide.
I would propose to perform a radical re-design as part of the executor re-write. If we break all existing validation scripts anyway, we can also do this heavy shift on the front-end side. Here is the first proposal:

* The validation script itself takes care of things such as calling `make` and reporting the result.
* Validation for an assignment is organized in ordered stages (idea by @aibo21).
* Compilation is no longer a separate backend concept. If you want to do it, make it part of your validation script.

I don't know where the performance results file fits into this - @Feinbube?

@thehappyhippo should get his Windows `nmake` problem solved, because OpenSubmit is no longer calling `make` on its own.