
Re-Design of Validation Feature Frontend #148

Open
troeger opened this issue Nov 3, 2016 · 10 comments

Comments

@troeger
Owner

troeger commented Nov 3, 2016

OpenSubmit went

  • from a single validation script ...
  • ... to multiple validation scripts
  • ... to multiple validation scripts that may be invisible for students
  • ... to multiple validation scripts that may be invisible and that may be archives
  • ... to visible / invisible validation scripts / archives + compile test
  • ... to visible / invisible validation scripts / archives + compile test + support files archive (v0.6.2)

This is a mess, the end user document now reads like a Lisp programmer's guide.

I would propose to perform a radical re-design as part of the executor re-write. If we break all existing validation scripts anyway, we can also do this heavy shift on the front-end side. Here is the first proposal:

  • Validation scripts are a separate asset in the teacher backend.
    • Teachers can add, modify, or remove such scripts at installation scope.
    • There is a preinstalled catalog of examples, e.g. for just calling make and reporting the result.
    • Tutors no longer upload validation scripts for assignments, they choose them.
    • Validation scripts are still single Python scripts, but they use the validator library (see Validator library #124) to get submission information and report the results.
    • Each validator script is accompanied by an archive of support files. If the script runs, the files are promised to be there in the same directory.
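To make the script-plus-library idea concrete, here is a minimal pure-Python sketch of the contract such a catalogue script might follow. All names (`Job`, `Result`, `validate`) are assumptions for illustration, not the actual validator library API from #124:

```python
class Job:
    """Stand-in for the submission information the validator library
    would hand to the script (hypothetical shape)."""
    def __init__(self, working_dir, submitted_files):
        self.working_dir = working_dir
        self.submitted_files = submitted_files

class Result:
    """Stand-in for the result record the script reports back."""
    def __init__(self, passed, text):
        self.passed = passed
        self.text = text

def validate(job):
    """Example catalogue script: the preinstalled 'just call make'
    example would first require a Makefile in the submission."""
    if "Makefile" not in job.submitted_files:
        return Result(False, "No Makefile found in submission.")
    return Result(True, "Makefile present; ready to build.")
```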

Validation for an assignment is organized in ordered stages (idea by @aibo21):

  • Each stage has a name ('compilation', 'validation', 'plagiarism test', ...).
  • Each stage has a chosen validation script from the catalogue.
  • Each stage has a textual result that is shown in the submission details.
  • There is a predefined pseudo-stage called "deadline".
  • Stages before "deadline" are visible to students; all stages after it are not.
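The deadline rule in the last bullet can be sketched as a small helper (a sketch of the proposed rule only; function and constant names are assumptions):

```python
DEADLINE = "deadline"  # the predefined pseudo-stage from the proposal

def student_visible_stages(ordered_stages):
    """Return the stages students may see: everything strictly before
    the 'deadline' pseudo-stage; everything at or after it is hidden."""
    if DEADLINE in ordered_stages:
        cut = ordered_stages.index(DEADLINE)
        return ordered_stages[:cut]
    # No deadline pseudo-stage configured: all stages are visible.
    return list(ordered_stages)
```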

Compilation is no longer a separate backend concept. If you want to do it, make it part of your validation script.

I don't know where the performance results file fits into this - @Feinbube?

@thehappyhippo should get his Windows nmake problem solved, because OpenSubmit is no longer calling make on its own.

@troeger troeger added this to the Executor Re-Design milestone Nov 3, 2016
@troeger
Owner Author

troeger commented Nov 4, 2016

Some more thoughts on implementation:

Stage runs should become a separate model class. This makes the new API implementation trivial, since wrappers such as Django REST can directly expose this model part to the validator library. Stage run entries both formulate the job and store the results. They link to their validation script entry and their submission file, which allows further traversal to anything else the validator library might be interested in. If the assignment / course disappears, they are still reachable from the validation script point of view.
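As a plain-Python sketch of such a model class (the real implementation would be a Django model; all field names are assumptions):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StageRun:
    """Hypothetical sketch of the proposed stage-run model: one record
    both describes the job and stores its result, and links back to the
    validation script and the submitted file."""
    validation_script_id: int
    submission_file_id: int
    state: str = "pending"            # e.g. pending / running / failed / done
    result_text: Optional[str] = None  # filled in once the run finishes
```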

Submission states are now (in parts) computed from the set of stage run states. To make the implementation more sound, all other submission states such as "grading not finished" could also become pseudo stages from the implementation point of view.
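As a sketch of how such a computation could look (the concrete state names and their precedence are assumptions, not a fixed design):

```python
def submission_state(stage_states):
    """Derive an overall submission state from the set of stage run
    states. Hypothetical rule: any failure dominates, then any
    unfinished run, otherwise the submission counts as validated."""
    if "failed" in stage_states:
        return "failed"
    if "pending" in stage_states or "running" in stage_states:
        return "in progress"
    return "validated"
```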

Running a validation stage sounds very much like a Celery task. We would get all the nice things from Celery, such as remote management of workers (which we call executors), automated result serialization, a predefined job protocol, and so on. The downside is that Celery wants a dedicated broker running somewhere.

@Feinbube
Collaborator

Feinbube commented Nov 6, 2016

I would prefer the stages approach since it sounds more flexible to me. (Plus I can see how proposal 1 could be implemented on top of the stages idea.)

Regarding the (performance) results:
The easiest way to get a generic solution for this would be a "summary file" for all the individual results produced by the run of a script (maybe purging empty result strings). This would also allow teachers to add additional stages and get a fast overview of the results (for various additional checks like syntax checking, plagiarism detection, or the like).

Do you see how this can be implemented so that old setups can be migrated to the new approach?

@troeger
Owner Author

troeger commented Nov 7, 2016

I actually wanted to do both things - stages and prepared validation scripts as separate asset.

If I get you right, the idea is to support more than screen output as a stage result. For me, this sounds like the future validator library gets some functionality to store string (?) results. You can then watch these "result records" in the teacher backend and download them as a text file.

@Feinbube
Collaborator

Feinbube commented Nov 7, 2016

Okay, so a script (= stage) produces text on stdout and may or may not create a result.txt file. The stdout is shown in the teacher frontend, and the result.txt (if present) can be downloaded (or expanded with an expander button in the UI).

Furthermore, there is a "download all result.txt files merged" button in the teacher backend that creates a single text file containing the contents of the individual text files.
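The merge step could look roughly like this (a sketch only; the stage names and separator format are assumptions, and empty results are purged as suggested above):

```python
def merge_results(results):
    """Merge per-stage result texts into one summary file, skipping
    stages that produced no result. `results` maps stage name -> text."""
    parts = []
    for stage, text in results.items():
        if text.strip():  # purge empty result strings
            parts.append(f"=== {stage} ===\n{text.strip()}")
    return "\n\n".join(parts)
```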

@werner-matthias
Collaborator

More generally: the result.txt of one stage may even be processed by a later stage, e.g. transformed.

@Feinbube
Collaborator

Feinbube commented Nov 7, 2016

Regarding the pseudo-stage called "deadline" (stages before "deadline" are visible to students, all afterwards are not):
I don't like this. How about we just add a "visible to students" checkbox for each stage? Or, even better, a combobox with options like:
"visible to admin"
"visible to teachers (and admin)"
"visible to students (and everyone else)"
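These combobox options map naturally onto an ordered enum, so a visibility check becomes a single comparison. A sketch under assumed names (nothing here is the actual OpenSubmit code):

```python
from enum import IntEnum

class Visibility(IntEnum):
    """Hypothetical encoding of the three combobox options, ordered
    from most restricted to most open."""
    ADMIN = 0     # "visible to admin"
    TEACHERS = 1  # "visible to teachers (and admin)"
    STUDENTS = 2  # "visible to students (and everyone else)"

def can_see(stage_visibility, viewer_role):
    """A viewer sees the stage if the stage is at least as open as the
    viewer's role (admin = 0 sees everything)."""
    return viewer_role <= stage_visibility
```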

@troeger
Owner Author

troeger commented Nov 10, 2016

Stages topic

Everybody seems to agree that stages are a good idea.

The visibility combobox idea is a natural evolution of the current approach; sounds OK to me.

There will be additional coding effort to make the email notifications and the status indicators generic, but that is ok. The dynamic determination of user roles (e.g. students being tutors and non-tutors at the same time) already exists in the code, but will be centralized for this.

Validation results topic

For the validation result management, I would not stretch it too much:

  • Validation scripts can send multi-line result text back to the web server.
  • This result text is independent of screen output on the executor side.
  • There is convenience functionality in the validator library to run a program, capture the screen output, and return it as result text in one step.
  • There is also a convenience function in the validator library to access the result text of other validation runs.
  • The teacher backend allows viewing the result text per submission & validator run and downloading it as a file.
  • There is separate functionality to download an archive of all such files per assignment. If you want to concatenate them, do it yourself after download.

All in all, I would like to keep complex cross-validation logic in the validation scripts. The web application steers the validation process blindly, without any assumption of what is done per stage.
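The run-and-capture convenience functionality from the list above could be sketched like this (the helper name and signature are assumptions, not the final validator library API):

```python
import subprocess

def run_and_report(cmd, cwd=None):
    """Hypothetical one-step helper: run a program, capture its screen
    output, and hand it back as result text together with a pass flag."""
    proc = subprocess.run(cmd, cwd=cwd, capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr
```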

Migration topic

Result migration should be possible, if it is really worth the effort. It boils down to some movement of data inside the database on update. The real problem is that the old validators will be non-functional, since they rely on the STDOUT model for reporting their results.

@Feinbube
Collaborator

I still don't get what the new result interface for the validation script will look like.
You are going to ignore the output of the script as well as files it produces, right?

@troeger
Owner Author

troeger commented Nov 11, 2016

Yes. And yes.

Currently, the result reporting mechanisms are fixed inside the executor implementation. The general concept of the validator library (#124) is to move such things into the validation script.

If you want student screen output as result, write a validator that captures it.

If you want output files of the student as result, write a validator that checks them and that produces the according result text.

The latter points towards the question of whether we want binary student results to be transportable to the web frontend. This is currently not supported, and I see tons of reasons why we don't want to add that. Restricting everything to textual results generated by the validation script (!) makes the functionality clear and straightforward.

@troeger
Owner Author

troeger commented Nov 17, 2016

The stages solution should consider #152.

@troeger troeger removed this from the Executor Re-Design milestone Nov 30, 2017