Cross-check results on fmi-standard.org #129
Comments
I do not think the displayed results are absurd, but perhaps they need to be better explained (and the displayed numbers perhaps reconsidered). Exporting and importing tool vendors can upload results for different versions of their tools (which, by the way, is very welcome!). E.g., MapleSim has uploaded importing results for four tool versions (see https://github.com/modelica/fmi-cross-check/tree/master/results/2.0/me/win64/MapleSim), but for each exporting tool, results for only one tool version are reported. Where specifically do you see the problem? I suggest adding documentation to the result table pages such as https://fmi-standard.org/cross-check/fmi2-me-win64/ explaining "what do the displayed numbers mean?"
Thanks @chrbertsch, you are right and I missed that each importing tool is available in different versions. However, that makes the entire table useless. What is the purpose of the table? The table should provide a simple overview of how the tools compare to each other and which tools are compatible in terms of import/export. However, you cannot use the current table to compare the numbers of one tool to any other tool, because the number of uploaded versions differs and is not displayed. Any tool can reach any number by simply uploading the same results for different versions. I propose that the table should only display the results of a certain importing tool version (either the most recent version, or all versions separately). Today, the provided information is misleading and useless.
The currently provided information is not useless, but it has to be better explained. Changing the result display to show only the latest tool versions has been discussed in #2, but this has not been realized yet. Perhaps we could address this with the help of the Backoffice (@GallLeo).
@chrbertsch I am not arguing against uploading results for different tool versions. I just state that the results cannot be interpreted given the provided information. That makes the displayed results indeed useless and, even worse, misleading. If we can agree on that, then we can go ahead in a constructive attempt to improve the display of the results.
@lochel: Can we keep the tone of this discussion less heated and more civil, please? E.g. the heading of this issue is very close to offensive to those who have worked hard to get XC to where it currently is, whatever flaws it might still have.
I don’t quite know what you mean. This is a discussion of the issue and nothing else. I am very concerned about the presented results and the process which maintains the cross-check. I raised several issues both publicly and privately to you and @t-sommer but got basically no response on the addressed issues. Regarding the title: @andreas-junghanns, don’t you think that the presented results are indeed very concerning and do not reflect the aim of the cross-check project? I would like to have a discussion on the topic and to find a constructive way forward to improve the current project status and to make the cross-check a fair and valuable tool for all participants.
I opened two pull requests in order to address the issue: the first one filters out non-compliant tests, which until now are still listed in the results. The other one breaks down the importing tools by version in order to provide a good overview of the results. This way, you can easily follow the progress of all the different tools, and you can compare the tools with each other.
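To make the first change more concrete, here is a minimal sketch of the kind of compliance filter meant here. It is not the code from the pull request, and the expected file names (`passed`, `<model>_cc.csv`) are my assumptions about the result-directory layout, not the official rules.

```python
import os

def is_compliant(result_dir: str, model_name: str) -> bool:
    """Rough sketch of a compliance check for one cross-check result directory.

    Assumption (not the official rule set): a result only counts as passed
    if the directory contains a 'passed' flag file and a '<model_name>_cc.csv'
    with the computed results.
    """
    entries = set(os.listdir(result_dir))
    return "passed" in entries and f"{model_name}_cc.csv" in entries
```

A filter along these lines, applied to every result directory before the summary tables are generated, would keep non-compliant submissions out of the displayed counts.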
This is just to illustrate the changes I propose. I selected Dymola as an example, because it supports most of the cross-check and provides results for different versions. The current homepage shows the following numbers, as you can see from my initial post:
In contrast, the table with my changes would provide much more useful information:
For example, the entry Dymola/Dymola did not make much sense in the first table: it shows 49 tests, even though the cross-check only contains 32 valid examples. The second table actually shows the same 49 tests, but broken down by the respective Dymola versions. That way, one can easily see what is supported, what has improved, and how things compare to other tools.
I am very glad to see that #132 was merged. More than 19% of the green badges had been wrongly awarded and have now vanished from the tool page, because the numbers of counted tests in the detailed reports dropped considerably.
Hi all, both suggestions discussed so far are OK for me (i.e. list all individual versions or only the latest one). Showing only the most recent version in the basic table probably makes the most sense for users who want a fair comparison of tool capabilities. Also, since tools are generally expected to improve over time, the number of passed tests will increase with newer versions, so the comparison to other tools will remain fair. Feature request: it would be nice to have an additional column showing the number of versions of the software for which results were submitted (a rough sketch of how such a count could be derived is shown below). Interested users could then click on an underlying link and get a detailed view of the table only for the versions of this software. That would IMHO be the best compromise.
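As a rough illustration of where that version count could come from, the sketch below derives it from a local clone of the repository. The directory layout `results/2.0/me/win64/<importing_tool>/<importing_version>/...` is an assumption on my side.

```python
import os

# Assumed local clone of https://github.com/modelica/fmi-cross-check
results_root = os.path.join("fmi-cross-check", "results", "2.0", "me", "win64")

for tool in sorted(os.listdir(results_root)):
    tool_dir = os.path.join(results_root, tool)
    if not os.path.isdir(tool_dir):
        continue
    # each sub-directory of an importing tool is assumed to be one submitted tool version
    versions = [v for v in os.listdir(tool_dir)
                if os.path.isdir(os.path.join(tool_dir, v))]
    print(f"{tool}: results submitted for {len(versions)} version(s)")
```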
I really hope that I am getting this one completely wrong:
I fetched a simple overview of the number of available me tests for win64 from the repository, and these are the numbers that I got:
Please compare those numbers to the numbers presented on https://fmi-standard.org/cross-check/fmi2-me-win64/. In many cases there are apparently more verified tests than tests in the repository. What did I get wrong here?
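For reference, a count like the one described above could be reproduced roughly as follows. This is only a sketch: it assumes the exported test FMUs live under `fmus/2.0/me/win64/<exporting_tool>/<tool_version>/<model>/` in a local clone of the repository.

```python
import os
from collections import Counter

# Assumed local clone of https://github.com/modelica/fmi-cross-check
fmu_root = os.path.join("fmi-cross-check", "fmus", "2.0", "me", "win64")

counts = Counter()
for tool in sorted(os.listdir(fmu_root)):
    tool_dir = os.path.join(fmu_root, tool)
    if not os.path.isdir(tool_dir):
        continue
    for version in os.listdir(tool_dir):
        version_dir = os.path.join(tool_dir, version)
        if not os.path.isdir(version_dir):
            continue
        # every sub-directory of a tool version is assumed to hold one exported test FMU
        counts[tool] += sum(
            os.path.isdir(os.path.join(version_dir, model))
            for model in os.listdir(version_dir)
        )

for tool, n in sorted(counts.items()):
    print(f"{tool}: {n} ME/win64 test FMUs in the repository")
```

Comparing these per-tool counts against the numbers on the result page makes the discrepancy easy to spot.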