Cross-check results on fmi-standard.org #129

lochel opened this issue Jul 15, 2021 · 10 comments

@lochel (Member) commented Jul 15, 2021

I really hope that I am getting this one completely wrong:

I fetched a simple overview of the number of available ME tests for win64 from the repository, and these are the numbers I got:

| Exporting tool | # provided tests | # non-compliant tests |
| --- | ---: | ---: |
| CATIA | 11 | 2 |
| DS_FMU_Export_from_Simulink | 19 | 10 |
| Dymola | 34 | 2 |
| FMIToolbox_MATLAB | 4 | 0 |
| FMUSDK | 8 | 0 |
| MapleSim | 22 | 14 |
| Test-FMUs | 12 | 4 |
| ... | ... | ... |

Now compare those numbers to the numbers presented on https://fmi-standard.org/cross-check/fmi2-me-win64/. In many cases there are apparently more verified tests than tests in the repository. What did I get wrong here?
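For reference, a minimal sketch of how such an overview could be fetched. The directory layout (fmus/2.0/me/win64/&lt;exporting_tool&gt;/&lt;tool_version&gt;/&lt;model&gt;/) and the "notCompliantWithLatestRules" marker file are my assumptions about the fmi-cross-check repository, not code from it:

```python
# Hypothetical sketch: count provided and non-compliant test FMUs per
# exporting tool. Assumed layout:
#   fmus/2.0/me/win64/<exporting_tool>/<tool_version>/<model>/
# with a "notCompliantWithLatestRules" marker file for rejected FMUs.
from collections import Counter
from pathlib import Path

provided, non_compliant = Counter(), Counter()
for model_dir in Path("fmus/2.0/me/win64").glob("*/*/*"):
    if not model_dir.is_dir():
        continue
    exporter = model_dir.parts[-3]  # exporting tool name
    provided[exporter] += 1
    if (model_dir / "notCompliantWithLatestRules").exists():
        non_compliant[exporter] += 1

for exporter in sorted(provided):
    print(exporter, provided[exporter], non_compliant[exporter])
```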

[screenshot of the cross-check results table on fmi-standard.org]

@chrbertsch (Contributor)

I do not think the displayed results are absurd, but they should perhaps be better explained (and the displayed numbers perhaps reconsidered).

Exporting and importing tool vendors can upload results for different versions of their tools (which, by the way, is very welcome!).

E.g., MapleSim has uploaded importing results for four tool versions (https://github.com/modelica/fmi-cross-check/tree/master/results/2.0/me/win64/MapleSim), while for each exporting tool, results for only one tool version are reported.

Where specifically do you see the problem?
The scripts that generate the results are open for inspection (https://github.com/modelica/fmi-cross-check/blob/master/result_tables.py), and constructive suggestions for improvements are welcome.
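To illustrate the effect being discussed, here is a minimal sketch, not the actual result_tables.py logic: if the aggregation sums "passed" results over all importing-tool versions, a single cell can exceed the number of test FMUs in the repository. The directory layout below is my reading of the fmi-cross-check repository and may differ from what the script actually does:

```python
# Minimal sketch (NOT the actual result_tables.py): count "passed"
# results per (importing tool, exporting tool), summed over ALL tool
# versions. Assumed layout:
#   results/2.0/me/win64/<importing_tool>/<importing_version>/
#     <exporting_tool>/<exporting_version>/<model>/passed
from collections import Counter
from pathlib import Path

def count_passed(root: Path) -> Counter:
    counts = Counter()
    for marker in root.glob("*/*/*/*/*/passed"):
        importer, _, exporter, _, _ = marker.parts[-6:-1]
        # Keyed only by (importer, exporter): results for several
        # importing-tool versions pile up in one cell, so the count can
        # exceed the number of test FMUs in the repository.
        counts[(importer, exporter)] += 1
    return counts

table = count_passed(Path("results/2.0/me/win64"))
```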

I suggest adding documentation to the result table pages such as https://fmi-standard.org/cross-check/fmi2-me-win64/ explaining what the displayed numbers mean.

@lochel (Member, Author) commented Jul 16, 2021

Thanks @chrbertsch, you are right: I missed that each importing tool is available in different versions. However, that makes the entire table useless.

What is the purpose of the table? It should provide a simple overview of how the tools compare to each other and which tools are compatible in terms of import/export. However, you cannot use the current table to compare the numbers of one tool to those of any other tool, because the number of uploaded versions differs between tools and is not displayed. Any tool can reach any number by simply uploading the same results for different versions.

I propose that the table only display the results of a specific importing tool version (either the most recent version, or all versions separately). As it stands, the provided information is misleading and useless.
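A minimal sketch of that proposal, under the same hypothetical directory layout as above (again, not actual result_tables.py code): keying the counts by importing tool and version yields one row per version instead of one inflated row per tool.

```python
# Sketch of the proposed per-version breakdown (hypothetical helper,
# same assumed layout as the sketch above): key counts by importing
# tool AND version, so uploading results for many versions yields one
# row per version instead of one inflated cell.
from collections import Counter
from pathlib import Path

def count_passed_per_version(root: Path) -> Counter:
    counts = Counter()
    for marker in root.glob("*/*/*/*/*/passed"):
        importer, importer_version, exporter, _, _ = marker.parts[-6:-1]
        counts[(importer, importer_version, exporter)] += 1
    return counts

# "Most recent version only" would then be a simple post-processing
# step, e.g. keeping only the newest importer_version per tool.
```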

@chrbertsch (Contributor)

The currently provided information is not useless, but it has to be better explained.
It is very beneficial if tool vendors upload results for different (and the latest) versions of their tools, and this should be honoured and reflected.

Changing the result display to only the latest tool versions has been discussed in #2, but has not been realized yet. Perhaps we could address this with the help of the Backoffice (@GallLeo).

@lochel (Member, Author) commented Jul 16, 2021

@chrbertsch I am not arguing against uploading results for different tool versions. I just state that the results cannot be interpreted given the provided information. That makes the displayed results indeed useless and, even worse, misleading.

If we can agree on that, then we can move forward constructively to improve how the results are displayed.

@andreas-junghanns (Contributor)

@lochel: Can we keep the tone of this discussion less heated and more civil, please? E.g. the heading of this issue is very close to offensive to those who have worked hard to get the cross-check (XC) to where it currently is, whatever flaws it might still have.

@lochel (Member, Author) commented Jul 16, 2021

I don’t quite know what you mean. This is a discussion of the issue and nothing else. I am very concerned about the presented results and the process that maintains the cross-check. I raised several issues, both publicly and privately, to you and @t-sommer, but got basically no response on the points I addressed.

Regarding the title: @andreas-junghanns, don't you think that the presented results are indeed very concerning and do not reflect the aim of the cross-check project? I would like to have a discussion on the topic.

I would like to find a constructive way forward to improve the current project status and to make the cross-check a fair and valuable tool for all participants.

@lochel changed the title from "Absurd cross-check results on fmi-standard.org 😱" to "Cross-check results on fmi-standard.org" on Jul 16, 2021
@lochel (Member, Author) commented Jul 16, 2021

I opened two pull requests to address the issue: the first filters out non-compliant tests, which until now were still listed in the results; the other breaks down the importing tools by version to provide a good overview of the results. This way, you can easily follow the progress of all the different tools and compare the tools with each other.

@lochel (Member, Author) commented Jul 16, 2021

This is just to illustrate the changes I propose. I selected Dymola as an example because it supports most of the cross-check and provides results for different versions.

The current homepage shows the following numbers, as you can see from my initial post:

| Importing tool | CATIA | DS - FMU Export from Simulink | Dymola | FMI Toolbox for MATLAB/Simulink | FMUSDK | MapleSim | Test-FMUs |
| --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| Dymola | 3 | 24 | 49 | 3 | 24 | 0 | 0 |

The table with my changes, in contrast, would provide much more useful information:

| Importing tool | CATIA | DS - FMU Export from Simulink | Dymola | FMI Toolbox for MATLAB/Simulink | FMUSDK | MapleSim | Test-FMUs |
| --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| Dymola (2015FD01) | 0 | 3 | 3 | 0 | 3 | 0 | 0 |
| Dymola (2016) | 0 | 5 | 9 | 0 | 3 | 0 | 0 |
| Dymola (2016FD01) | 0 | 7 | 15 | 0 | 6 | 0 | 0 |
| Dymola (2017) | 3 | 9 | 22 | 3 | 6 | 0 | 0 |

For example, the entry Dymola/Dymola didn't make much sense in the first table: it shows 49 tests, even though the cross-check contains only 32 valid examples. The second table shows the same 49 tests, but split across the respective Dymola versions. That way, one can easily see what is supported, what got improved, and how things compare to other tools.

@lochel (Member, Author) commented Jul 23, 2021

I am very glad to see that #132 was merged. More than 19% of the green badges were wrongly awarded and have now vanished from the tool page, because the numbers of counted tests from the detailed reports dropped considerably:

```
fmi2-me-win64:   14% wrongly counted results (previously counted:  730, actually valid:  640)
fmi2-me-linux64: 23% wrongly counted results (previously counted:   21, actually valid:   17)
fmi2-cs-win64:   17% wrongly counted results (previously counted: 1258, actually valid: 1073)
fmi2-cs-linux64: 26% wrongly counted results (previously counted:   82, actually valid:   65)
...
```
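As a side note (my reading of the figures, not stated explicitly above): the percentages appear to be the wrongly counted results relative to the corrected totals, truncated to whole percent, e.g. (730 − 640) / 640 ≈ 14%. A quick check:

```python
# Hypothetical re-derivation of the percentages above: wrongly counted
# results relative to the corrected (valid) totals, truncated to whole
# percent like the listed figures.
for name, previous, valid in [
    ("fmi2-me-win64",   730,  640),
    ("fmi2-me-linux64",  21,   17),
    ("fmi2-cs-win64",  1258, 1073),
    ("fmi2-cs-linux64",  82,   65),
]:
    pct = (previous - valid) * 100 // valid  # integer truncation
    print(f"{name}: {pct}% wrongly counted results")
# Prints 14%, 23%, 17%, 26% -- matching the listed values.
```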

@ghorwin (Contributor) commented Jul 23, 2021

Hi all,
it's been a while since I've worked on this issue, so please excuse my late comment on the matter. If I recall correctly, I raised pretty much the same concerns with Torsten some 2-3 years ago. In my opinion, merging "passed" counts from different versions of a software is indeed misleading. While I see Torsten's argument that actively participating tool vendors who frequently update their results should be rewarded in some way, this should not taint the message of the comparison table.

Both suggestions discussed so far are OK with me (i.e. list all individual versions, or only the latest one). Showing only the most recent version in the basic table probably makes the most sense for users who want a fair comparison of tool capabilities. Also, since tools are generally expected to improve over time, the number of passed tests will increase with newer versions, so the comparison to other tools will remain fair.

Feature request: it would be nice to have an additional column showing the number of software versions for which results were submitted. Interested users could then click an underlying link and get a detailed view of the table restricted to the versions of that software. That would IMHO be the best compromise.
