Is py-motmetrics missing any metrics/datasets? #84

Open
cheind opened this issue Feb 4, 2020 · 9 comments

cheind (Owner) commented Feb 4, 2020

Hey,

this is a community question: are you aware of any metrics/datasets that py-motmetrics does not support out of the box, but that you wish it did? If yes, please let us know by citing the related publication.

fguney commented Feb 18, 2020

DETRAC: https://arxiv.org/pdf/1511.04136.pdf

cheind (Owner, Author) commented Jul 28, 2020

@fguney I've skimmed the paper. It does not seem to propose any metrics beyond those py-motmetrics already implements. As far as the file format goes, we now support it via #53.

agirbau commented Feb 7, 2021

The problem with the metrics on UA-DETRAC is that the dataset uses "ignore zones": regions that are not annotated but where many cars can still be detected, producing spurious false positives. I think this is not handled by the current motmetrics, right?

jvlmdr (Collaborator) commented Feb 8, 2021

I think that, at the moment, ignore zones should be handled by the user before calling the toolkit functions.

There is an exception: for the MOT challenge datasets, there is a second script (apps/evaluateTracking.py instead of apps/eval_motchallenge.py) that calls a different function (utils.CLEAR_MOT_M() instead of utils.compare_to_groundtruth()), which uses preprocess.preprocessResult() to remove predictions matched to "ignore" classes in MOT. I think, however, that this is not identical to an ignore zone, because the matching is one-to-one, so each ignore region can only eliminate a single detection, whereas a true ignore zone may contain many.

If this needs to be implemented for DETRAC, you could modify apps/eval_detrac.py to take care of it (and make a PR?).
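
As a rough sketch of what user-side handling could look like (the (x, y, w, h) box format, the axis-aligned (x1, y1, x2, y2) ignore rectangles, and the toy numbers below are assumptions for illustration, not DETRAC specifics):

```python
import numpy as np
import motmetrics as mm

def outside_zones(boxes, zones):
    """True where the centre of an (x, y, w, h) box lies outside every (x1, y1, x2, y2) ignore rectangle."""
    keep = []
    for x, y, w, h in boxes:
        cx, cy = x + w / 2.0, y + h / 2.0
        keep.append(not any(x1 <= cx <= x2 and y1 <= cy <= y2 for x1, y1, x2, y2 in zones))
    return np.array(keep, dtype=bool)

# Toy frame: one annotated car, one hypothesis on it, one hypothesis inside an ignore zone.
gt_ids, gt_boxes = [1], np.array([[10., 10., 20., 20.]])
hyp_ids, hyp_boxes = np.array([1, 2]), np.array([[11., 11., 20., 20.], [100., 100., 20., 20.]])
ignore_zones = [(90., 90., 130., 130.)]

# Drop hypotheses that fall inside an ignore zone *before* accumulating.
keep = outside_zones(hyp_boxes, ignore_zones)
hyp_ids, hyp_boxes = hyp_ids[keep], hyp_boxes[keep]

acc = mm.MOTAccumulator(auto_id=True)
acc.update(gt_ids, hyp_ids, mm.distances.iou_matrix(gt_boxes, hyp_boxes, max_iou=0.5))
print(mm.metrics.create().compute(acc, metrics=['num_false_positives', 'mota'], name='toy'))
```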

agirbau commented Feb 8, 2021

I created a PR to integrate the ignore zones for the UA-DETRAC dataset. The main problem is speed, as it loops over all detections / ground-truth boxes and compares each one to every ignore zone. I think it could be heavily vectorized or parallelized, though; see the sketch below.
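
For what it's worth, the per-detection / per-zone double loop can usually be collapsed into a single broadcasted NumPy check along these lines (an untested sketch, not the code in the PR; it assumes the same (x, y, w, h) boxes and (x1, y1, x2, y2) zone rectangles as above):

```python
import numpy as np

def outside_zones_vectorized(boxes, zones):
    """Vectorised variant: boxes is (N, 4) as (x, y, w, h), zones is (M, 4) as (x1, y1, x2, y2).
    Returns an (N,) boolean mask that is True where a box centre falls outside every zone."""
    boxes = np.asarray(boxes, dtype=float).reshape(-1, 4)
    zones = np.asarray(zones, dtype=float).reshape(-1, 4)
    cx = boxes[:, 0] + boxes[:, 2] / 2.0  # (N,) box centres
    cy = boxes[:, 1] + boxes[:, 3] / 2.0  # (N,)
    in_x = (cx[:, None] >= zones[None, :, 0]) & (cx[:, None] <= zones[None, :, 2])  # (N, M)
    in_y = (cy[:, None] >= zones[None, :, 1]) & (cy[:, None] <= zones[None, :, 3])  # (N, M)
    return ~np.any(in_x & in_y, axis=1)
```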

michih8 commented Oct 24, 2021

How about the AOGM measure for cell tracking? It is used in the Cell Tracking Challenge. https://cbia.fi.muni.cz/software/aogm-measure.html

cheind (Owner, Author) commented Oct 28, 2021

@michih8 any chance you could provide a PR?

ahrnbom commented Nov 22, 2021

HOTA would be nice to include. It's now the default tracking metric in KITTI and the motivations for this metric seem reasonable in general, not just for the KITTI dataset.

The reference implementation of HOTA is here: https://github.com/JonathonLuiten/TrackEval or more specifically here https://github.com/JonathonLuiten/TrackEval/blob/master/trackeval/metrics/hota.py
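
For context, the core quantity (as I read the paper and the TrackEval code) is roughly the following, computed per localization threshold α and then averaged over α ∈ {0.05, 0.10, ..., 0.95}:

```latex
\mathrm{HOTA}_\alpha = \sqrt{\mathrm{DetA}_\alpha \cdot \mathrm{AssA}_\alpha},\qquad
\mathrm{DetA}_\alpha = \frac{|\mathrm{TP}|}{|\mathrm{TP}| + |\mathrm{FN}| + |\mathrm{FP}|},\qquad
\mathrm{AssA}_\alpha = \frac{1}{|\mathrm{TP}|}\sum_{c \in \mathrm{TP}}
  \frac{|\mathrm{TPA}(c)|}{|\mathrm{TPA}(c)| + |\mathrm{FNA}(c)| + |\mathrm{FPA}(c)|}
```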

cheind (Owner, Author) commented Nov 22, 2021

@ahrnbom, yes that seems like a good idea. Could you provide a PR?
