
Evaluation using RRCNet det results #4

Open
Kay1794 opened this issue Jan 18, 2020 · 6 comments
Kay1794 commented Jan 18, 2020

Hello! Thank you for sharing your work.
I did the evaluation using the PointPillars detection results and it aligned with yours. However, when I tried to use the RRC results, I couldn't get the evaluation to run properly.
I was wondering if you have any idea how to modify the code for RRC detections (2D det). I checked the config file for RRC; the det type is still '3D'. I tried changing it to '2D' but the problem persists.

@Kay1794 Kay1794 changed the title Result apply RRCNet Evaluation using RRCNet det results Jan 18, 2020
@ZwwWayne (Owner) commented:

Hi @Kay1794 ,
I do not quite understand your question. Are you hitting bugs, or is it something else?
The config should work, and you need to check TensorBoard to see when the model performs best on the val set.


Kay1794 commented Jan 20, 2020

Hi @ZwwWayne. Sorry for the confusion. The problem is that when I run the experiment "rrc_pfv_40e_subabs_dualadd_C" using the RRC detection results you provided, I get 0 for all evaluation metrics (see figures below).

[screenshots: all evaluation metrics show 0]

I should mention that I used the same logic to modify the config file for the PP results and it worked well, so I suspect there might be a bug in the RRC evaluation part.

@ZwwWayne (Owner) commented:

Hi @Kay1794 ,
This is because the code does not pass the results to py-motmetrics for evaluation.
After the refactoring, the code uses KITTI's evaluation metric rather than py-motmetrics.
You should check the results below the line "Processing Results for KITTI Tracking Benchmark".


Kay1794 commented Jan 23, 2020

Hi @ZwwWayne

Here is a screenshot of the output below "Processing Results for KITTI Tracking Benchmark":
[screenshot of the KITTI tracking benchmark output]

I checked the code and found that if we use a 2D detector, we need to generate the sampled point cloud and save it to the velodyne_reduced folder. Since I didn't see data preparation instructions for this part, I am not sure whether this is the problem.
[screenshot of the relevant code]
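For context, "reducing" a KITTI point cloud usually means keeping only the LiDAR points that project into the camera image (the labeled field of view). Below is a minimal sketch of that step; the function name `reduce_point_cloud` and its argument layout are my own, and it assumes the calibration has already been composed into a single (3, 4) velodyne-to-image projection matrix (P2 @ R_rect @ Tr_velo_to_cam in KITTI terms), which may differ from how the repo's `point_cloud/preprocess.py` organizes it.

```python
import numpy as np

def reduce_point_cloud(points, velo_to_img, image_shape):
    """Keep only LiDAR points that project inside the camera image.

    points      : (N, 4) array of x, y, z, reflectance (KITTI velodyne format)
    velo_to_img : (3, 4) projection matrix mapping homogeneous velodyne
                  coordinates to homogeneous image coordinates
    image_shape : (height, width) of the camera image
    """
    # Homogenize the xyz coordinates and project into the image plane.
    pts_h = np.hstack([points[:, :3], np.ones((len(points), 1))])
    proj = pts_h @ velo_to_img.T          # (N, 3) homogeneous image coords
    depth = proj[:, 2]
    with np.errstate(divide="ignore", invalid="ignore"):
        uv = proj[:, :2] / depth[:, None]  # pixel coordinates (u, v)
    h, w = image_shape
    # Keep points in front of the camera that land inside the image bounds.
    mask = (depth > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
                       & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return points[mask]
```

The result is what would be written out as a `.bin` file in the velodyne_reduced folder.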
Another question: where can I find the model the paper used as the 2D detector (the one you used for the KITTI benchmark)?

Thank you for your time and Happy Chinese New Year!


ZwwWayne commented Jan 28, 2020

  1. This is weird; I need to check it further.
  2. The codebase has been modified to directly use the point cloud data and reduce it in data pre-processing [here](https://github.com/ZwwWayne/mmMOT/blob/master/point_cloud/preprocess.py#L64). This should not be a problem since the new code has been tested.
  3. We use this repo to train a detector on the 3D detection dataset and produce the results on the tracking data. To do that, the tracking data first needs to be converted to the 3D detection format, so you need to modify the scripts in the second.pytorch repo a little bit.
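The format conversion in point 3 comes down to splitting KITTI tracking labels (one file per sequence, each line prefixed with a frame index and a track id) into KITTI object-detection labels (one file per frame, no frame/track columns). A minimal sketch, assuming the standard KITTI label layouts; the function name `tracking_to_object_labels` is my own, not from second.pytorch:

```python
import os

def tracking_to_object_labels(tracking_label_file, out_dir):
    """Split one KITTI tracking label file into per-frame object-format files.

    Tracking lines look like:
        frame track_id type truncated occluded alpha bbox(4) dims(3) loc(3) ry
    Object-detection lines are the same minus the leading frame and track_id.
    """
    os.makedirs(out_dir, exist_ok=True)
    per_frame = {}
    with open(tracking_label_file) as f:
        for line in f:
            fields = line.split()
            if not fields:
                continue
            frame = int(fields[0])
            # Drop the frame index and track id; the remainder already
            # matches the object-detection label format.
            per_frame.setdefault(frame, []).append(" ".join(fields[2:]))
    for frame, lines in per_frame.items():
        with open(os.path.join(out_dir, f"{frame:06d}.txt"), "w") as f:
            f.write("\n".join(lines) + "\n")
```

Note that this names output files by frame index within one sequence; to merge several tracking sequences into a single detection-style dataset you would also need a per-sequence frame offset so the file names stay unique.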

@dmatos2012 commented:

@Kay1794 were you able to solve this issue?
