Why do I get negative MOTA? #3

Open
SamGhK opened this issue Jan 10, 2020 · 18 comments

@SamGhK

SamGhK commented Jan 10, 2020

Hello,

Many thanks for your work and for making your code public. Are my results correct? I have not changed anything: I cloned the repository, put the data under the right folders, and executed "pp_pv_40e_mul_C/eval.sh".

I look forward to hearing from you.

Best regards

==========================tracking evaluation summary===========================
Multiple Object Tracking Accuracy (MOTA) -0.044093
Multiple Object Tracking Precision (MOTP) 0.854040
Multiple Object Tracking Accuracy (MOTAL) 0.789259
Multiple Object Detection Accuracy (MODA) 0.789616
Multiple Object Detection Precision (MODP) 0.890653

Recall 0.891036
Precision 0.922639
F1 0.906562
False Alarm Rate 0.236567

Mostly Tracked 0.773148
Partly Tracked 0.203704
Mostly Lost 0.023148

True Positives 11342
Ignored True Positives 1616
False Positives 951
False Negatives 1387
Ignored False Negatives 1192
Missed Targets 1387
ID-switches 9265
Fragmentations 9295

Ground Truth Objects (Total) 13921
Ignored Ground Truth Objects 2808
Ground Truth Trajectories 240

Tracker Objects (Total) 13168
Ignored Tracker Objects 875
Tracker Trajectories 13130
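(For reference, assuming the standard CLEAR MOT definition used by the KITTI devkit, MOTA = 1 - (FN + FP + IDS) / GT, where GT counts only non-ignored ground truth: 11342 TP - 1616 ignored TP + 1387 FN = 11113. Plugging in the numbers above gives 1 - (1387 + 951 + 9265) / 11113 ≈ -0.044, so the negative MOTA is driven almost entirely by the 9265 ID switches.)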

@ZwwWayne
Owner

Hi @SamGhK ,
Thanks for your feedback. This is unusual, because all the metrics except MOTA seem to be normal.
Can you provide more details about your running environment, especially the version of motmetrics? That would help with debugging.

@SamGhK
Author

SamGhK commented Jan 10, 2020

Many thanks for your fast response. The running environment is as follows:
I am not using a conda environment or srun. The required packages were installed via:
pip3 install -r requirements.txt
pip3 install numba scikit-image scipy pillow
OS: Ubuntu 18.04 LTS
Python: 3.6.9
torch: 1.3.1 (built from source)
torchvision: 0.4.2
cuda: 10.2
cuDNN: 7.6.5
motmetrics: 1.1.3
cython: 0.29.14
numba: 0.47.0
opencv: 3.4.8
pyproj: 2.4.2.post1
ortools: 7.4.7247
munkres: 1.1.2

[screenshot: folder structure]

Looking forward to hearing from you.

Best regards

@ZwwWayne
Owner

Hi @SamGhK ,
I evaluated the model again and got the same normal results as before. I suspected the problem was the version of pyproj, because I ran into a pyproj problem before the code release, but now I am using the same pyproj package as you.
For now, I suggest you follow the conda installation process and try again; the conda installation should guarantee success. In the meantime, I will also try to find the problem.

@SamGhK
Author

SamGhK commented Jan 10, 2020

Hello @ZwwWayne,

Thanks for your support. Although I don't like conda, I will follow the conda installation. I have some other questions: how can I force the model not to use the LiDAR module, i.e. use only 2D images? I am trying to replace the VGG-Net with a DenseNet for better 2D bounding box estimation. Is it possible to do that? I have seen that the model supports different ResNets, so I had the idea that it should be possible to replace the VGG-Net as well. Thanks in advance.

Looking forward to hearing from you.

Best regards

@ZwwWayne
Owner

ZwwWayne commented Jan 10, 2020

It is possible to do that.
You need to change point_len: 512 to point_len: 0 to avoid using the LiDAR modules.
Then set score_fusion_arch: ~ and test_mode: 0 to avoid using the fusion module; a sketch follows below.
You might also need to check score_fusion_arch where fusion_module is set in this line.
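A minimal sketch of the relevant config entries (the key names are the ones quoted in this thread; the exact nesting in experiments/pp_pv_40e_mul_C/config.yaml may differ):

    # image-only: disable the LiDAR branch and the fusion module
    appear_len: 512        # keep the image (appearance) features
    point_len: 0           # was 512; 0 disables the LiDAR modules
    score_fusion_arch: ~   # no score fusion architecture
    test_mode: 0           # single-modality mode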

@ZwwWayne
Owner

Hi @SamGhK ,
The eval results of pp_pv_40e_mul_C should be as follows:
[screenshot: expected evaluation results]
You can check them against your evaluation results.

@SamGhK
Author

SamGhK commented Jan 13, 2020

Hi,
I checked your model with a conda env.

Conda installation:

    curl -o ~/miniconda.sh -O https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
    chmod +x ~/miniconda.sh
    ~/miniconda.sh -b -p /opt/conda
    rm ~/miniconda.sh

Dependencies:

    conda update -n base -c defaults conda
    conda install pytorch torchvision -c pytorch
    conda install numba cython
    conda install -c conda-forge opencv
    pip install motmetrics PyYAML easydict munkres ortools pyproj
    pip uninstall -y pillow && pip install pillow==5.2.0

I am no longer getting negative results, as follows:
Eval after training:
[screenshot: eval after training in conda env]

Eval:
[screenshot: evaluation in conda env]

But my MOTA result does not match yours.
I executed experiments/pp_pv_40e_mul_C/eval.sh and noticed the following at the start of the process:

=> no checkpoint found at '.data/pretrain_models/pp_pv_40e_mul_C-gpu.pth'
Building dataset using dets file ./data/pretrain_models/pp_train_dets.pkl
Detect [ 16258] cars in [3365/3975] images
Add [0] cars in [0/3975] images
Building dataset using dets file ./data/pretrain_models/pp_val_dets.pkl
Detect [ 13170] cars in [3475/3945] images
Add [0] cars in [0/3945] images
[2020-01-13 17:33:34,592][eval_seq.py][line: 64][ INFO] args: Namespace(config='experiments/pp_pv_40e_mul_C/config.yaml', evaluate=False, load_path='.data/pretrain_models/pp_pv_40e_mul_C-gpu.pth', memory=False, recover=False, result_path='experiments/pp_pv_40e_mul_C/results', result_sha='all')
[2020-01-13 17:33:34,593][eval_seq.py][line: 65][ INFO] config: {'augmentation': {'input_size': 224, 'test_resize': 224},

Do you have any suggestions as to why I am getting worse results?

Best regards and thanks in advance,
Sam

@SamGhK
Author

SamGhK commented Jan 13, 2020

Hi @ZwwWayne ,
I have an update on my problem. I debugged my env and landed in box_np_ops.py at line 725, in the function
def corner_to_surfaces_3d_jit(corners):

I changed
@numba.njit(nopython=True) to @numba.njit(parallel=True, fastmath=True)
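A self-contained sketch of that change, with a toy body standing in for the real corner_to_surfaces_3d_jit (illustrative only, not the repo's actual code):

    import numba
    import numpy as np

    # original decorator (as quoted above): @numba.njit(nopython=True)
    # njit already implies nopython mode, so the extra flag is redundant and,
    # depending on the numba version, is warned about or rejected.

    @numba.njit(parallel=True, fastmath=True)
    def scale_corners(corners):
        # toy stand-in: doubles every corner coordinate in parallel
        out = np.empty_like(corners)
        for i in numba.prange(corners.shape[0]):
            out[i] = corners[i] * 2.0  # fastmath may slightly change float results
        return out

    print(scale_corners(np.ones((8, 4, 3))))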

Now, in my original env (no conda) I am getting positive results, but I still can't reproduce your results.

[screenshot: eval results in updated env, no conda]

But I can't figure out what is not functioning correctly. I would appreciate any feedback or suggestions.

Look forward to it.

Best regards,
Sam

@ZwwWayne
Owner

ZwwWayne commented Jan 14, 2020

=> no checkpoint found at '.data/pretrain_models/pp_pv_40e_mul_C-gpu.pth' in the log means the model was not loaded successfully.
You might need to check whether the path of the pre-trained model is correct.
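A quick sanity check before launching eval.sh (the corrected path below is an assumption, based on the dets-file paths in the same log, which start with './data/'):

    ls -l ./data/pretrain_models/pp_pv_40e_mul_C-gpu.pth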

@SamGhK
Author

SamGhK commented Jan 14, 2020

Hi @ZwwWayne ,

Many thanks for your feedback. You were right, it was because of this line:

--load-path=./pretrain_models/pp_pv_40e_mul_C-gpu.pth \

I removed the dot from the beginning of the pretrain_models path and it worked. Now I get the same results as yours. I guess the problem was:

  1. Not being able to load the pretrained model
  2. The numba nopython mode on the line decorated with @numba.jit(nopython=True)

[screenshot: corrected eval results, no conda]
I became suspicious after I decided to train the model myself and got the following results during training:
[screenshot: eval results during training, no conda]

But then it got frustrating again, because my training was interrupted by the following error:
[screenshot: training cancelled with error]

Do you have any idea what the problem could be that interrupted the training at the 87200th iteration?

I look forward to hearing from you.

Best regards,
Sam

@ZwwWayne
Owner

  1. The evaluation at iteration 83875 seems to be fine.
  2. It seems id_offset and dets_out[i]['id'] have different types; you can print them out and check what happened (see the sketch below).
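A minimal self-contained sketch of the kind of type mismatch in point 2 and how to check it (dets_out, id_offset, and the int64 cast are illustrative assumptions, not the repo's actual code):

    import numpy as np

    # illustrative stand-ins for the real variables near the failing line
    dets_out = [{'id': np.array([0, 1, 2], dtype=np.float64)}]
    id_offset = 3  # a plain python int

    for i in range(len(dets_out)):
        val = dets_out[i]['id']
        # print both operands' types before the failing operation
        print(type(id_offset), type(val), getattr(val, 'dtype', None))
        # if the types differ, an explicit cast usually resolves it:
        dets_out[i]['id'] = val.astype(np.int64) + id_offset

    print(dets_out[0]['id'])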

@simaker

simaker commented Feb 26, 2020

It is possible to do that.
You need to change point_len: 512 to point_len: 0 to avoid using the LiDAR modules.
Then set score_fusion_arch: ~ and test_mode: 0 to avoid using the fusion module.
You might also need to check score_fusion_arch where fusion_module is set in this line.


And which steps are necessary to use LiDAR only?

@ZwwWayne
Owner

Keep point_len: 512 and change appear_len: 512 to appear_len: 0.
Then set score_fusion_arch: ~ and test_mode: 0 to avoid using the fusion module; see the sketch below.
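As a sketch, the corresponding entries (same caveat as before about the exact nesting in config.yaml):

    # LiDAR-only: disable the image branch and the fusion module
    appear_len: 0          # was 512; 0 disables the image (appearance) branch
    point_len: 512         # keep the LiDAR features
    score_fusion_arch: ~   # no score fusion architecture
    test_mode: 0           # single-modality mode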

@simaker

simaker commented Feb 27, 2020

But why do I have to set test_mode: 0 and not 1?
In the config it says:
test_mode: 1 #0:image;1:LiDAR;2:fusion

@ZwwWayne
Owner

Because when only using a single modality, the batch size dimension is always 1.
Using 0, 1, 2 for different modalities is only applicable to fusion methods.

@simaker

simaker commented Feb 27, 2020

Okay, thank you. And how do I have to change this line? I don't know what you mean by

check the score_fusion_arch to set fusion_module

@llllliu1

llllliu1 commented Dec 2, 2020

[quoting @SamGhK's environment details and folder-structure screenshot from above]
Hello, excuse me. I am not clear about which directories should be under kitti_t_o/. Could you please tell me? Thank you!

@ghost

ghost commented Feb 4, 2021

You need to download all the data from the KITTI Tracking Benchmark and unzip it as follows (a sketch of the resulting layout follows the list):

Download left color images of tracking data set (15 GB) and unzip it into the 'image_02' folder
Download Velodyne point clouds, if you want to use laser information (35 GB), and unzip it into the 'velodyne' folder
Download GPS/IMU data, if you want to use map information (8 MB), and unzip it into the 'oxts' folder
Download camera calibration matrices of tracking data set (1 MB) and unzip it into the 'calib' folder
Download training labels of tracking data set (9 MB) and unzip it into the 'label_02' folder
Download L-SVM reference detections for training and unzip it into the 'lsvm' folder
Download Regionlet reference detections for training and unzip it into the 'regionlets' folder
Download tracking development kit (1 MB) and unzip it into the 'devkit' folder
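Assuming the standard KITTI tracking layout, the resulting kitti_t_o/ folder would then look roughly like this (the training/testing split and the exact folder placement are assumptions, not taken from this repo):

    kitti_t_o/
    ├── training/
    │   ├── image_02/
    │   ├── velodyne/
    │   ├── oxts/
    │   ├── calib/
    │   ├── label_02/
    │   ├── lsvm/
    │   └── regionlets/
    ├── testing/
    │   ├── image_02/
    │   ├── velodyne/
    │   ├── oxts/
    │   └── calib/
    └── devkit/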
