
Issue in reproducing results on Semantic 3D dataset #123

Open
saba155 opened this issue May 10, 2019 · 21 comments

@saba155

saba155 commented May 10, 2019

I have tried several times to reproduce the results published in the paper for Semantic3D, but have not been able to. My results lag far behind the original ones. I followed all the steps mentioned in the README and uploaded the labels to the Semantic3D evaluation site.

@loicland
Owner

Hi,

Are you using our trained model or training from scratch? What results are you obtaining? On the test set or the validation set?

@saba155
Author

saba155 commented May 10, 2019

I am training from scratch and using the full test set.

@loicland
Owner

So you mean that the server-side evaluation gives you bad results? Can you share them here?

What happens when you use the validation set for testing, do you get similar results?

@saba155
Author

saba155 commented May 10, 2019

Thank you for the prompt reply. I am training from scratch and using the full test set. I achieved 13% overall accuracy, which is far too low. I followed the steps below; kindly tell me if I am missing something:

  • CUDA_VISIBLE_DEVICES=0 python learning/main.py --dataset sema3d --SEMA3D_PATH $SEMA3D_DIR --db_test_name testfull --db_train_name trainval --epochs -1 --lr_steps '[350, 400, 450]' --test_nth_epoch 100 --model_config 'gru_10,f_8' --ptn_nfeat_stn 11 --nworkers 2 --odir "results/sema3d/trainval_best"

  • python partition/write_Semantic3d.py --SEMA3D_PATH $SEMA3D_DIR --odir "results/sema3d/trainval_best" --db_test_name testred

  • uploaded the generated labels to the Semantic3D evaluation tool.

@loicland
Owner

Can you try to visualize the results to see what's wrong? For example with:

python partition/visualize.py --dataset sema3d --ROOT_PATH $SEMA3D_DIR --res_file 'results/sema3d/trainval_best/prediction_testred' --file_path 'test_reduced/MarketplaceFeldkirch_Station4' --output_type ifprs

and then load the resulting .ply files in MeshLab or CloudCompare.
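Before opening the files in a viewer, a quick header check can confirm the prediction cloud has the expected number of points. A minimal sketch (hypothetical helper; assumes a well-formed .ply header):

```python
def ply_vertex_count(path):
    """Parse a .ply header and return the declared vertex count,
    e.g. to check that a prediction cloud covers the whole scene.
    Works for ASCII and binary files since the header is always text."""
    with open(path, "rb") as f:
        for raw in f:
            line = raw.decode("ascii", "ignore").strip()
            if line.startswith("element vertex"):
                return int(line.split()[-1])
            if line == "end_header":
                break
    return None
```
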

@saba155
Author

saba155 commented May 10, 2019

Visualization seems fine. I am uploading a snapshot of the bird fountain station1 point cloud here.
[screenshot: snapshot00]

@loicland
Owner

OK, that's weird; that is certainly not 13%. Check the consistency between the size of the data files and the number of lines in the .labels files. Check the labels as well to make sure they are consistent with the Semantic3D class typology. In particular, the labels should start at 1, not 0.

Are you talking about the reduced test set by the way?

Also, try running the code with the trained models and compare the .labels files directly.
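The checks above can be sketched as a small script; a minimal version, assuming one integer label per line and the standard 8-class Semantic3D typology (function and paths are hypothetical):

```python
from collections import Counter

def check_labels(labels_path, num_points=None, num_classes=8):
    """Sanity-check a Semantic3D .labels file: one integer per line,
    values in 1..num_classes (0 is reserved for unannotated points)."""
    counts = Counter()
    n = 0
    with open(labels_path) as f:
        for n, line in enumerate(f, 1):
            counts[int(line.split()[0])] += 1
    if num_points is not None and n != num_points:
        print("line count mismatch: %d labels vs %d points" % (n, num_points))
    if counts.get(0):
        print("%d points labelled 0 (reserved for unannotated)" % counts[0])
    out_of_range = sorted(c for c in counts if c < 0 or c > num_classes)
    if out_of_range:
        print("labels outside 0..%d: %s" % (num_classes, out_of_range))
    return counts
```

Running it on a prediction file immediately reveals a 0-based labelling or a line-count mismatch with the raw point cloud.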

@saba155
Author

saba155 commented May 10, 2019

I am using the full test set. My labels start with different values, such as 4, 6, etc., and the files have only one column. The total number of lines is the same.
[screenshot of the .labels file]

@loicland
Owner

Could you please post here the first 50 or so lines of a .labels file of your choice (along with its name)?

I will compare it to mine (but not before Monday, I'm afraid).

@saba155
Author

saba155 commented May 10, 2019

Thanks a lot, no problem, take your time. I am uploading the first 200 lines of sg27_station3.
sg27_3.labels.txt

@saba155
Author

saba155 commented May 14, 2019

Did you manage to compare the .labels files?

@loicland
Owner

Hi,
I have not had time yet, as the upsampling of labels takes a long time.
In case you have it handy, could you post the labels from castleblatten_station1, as this one has finished for me?

Have you tried using our pretrained model by the way to compare the labels?

@saba155
Author

saba155 commented May 14, 2019

No, I have not tried the pre-trained model; I actually want to reproduce the results from scratch. Here are the first 200 lines of castleblatten_station1: castleblatten_station1.labels.txt

@loicland
Owner

Hi,

So two things:

  • it is not normal to have 0 in your predictions, because this value is reserved for unannotated data and is normally never predicted. Something is wrong. Can you post a link (Google Drive, for example) to your predictions_testfull.h5?

  • another thing I just noticed is that the command line you use seems wrong:
    CUDA_VISIBLE_DEVICES=0 python learning/main.py --dataset sema3d --SEMA3D_PATH $SEMA3D_DIR --db_test_name testfull --db_train_name trainval --epochs -1 --lr_steps '[350, 400, 450]' --test_nth_epoch 100 --model_config 'gru_10,f_8' --ptn_nfeat_stn 11 --nworkers 2 --odir "results/sema3d/trainval_best"

if --epochs is set to -1 (inference only), then you should add --resume RESUME so that the trained model is loaded. Otherwise it will use random weights and, yes, get about 13% (≈1/8) accuracy.
If this is your main training run, then --epochs should be set to at least 200.
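For reference, the inference-only invocation with the missing flag added might look like the following (flags copied from the command earlier in the thread; paths are placeholders, and --resume RESUME loads the checkpoint from the --odir directory):

```shell
# Inference only (--epochs -1): --resume RESUME loads the trained
# weights from the output directory instead of random initialization.
CUDA_VISIBLE_DEVICES=0 python learning/main.py --dataset sema3d \
  --SEMA3D_PATH $SEMA3D_DIR --db_test_name testfull --db_train_name trainval \
  --epochs -1 --model_config 'gru_10,f_8' --ptn_nfeat_stn 11 --nworkers 2 \
  --odir "results/sema3d/trainval_best" --resume RESUME
```
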

@hagianga21

Hi, I also have this problem, although I run this line of code:

CUDA_VISIBLE_DEVICES=2 python learning/main.py --dataset sema3d --SEMA3D_PATH 'datasets/semantic3d' --db_test_name testfull --db_train_name trainval --epochs -1 --lr_steps '[350, 400, 450]' --test_nth_epoch 100 --model_config 'gru_10,f_8' --ptn_nfeat_stn 11 --nworkers 2 --odir "results/bremen/standard" --resume RESUME

There are some 0-labelled points in the label files.

@loicland
Owner

Hi,

I don't have access to my workstation right now. I think there is indeed a bug in write_Semantic3d.py. Can you try adding a +1 to line 74:

labels_ups = interpolate_labels_batch(data_file, xyz, labels_full, args.ver_batch) + 1

and see if that fixes the issue?

Many thanks
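The off-by-one above can also be sketched standalone; a minimal version with a hypothetical helper name (Semantic3D expects labels 1..8, with 0 reserved for unannotated points):

```python
def to_semantic3d_labels(pred_classes):
    """Map 0-based network class indices to Semantic3D's 1-based
    label typology (0 is reserved for unannotated points)."""
    return [int(c) + 1 for c in pred_classes]

print(to_semantic3d_labels([0, 3, 7]))  # -> [1, 4, 8]
```
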

@hagianga21

I fixed this line. However, the result I achieved is still low compared to SPG.

[screenshot of the results]

@loicland
Owner

loicland commented May 20, 2019

  • Is this for the reduced test set?
  • Did you train from scratch or use the trained models?
  • What performance do you obtain on the training set?
  • Can you try with the option --pc_attrib xyzrgbelpsv (an option I forgot in the README; fixed now)?

@hagianga21

1/ Yes, it's the reduced test.
2/ I trained it from scratch
3/ In the training set, the performance is:
Train accuracy: 87.58935173773725, Loss: 0.3772576630115509, Test accuracy: 0.8062616161880531, Test oAcc: 0.0, Test avgIoU: nan

4/ Do you want me to train it again with the option --pc_attrib xyzrgbelpsv?

@hagianga21

hagianga21 commented May 22, 2019

Hi, I trained the network again. The last time I submitted with the reduced test set, they said I have to wait 3 days, so I switched to the full test set so that I could submit the result. I ran the new command line and attach the result I achieved. However, it is still low compared to your results. What should I do to reproduce your result?

[screenshot: Screen Shot 2019-05-22 at 10 02 05 AM]

@loicland
Owner

loicland commented May 22, 2019

Hi,

I retrained a classifier from scratch using:

CUDA_VISIBLE_DEVICES=0 python3.6 learning/main.py --dataset sema3d --SEMA3D_PATH $SEMA3D_PATH --db_test_name testred --db_train_name trainval --epochs 500 --lr_steps '[350, 400, 450]' --test_nth_epoch 1000 --model_config 'gru_10,f_8' --pc_attrib xyzrgbelpsv --ptn_nfeat_stn 11 --nworkers 2 --odir "results/sema3d/trainval_best2"

and got 85.2% accuracy on the train set (Loss: 0.436) and 92.9% on the reduced set evaluated on the submission server. While it is not as good as the originally submitted results, this is close.

The inherent variability of neural network results is discussed in issue #114. In the absence of the test labels, the best-of-n approach can only be applied with visual inspection.
