
The training results on the sample dataset did not meet expectations #8

Open
ShenjwSIPSD opened this issue Sep 26, 2024 · 10 comments

@ShenjwSIPSD

I downloaded the sample data you provided and followed your training command through all 300,000 training iterations. However, both the geometry and the appearance of the results are not very good and do not meet my expectations.

In the camera view of your SIBR_viewers:

[Three screenshots from the SIBR_viewers camera view]

The PLY loaded in SuperSplat looks even worse:

[Two screenshots of the PLY in SuperSplat]

Additionally, I noticed that training parameters such as the scaling learning rate and the position learning rate mentioned in your paper are inconsistent with the training command in the repository. Could you please help me identify which step I might have done incorrectly?

By the way, how are your depth maps generated? Theoretically, lidar data is sparser than image data. How do you achieve such smooth depth maps?
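
(For reference while waiting for the authors' answer: a common way to get a dense depth map from sparse lidar is to project the points into each camera view and interpolate the gaps. Below is a minimal sketch assuming a pinhole camera model and SciPy; the function and its inputs are hypothetical, and this may not be the authors' actual pipeline.)

import numpy as np
from scipy.interpolate import griddata

def lidar_to_dense_depth(points_world, R, t, K, height, width):
    """Project sparse lidar points into one camera and interpolate a dense depth map.

    points_world: (N, 3) lidar points in world coordinates.
    R, t: world-to-camera rotation (3, 3) and translation (3,).
    K: (3, 3) pinhole intrinsics.
    """
    # Transform to camera coordinates and keep points in front of the camera.
    pts_cam = points_world @ R.T + t
    z = pts_cam[:, 2]
    pts_cam, z = pts_cam[z > 0], z[z > 0]

    # Pinhole projection to pixel coordinates.
    uvw = pts_cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < width) & (uv[:, 1] >= 0) & (uv[:, 1] < height)
    uv, z = uv[inside], z[inside]

    # Linear interpolation fills the interior; nearest-neighbor fills pixels
    # outside the convex hull of the projected samples, giving a smooth map.
    grid_u, grid_v = np.meshgrid(np.arange(width), np.arange(height))
    dense = griddata(uv, z, (grid_u, grid_v), method="linear")
    holes = np.isnan(dense)
    dense[holes] = griddata(uv, z, (grid_u[holes], grid_v[holes]), method="nearest")
    return dense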

@zhaofuq (Owner) commented Sep 26, 2024

Try this command:

# --depths enables the depth loss if the dataset contains a depths folder.
python train.py -s \
    --use_lod \
    --sh_degree 2 \
    --depths depths \
    --densification_interval 10000 \
    --iterations 300000 \
    --scaling_lr 0.0015 \
    --position_lr_init 0.000016 \
    --opacity_reset_interval 300000 \
    --densify_until_iter 200000 \
    --data_device cpu \
    -r 1
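
(For reference: a depth flag like --depths above typically adds an L1 depth term on top of the photometric loss. A minimal PyTorch sketch under that assumption; the names are hypothetical and this is not necessarily the repository's exact implementation.)

import torch

def training_loss(rendered_rgb, gt_rgb, rendered_depth, gt_depth, lambda_depth=0.1):
    # Photometric term (plain L1 here; standard 3DGS also mixes in D-SSIM).
    rgb_loss = torch.abs(rendered_rgb - gt_rgb).mean()
    # Supervise depth only where the input map has valid values (> 0),
    # since lidar-derived depth maps usually contain holes.
    valid = gt_depth > 0
    depth_loss = torch.abs(rendered_depth[valid] - gt_depth[valid]).mean()
    return rgb_loss + lambda_depth * depth_loss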

@wcyjerry

@ShenjwSIPSD @zhaofuq
Hi, I also tried the sample data and got poor results too.
In particular, the mixed level is worse than level 0, level 1, and some of the others.
In my results, level 1 performs best, though it still does not match the demo.

@samcao0416 (Collaborator)

> @ShenjwSIPSD @zhaofuq Hi, I also tried the sample data and got poor results too. In particular, the mixed level is worse than level 0, level 1, and some of the others. In my results, level 1 performs best, though it still does not match the demo.

Level 1 shouldn't be the best-performing one; it is the second-sparsest level. Can you post the images?

@wcyjerry commented Oct 26, 2024 via email

@samcao0416 (Collaborator)

There might be a bug in the main branch. Try the depth branch and this command:

python train.py -s \
    --sh_degree 2 \
    --depths depths \
    --densification_interval 100 \
    --iterations 90000 \
    --scaling_lr 0.0015 \
    --position_lr_init 0.000016 \
    --opacity_reset_interval 300000 \
    --densify_until_iter 75000 \
    --data_device cpu \
    -r 1

Besides, when using LOD_viewer, try adding --dmax 200 after -m.

@cheylong

There are small differences between Linux and Windows; it's better to use Windows. Even so, the results did not meet the expectations set by the paper.

@samcao0416 (Collaborator)

> There are small differences between Linux and Windows; it's better to use Windows. Even so, the results did not meet the expectations set by the paper.

Try depth branch

@Cyberkona

Hi, submodules/diff-gaussian-rasterization/third_party/glm may be missing in the depth branch. Just a reminder for others: it can be copied over from the main branch.

@Cyberkona

> There might be a bug in the main branch. Try the depth branch and this command: python train.py -s --sh_degree 2 --depths depths --densification_interval 100 --iterations 90000 --scaling_lr 0.0015 --position_lr_init 0.000016 --opacity_reset_interval 300000 --densify_until_iter 75000 --data_device cpu -r 1 Besides, when using LOD_viewer, try adding --dmax 200 after -m.

Hi, thanks for your guidance. I noticed that when --densification_interval is set to 100, training slows down dramatically and the estimated training time jumps to hundreds of hours. Is this normal, and how can I further increase the training speed? Thanks a lot.

@samcao0416 (Collaborator)

> Hi, thanks for your guidance. I noticed that when --densification_interval is set to 100, training slows down dramatically and the estimated training time jumps to hundreds of hours. Is this normal, and how can I further increase the training speed? Thanks a lot.

Thank you for your feedback. We will run some more tests and find out the reason.
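
(For context on the reported slowdown, a plausible explanation rather than a confirmed diagnosis: each densification step clones and splits Gaussians, so dropping --densification_interval from 10000 to 100 multiplies the number of growth steps, and the primitive count compounds along with the per-iteration cost. A toy Python model with an assumed per-step growth factor:)

def estimate_gaussians(n_init, interval, densify_until, growth=1.05):
    # growth is an assumed average multiplier per densification step;
    # the real value depends on the gradient threshold and the scene.
    steps = densify_until // interval
    return n_init * growth ** steps

print(f"{estimate_gaussians(100_000, 10_000, 200_000):.2e}")  # 20 steps  -> ~2.65e+05
print(f"{estimate_gaussians(100_000, 100, 75_000):.2e}")      # 750 steps -> ~7.8e+20

In practice, pruning and opacity resets keep the count bounded, but the trend is enough to explain the per-iteration cost climbing until the ETA reads in the hundreds of hours.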
