The training results on the sample dataset did not meet the expectations #8
Try this command:
@ShenjwSIPSD @zhaofuq
Level 1 shouldn't perform the best; level 1 is the second-sparsest level. Can you post the images?
Sure, maybe Monday; actually I'm visiting Shanghai this weekend.
------------------ Original ------------------
@ShenjwSIPSD @zhaofuq Hi, I also tried the sample data and got poor results too. Specifically, the mixed level is worse than level 0, level 1, and some others. In my results, level 1 performs best, though it still does not match the result in the demo.
There might be a bug in the main branch. Try the depth branch and this command:
There are small differences between Linux and Windows; it is better to use Windows. Even so, the result did not meet the expectations of the paper.
Try the depth branch.
Hi, submodules/diff-gaussian-rasterization/third_party/glm may be missing in the depth branch. Just a reminder for others: it can be copied from the main branch.
Hi, thanks for your guidance. I notice that when
Thank you for your feedback. We will run some more tests and find out the reason.
I have downloaded the sample data you provided and followed your training command to complete 300,000 training iterations. However, both the geometry and appearance results are not very good and do not meet my expectations.
In the camera view of your SIBR_viewers: (screenshot)
The PLY in SuperSplat is even worse: (screenshot)
Additionally, I noticed that the training parameters like scaling learning rate and position learning rate mentioned in your paper are inconsistent with the training command in the repository. Could you please help me identify which step I might have done incorrectly?
By the way, how are your depth maps generated? Theoretically, LiDAR data is sparser than image data, so how do you achieve such smooth depth maps?
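The repository doesn't document how its depth maps are produced, so as a generic illustration only (not necessarily this project's method): one common way to get a smooth dense depth map from sparse LiDAR is to project the points into the image plane and then interpolate the scattered samples, e.g. with SciPy's `griddata`. The function name `densify_depth` below is hypothetical.

```python
import numpy as np
from scipy.interpolate import griddata

def densify_depth(sparse_depth):
    """Fill an HxW sparse depth map (zeros = no LiDAR return) by interpolation.

    Linear interpolation is used inside the convex hull of the LiDAR samples;
    pixels outside the hull fall back to nearest-neighbour values.
    """
    h, w = sparse_depth.shape
    ys, xs = np.nonzero(sparse_depth)          # pixel coordinates of LiDAR hits
    vals = sparse_depth[ys, xs]                # measured depths at those pixels
    grid_y, grid_x = np.mgrid[0:h, 0:w]        # target dense pixel grid
    dense = griddata((ys, xs), vals, (grid_y, grid_x), method="linear")
    holes = np.isnan(dense)                    # outside the convex hull
    if holes.any():
        dense[holes] = griddata((ys, xs), vals,
                                (grid_y[holes], grid_x[holes]),
                                method="nearest")
    return dense
```

Real pipelines often add edge-aware filtering (e.g. guided by the RGB image) on top of plain interpolation, which would explain smoother-looking depth maps than raw projected LiDAR.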