
Train-test split #1555

Open
caligraf1 opened this issue Jul 17, 2024 · 4 comments

@caligraf1 commented Jul 17, 2024

Hello,

How is the train-test split done in Instant NGP? And where is it in the code?

Thank you.

@Tom94 (Collaborator) commented Jul 17, 2024

In the paper, all synthetic NeRF scenes got trained on transforms_train.json and then evaluated on transforms_test.json, same as previous work.

There's no code in this repo to generate the paper figures, but you can run

./instant-ngp path-to-scene/transforms_train.json

if you want to replicate the same training setup. PSNR numbers might differ slightly from the paper because the codebase has evolved since then; you can check out the initial commit of this repo if you want a more faithful reproduction.

Also note that the PSNR numbers displayed in the GUI differ slightly from the values reported in the paper -- this is because prior NeRF work treats color spaces (linear vs. sRGB) and their combination with (non-)premultiplied alpha in certain objectionable ways that INGP does not mirror. For the paper, we wrote a separate code path that exactly replicated the PSNR computation setup of prior work. Passing --nerf_compatibility to ./scripts/run.py enables part of that code path.

(Note that if you run ./instant-ngp path-to-scene it'll grab all the .json files from the folder, which yields a better reconstruction but is not how the paper results were generated.)
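For a paper-style evaluation run via the Python bindings, something like the following should work (a sketch, not a guaranteed invocation: the --scene and --test_transforms flags are taken from scripts/run.py as of this writing and may have changed, so check python scripts/run.py --help on your checkout):

python scripts/run.py --scene path-to-scene/transforms_train.json --test_transforms path-to-scene/transforms_test.json --nerf_compatibility

This trains only on the train split and then reports PSNR over the held-out test transforms with the compatibility code path enabled.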

@caligraf1 (Author)

What if I train on my own dataset and have just one transforms.json file? How is the split done then?

@Tom94 (Collaborator) commented Jul 17, 2024

Then it's up to you to come up with a split, generate corresponding .json files, and load only the one that you'd like to operate on.
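For example, here is a minimal sketch of one way to do this (a hypothetical helper, not part of this repo; it assumes a NeRF-style transforms.json with a top-level "frames" list, and holding out every 10th image is an arbitrary choice):

# split_transforms.py -- hypothetical helper, not part of this repo.
# Assumes a NeRF-style transforms.json with a top-level "frames" list.
import json

with open("path-to-scene/transforms.json") as f:
    meta = json.load(f)

frames = meta["frames"]
# Arbitrary split: hold out every 10th image for testing.
test_frames = [fr for i, fr in enumerate(frames) if i % 10 == 0]
train_frames = [fr for i, fr in enumerate(frames) if i % 10 != 0]

for name, split in [("transforms_train.json", train_frames),
                    ("transforms_test.json", test_frames)]:
    out = dict(meta, frames=split)  # keep intrinsics and other metadata unchanged
    with open("path-to-scene/" + name, "w") as f:
        json.dump(out, f, indent=2)

You can then train on transforms_train.json and evaluate against transforms_test.json as described above.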

@caligraf1 (Author)

> (Note that if you run ./instant-ngp path-to-scene it'll grab all the .json files from the folder, which yields a better reconstruction but is not how the paper results were generated.)

Yes, but how is the training done in this case? Are all images used just for training (which doesn't make sense)?
I've trained the network on my dataset, providing one .json file with poses for all images, and calculated the accuracy metrics. How are they calculated in that case?
