Train-test split #1555
In the paper, all synthetic NeRF scenes were trained on `transforms_train.json`. There's no code in this repo to generate the paper figures, but you can run `./instant-ngp path-to-scene/transforms_train.json` if you want to replicate the same training setup. PSNR numbers might differ slightly from the paper because the codebase has evolved since then; you can check out the initial commit of this repo if you want a more faithful reproduction. Also note that the PSNR numbers displayed in the GUI differ slightly from the values reported in the paper. This is because prior NeRF work has certain objectionable treatment of color spaces (linear vs. sRGB) and their combination with (non-)premultiplied alpha that instant-ngp does not mirror. For the paper, we wrote a separate codepath that exactly replicated the PSNR computation setup of prior work.
What if I train on my own dataset and have just one transforms.json file? How is the split done then?
Then it's up to you to come up with a split and generate corresponding `transforms_train.json` and `transforms_test.json` files.
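For anyone looking for a starting point, here is a minimal sketch of such a split script. It is not part of instant-ngp: the output filenames follow the convention mentioned above, but the 1-in-8 holdout ratio and the random shuffle are arbitrary choices you may want to change for your dataset.

```python
import json
import random

def split_transforms(path="transforms.json", test_every=8, seed=0):
    """Split a single NeRF transforms file into train/test subsets.

    Holds out roughly 1 frame in `test_every` for testing (a common
    NeRF convention, but an arbitrary choice) and writes
    transforms_train.json / transforms_test.json next to the input.
    """
    with open(path) as f:
        data = json.load(f)

    frames = data["frames"]
    indices = list(range(len(frames)))
    random.Random(seed).shuffle(indices)
    test_idx = set(indices[: max(1, len(frames) // test_every)])

    for name, keep in (
        ("transforms_train.json", lambda i: i not in test_idx),
        ("transforms_test.json", lambda i: i in test_idx),
    ):
        subset = dict(data)  # shallow copy keeps camera intrinsics etc.
        subset["frames"] = [f for i, f in enumerate(frames) if keep(i)]
        with open(name, "w") as out:
            json.dump(subset, out, indent=2)
```

After running it, point training at the generated `transforms_train.json` and evaluate on `transforms_test.json`.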
Yes, but how is training done in this case? Are all images used just for training (which doesn't make sense)?
Hello,
How is the train-test split done in Instant NGP? And where is it in the code?
Thank you.