running_var in the BatchNorm layers of pretrained weight contains Nan #4
Thank you a lot for your effort and help! I downloaded the new weight files provided at the same URLs:
Unfortunately, the results shown in #5 did not improve. To rule out another source of corruption, I am providing the SHA1 hashes of the files I used.
As you pointed out, we did not retrain our model after submitting the paper. I think it would be unfair and unethical to upload a newly trained model, as it could mislead our readers.
Yes, I agree; retraining should not be done. Could you provide the SHA1 hashes of the weight files, just to rule out that something is wrong on my end?
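For anyone who wants to verify their downloads the same way, a SHA1 digest can be computed with Python's standard `hashlib`. This is a minimal sketch; the file path is a placeholder, not an actual filename from this repository:

```python
import hashlib

def sha1_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA1 hex digest of a file, reading in 1 MiB chunks
    so large weight files do not need to fit in memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Example (placeholder path):
# print(sha1_of_file("weights/quadbayer.pth"))
```

Comparing the resulting digest against a hash published by the maintainers would settle whether the file itself is corrupted.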
Hi, I downloaded the pretrained QuadBayer parameters and found that most values of self.attentionNet.psUpsampling1.upSample[1].running_var are inf or NaN.
With the model in eval() mode, the NaN values propagate and make the final output image entirely NaN. I'm not sure whether there is a problem here.
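One way to confirm this kind of corruption is to scan every floating-point tensor in the loaded state dict for non-finite values before running inference. A small sketch (assuming standard PyTorch; the function name is mine, not from this repository):

```python
import torch
import torch.nn as nn

def find_nonfinite_tensors(state_dict):
    """Return the names of floating-point tensors in a state_dict
    that contain NaN or inf values (e.g. corrupted BatchNorm
    running_var buffers)."""
    return [
        name
        for name, tensor in state_dict.items()
        if tensor.is_floating_point() and not torch.isfinite(tensor).all()
    ]

# Example usage with a hypothetical checkpoint path:
# state = torch.load("weights/quadbayer.pth", map_location="cpu")
# print(find_nonfinite_tensors(state))
```

In eval() mode BatchNorm normalizes by the stored running_var, so a single NaN or inf there is enough to poison the whole output image, which matches the behavior described above.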