
running_var in the BatchNorm layers of pretrained weight contains Nan #4

Open
chwahaha opened this issue Mar 3, 2022 · 4 comments


chwahaha commented Mar 3, 2022

Hi, I downloaded the pretrained QuadBayer parameters and found that most values of `self.attentionNet.psUpsampling1.upSample[1].running_var` are inf or NaN.

In eval() mode, these NaN values propagate and make the final output a NaN image. I'm not sure whether there is some problem here.
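A quick way to confirm this kind of corruption is to scan every tensor in the checkpoint for NaN or inf entries. Below is a minimal sketch; the parameter name and values are hypothetical stand-ins mimicking the reported corruption, and in practice you would flatten each tensor from a `torch.load()`-ed state dict (e.g. `tensor.flatten().tolist()`) before passing it in:

```python
import math

def find_bad_stats(state_dict):
    """Return the names of entries that contain NaN or inf values.

    `state_dict` is assumed to map parameter/buffer names to flat
    iterables of floats (e.g. tensor.flatten().tolist() taken from a
    torch.load()-ed checkpoint).
    """
    bad = []
    for name, values in state_dict.items():
        # Flag the entry as soon as any element is NaN or +/-inf.
        if any(math.isnan(v) or math.isinf(v) for v in values):
            bad.append(name)
    return bad

# Hypothetical checkpoint fragment mimicking the reported corruption:
ckpt = {
    "attentionNet.psUpsampling1.upSample.1.running_var": [float("nan"), float("inf"), 1.0],
    "attentionNet.psUpsampling1.upSample.1.running_mean": [0.0, 0.1, -0.2],
}
print(find_bad_stats(ckpt))  # only running_var is reported
```

Any name printed here would explain NaN outputs in eval() mode, since BatchNorm uses the stored `running_var` directly for normalization at inference time.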

Owner

sharif-apu commented May 3, 2022

Hi there!
Thanks for raising such an important issue. Our previously shared weight files were corrupted; we have now updated them. Here are some sample images obtained from the newly uploaded files. If you need any further assistance, please do not hesitate to contact our research group.
Thanks much.

Quad-Bayer reconstruction at sigma = 5
[image: 01690_sigma_5_PIPNet]

Quad-Bayer reconstruction at sigma = 10
[image: 01690_sigma_10_PIPNet]

Quad-Bayer reconstruction at sigma = 15
[image: 01690_sigma_15_PIPNet]


DavidLP commented May 10, 2022

Thank you very much for your effort and help! I downloaded the new weight files provided at the same URLs.

Unfortunately, the results shown in #5 did not improve. To exclude another corruption, I am providing the SHA1 hashes of the files I used.
It looks like the URLs in the README still point to the old weight files, created two years ago (see the column "zuletzt geändert" = "last changed"):
[image: grafik]

@sharif-apu
Owner

As you pointed out, we did not retrain our model after submitting the paper. It would be unfair and unethical to upload a newly trained model, as it could mislead our readers.
Also, we performed the test several times and were able to produce admissible results in every attempt. Could you please check your installed package versions?


DavidLP commented May 11, 2022

Yes, I agree, a retraining should not be done. Could you provide the SHA1 hashes of the weight files, just to rule out that something else is wrong?


3 participants