MNIST experiments creating qpth issues #4
Hi,

I was running the optnet code for MNIST classification with the default configuration for only 10 epochs. In the first couple of epochs I get the warning "qpth warning: Returning an inaccurate and potentially incorrect solutino", and in the subsequent iterations the loss becomes nan. Is there something obviously wrong with my configuration?
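One way to narrow this kind of failure down is to stop training at the first batch whose loss or gradients contain a nan. The sketch below is only illustrative and assumes a recent PyTorch and a standard training loop; `model`, `loader`, and `optimizer` are hypothetical stand-ins, not objects from this repository's MNIST script.

```python
import torch
import torch.nn.functional as F

def train_with_nan_checks(model, loader, optimizer, epochs=10):
    """Train and stop at the first batch whose loss or gradients contain nan."""
    # Anomaly mode makes autograd point at the forward op whose backward produced nan.
    torch.autograd.set_detect_anomaly(True)
    for epoch in range(epochs):
        for i, (x, y) in enumerate(loader):
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x), y)  # assumes the model returns logits
            if torch.isnan(loss):
                raise RuntimeError(f"loss became nan at epoch {epoch}, batch {i}")
            loss.backward()
            # Check for nan gradients, e.g. coming out of the QP layer's backward pass.
            bad = [name for name, p in model.named_parameters()
                   if p.grad is not None and torch.isnan(p.grad).any()]
            if bad:
                raise RuntimeError(f"nan gradients at epoch {epoch}, batch {i}: {bad}")
            optimizer.step()
```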
Comments

Hi, I just tried running the MNIST experiment and am hitting nans there too. It's been a while since I've run that example, and I've changed the qpth library since the MNIST experiment was last working. It looks like the solver is hitting some nans internally, causing the precision issue and bad gradients. For now you can try reverting to an older commit of qpth, one from around the time I last updated the MNIST example. I'll try to look into the internal solver issues soon. -Brandon.
Thanks for the quick reply! I will try working with the older commit of qpth.
Hi, I tried most of the early versions of qpth but none of them work. They fail in various ways, mostly inside qpth. Could you check which version works?
Hi Brandon,
Hi, the nans were coming up in the backward pass in qpth and I've pushed a fix here: locuslab/qpth@e2cac49. Here's the convergence of one of my new runs (I did modify ...). Can you try running the training again with the latest versions of this repo and qpth? -Brandon.
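A quick way to sanity-check an updated qpth backward pass is to solve a small, well-conditioned random QP and look for nans in the solution and in the gradients. The sketch below assumes qpth's documented `QPFunction(...)(Q, p, G, h, A, b)` calling convention; the problem sizes and random data are arbitrary and not taken from the MNIST experiment.

```python
import torch
from qpth.qp import QPFunction

torch.manual_seed(0)
n_batch, nz, nineq, neq = 4, 5, 8, 2

# Random but well-conditioned QP: minimize 0.5 z'Qz + p'z  s.t.  Gz <= h, Az = b.
L = torch.randn(nz, nz, dtype=torch.double)
Q = L @ L.t() + 1e-3 * torch.eye(nz, dtype=torch.double)  # symmetric positive definite
p = torch.randn(n_batch, nz, dtype=torch.double, requires_grad=True)
G = torch.randn(nineq, nz, dtype=torch.double)
z0 = torch.randn(nz, dtype=torch.double)
h = G @ z0 + 1.0                                          # keeps z0 strictly feasible
A = torch.randn(neq, nz, dtype=torch.double)
b = A @ z0

z = QPFunction(verbose=False)(Q, p, G, h, A, b)           # (n_batch, nz) solutions
z.sum().backward()

print("nans in solution:", torch.isnan(z).any().item())
print("nans in dloss/dp:", torch.isnan(p.grad).any().item())
# torch.autograd.gradcheck on the same double-precision QP gives a stricter test.
```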