Using any regularization causes ever-increasing loss #34
Comments
This is a little weird! Are you able to train the baseline without any regularization? Caffe is relatively old, and you should consider switching to others like PyTorch.
Yeah, that's really weird. Training without regularization leads to a useful loss and a reasonably good accuracy (~90%). But I honestly don't understand why one of the normal regularization methods would cause this behavior. I am currently fine-tuning this baseline with SSL to see where it goes; it also started with a really high loss (~1e+12) and is currently working its way down (~1e+10). I know that Caffe is getting old, but I am currently working on my bachelor's thesis, where I am comparing sparsification methods, and I think SSL is a really interesting approach because you don't need specialized hardware to get acceleration from it.
Thank you for the reference code! I will look into implementing it myself for TensorFlow. But wouldn't I also have to implement sparse convolution ops for TensorFlow to get the speedup on a normal GPU and CPU?
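For what it's worth, here is a minimal sketch of the per-filter group-Lasso penalty that SSL-style regularization adds to the loss, assuming TensorFlow; the function name, kernel layout, and the `lambda_g` coefficient are illustrative, not code from this repo:

```python
import tensorflow as tf

def group_lasso_penalty(conv_kernel):
    """Sum of per-filter L2 norms for a kernel shaped (kh, kw, in_c, out_c).

    Penalizing the L2 norm of each whole filter (a "group") pushes entire
    filters toward zero -- the structured-sparsity idea behind SSL, as
    opposed to element-wise L1/L2 on individual weights.
    """
    # Reduce over everything except the output-filter axis.
    per_filter_norm = tf.sqrt(tf.reduce_sum(tf.square(conv_kernel), axis=[0, 1, 2]) + 1e-8)
    return tf.reduce_sum(per_filter_norm)

# Usage (illustrative): add the penalty to the task loss with a small coefficient.
# total_loss = task_loss + lambda_g * sum(group_lasso_penalty(k) for k in conv_kernels)
```

The coefficient here plays the same role that `weight_decay` plays for plain L1/L2, so its scale matters just as much.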
@aradar if you remove structures (such as filters and channels), then you won't have to. You will just need to create a smaller DNN with the learned structures (such as fewer filters and channels) and initialize it with the remaining non-zero weights.
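Roughly what that looks like in practice, as a NumPy sketch assuming Caffe's (out_c, in_c, kh, kw) weight-blob layout (the function and threshold are hypothetical, not from this repo):

```python
import numpy as np

def shrink_conv_layer(kernel, bias, tol=1e-8):
    """Drop output filters that the group regularizer drove to (near) zero.

    kernel: (out_c, in_c, kh, kw); bias: (out_c,).
    Returns the smaller dense kernel/bias plus the indices of the kept filters,
    which the *next* layer needs in order to drop its matching input channels.
    """
    filter_norms = np.sqrt((kernel ** 2).sum(axis=(1, 2, 3)))
    keep = np.where(filter_norms > tol)[0]
    return kernel[keep], bias[keep], keep

# For the following conv layer, keep only the surviving input channels:
# next_kernel = next_kernel[:, keep, :, :]
```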
Issue summary
Hi @wenwei202,
I'm currently trying to train a sparse network with SSL, but I am having trouble getting the training to converge. As soon as I add any kind of regularization (L1, L2, your SSL), the loss increases and the training diverges. This happens even if I set the weight_decay to something as small as 0.0000001.
The following log shows the behavior when trying to train the ResNet baseline example from your cifar10 README.
Do you by any chance know what could cause this behavior, or how I could fix it?
Steps to reproduce
Train any net with regularization enabled.
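For reference, "regularization enabled" here just means standard Caffe solver settings along the following lines; the net path and values are illustrative rather than the exact ones from this run. A sketch using Caffe's Python protobuf interface:

```python
from caffe.proto import caffe_pb2
from google.protobuf import text_format

solver = caffe_pb2.SolverParameter()
solver.net = "examples/cifar10/resnet_train_test.prototxt"  # illustrative path
solver.base_lr = 0.1
solver.momentum = 0.9
solver.weight_decay = 1e-7           # divergence is reported even at this tiny value
solver.regularization_type = "L2"    # "L1" reportedly shows the same behavior

# Write out the solver definition that caffe train would consume.
with open("solver.prototxt", "w") as f:
    f.write(text_format.MessageToString(solver))
```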
Your system configuration
Operating system: Ubuntu 16.04 or Arch
Compiler: GCC 5.4 (Ubuntu) and GCC 5.5 (Arch)
CUDA version (if applicable): 8.0
CUDNN version (if applicable): 5
BLAS: Atlas
Python or MATLAB version (for pycaffe and matcaffe respectively): 3.5 (Ubuntu) 3.6 (Arch)