Loss doesn't go down when finetuning, and a question about the requires_grad option in backward() #21
Comments
Same thing here, but in the TF version everything works fine.
I just started using torchMoji and had the same problem.
Works for me. I think it's caused by a different PyTorch version (Variable is deprecated since 0.4).
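For reference, a minimal sketch of that API change; the tensor names below are just illustrative, not anything from torchMoji:

```python
import torch

# PyTorch < 0.4 style: tensors had to be wrapped in Variable to track gradients.
# from torch.autograd import Variable
# x = Variable(torch.randn(3), requires_grad=True)

# PyTorch >= 0.4 style: Variable is merged into Tensor, so requires_grad
# is set directly on the tensor itself.
x = torch.randn(3, requires_grad=True)
loss = (x ** 2).sum()
loss.backward()   # gradients flow without any Variable wrapper
print(x.grad)     # gradient tensor of the same shape as x
```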
I tried finetuning with the SS-Youtube dataset using examples/finetune_youtube_last, but I ran into two problems.

First, the loss does not decrease across epochs when finetuning on SS-Youtube. I trained starting from the original validation loss of about 0.01, but the loss stays almost the same every epoch.

Second, finetune.py raises the error
`element 0 of tensors does not require grad and does not have a grad_fn`
unless I include `loss = Variable(loss, requires_grad=True)` in the code.
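For what it's worth, these two symptoms may be the same bug: wrapping the loss in a fresh Variable with requires_grad=True creates a new leaf that is cut off from the model's computation graph, so backward() stops complaining but no gradients ever reach the parameters. A minimal sketch of the idea; `model`, `criterion`, and `optimizer` are placeholders here, not the actual objects in finetune.py:

```python
import torch

# Hypothetical minimal setup standing in for whatever finetune.py builds.
model = torch.nn.Linear(10, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x, y = torch.randn(8, 10), torch.randn(8, 1)

# Problematic pattern: a fresh tensor with requires_grad=True is detached
# from the model, so backward() runs but the model never learns.
# loss = torch.tensor(criterion(model(x), y).item(), requires_grad=True)  # don't

# If backward() complains that the loss has no grad_fn, the graph was
# broken upstream (e.g. the forward pass ran under torch.no_grad(), or the
# finetuned parameters were frozen). Fix the cause instead of wrapping:
for p in model.parameters():
    p.requires_grad_(True)

optimizer.zero_grad()
loss = criterion(model(x), y)  # loss now carries a grad_fn
loss.backward()
optimizer.step()
```

With the parameters actually in the graph, the loss should start decreasing; the Variable wrapper only silences the error while detaching the gradient path.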