Memory Leak #21
Maybe it is other parts of your code that cause the memory leak, rather than the code of calc_gradient_penalty().
I have the same issue. It is caused by doing this:

```python
# Train with real.
y_r = critic(_x_r)
y_r.mean().backward(mone)  # mone / one: tensors of -1 / +1 (the usual WGAN sign trick)
# Train with fake.
y_g = critic(_x_g)
y_g.mean().backward(one)
# Train with gradient penalty.
gp = compute_gradient_penalty(critic, _x_r.data, _x_g.data)
gp.mean().backward()
optimizer.step()
```

Instead of:

```python
# Reset the gradients.
critic.zero_grad()
# Train with real.
y_r = critic(_x_r)
# Train with fake.
y_g = critic(_x_g)
# Train with gradient penalty.
gp = compute_gradient_penalty(critic, _x_r.data, _x_g.data)
loss = y_g - y_r + gp
loss.mean().backward()
optimizer.step()
```

Joeri
I also have a memory leak in my implementation. I think it's due to create_graph=True, but without it the gradient of the gp part of the loss does not backpropagate through the entire D network. Would be interested in a solution.
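For reference, below is a minimal sketch of a WGAN-GP style penalty built with torch.autograd.grad and create_graph=True, the flag the comment above refers to; without it, the returned gradients are detached from the graph and the penalty cannot be backpropagated through the critic's weights. This is not the repo's exact calc_gradient_penalty(): the names critic, real, and fake are illustrative, and it uses the newer tensor API rather than the 0.2.0 Variable API discussed in this thread.

```python
import torch

def gradient_penalty(critic, real, fake):
    """Illustrative sketch of a WGAN-GP penalty, not the repo's calc_gradient_penalty()."""
    batch_size = real.size(0)
    # One interpolation coefficient per sample, broadcast over the remaining dims.
    alpha = torch.rand(batch_size, *([1] * (real.dim() - 1)), device=real.device)
    interpolates = (alpha * real.detach() + (1 - alpha) * fake.detach()).requires_grad_(True)
    critic_out = critic(interpolates)
    grads = torch.autograd.grad(
        outputs=critic_out,
        inputs=interpolates,
        grad_outputs=torch.ones_like(critic_out),
        create_graph=True,   # build a graph of the gradient so the penalty can train the critic
        retain_graph=True,
    )[0]
    grads = grads.view(batch_size, -1)
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()
```

In a combined-loss update like the one Joeri posted, this term would typically be added with a weight (10 in the WGAN-GP paper), e.g. `loss = y_g.mean() - y_r.mean() + 10 * gradient_penalty(critic, _x_r, _x_g)`.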
Oh, I'm now also facing this. Did you solve it?
I have the same problem.
Hello,
I tried to run the gan_mnist.py file both with the most current master version of pytorch (0.2.0+75bb50b) and with an older commit (0.2.0+c62490b).
With both versions, the memory used by the code keeps increasing on each iteration, until the program ends with an out-of-memory error.
When I took only the code of the function calc_gradient_penalty() and integrated it into my code, it caused the same memory leak.
Surprisingly, when a friend took the exact same code and integrated it into CycleGAN, it did not cause a memory leak.
Do you know what the problem is, or of a specific git revision of pytorch that has no memory leak?
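To help pin down where the growth happens, a small logging helper like the sketch below can be dropped into the training loop. Note that torch.cuda.memory_allocated() and torch.cuda.memory_reserved() only exist in PyTorch releases newer than the 0.2.0 builds mentioned above, so this is an assumption about the environment rather than something from the original report.

```python
import torch

def log_cuda_memory(step, every=100):
    # Print allocated and reserved CUDA memory so per-iteration growth is visible.
    if torch.cuda.is_available() and step % every == 0:
        alloc_mib = torch.cuda.memory_allocated() / 1024 ** 2
        reserved_mib = torch.cuda.memory_reserved() / 1024 ** 2
        print("step %d: %.1f MiB allocated, %.1f MiB reserved" % (step, alloc_mib, reserved_mib))
```

A leak of the kind described here shows up as the allocated figure climbing steadily across iterations instead of plateauing after the first few.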