Hi, thanks for sharing your work.
Just to let you know that your activation maximization examples do not work (using PyTorch 1.9):
1st example returns:
RuntimeError: Output 0 of BackwardHookFunctionBackward is a view and is being modified inplace. This view was created inside a custom Function (or because an input was returned as-is) and the autograd logic to handle view+inplace would override the custom backward associated with the custom Function, leading to incorrect gradients. This behavior is forbidden. You can remove this warning by cloning the output of the custom Function.
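For what it's worth, this error typically appears when a module has a full backward hook registered and its output is then modified in place. Below is a minimal sketch of that pattern and the `clone()` workaround the error message suggests; the module and shapes here are hypothetical, not taken from your examples:

```python
import torch
import torch.nn as nn

# A module with a full backward hook, as activation-maximization code often uses.
model = nn.Linear(4, 4)
model.register_full_backward_hook(lambda module, grad_in, grad_out: None)

x = torch.randn(1, 4, requires_grad=True)
out = model(x)

# out.add_(1.0)  # in-place edit of the hooked output raises:
#   "Output 0 of BackwardHookFunctionBackward is a view and is being
#    modified inplace. ... You can remove this warning by cloning the
#    output of the custom Function."
out = out.clone().add_(1.0)  # cloning first avoids the in-place-on-view error

out.sum().backward()  # gradients now flow back to x without error
```

So one possible fix on the library side is to clone the hooked module's output before any in-place modification.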
2nd and 3rd examples return:
RuntimeError: The size of tensor a (3) must match the size of tensor b (64) at non-singleton dimension 1
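This second error looks like a channel-count broadcast mismatch, e.g. per-channel statistics sized for a 3-channel image combined with a 64-channel activation map. A minimal sketch that reproduces the same class of error (the shapes are assumptions, not taken from your code):

```python
import torch

mean = torch.zeros(1, 3, 1, 1)   # statistics for a 3-channel RGB input
act = torch.randn(1, 64, 8, 8)   # a 64-channel intermediate feature map

# Broadcasting fails because dim 1 is 3 on one side and 64 on the other:
try:
    _ = act - mean
    raised = False
except RuntimeError:
    # "The size of tensor a ... must match the size of tensor b ...
    #  at non-singleton dimension 1"
    raised = True
```

If the examples apply image-space preprocessing (or regularization) to an intermediate layer's activations, that would explain the 3-vs-64 mismatch at dimension 1.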
Your examples were used as-is, without any modifications.