Prevent None values in gradients when some of the inputs have no impact on the target #987
base: master
Changes from 2 commits
9eef9c4
c9f653f
408f12d
8dd5883
@@ -400,6 +400,14 @@ def _gradients_input(model: Union[tf.keras.models.Model],

    grads = tape.gradient(preds, x)

    # if there are inputs have not impact to the output, the gradient is None, but we need to return a tensor
    if isinstance(x, list):
        shape = x[0].shape
    else:
        shape = x.shape
    for idx, grad in enumerate(grads):
Review comment: If our input x is not a list, I think tape.gradient may directly output the gradient for x, in which case we may not want this enumerate step, which seems to assume that grads is a list of gradient tensors (one for each input).

Review comment: And actually, if our input x isn't a list, would we encounter the None gradients? It seems like this primarily comes up for us because we have outputs y1, y2, y3 which depend on different subsets of inputs x1, x2, x3. If y1 only depends on x1, then when we try to explain the model we can run into issues because the gradients for x2 and x3 will be None. But if the input isn't a list and is just x, then it seems like every output would need to depend on the whole input tensor?

Review comment: If that is the case, maybe we can only do this gradient zero-ing for when x is a list? Something like:
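A hypothetical sketch of that suggestion (this is not the reviewer's exact snippet; it assumes grads lines up one-to-one with the list of inputs, and tf.zeros with each input's own shape is an assumption rather than the code from the PR):

```python
# Sketch: only zero-fill when the model takes a list of inputs, since a
# single input tensor should always receive a real gradient from tape.gradient.
if isinstance(x, list):
    for idx, grad in enumerate(grads):
        if grad is None:
            # Match the shape and dtype of the corresponding input.
            grads[idx] = tf.zeros(x[idx].shape, dtype=x[idx].dtype)
```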
        if grad is None:
            grads[idx] = tf.convert_to_tensor(np.zeros(shape), dtype=x[idx].dtype)
Review comment: I think in an earlier commit you had …
    return grads
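For context, a minimal, self-contained example (not part of the diff) of how tape.gradient yields None for an input that the target does not depend on:

```python
import tensorflow as tf

x1 = tf.Variable([[1.0, 2.0]])
x2 = tf.Variable([[3.0, 4.0]])

with tf.GradientTape() as tape:
    y = tf.reduce_sum(x1 * 2.0)  # y depends only on x1

grads = tape.gradient(y, [x1, x2])
print(grads[0])  # tf.Tensor([[2. 2.]], shape=(1, 2), dtype=float32)
print(grads[1])  # None -- x2 does not influence y
```

For reference, tape.gradient also accepts unconnected_gradients=tf.UnconnectedGradients.ZERO, which returns zero tensors for unconnected inputs and could be an alternative to the manual loop.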
@@ -497,7 +505,14 @@ def wrapper(*args, **kwargs):

        grads = tape.gradient(preds, layer.inp)
    else:
        grads = tape.gradient(preds, layer.result)

    # if there are inputs have not impact to the output, the gradient is None, but we need to return a tensor
    if isinstance(x, list):
        shape = x[0].shape
    else:
        shape = x.shape
    for idx, grad in enumerate(grads):
        if grad is None:
            grads[idx] = tf.convert_to_tensor(np.zeros(shape), dtype=x[idx].dtype)
    delattr(layer, 'inp')
    delattr(layer, 'result')
    layer.call = orig_call
Review comment: Slight nit: maybe "If certain inputs don't impact the target, the gradient is None, but we need to return a tensor"