Prevent None values in gradients when some of the inputs have no impact on the target #987

Open

wants to merge 4 commits into master

Changes from 2 commits
alibi/explainers/integrated_gradients.py (17 changes: 16 additions & 1 deletion)
@@ -400,6 +400,14 @@ def _gradients_input(model: Union[tf.keras.models.Model],

grads = tape.gradient(preds, x)

# if there are inputs have not impact to the output, the gradient is None, but we need to return a tensor
Review comment:

Slight nit: Maybe "If certain inputs don't impact the target, the gradient is None, but we need to return a tensor"

if isinstance(x, list):
    shape = x[0].shape
else:
    shape = x.shape
for idx, grad in enumerate(grads):
Review comment:

If our input x is not a list, I think tape.gradient may directly output the gradient for x, in which case we may not want this enumerate step, which seems to assume that grads is a list of gradient tensors (one for each input).
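
For reference, a minimal sketch (not from the PR; the names here are illustrative) of how tape.gradient mirrors the structure of its sources argument:

import tensorflow as tf

# Sketch: a single tensor source yields a single gradient tensor, while a
# list of sources yields a list of gradients (one entry per input).
x = tf.constant([1.0, 2.0])
with tf.GradientTape(persistent=True) as tape:
    tape.watch(x)
    y = tf.reduce_sum(x ** 2)

single = tape.gradient(y, x)     # tf.Tensor([2. 4.]) -- not a list
as_list = tape.gradient(y, [x])  # [tf.Tensor([2. 4.])] -- one entry per input
del tape                         # release the persistent tape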

Review comment:

And actually, if our input x isn't a list, would we encounter the None gradients?

It seems like this primarily comes up for us because we have outputs y1, y2, y3 which depend on different subsets of inputs x1, x2, x3. If y1 only depends on x1, then when we try to explain the model we can run into issues, because the gradients for x2 and x3 will be None.

But if the input isn't a list and is just x, then it seems like every output would need to depend on the whole input tensor?
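
To illustrate that scenario, a toy sketch (the model and tensor names are made up for this discussion, not taken from the PR or alibi's tests):

import tensorflow as tf

# The single output only uses x1, so the gradient with respect to x2 comes
# back as None unless it is zero-filled.
inp1 = tf.keras.Input(shape=(4,), name="x1")
inp2 = tf.keras.Input(shape=(4,), name="x2")
out = tf.keras.layers.Dense(1)(inp1)  # x2 is never used by the output
model = tf.keras.Model(inputs=[inp1, inp2], outputs=out)

a = tf.random.normal((2, 4))
b = tf.random.normal((2, 4))
with tf.GradientTape() as tape:
    tape.watch([a, b])
    preds = model([a, b])

grads = tape.gradient(preds, [a, b])
print(grads[1])  # None: there is no path from x2 to the output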

Review comment:

If that is the case, maybe we can do this gradient zero-ing only when x is a list? Something like:

if isinstance(x, list):
    for idx, grad in enumerate(grads):
        if grad is None:
            grads[idx] = tf.convert_to_tensor(np.zeros(shape), dtype=x[idx].dtype)

    if grad is None:
        grads[idx] = tf.convert_to_tensor(np.zeros(shape), dtype=x[idx].dtype)
Review comment:

I think in an earlier commit you had x[idx].shape, which seems to make more sense in case each input has a different shape.
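
Putting both suggestions together, a sketch of what that could look like (illustrative only, not the committed code; x, grads, and the np/tf imports come from the surrounding function and module):

# Only zero-fill when x is a list, and size each replacement from its own
# input so inputs with different shapes are handled correctly.
if isinstance(x, list):
    for idx, grad in enumerate(grads):
        if grad is None:
            grads[idx] = tf.convert_to_tensor(np.zeros(x[idx].shape),
                                              dtype=x[idx].dtype)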

return grads


@@ -497,7 +505,14 @@ def wrapper(*args, **kwargs):
    grads = tape.gradient(preds, layer.inp)
else:
    grads = tape.gradient(preds, layer.result)

# if there are inputs have not impact to the output, the gradient is None, but we need to return a tensor
if isinstance(x, list):
    shape = x[0].shape
else:
    shape = x.shape
for idx, grad in enumerate(grads):
    if grad is None:
        grads[idx] = tf.convert_to_tensor(np.zeros(shape), dtype=x[idx].dtype)
delattr(layer, 'inp')
delattr(layer, 'result')
layer.call = orig_call