
Running backward on a loss function which contains an explanation


Hello,

I am trying to run a backward pass on an objective function that contains an explanation. The goal is to run a manipulation attack on explanations. However, strangely, the explanation obtained from Captum's attribute method does not require gradient, even though the input for which the explanation is computed does require gradient. For example, in the following code snippet:

sm = Saliency(vgg_model)
expl = sm.attribute(x_adv, target=17)

The expl tensor is a leaf tensor and doesn’t require gradient.
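
For context, the kind of objective I ultimately want to differentiate looks roughly like this (expl_target is just a hypothetical target attribution map, used here only for illustration):

# Rough sketch of the manipulation objective
loss = ((expl - expl_target) ** 2).mean()
loss.backward()  # does not propagate back to x_adv, because expl does not require grad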

There is a further issue with LRP in this application. When computing the backward pass, I encounter this error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4096, 1000]], which is output 0 of TBackward, is at version 7; expected version 6 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

I would really appreciate it if you could help me with these issues.

Best, -Ahmad

Issue Analytics

  • State: open
  • Created: 2 years ago
  • Reactions: 1
  • Comments: 6 (2 by maintainers)

Top GitHub Comments

2 reactions
vivekmig commented, Apr 15, 2021

Hi @ahmadajal, here is a workaround to obtain explanations that require gradient. We essentially need to override the default gradient function to pass the additional parameter create_graph=True, which enables higher-order derivatives.

from captum._utils.common import _run_forward
from typing import Any, Callable, Union, Tuple
from torch import Tensor
import torch

# This is the same as the default compute_gradients
# function in captum._utils.gradient, except
# setting create_graph=True when calling
# torch.autograd.grad
def compute_gradients(
    forward_fn: Callable,
    inputs: Union[Tensor, Tuple[Tensor, ...]],
    target_ind = None,
    additional_forward_args: Any = None,
) -> Tuple[Tensor, ...]:
    with torch.autograd.set_grad_enabled(True):
        # runs forward pass
        outputs = _run_forward(forward_fn, inputs, target_ind, additional_forward_args)
        assert outputs[0].numel() == 1, (
            "Target not provided when necessary, cannot"
            " take gradient with respect to multiple outputs."
        )
        grads = torch.autograd.grad(torch.unbind(outputs), inputs, create_graph=True)
    return grads

from captum.attr import Saliency
sal = Saliency(model)
sal.gradient_func = compute_gradients
attr = sal.attribute(inp, target=1)

We will look into ways to expose this option more easily, but in the meantime this approach should work, and attr will require gradients here.
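
For example, assuming inp itself has requires_grad set to True before calling attribute, a loss defined on the attribution can then be backpropagated to the input (target_attr below is only a placeholder for whatever target map you optimize against, not part of the original example):

inp.requires_grad_(True)
attr = sal.attribute(inp, target=1)
loss = ((attr - target_attr) ** 2).mean()  # target_attr: hypothetical target attribution map
loss.backward()  # gradients flow through the attribution back to inp, so inp.grad is populated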

We will also look into the LRP issue further; would you be able to provide an example to reproduce it? It seems this may be related to in-place operations. Could you also try replacing any in-place operations in your model (e.g., setting inplace to False on ReLUs, if applicable) and see whether that resolves the issue?
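
If helpful, something along these lines should disable in-place ReLUs (a minimal sketch assuming a standard torchvision-style model; adjust for your own architecture):

import torch.nn as nn

for module in model.modules():
    if isinstance(module, nn.ReLU):
        module.inplace = False  # keep activations unmodified so the backward pass sees the expected versions

You can also enable torch.autograd.set_detect_anomaly(True), as the error message suggests, to pinpoint which operation fails.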

0 reactions
tangli0305 commented, Sep 15, 2022

Do we have any solution now? I also hit exactly the same error when trying to run backward on a loss containing LRP explanations.
