
LRP throws RuntimeError 'hook 'backward_hook_activation' has changed the size of value'


šŸ› Bug

I am trying to use LRP on a GoogLeNet with a modified fc layer. Using the original GoogLeNet from the PyTorch models library fails too.

To Reproduce

Steps to reproduce the behavior:

  1. Load the GoogLeNet model from the PyTorch model library and put it in eval mode
  2. Initialize LRP as described in the API reference
  3. Use a real or dummy input tensor to perform LRP on
  4. Run the attribute method with that tensor and an arbitrary target

```python
import torch
import torch.nn as nn
from captum.attr import LRP
from torchvision import models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load GoogLeNet, swap in a 20-class fc layer, and switch to eval mode
model_ft = models.googlenet(pretrained=True, transform_input=False)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 20)
model_ft.to(device).eval()

lrp = LRP(model_ft)

image_tensor = torch.rand(1, 3, 224, 224)

attributions = lrp.attribute(image_tensor.to(device), target=1)
```

This fails with:

```
RuntimeError: hook 'backward_hook_activation' has changed the size of value
```

Expected behavior

No error should be thrown and attributions should be calculated

Environment

Describe the environment used for Captum


 - PyTorch: 1.9.0+cu102
 - Captum: 0.4.0
 - torchvision: 0.10.0+cu102
 - OS (e.g., Linux): Google Colab
 - How you installed Captum / PyTorch (`conda`, `pip`, source): Google Colab
 - Build command you used (if compiling from source):
 - Python version: 3.7.12
 - CUDA/cuDNN version: 11.1.105
 - GPU models and configuration:
 - Any other relevant information:

Issue Analytics

  • State: open
  • Created: 2 years ago
  • Comments: 7 (4 by maintainers)

Top GitHub Comments

1 reaction
NarineK commented, Sep 28, 2021

@filiprejmus, could it be that some of the linear activations are being reused in GoogLeNet? If they are being reused, the hooks don’t work properly. Perhaps you can change the model so that the linear activations, or the activation blocks containing them, aren’t reused. PyTorch hooks don’t tell us in which order they are executed, and if a module is reused we can’t tell exactly where in the execution graph it is being called from.
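
If module reuse is indeed the culprit, a minimal sketch of the kind of rewrite NarineK suggests might look like the following. It assumes the shared activations are `nn.ReLU` modules; `deduplicate_relus` is a hypothetical helper, not part of Captum, and in torchvision’s GoogLeNet the activations may live inside larger blocks that would need the same treatment. Activations applied functionally (e.g. `F.relu`) are not covered by this approach.

```python
import copy

import torch.nn as nn


def deduplicate_relus(model: nn.Module) -> nn.Module:
    # Hypothetical helper: give every call site of a shared nn.ReLU its own
    # instance, so LRP's per-module hooks see one distinct module per use.
    seen = set()
    for parent in model.modules():
        for name, child in list(parent.named_children()):
            if isinstance(child, nn.ReLU):
                if id(child) in seen:
                    # This ReLU instance is already registered elsewhere;
                    # replace this reference with a fresh copy.
                    setattr(parent, name, copy.deepcopy(child))
                else:
                    seen.add(id(child))
    return model
```

Re-initializing `LRP(model_ft)` on the deduplicated model would at least rule out activation reuse as the cause.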

0 reactions
martynanna commented, Jun 10, 2022

Hello @filiprejmus, were you able to solve the problem? I’m getting the same error with a custom, altered VGG16 net.
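
For anyone hitting this with their own architecture, one quick way to check whether NarineK’s module-reuse hypothesis applies is to walk the model and look for module instances reachable under more than one name. This is only a diagnostic sketch; `find_reused_modules` is a hypothetical helper, not a Captum or torchvision API, and it will not detect functional calls such as `F.relu`.

```python
from collections import defaultdict

import torch.nn as nn
from torchvision import models


def find_reused_modules(model: nn.Module):
    # Collect every attribute path at which each module instance appears;
    # an instance with more than one path is shared between call sites.
    paths = defaultdict(list)

    def walk(module, prefix):
        for name, child in module.named_children():
            path = f"{prefix}.{name}" if prefix else name
            paths[id(child)].append(path)
            walk(child, path)

    walk(model, "")
    return [p for p in paths.values() if len(p) > 1]


model = models.vgg16(pretrained=True).eval()  # or the custom model in question
for shared in find_reused_modules(model):
    print("shared module at:", shared)
```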
