LRP throws RuntimeError: "hook 'backward_hook_activation' has changed the size of value"
🐛 Bug
I am trying to use LRP on a GoogLeNet with a modified fc layer. Using the original GoogLeNet from the PyTorch models library (torchvision) fails as well.
To Reproduce
Steps to reproduce the behavior:
- Load the GoogLeNet model from the PyTorch model library (torchvision) and put it in eval mode
- Initialize LRP as described in the API reference
- Use a real or dummy tensor to perform LRP on
- Run the attribute method with said tensor and an arbitrary target
import torch
import torch.nn as nn
import torchvision.models as models
from captum.attr import LRP

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model_ft = models.googlenet(pretrained=True, transform_input=False)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 20)  # replace the classifier head with a 20-class fc layer
model_ft.to(device).eval()

lrp = LRP(model_ft)
image_tensor = torch.rand(1, 3, 224, 224)
attributions = lrp.attribute(image_tensor.to(device), target=1)
RuntimeError: hook 'backward_hook_activation' has changed the size of value
Expected behavior
No error should be thrown and the attributions should be calculated.
Environment
Describe the environment used for Captum
- PyTorch: 1.9.0+cu102
- Captum: 0.4.0
- torchvision: 0.10.0+cu102
- OS (e.g., Linux): Google Colab
- How you installed Captum / PyTorch (`conda`, `pip`, source): Google Colab
- Build command you used (if compiling from source):
- Python version: 3.7.12
- CUDA/cuDNN version: 11.1.105
- GPU models and configuration:
- Any other relevant information:

@filiprejmus, could it be that some of the linear activations are being reused in GoogLeNet? If they are reused, the hooks don't work properly. Perhaps you can change the model so that the linear activations, or the activation blocks containing them, aren't reused. PyTorch hooks don't tell us in which order they are executed, so if a module is reused we can't tell exactly where in the execution graph it is being called from.
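One way to test this reuse hypothesis is to replace every activation module that is registered in more than one place with a fresh copy, so that each call site gets its own instance and its own hook. The sketch below is a hypothetical helper (the name dedupe_shared_relus is mine, not part of Captum or torchvision), limited to nn.ReLU modules registered under more than one parent; it will not catch a module that is called twice inside a single forward(), nor activations applied functionally via F.relu:

import torch.nn as nn

def dedupe_shared_relus(model: nn.Module) -> int:
    # Walk all parent modules and give every re-registered nn.ReLU
    # instance a fresh, unshared copy. Returns the number of replacements.
    seen = set()
    replaced = 0
    for parent in model.modules():
        for name, child in parent.named_children():
            if isinstance(child, nn.ReLU):
                if id(child) in seen:
                    # This exact module object is also registered elsewhere;
                    # give this call site its own instance.
                    setattr(parent, name, nn.ReLU(inplace=child.inplace))
                    replaced += 1
                else:
                    seen.add(id(child))
    return replaced

# Usage, assuming the repro code above:
# dedupe_shared_relus(model_ft)
# attributions = LRP(model_ft).attribute(image_tensor.to(device), target=1)

If this returns 0 or the error persists, the sharing (if any) happens inside a single forward() or through functional calls, and the model code itself would need to be changed as suggested above.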
Hello @filiprejmus, were you able to solve the problem? I'm getting the same error with a custom, modified VGG16 net.