Use GradCAM with dataloaders
Hi,
Thank you for this very nice work.
I’ve been trying to encapsulate Grad-CAM into a single wrapper that can be used like any other model for prediction from dataloaders.
Here is what I did:
import numpy as np
import torch  # needed for torch.nn.Module
from tqdm import tqdm

from grad_cam import (GradCAM,
                      BackPropagation)


class GradCaMExplainer(torch.nn.Module):
    """
    Wraps Grad-CAM as a torch module so it can be called like any other model.
    """

    def __init__(self, model, target_layer="_conv_head", topk=1):
        super(GradCaMExplainer, self).__init__()
        self.back_propagator = BackPropagation(model=model)
        self.grad_cam = GradCAM(model=model)
        self.topk = topk
        self.target_layer = target_layer

    def forward(self, x):
        # Forward pass: class probabilities and ids, sorted by confidence
        probs, ids = self.back_propagator.forward(x)  # sorted
        self.back_propagator.remove_hook()
        _ = self.grad_cam.forward(x)
        self.grad_cam.remove_hook()
        for i in range(self.topk):
            # Grad-CAM: backprop the i-th most probable class, then build the heatmap
            self.grad_cam.backward(ids=ids[:, [i]])
            regions = self.grad_cam.generate(target_layer=self.target_layer)
        return regions


def gradcam_explain(grad_explainer, dataloader, with_target=False, device='cuda'):
    """
    Computes Grad-CAM regions for every batch of a dataloader
    and stacks them into a single numpy array.
    """
    res_region = []
    for batch in tqdm(dataloader):
        if with_target:
            inputs, targets = batch
        else:
            inputs = batch
        inputs = inputs.to(device)
        regions = grad_explainer(inputs)
        res_region.append(regions.to("cpu").numpy())
    return np.vstack(res_region)
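For completeness, here is how I call it. The model is only an example on my end (the default _conv_head target layer comes from EfficientNet), and the random tensors just mirror my 119-example, batch-size-20 setup:

```python
import torch
from efficientnet_pytorch import EfficientNet  # assumed model; it exposes a _conv_head layer

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = EfficientNet.from_pretrained('efficientnet-b0').to(device).eval()

# Dummy data shaped like my real setup: 119 examples, batch size 20
dataset = torch.utils.data.TensorDataset(torch.randn(119, 3, 224, 224),
                                         torch.zeros(119, dtype=torch.long))
dataloader = torch.utils.data.DataLoader(dataset, batch_size=20, shuffle=False)

explainer = GradCaMExplainer(model, target_layer="_conv_head", topk=1)
heatmaps = gradcam_explain(explainer, dataloader, with_target=True, device=device)
print(heatmaps.shape)  # I would expect (119, 1, H, W)
```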
But something strange is happening: say I have 119 examples in my dataloader and a batch size of 20, so the last batch only has 19 examples. Yet the regions tensor returned by grad_explainer for that batch has 20 examples.
I suspect this might be because some sizes are set only once, at init time, so the output always has the size of the first batch, but I could not confirm this by looking carefully at the code. Am I doing something wrong? Could you please help me with this code?
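A minimal check that shows the mismatch on the last batch (using the example model and dataloader above):

```python
for batch_idx, (inputs, _) in enumerate(dataloader):
    regions = explainer(inputs.to(device))
    # Fails on the last batch: 19 inputs but 20 Grad-CAM maps come back
    assert regions.shape[0] == inputs.shape[0], (
        f"batch {batch_idx}: {inputs.shape[0]} inputs, "
        f"{regions.shape[0]} Grad-CAM maps")
```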
Thanks a lot!
Issue Analytics
- Created 3 years ago
- Comments: 7 (3 by maintainers)
Read more >Top Related Medium Post
No results found
Top Related StackOverflow Question
No results found
Troubleshoot Live Code
Lightrun enables developers to add logs, metrics and snapshots to live code - no restarts or redeploys required.
Start FreeTop Related Reddit Thread
No results found
Top Related Hackernoon Post
No results found
Top Related Tweet
No results found
Top Related Dev.to Post
No results found
Top Related Hashnode Post
No results found
Top GitHub Comments
Thanks very much!
I’ve seen the code at the link below; I think what got me confused is that the probs and ids that are used everywhere are defined only at line 160 by
probs, ids = bp.forward(images)  # sorted
so I thought you had to use the back propagator. Maybe you could add some comments to make this clearer.
Anyway, thanks very much again! Your help was much appreciated!
Best
Each wrapper module can be used independently. They have the same pipeline: forward, backward, and generate (shortly explained here).
Please call remove_hook, especially when destructing the GuidedBackPropagation and Deconvnet classes. They modify the given model instance with hook functions that filter the ReLU gradients at runtime, which is not required for the other algorithms. With remove_hook, you can restore the model.
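Following that explanation, a per-batch loop that uses GradCAM on its own and keeps its hooks registered until the very end would look roughly like this (a sketch based on the wrappers in this repo's grad_cam.py; model and dataloader as in the snippets above):

```python
from grad_cam import GradCAM

gcam = GradCAM(model=model)  # hooks are registered once, here
for inputs, _ in dataloader:
    # GradCAM.forward also returns sorted probs/ids, so BackPropagation is not needed
    probs, ids = gcam.forward(inputs.to(device))
    gcam.backward(ids=ids[:, [0]])  # backprop the top-1 class per example
    regions = gcam.generate(target_layer="_conv_head")
gcam.remove_hook()  # restore the model only once, when done
```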