
LRP returns negative values for a simple CNN model.


Hi all,

Thanks for adding support for Layer-wise Relevance Propagation (LRP). I have a simple CNN model with several convolutional and average-pooling layers. From my understanding, LRP should always return positive values, and the values summed over all pixels should equal 1. But I'm getting a negative sum and also lots of negative points. What does a negative value mean? How should I fix it?

Thanks!

import torch.nn as nn
from captum.attr import LRP
from tqdm import tqdm

class CustomModel(nn.Module):
    """Wraps the trained model to reshape the flat input back into separate variables."""

    def __init__(self, model) -> None:
        super().__init__()
        self.model = model

    def forward(self, psl):
        # Split the last dimension into n_vars variables, then drop the last
        # slice along dimension 1 before passing through the wrapped model.
        psl = psl.reshape(psl.shape[0], psl.shape[1], psl.shape[2], psl.shape[3], n_vars)
        psl = psl[:, :-1]
        return self.model(psl)

custom_model = CustomModel(model)

lrp = LRP(custom_model)

lrps = []
for ind in tqdm(range(len(test_ds))):
    # Attribute each test sample and move the result to CPU as a NumPy array.
    sample_lrp = lrp.attribute(
        test_ds[ind][0].reshape(1, 2, 192, 288, 3).to(device)
    ).to('cpu').data.numpy()
    lrps.append(sample_lrp)

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 9 (4 by maintainers)

Top GitHub Comments

1 reaction
nanohanno commented, Sep 30, 2021

That seems to be true; the documentation appears to be outdated: https://github.com/pytorch/captum/blob/e1575c52f39e403f71cfda412b758df34d857914/captum/attr/_core/lrp.py#L151-L154 Sorry for not being very responsive at the moment. 😮‍💨
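A side note, not from the original thread: a minimal NumPy sketch of the LRP-ε rule on a single toy linear layer illustrates why individual relevances can be negative even though their total is conserved. The function, weights, and inputs below are made up for illustration and are not the Captum internals.

```python
import numpy as np

# LRP-epsilon for one linear layer: R_j = a_j * sum_k (w_jk * R_k / (z_k + eps*sign(z_k)))
def lrp_epsilon(a, w, R_out, eps=1e-6):
    z = a @ w                             # forward pre-activations, shape (k,)
    s = R_out / (z + eps * np.sign(z))    # stabilized relevance-per-unit-activation
    return a * (w @ s)                    # relevance redistributed to the inputs

# Toy layer with mixed-sign inputs and weights.
a = np.array([1.0, -0.5, 2.0])
w = np.array([[ 0.8, -1.0],
              [ 0.3,  0.6],
              [-0.5,  0.4]])
R_out = np.array([1.0, 0.0])              # all relevance assigned to output unit 0

R_in = lrp_epsilon(a, w, R_out)
print(R_in)        # individual relevances have mixed signs
print(R_in.sum())  # approximately 1.0: the total is conserved, not the signs
```

Because products like `a_j * w_jk` can be negative, per-pixel relevance is negative whenever an input pushed the output score down; only the sum is (approximately) conserved.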

1 reaction
ShihengDuan commented, Sep 17, 2021

Hi @bilalsal,

Thanks for your reply. I tried the following, but it still gives me negative attributions.

lrp = LRP(model)
psl = test_ds[ind][0].reshape(1, 2, 192, 288, 3).to(device)
psl = psl.reshape(psl.shape[0], psl.shape[1], psl.shape[2], psl.shape[3], n_vars)
psl = psl[:, :-1]
sample_lrp = lrp.attribute(psl).to('cpu').data.numpy()
print(sample_lrp.shape)
print(np.min(sample_lrp))
print(np.sum(sample_lrp))

Both the min and sum return negative values.
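A side note, not from the original thread: since negative attributions are expected behavior, a non-negative, sum-to-one heatmap can still be produced for visualization by renormalizing the attribution magnitudes. This is purely cosmetic post-processing (it discards sign information); the array below is a stand-in, not real model output.

```python
import numpy as np

# Stand-in attribution map with mixed-sign entries.
sample_lrp = np.array([[0.5, -0.2],
                       [-0.3, 0.4]])

heat = np.abs(sample_lrp)   # drop signs: magnitude of contribution only
heat = heat / heat.sum()    # renormalize so the map sums to 1
print(heat.min() >= 0, np.isclose(heat.sum(), 1.0))  # True True
```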


Top Results From Across the Web

  • Explain and improve: LRP-inference fine-tuning for image ...
    Fig. 1 shows an example of the explanation results of attention-guided image captioning models. Taking LRP as an example, both positive and negative...
  • LRP-Inference Fine-Tuning for Image Captioning Models - arXiv
    Abstract: This paper analyzes the predictions of image captioning models with attention mechanisms beyond visualizing the attention itself.
  • Can a neural network work with negative and zero inputs?
    So negative values are not an issue; it is entirely possible that negative values will be transformed to positive values.
  • Negative relevance in regression? #13 - GitHub
    In a nutshell, LRP generalizes the interpretability which is inherent in simple linear models to deeper, non-linear models.
  • LRP Toolbox for Artificial Neural Networks 1.2.0 – Manual
    ...of decomposing the decision function f(·) of a given neural network model w.r.t. an input point x in order to compute relevance...
