
Multiple differently shaped inputs are crashing Integrated Gradients (embedding layers + other layers)

See original GitHub issue

❓ Questions and Help

Hello. I have developed a model with three input types: image, categorical data, and numerical data. For the image data I used ResNet50; for the other two I developed my own network.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class MulticlassClassification(nn.Module):
    def __init__(self, cat_size, num_col, output_size, layers, p=0.4):
        super(MulticlassClassification, self).__init__()

        # IMAGE: frozen ResNet50 backbone with a new classification head
        self.cnn = models.resnet50(pretrained=True)
        for param in self.cnn.parameters():
            param.requires_grad = False
        n_inputs = self.cnn.fc.in_features
        self.cnn.fc = nn.Sequential(
            nn.Linear(n_inputs, 250),
            nn.ReLU(),
            nn.Dropout(p),
            nn.Linear(250, output_size),
            nn.LogSoftmax(dim=1)
        )

        # TABULAR: one embedding per categorical column plus batch-normed numericals
        self.all_embeddings = nn.ModuleList(
            [nn.Embedding(categories, size) for categories, size in cat_size]
        )
        self.embedding_dropout = nn.Dropout(p)
        self.batch_norm_num = nn.BatchNorm1d(num_col)

        all_layers = []
        num_cat_col = sum(e.embedding_dim for e in self.all_embeddings)
        input_size = num_cat_col + num_col

        for i in layers:
            all_layers.append(nn.Linear(input_size, i))
            all_layers.append(nn.ReLU(inplace=True))
            all_layers.append(nn.BatchNorm1d(i))
            all_layers.append(nn.Dropout(p))
            input_size = i

        all_layers.append(nn.Linear(layers[-1], output_size))

        self.layers = nn.Sequential(*all_layers)

        # COMBINE: fuse the tabular and image heads into the final prediction
        self.combine_fc = nn.Linear(output_size * 2, output_size)

    def forward(self, image, x_categorical, x_numerical):
        # tabular branch: look up one embedding per categorical column
        embeddings = []
        for i, embedding in enumerate(self.all_embeddings):
            print(x_categorical[:, i])
            embeddings.append(embedding(x_categorical[:, i]))
        x = torch.cat(embeddings, 1)
        x = self.embedding_dropout(x)

        x_numerical = self.batch_norm_num(x_numerical)
        x = torch.cat([x, x_numerical], 1)
        x = self.layers(x)

        # image branch
        x2 = self.cnn(image)

        # combine both branches
        x3 = torch.cat([x, x2], 1)
        x3 = F.relu(self.combine_fc(x3))

        return x3

Now, after successful training, I would like to compute the integrated gradients:

from captum.attr import IntegratedGradients

ig = IntegratedGradients(model)  # model is the trained MulticlassClassification instance

testiter = iter(testloader)
img, stack_cat, stack_num, target = next(testiter)
attributions_ig = ig.attribute(
    inputs=(img.cuda(), stack_cat.cuda(), stack_num.cuda()),
    target=target.cuda(),
)

And here I get an error: RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.cuda.FloatTensor instead (while checking arguments for embedding). Using the print in my forward method, I figured out that Captum injects a wrongly shaped tensor into my x_categorical input. It seems like Captum only sees the first input tensor and uses its shape for all other inputs. Is this correct?
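For context, this dtype error falls directly out of how Integrated Gradients builds its interpolation path. The sketch below is illustrative only and is not Captum's actual internals: evaluating baseline + alpha * (input - baseline) over several steps produces float tensors, so integer category indices stop being valid nn.Embedding inputs.

import torch

# Illustrative only: IG evaluates baseline + alpha * (input - baseline)
# at several alphas, and the result is float even when the original
# tensor held Long category indices.
n_steps = 5
x_cat = torch.randint(0, 10, (4, 3))          # Long indices, batch of 4
baseline = torch.zeros_like(x_cat)
alphas = torch.linspace(0, 1, n_steps).view(-1, 1, 1)
scaled = baseline.float() + alphas * (x_cat - baseline).float()
print(scaled.dtype)                           # torch.float32 -> rejected by nn.Embedding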

I found Issue #439 and tried all of the suggested solutions without success. When I used an interpretable embedding for the categorical data, I got this error: IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
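One avenue that is often used for models whose forward starts with nn.Embedding lookups is LayerIntegratedGradients, which attributes with respect to the embedding outputs so the integer category indices are never interpolated. The sketch below is not a confirmed fix for this issue; it reuses model, img, stack_cat, stack_num and target from the snippets above and assumes a Captum version whose LayerIntegratedGradients accepts a list of layers (older versions would need one embedding layer at a time).

from captum.attr import LayerIntegratedGradients

# Attribute with respect to the outputs of the categorical embedding layers,
# so Captum never has to interpolate the raw Long indices.
lig = LayerIntegratedGradients(model, list(model.all_embeddings))

# Returns one attribution tensor per embedding layer; the image and numerical
# inputs still flow through the forward pass but are not attributed here.
embedding_attributions = lig.attribute(
    inputs=(img.cuda(), stack_cat.cuda(), stack_num.cuda()),
    target=target.cuda(),
)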

I would be very grateful for any tips and advice on how to combine all three inputs and solve my problem.

Issue Analytics

  • State: open
  • Created: a year ago
  • Comments: 14 (8 by maintainers)

Top GitHub Comments

1 reaction
code-ksu commented, May 15, 2022

Dear @NarineK, I've removed the unnecessary code lines. I hope everything works well this time. But I still think you may need to copy the files to your own drive; I don't know if my drive authorization works for you. Best regards.

1 reaction
NarineK commented, May 5, 2022

@code-ksu, Integrated Gradients assumes that the first dimension of every tensor passed through the inputs argument is the same and corresponds to the number of examples (the batch size). This is because Integrated Gradients has to scale every input along that batch dimension, based on the n_steps argument. We usually get this type of error when the forward method does not treat the first dimension as the batch size. If you can share a Colab notebook, I can debug it and tell you exactly where the issue is. This is a high-level explanation.
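As a quick illustration of that assumption (a sketch reusing the variable names from the question, not part of the original comment): every tensor in the inputs tuple should report the same size along dimension 0 before attribute is called.

inputs = (img.cuda(), stack_cat.cuda(), stack_num.cuda())

# All inputs must agree on the batch dimension, because Integrated Gradients
# repeats and scales each of them along that dimension for its n_steps.
batch_sizes = {t.shape[0] for t in inputs}
assert len(batch_sizes) == 1, f"inputs disagree on batch size: {batch_sizes}"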

