
AssertionError: Cannot choose target column with output shape torch.Size([100])

See original GitHub issue

I don’t understand the error. Doesn’t “output” here mean the model’s output? If so, my predictor’s output shape is (batch_size, 1). Here is the complete error trace:

----> 2 interpret_sentence(model, doc)
      3 # for tv_batch in source_test_dataloader:
      4 #     xtv_batch=tv_batch[0]
      5 #     ytv_batch=tv_batch[1]

<ipython-input-46-789ee45a76d0> in interpret_sentence(model, doc)
     42     reference_indices=reference_indices.to(device)
     43     print(reference_indices.shape)
---> 44     attributions_ig, delta = lig.attribute(b_input_ids, reference_indices,target=2 , \
     45                                            n_steps=100, return_convergence_delta=True)
     46     print(attributions_ig.shape)

/opt/conda/lib/python3.8/site-packages/captum/attr/_core/layer/layer_integrated_gradients.py in attribute(self, inputs, baselines, target, additional_forward_args, n_steps, method, internal_batch_size, return_convergence_delta, attribute_to_layer_input)
    350             else inps
    351         )
--> 352         attributions = self.ig.attribute(
    353             inputs_layer,
    354             baselines=baselines_layer,

/opt/conda/lib/python3.8/site-packages/captum/attr/_core/integrated_gradients.py in attribute(self, inputs, baselines, target, additional_forward_args, n_steps, method, internal_batch_size, return_convergence_delta)
    276 
    277         # grads: dim -> (bsz * #steps x inputs[0].shape[1:], ...)
--> 278         grads = _batched_operator(
    279             self.gradient_func,
    280             scaled_features_tpl,

/opt/conda/lib/python3.8/site-packages/captum/attr/_utils/batching.py in _batched_operator(operator, inputs, additional_forward_args, target_ind, internal_batch_size, **kwargs)
    154     of the results of each batch.
    155     """
--> 156     all_outputs = [
    157         operator(
    158             inputs=input,

/opt/conda/lib/python3.8/site-packages/captum/attr/_utils/batching.py in <listcomp>(.0)
    155     """
    156     all_outputs = [
--> 157         operator(
    158             inputs=input,
    159             additional_forward_args=additional,

/opt/conda/lib/python3.8/site-packages/captum/attr/_core/layer/layer_integrated_gradients.py in gradient_func(forward_fn, inputs, target_ind, additional_forward_args)
    331                     hook = self.layer.register_forward_hook(layer_forward_hook)
    332 
--> 333                 output = _run_forward(
    334                     self.forward_func, tuple(), target_ind, additional_forward_args
    335                 )

/opt/conda/lib/python3.8/site-packages/captum/attr/_utils/common.py in _run_forward(forward_func, inputs, target, additional_forward_args)
    503         else inputs
    504     )
--> 505     return _select_targets(output, target)
    506 
    507 

/opt/conda/lib/python3.8/site-packages/captum/attr/_utils/common.py in _select_targets(output, target)
    452     dims = len(output.shape)
    453     if isinstance(target, (int, tuple)):
--> 454         return _verify_select_column(output, target)
    455     elif isinstance(target, torch.Tensor):
    456         if torch.numel(target) == 1 and isinstance(target.item(), int):

/opt/conda/lib/python3.8/site-packages/captum/attr/_utils/common.py in _verify_select_column(output, target)
    439 ) -> Tensor:
    440     target = cast(Tuple[int, ...], (target,) if isinstance(target, int) else target)
--> 441     assert (
    442         len(target) <= len(output.shape) - 1
    443     ), "Cannot choose target column with output shape %r." % (output.shape,)

AssertionError: Cannot choose target column with output shape torch.Size([100]). 

Edit: I just noticed that this 100 comes from “n_steps”. If I change n_steps=200, the error shows 200. Other than that, I have no idea why this error happens or how to solve it. Can anybody help?
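For context on where the [100] comes from: LayerIntegratedGradients interpolates between the baseline and the input in n_steps increments and runs the forward function on all of them at once, so with batch_size=1 a forward that returns a 1-D tensor of shape [batch_size] produces an output of shape [n_steps] inside attribute(), and an integer target then has no column to select. Here is a minimal sketch that reproduces the assertion (the model, vocabulary size, and sequence length below are hypothetical, not taken from this issue):

import torch
import torch.nn as nn
from captum.attr import LayerIntegratedGradients

class ToyRegressor(nn.Module):
    def __init__(self, vocab_size=50, embed_dim=8):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.linear = nn.Linear(embed_dim, 1)

    def forward(self, input_ids):
        x = self.embedding(input_ids).mean(dim=1)  # [batch, embed_dim]
        return self.linear(x).squeeze(-1)          # [batch] <- 1-D output

model = ToyRegressor()
b_input_ids = torch.randint(0, 50, (1, 12))        # batch_size = 1
reference_indices = torch.zeros_like(b_input_ids)  # all-zeros baseline

lig = LayerIntegratedGradients(model, model.embedding)
# The forward runs on 100 interpolated inputs, so its output has shape
# [100]; target=2 then fails the len(target) <= len(output.shape) - 1 check.
attributions_ig, delta = lig.attribute(
    b_input_ids, reference_indices, target=2,
    n_steps=100, return_convergence_delta=True,
)
# AssertionError: Cannot choose target column with output shape torch.Size([100]).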

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 1
  • Comments: 5 (4 by maintainers)

Top GitHub Comments

1 reaction
mainulquraishi commented, Aug 24, 2020

I am doing regression. For regression, shouldn’t the output shape be [#examples]?

1 reaction
NarineK commented, Aug 23, 2020

@mainulquraishi, the output of the model should be 2-dimensional for an integer-valued target. Currently you have a one-dimensional output. We always assume that the first dimension is the number of examples. You want to make sure that the forward function returns an output of shape [#examples x #classes].
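In case it helps future readers, here is a sketch of both ways to make the shapes line up for regression, reusing the hypothetical model and inputs from the reproduction above. Either return a 2-D output of shape [#examples x 1] and pass target=0, or return one scalar per example and omit target entirely; per the Captum layer-attribution docs, no target is needed when the network returns a scalar value per example.

# Option 1: make the output 2-D and select its only column.
def forward_2d(input_ids):
    return model(input_ids).unsqueeze(-1)  # [batch] -> [batch, 1]

lig = LayerIntegratedGradients(forward_2d, model.embedding)
attributions_ig, delta = lig.attribute(
    b_input_ids, reference_indices, target=0,
    n_steps=100, return_convergence_delta=True,
)

# Option 2: keep the 1-D [#examples] output and pass no target at all;
# the single scalar each example produces is what gets attributed.
lig = LayerIntegratedGradients(model, model.embedding)
attributions_ig, delta = lig.attribute(
    b_input_ids, reference_indices,
    n_steps=100, return_convergence_delta=True,
)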

Read more comments on GitHub

