
AssertionError: Elements to be reduced can only beeither Tensors or tuples containing Tensors.

See original GitHub issue

I have been trying to use Captum to interpret my low-resource neural machine translation model (specifically, XLM).

I am getting the following error when calling the IntegratedGradients.attribute function:

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-42-3042e38801c6> in <module>
----> 1 interpret_sentence(input_text, ground_truth)

<ipython-input-40-6d6b70739f3f> in interpret_sentence(src, trg)
     35         print(langs.shape)
     36         print(langs)
---> 37         attribution_ig, delta =  ig.attribute(src_embedding, baselines=202, additional_forward_args=(langs, idx, False), target=max_idx, n_steps=50, return_convergence_delta=True)
     38         attribution_igs.append(attribution_ig)
     39 

~/anaconda3/lib/python3.7/site-packages/captum/attr/_core/integrated_gradients.py in attribute(self, inputs, baselines, target, additional_forward_args, n_steps, method, internal_batch_size, return_convergence_delta)
    282             internal_batch_size=internal_batch_size,
    283             forward_fn=self.forward_func,
--> 284             target_ind=expanded_target,
    285         )
    286 

~/anaconda3/lib/python3.7/site-packages/captum/attr/_utils/batching.py in _batched_operator(operator, inputs, additional_forward_args, target_ind, internal_batch_size, **kwargs)
    166     ]
--> 167     return _reduce_list(all_outputs)

~/anaconda3/lib/python3.7/site-packages/captum/attr/_utils/batching.py in _reduce_list(val_list, red_func)
     65         for i in range(len(val_list[0])):
     66             final_out.append(
---> 67                 _reduce_list([val_elem[i] for val_elem in val_list], red_func)
     68             )
     69     else:

~/anaconda3/lib/python3.7/site-packages/captum/attr/_utils/batching.py in _reduce_list(val_list, red_func)
     69     else:
     70         raise AssertionError(
---> 71             "Elements to be reduced can only be"
     72             "either Tensors or tuples containing Tensors."
     73         )

AssertionError: Elements to be reduced can only beeither Tensors or tuples containing Tensors.
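
In the frames above, the assertion fires inside _reduce_list, which recursively flattens the batched outputs and accepts only Tensors or tuples of Tensors; a None slipping into that list falls through to the else branch. A minimal paraphrase of that failing path (not Captum's exact code; the torch.cat default reduction is an assumption):

import torch

def _reduce_list_sketch(val_list):
    # Tensors are concatenated, tuples are reduced element-wise; anything
    # else (including None) hits the assertion seen in the traceback.
    if isinstance(val_list[0], torch.Tensor):
        return torch.cat(val_list)
    elif isinstance(val_list[0], tuple):
        return tuple(
            _reduce_list_sketch([elem[i] for elem in val_list])
            for i in range(len(val_list[0]))
        )
    raise AssertionError(
        "Elements to be reduced can only be "
        "either Tensors or tuples containing Tensors."
    )

_reduce_list_sketch([(None,)])  # reproduces the AssertionError above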

I am using the following arguments:

ig.attribute(src_embedding, baselines=1, additional_forward_args=(langs, idx, False), target=max_idx, n_steps=50, return_convergence_delta=True)

I tried printing all_outputs; it shows [(None,)].

All my prediction and forward functions are working well, and I thoroughly tested them separately.

Please help.
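
For context before the answers below: an all_outputs of [(None,)] means the gradient with respect to the input came back as None, i.e. autograd found no path from the forward output back to src_embedding. A minimal, self-contained illustration (hypothetical tensors; not Captum's internals):

import torch

x = torch.randn(4, requires_grad=True)   # stand-in for the input embedding
w = torch.randn(4, requires_grad=True)

# x enters the computation only through .data, so autograd records no path
# from `out` back to x:
out = (x.data * w).sum()

grads = torch.autograd.grad(out, x, allow_unused=True)
print(grads)  # (None,) -- the same None seen inside all_outputs above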

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 11 (5 by maintainers)

Top GitHub Comments

4 reactions
vivekmig commented, Aug 20, 2020

Hi @RachitBansal, I looked into your code; it seems like the issue might be in the decoder, particularly in this line:

tensor = tensor[-1, :, :].data.type_as(src_enc)  # (bs, dim)

Accessing .data directly no longer maintains the autograd dependency, which is likely what causes the error. Removing the .data attribute and accessing the tensor directly should maintain the autograd compute graph appropriately. This method seems to be intended primarily for inference rather than back-propagation, so alternatively you may be able to use the same decoder method used in training (which appears to call forward directly rather than the generate method) to fix the issue.
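
To make the failure mode concrete, here is a minimal sketch of the difference (hypothetical shapes; not the XLM code):

import torch

src_enc = torch.randn(2, 8)
tensor = torch.randn(5, 2, 8, requires_grad=True)

detached = tensor[-1, :, :].data.type_as(src_enc)  # .data drops the grad_fn
tracked = tensor[-1, :, :].type_as(src_enc)        # keeps the autograd graph

print(detached.requires_grad)  # False -> gradients cannot flow back
print(tracked.requires_grad)   # True  -> gradients flow back to `tensor`

With the .data access removed, the gradient reaching src_embedding is a real Tensor rather than None, so _reduce_list no longer trips the assertion.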

1 reaction
RachitBansal commented, Aug 17, 2020

zip notebook

Can you please try now?

Also, the problem might be in the fwd function inside the TransformerModel class in the translation.XLM.XLM.src.model.transformer Python file.

Read more comments on GitHub >
