
BART + ONNX: torch.jit error "iterabletree cannot be used as a value"

See original GitHub issue

Environment info

  • onnx version: 1.10.2
  • onnxruntime version: 1.9.0
  • transformers version: 4.13.0.dev0
  • Platform: Ubuntu 18.04
  • Python version: 3.8
  • PyTorch version (GPU?): torch 1.8.0 (GPU)
  • Using GPU in script?: yes
  • Using distributed or parallel set-up in script?: no

Who can help

@fatcat-z @mfuntowicz @sgugger, @patil-suraj

Information

Model I am using: BartForConditionalGeneration

The problem arises when running the BART ONNX export script below.

To reproduce

Steps to reproduce the behavior:

python3.8 run_onnx_exporter.py --model_name_or_path facebook/bart-base

2021-11-22 17:34:47 | INFO | __main__ |  [run_onnx_exporter.py:224] Exporting model to ONNX
/home/pverzun/.local/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:217: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
/home/pverzun/.local/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:223: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if attention_mask.size() != (bsz, 1, tgt_len, src_len):
/home/pverzun/.local/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:254: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
/home/pverzun/.local/lib/python3.8/site-packages/transformers/models/bart/modeling_bart.py:888: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if input_shape[-1] > 1:
/home/pverzun/.local/lib/python3.8/site-packages/torch/jit/_trace.py:934: TracerWarning: Encountering a list at the output of the tracer might cause the trace to be incorrect, this is only valid if the container structure does not change based on the module's inputs. Consider using a constant container instead (e.g. for `list`, use a `tuple` instead. for `dict`, use a `NamedTuple` instead). If you absolutely need this and know the side effects, pass strict=False to trace() to allow this behavior.
  module._c._create_method_from_trace(
/home/pverzun/.local/lib/python3.8/site-packages/torch/jit/_trace.py:152: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
  if a.grad is not None:
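All of the TracerWarnings above come from the same pattern: a Python `if` whose condition is derived from a tensor. The tracer cannot record that data flow, so it freezes whichever branch the example input takes. A minimal sketch of the effect (toy function, not the BART code):

```python
import torch

def branchy(x: torch.Tensor) -> torch.Tensor:
    # Same shape of code as the modeling_bart.py checks: a Python bool
    # computed from a tensor. Tracing bakes the branch taken with the
    # example input into the graph as a constant.
    if bool(x.sum() > 0):
        return x * 2
    return x - 1

traced = torch.jit.trace(branchy, torch.ones(3))  # emits a TracerWarning
# The `* 2` branch was frozen in, so even a negative input takes it:
print(traced(-torch.ones(3)).tolist())  # [-2.0, -2.0, -2.0]
```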

Traceback (most recent call last):
  File "run_onnx_exporter.py", line 229, in <module>
    main()
  File "run_onnx_exporter.py", line 225, in main
    export_and_validate_model(model, tokenizer, output_name, num_beams, max_length)
  File "run_onnx_exporter.py", line 116, in export_and_validate_model
    bart_script_model = torch.jit.script(BARTBeamSearchGenerator(model))
  File "/home/pverzun/.local/lib/python3.8/site-packages/torch/jit/_script.py", line 942, in script
    return torch.jit._recursive.create_script_module(
  File "/home/pverzun/.local/lib/python3.8/site-packages/torch/jit/_recursive.py", line 391, in create_script_module
    return create_script_module_impl(nn_module, concrete_type, stubs_fn)
  File "/home/pverzun/.local/lib/python3.8/site-packages/torch/jit/_recursive.py", line 448, in create_script_module_impl
    script_module = torch.jit.RecursiveScriptModule._construct(cpp_module, init_fn)
  File "/home/pverzun/.local/lib/python3.8/site-packages/torch/jit/_script.py", line 391, in _construct
    init_fn(script_module)
  File "/home/pverzun/.local/lib/python3.8/site-packages/torch/jit/_recursive.py", line 428, in init_fn
    scripted = create_script_module_impl(orig_value, sub_concrete_type, stubs_fn)
  File "/home/pverzun/.local/lib/python3.8/site-packages/torch/jit/_recursive.py", line 452, in create_script_module_impl
    create_methods_and_properties_from_stubs(concrete_type, method_stubs, property_stubs)
  File "/home/pverzun/.local/lib/python3.8/site-packages/torch/jit/_recursive.py", line 335, in create_methods_and_properties_from_stubs
    concrete_type._create_methods_and_properties(property_defs, property_rcbs, method_defs, method_rcbs, method_defaults)
RuntimeError:
iterabletree cannot be used as a value:
  File "/home/pverzun/.local/lib/python3.8/site-packages/transformers/configuration_utils.py", line 387
        if not hasattr(self, "id2label") or self.id2label is None or len(self.id2label) != num_labels:
            self.id2label = {i: f"LABEL_{i}" for i in range(num_labels)}
            self.label2id = dict(zip(self.id2label.values(), self.id2label.keys()))
                                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
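The RuntimeError itself comes from `torch.jit.script`, which compiles the code reachable from the wrapped module and rejects any Python construct it cannot represent — here, the iterator-based dict construction in `configuration_utils.py`. Scripting succeeds when everything reachable is scriptable; a minimal sketch with a hypothetical stand-in module (not the real `BARTBeamSearchGenerator`):

```python
import torch

class TinyWrapper(torch.nn.Module):
    """Hypothetical stand-in for BARTBeamSearchGenerator."""

    def __init__(self):
        super().__init__()
        self.proj = torch.nn.Linear(8, 8)  # plain submodules script fine

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.proj(x))

# The same call that fails above for BART; it succeeds here because
# forward() reaches no unscriptable Python (e.g. iterator-built dicts).
scripted = torch.jit.script(TinyWrapper())
y = scripted(torch.zeros(2, 8))
print(tuple(y.shape))  # (2, 8)
```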

Expected behavior

BART is converted to onnx with no issues

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 11 (7 by maintainers)

Top GitHub Comments

1 reaction
polly-morphism commented, Dec 8, 2021

> Hey @polly-morphism @diruoshui, given the PyTorch version fix in #14310 can we now close this issue?

Yes, thank you!

1 reaction
fatcat-z commented, Nov 27, 2021

This was designed as an example showing how to export BART + Beam Search to ONNX successfully. It doesn’t cover all scenarios. Your PR to make it better is appreciated. Thanks!
