
Can't reproduce export to onnx with custom bert model

See original GitHub issue

šŸ› Bug

I'm trying to run the ONNX export on a custom BERT model, but during inference I get the following error. I'm sharing a Google Colab notebook with the minimum changes needed to reproduce the problem. All changes are marked with a # CHANGE comment. https://colab.research.google.com/drive/1eiqyQmvhwGih6IHrOg7MkLSc2q0zMHmH?usp=sharing

InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running Gather node. Name:'Gather_32' Status Message: indices element out of data bounds, idx=1 must be within the inclusive range [-1,0]

Information

Model I am using (Bert, XLNet …): Bert

Language I am using the model on (English, Chinese …): None, I'm using a custom BERT model, and for this bug report I'm using a random BERT model.

The problem arises when using: The official example notebook: https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb

To reproduce

Steps to reproduce the behavior:

Run the convert-to-ONNX script with a custom BERT model. I've made a copy of the official notebook with the minimum changes required to illustrate the problem here: https://colab.research.google.com/drive/1eiqyQmvhwGih6IHrOg7MkLSc2q0zMHmH?usp=sharing

---------------------------------------------------------------------------

InvalidArgument                           Traceback (most recent call last)

<ipython-input-12-1d032f1e9ad0> in <module>()
      9 
     10 # Run the model (None = get all the outputs)
---> 11 sequence, pooled = cpu_model.run(None, inputs_onnx)
     12 
     13 # Print information about outputs

/usr/local/lib/python3.6/dist-packages/onnxruntime/capi/session.py in run(self, output_names, input_feed, run_options)
    109             output_names = [output.name for output in self._outputs_meta]
    110         try:
--> 111             return self._sess.run(output_names, input_feed, run_options)
    112         except C.EPFail as err:
    113             if self._enable_fallback:

InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running Gather node. Name:'Gather_32' Status Message: indices element out of data bounds, idx=1 must be within the inclusive range [-1,0]
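The error itself is just ONNX Runtime's Gather op doing bounds checking: an inclusive valid range of [-1, 0] means the axis being indexed has size 1, and index 1 falls outside it. The same check is easy to reproduce with plain NumPy indexing (a hypothetical one-row lookup table, not the actual model weights):

```python
import numpy as np

# A size-1 lookup table along axis 0, analogous to an embedding matrix
# with a single row (valid indices -1..0, matching the error message).
table = np.zeros((1, 4))

indices = np.array([0, 1])  # index 1 is out of bounds for a size-1 axis
try:
    table[indices]  # NumPy raises the same kind of bounds error
except IndexError as err:
    print(err)  # index 1 is out of bounds for axis 0 with size 1
```

This is consistent with the diagnosis in the comments below: a tensor full of 1s (an attention mask) being looked up in an embedding table that only has one row.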

Expected behavior

Get the pooled and sequence outputs of the BERT model.

Environment info

  • transformers version: 2.10.0
  • Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
  • Python version: 3.6.9
  • PyTorch version (GPU?): 1.5.0+cu101 (True)
  • Tensorflow version (GPU?): 2.2.0 (True)
  • Using GPU in script?: yes
  • Using distributed or parallel set-up in script?: no

Issue Analytics

  • State:closed
  • Created 3 years ago
  • Comments:13 (12 by maintainers)

Top GitHub Comments

1 reaction
mfuntowicz commented, May 27, 2020

@RensDimmendaal I think your suggestion is the way to go, do you mind submitting a PR and assigning me as a reviewer? 👍

1 reaction
tianleiwu commented, May 27, 2020

@mfuntowicz,

I ran the notebook on my local machine and looked at the ONNX model after export (and before optimization). I found that the exported ONNX model has switched the position of "attention_mask" with "token_type_ids":

[image: snapshot of the embedding layer in the exported graph]

The above is a snapshot of the embedding layer in the exported graph. The "attention_mask" input in the graph should be named "token_type_ids", since it is used to look up the segment embeddings.
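One way to avoid this kind of mismatch is to derive the ONNX input order from the model's forward signature rather than from the tokenizer's dict order. A hedged sketch of that idea (`order_onnx_inputs` is a hypothetical helper, not part of the transformers API):

```python
import inspect

def order_onnx_inputs(forward_fn, tokenizer_inputs):
    """Sort input names by the position of the matching parameter in
    forward_fn's signature, so the input_names passed to
    torch.onnx.export line up with the tensors the tracer actually sees."""
    params = list(inspect.signature(forward_fn).parameters)
    return sorted(tokenizer_inputs, key=params.index)

# Stand-in forward that mimics BertModel.forward's parameter order:
def forward(input_ids, attention_mask=None, token_type_ids=None):
    ...

# Tokenizers may emit keys in a different order than the signature:
tokens = ["input_ids", "token_type_ids", "attention_mask"]
print(order_onnx_inputs(forward, tokens))
# ['input_ids', 'attention_mask', 'token_type_ids']
```

With the names ordered this way, "token_type_ids" ends up attached to the graph input that actually feeds the segment-embedding lookup.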
