
Device error on TokenClassificationPipeline

See original GitHub issue

Environment info

  • transformers version: 4.11.0
  • Platform: Linux-5.14.8-arch1-1-x86_64-with-arch
  • Python version: 3.7.11
  • PyTorch version (GPU?): 1.9.1+cu102 (True)
  • Tensorflow version (GPU?): not installed (NA)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using GPU in script?: True
  • Using distributed or parallel set-up in script?: False

Who can help

Library:

Information

Model I am using (Bert, XLNet …):

The problem arises when using:

  • the official example scripts: (give details below)

The task I am working on is:

  • an official GLUE/SQuAD task: (give the name)

To reproduce

Steps to reproduce the behavior:

  1. Create a pipeline: pipe = TokenClassificationPipeline(model=DistilBertForTokenClassification.from_pretrained("PATH"))
  2. Pipe some text through it: pipe(["My", "text", "tokens"])
  3. Get a TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
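The failure mode above can be reproduced without the full pipeline: calling .numpy() on any accelerator tensor raises the same TypeError, while copying to host memory first always succeeds. A minimal sketch (the error only triggers on a CUDA machine; on CPU the direct call succeeds):

```python
import torch

t = torch.ones(3)
if torch.cuda.is_available():
    t = t.cuda()
    try:
        # Same call the pipeline makes internally on a GPU model's output.
        t.numpy()
    except TypeError as e:
        # TypeError: can't convert cuda:0 device type tensor to numpy. ...
        print(e)

# Copying to host memory first works on both devices;
# .cpu() is a no-op for tensors already on the CPU.
arr = t.cpu().numpy()
print(arr.tolist())  # [1.0, 1.0, 1.0]
```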

Expected behavior

Being able to run the pipeline.

The pipeline should move either the data or the model to the appropriate device (GPU or CPU) so that the two always match.

The traceback

In .venv/lib/python3.7/site-packages/transformers/pipelines/token_classification.py:209 in _forward

    206         if self.framework == "tf":
    207             outputs = self.model(model_inputs.data)[0][0].numpy()
    208         else:
 ❱  209             outputs = self.model(**model_inputs)[0][0].numpy()   <== HERE
    210         return {
    211             "outputs": outputs,
    212             "special_tokens_mask": special_tokens_mask,

Placing a .cpu() call before .numpy() would solve the problem.
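The suggested fix, sketched against the lines quoted in the tracebacks (my reading of the change, not necessarily the exact patch that was merged): inserting .cpu() before .numpy() copies the tensor to host memory first, and is a no-op when the tensor is already on the CPU, so the CPU path is unaffected.

```
# pipelines/token_classification.py, _forward, line 209:
outputs = self.model(**model_inputs)[0][0].cpu().numpy()

# postprocess, line 223 (same pattern):
special_tokens_mask = model_outputs["special_tokens_mask"][0].cpu().numpy()
```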

Thanks in advance for any help. Have a wonderful day!

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 6 (5 by maintainers)

Top GitHub Comments

2 reactions
LysandreJik commented, Sep 30, 2021

Nice catch! Would you like to open a PR with the fix?

1 reaction
mallorbc commented, Sep 30, 2021

Similar issue later in the file, at line 223:

    220         sentence = model_outputs["sentence"]
    221         input_ids = model_outputs["input_ids"][0]
    222         offset_mapping = model_outputs["offset_mapping"][0] if model_o…
 ❱  223         special_tokens_mask = model_outputs["special_tokens_mask"][0].numpy()
    224
    225         scores = np.exp(outputs) / np.exp(outputs).sum(-1, keepdims=Tr…
    226         pre_entities = self.gather_pre_entities(
