Device error on TokenClassificationPipeline
Environment info
- transformers version: 4.11.0
- Platform: Linux-5.14.8-arch1-1-x86_64-with-arch
- Python version: 3.7.11
- PyTorch version (GPU?): 1.9.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
Who can help
Library:
- pipelines: @LysandreJik
Information
Model I am using (Bert, XLNet …): DistilBertForTokenClassification
The problem arises when using:
- the official example scripts: (give details below)
The task I am working on is:
- an official GLUE/SQuAD task: (give the name)
To reproduce
Steps to reproduce the behavior (a consolidated sketch follows after this list):
- Create a pipeline:
  pipe = TokenClassificationPipeline(model=DistilBertForTokenClassification.from_pretrained("PATH"))
- Pipe some text in:
  pipe(["My", "text", "tokens"])
- Get a:
  TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
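For context, here is a consolidated, self-contained sketch of the repro. It is an assumption on my side, not the exact original script: "PATH" is a placeholder for the fine-tuned checkpoint, the AutoTokenizer call is added to make it runnable, and device=0 is one way to end up with CUDA output tensors.

```python
# Hedged repro sketch; "PATH" and the tokenizer handling are placeholders.
from transformers import (
    AutoTokenizer,
    DistilBertForTokenClassification,
    TokenClassificationPipeline,
)

model = DistilBertForTokenClassification.from_pretrained("PATH")
tokenizer = AutoTokenizer.from_pretrained("PATH")

# device=0 puts the model (and the tokenized inputs) on cuda:0, so the logits
# returned by the forward pass are CUDA tensors.
pipe = TokenClassificationPipeline(model=model, tokenizer=tokenizer, device=0)

# The internal .numpy() call in _forward then fails with:
# TypeError: can't convert cuda:0 device type tensor to numpy.
pipe(["My", "text", "tokens"])
```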
Expected behavior
Be able to run the pipeline.
The pipeline should move the data to the model's device (GPU or CPU), or the model to the data's device, and move the outputs back as needed.
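Until this is fixed, one possible workaround (an assumption on my side, not part of the original report) is to keep the whole pipeline on CPU so the internal .numpy() call never sees a CUDA tensor:

```python
# Workaround sketch (assumption): run everything on CPU. Pipelines default to
# device=-1 (CPU), so simply avoid moving the model to CUDA.
from transformers import (
    AutoTokenizer,
    DistilBertForTokenClassification,
    TokenClassificationPipeline,
)

model = DistilBertForTokenClassification.from_pretrained("PATH").to("cpu")
tokenizer = AutoTokenizer.from_pretrained("PATH")

pipe = TokenClassificationPipeline(model=model, tokenizer=tokenizer)  # CPU by default
print(pipe("My text tokens"))
```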
The traceback
In .venv/lib/python3.7/site-packages/transformers/pipelines/token_classification.py:209 in _forward
  206 │         if self.framework == "tf":
  207 │             outputs = self.model(model_inputs.data)[0][0].numpy()
  208 │         else:
❱ 209 │             outputs = self.model(**model_inputs)[0][0].numpy()   <== HERE
  210 │         return {
  211 │             "outputs": outputs,
  212 │             "special_tokens_mask": special_tokens_mask,
Placing a .cpu() before the .numpy() call would solve the problem.
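A minimal sketch of the suggested change, based only on the lines quoted in the traceback above (the surrounding code is reproduced from the traceback, not from the actual source file):

```python
# token_classification.py, _forward(), around line 209
if self.framework == "tf":
    outputs = self.model(model_inputs.data)[0][0].numpy()
else:
    # Copy the logits back to host memory before converting to numpy;
    # otherwise CUDA tensors raise the TypeError reported above.
    outputs = self.model(**model_inputs)[0][0].cpu().numpy()
```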
Thanks in advance for any help. Have a wonderful day!
Nice catch! Would you like to open a PR with the fix?
There is a similar issue later in the same file, at line 223.