
Can’t run Parallel inference

See original GitHub issue
  • transformers version: 4.12.3
  • Platform: Darwin-20.6.0-x86_64-i386-64bit
  • Python version: 3.7.0
  • PyTorch version (GPU?): 1.10.0 (False)
  • Tensorflow version (GPU?): not installed (NA)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using GPU in script?: No
  • Using distributed or parallel set-up in script?: parallel

Hi, I get the following warning while using pipeline:

[W ParallelNative.cpp:214] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)

from transformers import pipeline  # model and texts are defined earlier in my script

pipe = pipeline("sentiment-analysis", model=model, padding=True, max_length=200, truncation=True)
results = pipe(texts)

It happens with both models:

  • “distilbert-base-uncased-finetuned-sst-2-english”
  • “cardiffnlp/twitter-roberta-base-sentiment”

Only one CPU core is being used!

Any suggestions? Thanks

@Narsil @LysandreJik
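
For context, the mitigation most often suggested for this warning is to configure thread counts before any parallel work starts. Below is a minimal sketch, assuming the warning comes from thread settings being applied too late; the model name and example texts here are placeholders, not taken from the issue.

import os
os.environ["OMP_NUM_THREADS"] = "1"  # read at import time by OpenMP builds of PyTorch; may be a no-op on native-backend builds

import torch
torch.set_num_threads(1)  # must run before the first parallel operation

from transformers import pipeline

pipe = pipeline("sentiment-analysis",
                model="distilbert-base-uncased-finetuned-sst-2-english",
                padding=True, max_length=200, truncation=True)
results = pipe(["placeholder text one", "placeholder text two"])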

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 15 (8 by maintainers)

Top GitHub Comments

1 reaction
Narsil commented, Dec 2, 2021

Okay, I tried every single thing I could but I am unable to reproduce.

I am guessing the issue lies in Darwin at this point. I looked into PyTorch issues that seem relevant (but cannot confirm at this time):

https://github.com/pytorch/pytorch/issues/58585
https://github.com/pytorch/pytorch/issues/46409
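
A small diagnostic sketch for checking which parallel backend a given PyTorch build uses, which is the difference the linked issues point at (output details vary by build):

import torch

# Prints the ATen parallel backend (OpenMP vs. native) and current thread settings
print(torch.__config__.parallel_info())
print("intraop threads:", torch.get_num_threads())
print("interop threads:", torch.get_num_interop_threads())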

0 reactions
github-actions[bot] commented, Dec 30, 2021

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

Read more comments on GitHub >

Top Results From Across the Web

Unable to do inference of multiple engines in parallel
I have an input stream and I want to run two engines on it for processing. One engine takes 3 sec for processing and...
Parallel Inference - MLServer Documentation - Read the Docs
Its main purpose is to lock Python's execution so that it only runs on a single processor at the same time. This simplifies...
How do I run Inference in parallel? - PyTorch Forums
Since parallel inference does not need any communication among different processes, I think you can use any utility you mentioned to launch ... (see the sketch after this list)
Running multiple inferences in parallel on a GPU - DeepSpeech
We are running into an issue with trying to run multiple inferences in parallel on a GPU. By using torch multiprocessing we have...
Speeding up inference with parallel model runs | by Rafael Iriya
When deploying a real-world application, accuracy is not everything for a deep learning model. Many edge applications require processing ...
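
As a follow-up to the PyTorch Forums result above, here is a minimal sketch of process-based parallel inference, assuming each worker loads its own pipeline; the model name and input chunks are placeholders:

import torch.multiprocessing as mp
from transformers import pipeline

def run_chunk(texts):
    # Each process builds its own pipeline, so no state is shared across workers
    pipe = pipeline("sentiment-analysis",
                    model="distilbert-base-uncased-finetuned-sst-2-english")
    return pipe(texts)

if __name__ == "__main__":
    chunks = [["first batch of texts"], ["second batch of texts"]]
    # Parallel inference needs no inter-process communication, so a plain Pool works
    with mp.Pool(processes=2) as pool:
        results = pool.map(run_chunk, chunks)
    print(results)

Each worker pays the model-loading cost once, which is usually acceptable for long-running inference jobs.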
