Can’t run parallel inference
See original GitHub issue

- `transformers` version: 4.12.3
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.0
- PyTorch version (GPU?): 1.10.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: parallel
Hi,
I get the following warning:
[W ParallelNative.cpp:214] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
while using the pipeline:
pipe = pipeline("sentiment-analysis", model=model, padding=True, max_length=200, truncation=True)
results = pipe(texts)
It happens with both models:
- “distilbert-base-uncased-finetuned-sst-2-english”
- “cardiffnlp/twitter-roberta-base-sentiment”
Only one CPU is used!
Any suggestions? Thanks
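This warning usually means PyTorch's native intra-op thread pool was already started (by an import or an earlier inference call) before the thread count was configured. A minimal sketch of a common workaround, assuming the thread-count environment variables are set before `torch`/`transformers` is imported anywhere in the process (the values below are illustrative, not prescribed by this thread):

```python
import os

# The native parallel backend reads these once, at its first parallel use,
# so they must be set BEFORE torch/transformers is imported. Calling
# torch.set_num_threads() after parallel work has started is exactly what
# triggers the ParallelNative warning above.
os.environ["OMP_NUM_THREADS"] = "4"  # illustrative value
os.environ["MKL_NUM_THREADS"] = "4"

# Only after this point import and build the pipeline (commented out here
# to keep the sketch self-contained; arguments taken from this thread):
# from transformers import pipeline
# pipe = pipeline("sentiment-analysis",
#                 model="distilbert-base-uncased-finetuned-sst-2-english",
#                 padding=True, max_length=200, truncation=True)
# results = pipe(texts)
```

If the warning persists on macOS even with this ordering, it may be the Darwin-specific backend behavior discussed later in the thread rather than a configuration problem.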
Issue Analytics
- State:
- Created 2 years ago
- Comments:15 (8 by maintainers)
Top Results From Across the Web

- Unable to do inference of multiple engines in parallel: “I have an input stream and I want to run two engines on it for processing. One engine takes 3sec for processing and...”
- Parallel Inference (MLServer Documentation, Read the Docs): “Its main purpose is to lock Python's execution so that it only runs on a single processor at the same time. This simplifies...”
- How do I run Inference in parallel? (PyTorch Forums): “Since parallel inference does not need any communication among different processes, I think you can use any utility you mentioned to launch...”
- Running multiple inferences in parallel on a GPU (DeepSpeech): “We are running into an issue with trying to run multiple inferences in parallel on a GPU. By using torch multiprocessing we have...”
- Speeding up inference with parallel model runs (by Rafael Iriya): “When deploying a real-world application, accuracy is not everything for a deep learning model. Many edge applications require processing...”
Okay, I tried every single thing I could, but I am unable to reproduce.
I am guessing the issue lies in Darwin at this point. I looked into PyTorch issues that seem relevant (but cannot confirm at this time):
https://github.com/pytorch/pytorch/issues/58585
https://github.com/pytorch/pytorch/issues/46409

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.