
ClassificationModel: predict() hangs forever in uwsgi worker

See original GitHub issue

Describe the bug

When model.predict() is invoked in a uwsgi worker, it never returns (it hangs on the line outputs = model(**inputs)).

To Reproduce

Steps to reproduce the behavior:

  • Train a roberta-base model with simpletransformers 0.48.9
  • Run a uwsgi + flask server that loads the model with {"use_multiprocessing": False} before spawning workers, then calls model.predict() when it receives a request (I used the docker image tiangolo/uwsgi-nginx-flask as a base and installed transformers, pytorch and simpletransformers)
  • Send a request: it hangs on the line outputs = model(**inputs)
  • However, if model.predict() is called on the same server before the uwsgi workers are spawned (when the server loads, as opposed to when responding to a request), it returns normally with the expected result.
  • Another way to make predict() return normally is to load the model inside each worker, at the cost of delaying the first request handled by each worker while the model loads.
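These symptoms match the classic fork-after-initialization problem: uwsgi loads the app (and the model) in the master process and then fork()s workers, so each worker inherits the memory of any native thread pools and mutexes (for example, OpenMP/BLAS state that PyTorch initializes) without the helper threads that would release them. A minimal stdlib-only sketch of the mechanism, with a hypothetical lock standing in for a library-internal mutex (no torch or uwsgi involved):

```python
import os
import threading
import time

# Stand-in for an internal mutex inside a native library (e.g. a thread-pool
# lock that gets initialized the first time the model runs in the master).
lock = threading.Lock()

def background_holder():
    lock.acquire()      # grabbed by a helper thread and held for a while
    time.sleep(5)

threading.Thread(target=background_holder, daemon=True).start()
time.sleep(0.2)         # make sure the lock is held before we fork

pid = os.fork()
if pid == 0:
    # Child process: fork() copied the locked state of the mutex, but the
    # thread that would release it was not copied. A plain acquire() would
    # wait forever; a timeout lets us observe the deadlock instead of hanging.
    got_it = lock.acquire(timeout=1.0)
    os._exit(0 if got_it else 42)

_, status = os.waitpid(pid, 0)
child_deadlocked = os.WEXITSTATUS(status) == 42
print("child deadlocked on inherited lock:", child_deadlocked)
```

This is also why loading the model inside each worker avoids the hang: all native initialization then happens after the fork, inside the process that will actually run predict().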

Desktop (please complete the following information):

  • Docker image with Debian Buster + python 3.8 + flask + nginx + uwsgi
  • transformers version 3.3.1
  • simpletransformers version 0.48.9
  • torch version 1.6.0
  • uwsgi: tested with versions 2.0.17, 2.0.18, 2.0.19, 2.0.19.1

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 19 (4 by maintainers)

Top GitHub Comments

10 reactions
sukrubezen commented, Nov 6, 2021

I had the same problem, and this is how I solved it.

My args dict looks like this:

args={"use_multiprocessing": False, "use_multiprocessing_for_evaluation": False, "process_count": 1}
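For context, these args are passed when constructing the model. A hedged sketch, assuming the simpletransformers ClassificationModel API (the "outputs/" path and the input text are placeholders):

```python
from simpletransformers.classification import ClassificationModel

# Disable all multiprocessing so tokenization and DataLoader work stay in
# the current process; "outputs/" is a placeholder path to the trained model.
model = ClassificationModel(
    "roberta",
    "outputs/",
    use_cuda=False,
    args={
        "use_multiprocessing": False,
        "use_multiprocessing_for_evaluation": False,
        "process_count": 1,
    },
)

predictions, raw_outputs = model.predict(["some input text"])
```

With these settings predict() no longer tries to spawn child processes from inside the uwsgi worker, which is what was deadlocking.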

3 reactions
ThilinaRajapakse commented, Oct 12, 2020

Setting use_multiprocessing=False should fix it.
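The per-worker loading workaround mentioned in the original report can also be expressed with uwsgi's postfork hook, so the model is created after each worker process exists rather than inherited from the master. A sketch, assuming a flask app running under uwsgi (uwsgidecorators is only importable inside a uwsgi process, and "outputs/" is a placeholder path):

```python
# app.py -- intended to run under uwsgi; will not run standalone.
from flask import Flask, jsonify, request
from uwsgidecorators import postfork

app = Flask(__name__)
model = None  # filled in per worker, after the fork


@postfork
def load_model():
    # Runs once in each freshly forked worker, so torch/OpenMP initialize
    # their thread pools in the worker itself instead of the master.
    global model
    from simpletransformers.classification import ClassificationModel
    model = ClassificationModel(
        "roberta", "outputs/", use_cuda=False,
        args={"use_multiprocessing": False},
    )


@app.route("/predict", methods=["POST"])
def predict():
    preds, _ = model.predict([request.get_json()["text"]])
    return jsonify(prediction=int(preds[0]))
```

The trade-off from the report still applies: each worker pays the model-loading cost once at startup instead of the first request paying it.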


Top Results From Across the Web

keras prediction gets stuck when deployed using uwsgi in a ...
I have a keras model that works perfectly in unit tests and in local flask app (flask run). However, the moment I launch...
Model.predict hangs for Keras model in Flask with uwsgi (repost)
It turns out that the Keras model's predict() method hangs, but only when running it in production with Flask and uwsgi.
The uWSGI cheaper subsystem – adaptive process spawning
If the app is idle uWSGI will stop workers but it will always leave at least 2 of them running. With cheaper-initial you...
