
multiple workers with gunicorn still synchronous

See original GitHub issue

First Check

  • I added a very descriptive title to this issue.
  • I used the GitHub search to find a similar issue and didn’t find it.
  • I searched the FastAPI documentation, with the integrated search.
  • I already searched in Google “How to X in FastAPI” and didn’t find any information.
  • I already read and followed all the tutorial in the docs and didn’t find an answer.
  • I already checked if it is not related to FastAPI but to Pydantic.
  • I already checked if it is not related to FastAPI but to Swagger UI.
  • I already checked if it is not related to FastAPI but to ReDoc.

Commit to Help

  • I commit to help with one of those options 👆

Example Code

gunicorn main:app --workers 8 --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000
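(For context: this command assumes a main.py module that exposes a FastAPI instance named app. A minimal sketch of such a module, with a purely hypothetical endpoint added for illustration, could look like the following.)

# main.py: the module referenced by "gunicorn main:app"
from fastapi import FastAPI

app = FastAPI()


@app.get("/")
def read_root():
    # Hypothetical endpoint, present only so there is something to request.
    return {"message": "hello from a gunicorn worker"}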

Description

Hello everyone. According to the official FastAPI article, I started the server with this command:

 % gunicorn main:app --workers 8 --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000

and got the following output; everything seems fine.

[2022-01-26 16:59:54 +0300] [105927] [INFO] Starting gunicorn 20.1.0
[2022-01-26 16:59:54 +0300] [105927] [INFO] Listening at: http://0.0.0.0:8000 (105927)
[2022-01-26 16:59:54 +0300] [105927] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2022-01-26 16:59:54 +0300] [105928] [INFO] Booting worker with pid: 105928
[2022-01-26 16:59:54 +0300] [105929] [INFO] Booting worker with pid: 105929
[2022-01-26 16:59:54 +0300] [105930] [INFO] Booting worker with pid: 105930
[2022-01-26 16:59:54 +0300] [105931] [INFO] Booting worker with pid: 105931
[2022-01-26 16:59:54 +0300] [105932] [INFO] Booting worker with pid: 105932
[2022-01-26 16:59:54 +0300] [105933] [INFO] Booting worker with pid: 105933
[2022-01-26 16:59:55 +0300] [105934] [INFO] Booting worker with pid: 105934
[2022-01-26 16:59:55 +0300] [105935] [INFO] Booting worker with pid: 105935
[2022-01-26 16:59:55 +0300] [105928] [INFO] Started server process [105928]
[2022-01-26 16:59:55 +0300] [105928] [INFO] Waiting for application startup.
[2022-01-26 16:59:55 +0300] [105928] [INFO] Application startup complete.
[2022-01-26 16:59:56 +0300] [105931] [INFO] Started server process [105931]
[2022-01-26 16:59:56 +0300] [105931] [INFO] Waiting for application startup.
[2022-01-26 16:59:56 +0300] [105931] [INFO] Application startup complete.
...

Then I opened 4 terminal windows and sent the same request from each one. The responses appear one by one in each terminal window, and the same thing happens even if I use two computers for the test, so it is not working as expected (in parallel). What am I doing wrong?

Operating System

Linux

Operating System Details

No response

FastAPI Version

0.63.0

Python Version

3.9.10

Additional Context

I have 4 CPU cores and 8 threads.
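As a point of reference, the gunicorn documentation suggests roughly (2 x number of cores) + 1 workers as a starting point; a small sketch of that calculation (illustrative only, not part of the original issue) is:

# Common starting heuristic from the gunicorn docs: (2 x cores) + 1 workers.
# Note: cpu_count() reports logical CPUs (threads), which can be higher than physical cores.
import multiprocessing

suggested_workers = multiprocessing.cpu_count() * 2 + 1
print(f"Suggested gunicorn workers: {suggested_workers}")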

Issue Analytics

  • State: closed
  • Created 2 years ago
  • Comments: 6 (1 by maintainers)

Top GitHub Comments

1 reaction
wotori commented, Feb 17, 2022

“I thought when I use workers each one is a completely separate process and they should work in parallel, and gunicorn just manages requests and points them to free instances. Isn’t that correct?”

Yes, this is correct. With your setup, you should be able to handle 8 calls in parallel, even if your endpoints are blocking.

“What is the purpose of workers if they don’t work without async/await?”

They do work without async/await.

Try the code below with your gunicorn/uvicorn setup. Each call to the blocking endpoint takes about one second, and with 8 workers you should be able to serve 8 of them in parallel. Both endpoints should show a latency of roughly 1,000 milliseconds.

import asyncio
import time

from fastapi import FastAPI

app = FastAPI()


@app.get("/blocking")
def blocking_route():
    # Sync endpoint: blocks its thread for one second.
    time.sleep(1)


@app.get("/nonBlocking")
async def non_blocking_route():
    # Async endpoint: yields the event loop while sleeping for one second.
    await asyncio.sleep(1)

You should be able to run Apache Bench and see a latency of around 1,000 ms:

ab -n 80 -c 8 http://localhost:8000/blocking
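If Apache Bench is not available, a rough stand-in using only the Python standard library (a sketch, assuming the same app is running on localhost:8000) is:

# Send 8 requests to the blocking endpoint concurrently and print each latency.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/blocking"

def timed_get(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as response:
        response.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=8) as pool:
    latencies = list(pool.map(timed_get, range(8)))

for latency in latencies:
    # With 8 gunicorn workers, each latency should stay near 1,000 ms.
    print(f"{latency * 1000:.0f} ms")

If all eight requests come back in close to one second, the workers are serving them in parallel.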

Got your point, thanks for sharing the idea! I was using asyncio the wrong way; now everything works.

0 reactions
raphaelauv commented, Jan 31, 2022

If you have a question or an issue with gunicorn -> https://github.com/benoitc/gunicorn

or with uvicorn -> https://github.com/encode/uvicorn

It does not look like a FastAPI issue.


Top Results From Across the Web

  • Design — Gunicorn 20.1.0 documentation: The asynchronous workers available are based on Greenlets (via Eventlet and Gevent). Greenlets are an implementation of cooperative multi-threading for ...
  • Why You Should (Almost) Always Choose Sync Gunicorn Workers: Sync workers are largely simplified and work well in most of the cases. My advice is you start with sync workers and try ...
  • Gunicorn Sync Workers - Joel Sleppy: Gunicorn always runs one master process and one or more worker processes. When we run Gunicorn with a given worker class, we're selecting ...
  • Gunicorn Workers and Threads: Gunicorn will ensure that the master can then send more than one request to ... Sync, non-threaded workers are the best option ...
  • Brief introduction about the types of worker in gunicorn: In Python 2.7, Gunicorn provides several types of worker: sync, gthread, eventlet, gevent and tornado. Below is a snip code with two simple ...
