Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging 3rd-party libraries. It collects links to all the places you might be looking at while hunting down a tough bug.

And, if you're still stuck at the end, we're happy to hop on a call to see how we can help out.

FastAPI application is processing same request multiple times in Backend

See original GitHub issue

First Check

  • I added a very descriptive title to this issue.
  • I used the GitHub search to find a similar issue and didn't find it.
  • I searched the FastAPI documentation, with the integrated search.
  • I already searched in Google "How to X in FastAPI" and didn't find any information.
  • I already read and followed all the tutorial in the docs and didn't find an answer.
  • I already checked if it is not related to FastAPI but to Pydantic.
  • I already checked if it is not related to FastAPI but to Swagger UI.
  • I already checked if it is not related to FastAPI but to ReDoc.

Commit to Help

  • I commit to help with one of those options 👆

Example Code

# This API takes an id in the request, creates a temp path in the container,
# looks up the path for this id in the database, copies the file for this id
# from AWS S3B into the temp path, runs ML processing, deletes the temp path,
# and returns the predicted data.

from datetime import datetime
import pathlib
import traceback  # needed by the exception handler below
import uvicorn
from fastapi import Depends, FastAPI, HTTPException, Request, Response, Security, status
from fastapi.responses import JSONResponse
from fastapi.security.api_key import APIKeyHeader
from pydantic import BaseModel
from starlette.middleware.cors import CORSMiddleware

temp = pathlib.PosixPath
# pathlib.PosixPath = pathlib.WindowsPath  # Windows-only workaround, kept commented out

import torch
# Logging
import logging, logging.config


app = FastAPI(docs_url="/", redoc_url=None,
              title="First ML API",
              description="First ML API",
              version="V 1.0",
              )
# FileDownload()
# SMBConnectionDownload()
config = readconfig()  # NOTE: readconfig() is defined elsewhere in the project

API_KEY = config.API_KEY
API_KEY_NAME = config.API_KEY_NAME

api_key_header_auth = APIKeyHeader(name=API_KEY_NAME, auto_error=True)

async def get_api_key(api_key_header: str = Security(api_key_header_auth)):
    if api_key_header != API_KEY:
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Invalid API Key",
        )


origins = [
    "http://localhost.tiangolo.com",
    "https://localhost.tiangolo.com",
    "http://localhost",
    "http://localhost:8080",
    "http://localhost:4200",
]

app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)



class Item(BaseModel):
    id: str



logging.config.dictConfig(
    {
        'version': 1,
        'disable_existing_loggers': True,
    }
)

logger = logging.getLogger("main")
logger.setLevel(logging.INFO)
# create the logging file handler
fh = logging.FileHandler("app.log")
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
ch.setFormatter(formatter)
while logger.handlers:
    logger.handlers.pop()
# add handler to logger object
logger.addHandler(fh)
logger.addHandler(ch)
logger.propagate = False



@app.get("/")
async def root():
    return {"message": "Hello Bigger Applications!"}


@app.get("/heartbeat")
def heartbeat():
    return "Success"


@app.get("/predictdata", dependencies=[Security(get_api_key)])
def predictdata(id: str):
    try:
        res = {"id": id, "prediction": "", "StatusCode": "200", "Validations": []}

        if not id.isdigit():
            res["Validations"].append({"Type": "Validation", "Description": "id must be an integer"})
            raise TypeError("id must be an integer")
        
        logger.info("*** ML prediction API started for Id : {} ***".format(id))
        logger.info("########################")
        data = ML_Process(id, res)
        res = data
        logger.info(res)

        return JSONResponse(status_code=int(res["StatusCode"]), content=res)

    except Exception as ex:
        traceback.print_exc()
        error_trace = traceback.format_exc()
        res["Validations"].append({"Type": "Error", "Description": error_trace})
        res["StatusCode"] = str(500)
        logger.exception("Exception in my ML API")
        logger.error(ex, exc_info=True)

    return JSONResponse(status_code=500, content=res)

def ML_Process(id, res):
    # Gets the doc path from the DB by id, copies the doc from S3B into the
    # temp path, and runs the ML model on it; after processing it deletes the
    # temp path and returns the predicted data.
    # This is a long-running process: ~25 seconds on average, sometimes 1-2 minutes.
    # The issue is observed only on long-running requests, e.g. when model
    # inference takes more time.
    ...  # body elided in the original issue

if __name__ == "__main__":
    uvicorn.run(
        app,
        host="0.0.0.0",
        port=8422
    )


-------------------------------------

Dockerfile

FROM python:3

WORKDIR /app

COPY requirements.txt requirements.txt

RUN pip install -r requirements.txt


RUN apt-get update && \
    apt-get install -y poppler-utils ghostscript tesseract-ocr

COPY . .

CMD uvicorn app:app --host 0.0.0.0 --port 5057 



-------------------------------------------


Kubernetes pod logs of application which shows same request getting processed multiple times

I sent the request for id 12 only once, through Swagger UI, and nobody else was using the service.

Log:

2022-08-11 04:27:22,070 - main - INFO - *** ML prediction API started for id : 12 ***
2022-08-11 04:27:22,071 - main - INFO - ########################
2022-08-11 04:27:22,072 - main - INFO - Temp folder structure created :  /app/12_20220811042722071864
2022-08-11 04:27:22,104 - main - INFO - File existing in this id folder in S3B folder  .
2022-08-11 04:27:22,105 - main - INFO - File existing in this id folder in S3B folder  ..
2022-08-11 04:27:22,105 - main - INFO - File existing in this id folder in S3B folder  278692642.pdf
2022-08-11 04:27:22,591 - main - INFO - "278692642.pdf" file copied in temp directory
2022-08-11 04:27:27,518 - main - INFO - 278692642.pdf
2022-08-11 04:27:27,706 - main - INFO - /app/12_20220811042722071864/278692642.pdf
2022-08-11 04:28:56,711 - main - INFO - prediction  =  done
2022-08-11 04:28:56,711 - main - INFO - -----------------------------------------------------------------------------
2022-08-11 04:28:22,078 - main - INFO - *** ML prediction API started for id : 12 ***
2022-08-11 04:28:22,080 - main - INFO - ########################
2022-08-11 04:28:22,080 - main - INFO - Temp folder structure created :  /app/12_20220811042822080625
2022-08-11 04:28:22,292 - main - INFO - File existing in this id folder in S3B folder  .
2022-08-11 04:28:22,292 - main - INFO - File existing in this id folder in S3B folder  ..
2022-08-11 04:28:22,292 - main - INFO - File existing in this id folder in S3B folder  278692642.pdf
2022-08-11 04:28:24,784 - main - INFO - "278692642.pdf" file copied in temp directory
2022-08-11 04:28:38,689 - main - INFO - 278692642.pdf
2022-08-11 04:28:39,986 - main - INFO - /app/12_20220811042822080625/278692642.pdf
2022-08-11 04:28:56,711 - main - INFO - prediction  =  done
2022-08-11 04:28:56,711 - main - INFO - -----------------------------------------------------------------------------
[progress bars: two runs, 0% → 100% [1/1], 00:16 and 00:14]

2022-08-11 04:29:22,145 - main - INFO - Deleted Temp folder : /app/12_20220811042722071864
2022-08-11 04:29:22,145 - main - INFO - ML prediction Done!!!
2022-08-11 04:29:22,145 - main - INFO - {"id" : 12,'prediction': 'predicted value here', 'StatusCode': '200', 'Validations': []}       
2022-08-11 04:29:22,077 - main - INFO - *** ML prediction API started for id : 12 ***
2022-08-11 04:29:22,077 - main - INFO - ########################
2022-08-11 04:29:22,077 - main - INFO - Temp folder structure created :  /app/12_20220811042922077785
2022-08-11 04:29:22,100 - main - INFO - File existing in this id folder in S3B folder  .
2022-08-11 04:29:22,101 - main - INFO - File existing in this id folder in S3B folder  ..
2022-08-11 04:29:22,101 - main - INFO - File existing in this id folder in S3B folder  278692642.pdf
2022-08-11 04:29:22,145 - main - INFO - Deleted Temp folder : /app/12_20220811042822080625
2022-08-11 04:29:22,145 - main - INFO - ML prediction Done!!!
2022-08-11 04:29:22,145 - main - INFO - {"id" : 12,'prediction': 'predicted value here', 'StatusCode': '200', 'Validations': []}
2022-08-11 04:28:38,689 - main - INFO - 278692642.pdf
2022-08-11 04:28:39,986 - main - INFO - /app/12_20220811042922077785/278692642.pdf
2022-08-11 04:28:56,711 - main - INFO - prediction  =  done
2022-08-11 04:28:56,711 - main - INFO - -----------------------------------------------------------------------------

[progress bar: 0% → 100% [1/1], 00:00]

2022-08-11 04:29:40,316 - main - INFO - Deleted Temp folder : /app/12_20220811042922077785
2022-08-11 04:29:40,316 - main - INFO - ML prediction Done!!!
2022-08-11 04:29:40,318 - main - INFO - {"id" : 12,'prediction': 'predicted value here', 'StatusCode': '200', 'Validations': []}


---- 

Here, the first request started processing, but before it completed (i.e. before the temp path was deleted) a second process/thread was initiated, as I see this log line again:
 **ML prediction API started for id : 12
and a new temp path is also created:
2022-08-11 04:28:22,080 - main - INFO - Temp folder structure created :  /app/12_20220811042822080625**
So the same request is processed again before the first one completes, and a few seconds later a third process/thread starts processing the same request yet again. It shouldn't process the same request repeatedly. I don't use any multiprocessing or multithreading, so why is it processing the same request again and again? Please guide.
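Note that the three "API started" lines in the log are almost exactly 60 seconds apart (04:27:22, 04:28:22, 04:29:22), which is consistent with some upstream component retrying on a roughly one-minute timeout rather than with the application spawning extra work itself. As a rough sanity check, a small stdlib-only script (assuming the log format shown above) can count how many "started" lines appear while an earlier run is still unfinished:

```python
import re

# Patterns matching the "API started" and "prediction Done" lines from the
# log format shown above.
START = re.compile(r"ML prediction API started for [Ii]d : (\d+)")
DONE = re.compile(r"ML prediction Done")


def overlapping_starts(lines):
    """Count 'started' lines that appear before an earlier run has finished."""
    in_flight = 0
    overlaps = 0
    for line in lines:
        if START.search(line):
            if in_flight > 0:
                overlaps += 1
            in_flight += 1
        elif DONE.search(line):
            in_flight = max(0, in_flight - 1)
    return overlaps
```

Run over the log above, this would report two overlapping starts for id 12, matching the description of a second and third run beginning before the first one finished.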

Description

I don't have any multiprocessing or multithreading in my FastAPI application, so why is it processing the same request again and again? It's a simple ML processing application.

Operating System

Linux

Operating System Details

No response

FastAPI Version

fastapi-0.68.2

Python Version

3.8.5

Additional Context

No response

Issue Analytics

  • State: open
  • Created a year ago
  • Comments: 10 (5 by maintainers)

Top GitHub Comments

1 reaction
JarroVGIT commented, Aug 11, 2022

Could you please look at my API declaration and route declaration and see whether I have used them correctly?

I have, and I see nothing wrong with it. To me it seems that something (a component in your stack) is retrying. There are many components in play when deployed to K8s, and it is impossible to say which one it is (it could be any of them) since we don't know your full stack. All I can say is that it is not FastAPI causing this; that part is doing its job properly.

This issue is mostly intermittent, but it's always reproducible when I scale down the pod's resources, i.e. when I bring down my pod's CPU and memory limits.

This indicates (but does not prove!) that the component sending the request to Uvicorn (your container) retries when the container is moved to another resource (node). If you scale the requested resources of a pod up or down, the pod might be rescheduled somewhere else where those resources are available. Again, I am just guessing here, as I cannot see what you are doing, what your cluster looks like, or which components are in your entire stack. I hope this gives at least some pointers on what to look for.

As a final point: I don't see any Uvicorn logs in your output. You could try to find out what is sending the request: is Uvicorn receiving the request twice, or is Uvicorn the part sending the request twice? (The latter seems unlikely, but it's best to be sure.)

0 reactions
rksingh53 commented, Aug 15, 2022

Hi @rksingh53, have you found the culprit? I am curious what was causing this for you, it's an interesting case!

No, I tried a lot to find the root cause for this but couldn't get anything…
