
[FEATURE] Option to infer response_model

See original GitHub issue

Is your feature request related to a problem? Please describe.

Primarily for the sake of the generated OpenAPI spec (but also for security and validation reasons), I currently specify a response_model for nearly all of my endpoints, but for the vast majority of them the response_model is identical to the type the endpoint returns.

This means that I am frequently duplicating the response_model in the return annotation.

To reduce duplication and the room for copy-paste or refactoring mistakes, it would be great if it were possible to (optionally) infer the response_model from the endpoint signature.

It should still be possible to override the value, and it should be ignored if a raw Response class is returned; I would just want the default to be the endpoint's annotated return type. Note: this implementation would also make it much easier to catch, while scanning through code, anywhere the response_model differed from the returned value's type.

To be clear, I don’t think this needs to be the default behavior, I just would like the option to enable it.


I am currently using the following subclass of APIRouter in my code to enable this:

from typing import TYPE_CHECKING, Any, Callable, get_type_hints

from fastapi import APIRouter


class InferringRouter(APIRouter):
    # Hide the override from static type checkers so they keep seeing
    # APIRouter's full add_api_route signature.
    if not TYPE_CHECKING:

        def add_api_route(self, path: str, endpoint: Callable[..., Any], **kwargs: Any) -> None:
            # Fall back to the endpoint's annotated return type when no
            # response_model was passed explicitly.
            if kwargs.get("response_model") is None:
                kwargs["response_model"] = get_type_hints(endpoint).get("return")
            return super().add_api_route(path, endpoint, **kwargs)

    else:  # pragma: no cover
        pass
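The key step in the subclass is `typing.get_type_hints`, which exposes the endpoint's return annotation. A minimal, FastAPI-free demonstration of that lookup (the endpoint name here is just an illustrative stand-in):

```python
from typing import get_type_hints

def read_item() -> dict:
    # A stand-in endpoint; only the return annotation matters here.
    return {"id": 1}

# The subclass above uses this "return" hint as the fallback response_model.
print(get_type_hints(read_item).get("return"))  # <class 'dict'>
```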

I think it would be nice if this capability was upstreamed, though, both to ensure it is maintained and so that others can benefit.


Describe the solution you’d like

Rather than using the above implementation (which has some problems; for example, it isn't possible to specify None as the response_model when the endpoint has an annotated return type), I think it would be better to add a boolean keyword argument to the APIRouter initializer that, when enabled, makes the annotated return type the default response_model.
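To make the proposal concrete, here is a sketch of how such a flag could behave, implemented as a plain class rather than a real APIRouter subclass. The name `infer_response_model` is hypothetical; FastAPI's APIRouter has no such argument:

```python
from typing import Any, Callable, Optional, get_type_hints

class Router:
    """Toy router sketching the proposed opt-in flag (not FastAPI's API)."""

    def __init__(self, infer_response_model: bool = False) -> None:
        self.infer_response_model = infer_response_model
        self.routes: dict = {}

    def add_api_route(self, path: str, endpoint: Callable[..., Any],
                      response_model: Optional[type] = None) -> None:
        if response_model is None and self.infer_response_model:
            # Opt-in behavior: fall back to the annotated return type.
            response_model = get_type_hints(endpoint).get("return")
        self.routes[path] = response_model

def read_item() -> dict:
    return {"id": 1}

router = Router(infer_response_model=True)
router.add_api_route("/items", read_item)
print(router.routes["/items"])  # <class 'dict'>
```

With the flag off, the behavior stays exactly as today: no response_model unless one is passed explicitly.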

Issue Analytics

  • State: open
  • Created: 4 years ago
  • Reactions: 18
  • Comments: 8 (3 by maintainers)

Top GitHub Comments

6 reactions
dmontagu commented, Jan 21, 2020

For anyone following this thread, I’ve released fastapi-utils which includes the InferringRouter class above.

You can read the docs here: https://fastapi-utils.davidmontague.xyz/user-guide/inferring-router/

3 reactions
acnebs commented, Apr 7, 2020

I think the API introduced by @dmontagu in fastapi_utils makes sense to integrate into FastAPI proper, with the addition of response_model=None to override the return annotation, as @tiangolo suggests.

Seems fairly trivial to handle this using a sentinel value.
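A sentinel lets the router tell an explicit response_model=None (opt out of any model) apart from the argument simply not being passed (infer from the annotation). A minimal sketch of that idea; all names here are hypothetical, not FastAPI's actual implementation:

```python
from typing import Any, Callable, Optional, get_type_hints

class _Unset:
    """Hypothetical sentinel type: 'no response_model was passed'."""

UNSET = _Unset()

def resolve_response_model(endpoint: Callable[..., Any],
                           response_model: Any = UNSET) -> Optional[type]:
    if response_model is UNSET:
        # Not specified at all: infer from the return annotation.
        return get_type_hints(endpoint).get("return")
    # Explicitly passed, including None as a deliberate opt-out.
    return response_model

def read_item() -> dict:
    return {"id": 1}

print(resolve_response_model(read_item))        # inferred: <class 'dict'>
print(resolve_response_model(read_item, None))  # explicit opt-out: None
```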
