
Predict request neither returns nor times out

See original GitHub issue

/kind bug

What steps did you take and what happened: I am implementing serving for a CRNN model. The model uses the TensorFlow predict method, and prediction works fine without KFServing. When I implement kfserving.KFModel, the predict request neither returns nor times out. The payload is less than 1 MB:

{
  "roi": "base64(bytearray(image.png))"
}

I also have a preprocess method implemented, which base64-decodes the payload, converts the image byte array to a tensor, and applies resizing, grayscale conversion, and padding.
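The setup described above can be sketched roughly as follows. This is a minimal illustration, not the reporter's actual code: a `KFModel` stand-in is defined inline so the snippet runs without kfserving installed, and the tensor conversion, resizing, grayscale, and padding steps are elided; only the base64 decode of the `"roi"` field follows the payload shown above.

```python
import base64

# Stand-in for kfserving.KFModel (0.4.x Python SDK) so this sketch runs
# standalone; the real base class provides the preprocess/predict hooks.
class KFModel:
    def __init__(self, name):
        self.name = name
        self.ready = False

class CRNNModel(KFModel):
    def preprocess(self, request):
        # "roi" carries the base64-encoded PNG bytes, per the payload above.
        image_bytes = base64.b64decode(request["roi"])
        # ...convert to tensor, resize, grayscale, pad here (omitted)...
        # Hypothetical placeholder: pass the first bytes through as-is.
        return {"instances": [list(image_bytes[:4])]}

    def predict(self, request):
        # Would forward the preprocessed tensor to the TensorFlow
        # predict call; echoed back here for illustration.
        return {"predictions": request["instances"]}

model = CRNNModel("crnn")
payload = {"roi": base64.b64encode(b"\x89PNG....").decode()}
print(model.predict(model.preprocess(payload)))
```

A hang with no timeout in this flow typically means one of the hooks never returns, which is why isolating preprocess from predict (as suggested in the comments below) is a useful first step.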

What did you expect to happen: Any suggestion for resolving this issue is appreciated. Is there a reason why the request does not time out?

Anything else you would like to add:

Environment:

  • Istio Version: NA
  • Knative Version: NA
  • KFServing Version: 0.4.0 Python SDK
  • Kubeflow version: NA
  • Kfdef [k8s_istio/istio_dex/gcp_basic_auth/gcp_iap/aws/aws_cognito/ibm]: NA
  • Minikube version: NA
  • Kubernetes version (use kubectl version): NA
  • OS (e.g. from /etc/os-release): OS X

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 11 (5 by maintainers)

Top GitHub Comments

1 reaction
yuzisun commented, Aug 24, 2020

@ShilpaGopal I am wondering if multi-processing is causing some weirdness in your server, but when you deploy it on KFServing the timeout is enforced by the sidecar regardless of how the model server behaves. Regarding using a transformer with TF Serving: can you mock out the call to TF Serving by overriding predict on the transformer? That way you can test the transformer separately.
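The mocking pattern suggested in this comment might look like the sketch below. The `ImageTransformer` class name and the preprocess body are assumptions made for illustration; only the idea of replacing predict with a stub so the transformer can be exercised without a live TF Serving backend comes from the comment above.

```python
import base64

# Hypothetical transformer: preprocess is the code under test; predict
# would normally POST the instances to TF Serving.
class ImageTransformer:
    def preprocess(self, request):
        # Illustrative stand-in for the real decode/resize/pad pipeline.
        return {"instances": [len(base64.b64decode(request["roi"]))]}

    def predict(self, request):
        raise NotImplementedError("would call out to TF Serving")

transformer = ImageTransformer()
# Mock out the TF Serving round-trip so only preprocess is exercised.
transformer.predict = lambda req: {"predictions": req["instances"]}

payload = {"roi": base64.b64encode(b"pixels").decode()}
result = transformer.predict(transformer.preprocess(payload))
print(result)
```

If the mocked pipeline returns promptly, the hang is likely in the real predict path (the TF Serving call) rather than in the preprocessing.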

0 reactions
k8s-ci-robot commented, Aug 28, 2020

@yuzisun: Closing this issue.

In response to this:

thanks @ShilpaGopal! let us know your findings.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Read more comments on GitHub >

Top Results From Across the Web

Predict request neither returns nor there is a timeout #1037
What steps did you take and what happened: I am implementing serving for CRNN model. This model uses TensorFlow predict method. Prediction works ......
Read more >
MXNet timeout on second model prediction - Stack Overflow
I'm setting up a flask server which loads my mxnet model and has a predict-Api-method. While testing the api I noticed, that the...
Read more >
Get predictions from a custom trained model | Vertex AI
This page shows you how to get online (real-time) predictions and batch predictions from your custom trained models using the Google Cloud console...
Read more >
DataRobot Prediction API
This section describes how to use DataRobot's Prediction API to make predictions on a dedicated prediction server.
Read more >
Palo Alto Networks Firewall Session Overview
On Palo Alto Networks firewalls there are two types of sessions: ... Note: Each application's predict session has its own timeout setting.
Read more >
