Stuck on an issue?

Lightrun Answers was designed to reduce the constant googling that comes with debugging third-party libraries. It collects links to all the places you might look while hunting down a tough bug.

And, if you’re still stuck at the end, we’re happy to hop on a call to see how we can help out.

All the examples inference service status stays at False

See original GitHub issue

/kind bug

What steps did you take and what happened: Followed the steps to reproduce the sklearn/TensorFlow examples and deployed a predictor module; the inference service status stays at False.

What did you expect to happen: The predictor InferenceService should reach the True/Ready state.

Anything else you would like to add: The deployment was done on a Kubeflow v0.7 Kubernetes cluster.
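
For context, the sklearn example in question boils down to applying a single InferenceService manifest. A minimal sketch, based on the KFServing samples of that era (the serving.kubeflow.org/v1alpha2 API); the exact storageUri and field names may differ in your checkout:

    apiVersion: "serving.kubeflow.org/v1alpha2"
    kind: "InferenceService"
    metadata:
      name: "sklearn-iris"
    spec:
      default:
        predictor:
          sklearn:
            storageUri: "gs://kfserving-samples/models/sklearn/iris"

After kubectl apply -f sklearn.yaml, the READY column of kubectl get inferenceservices should eventually flip to True; the problem reported here is that it stays at False.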

Environment:

  • Istio Version: 1.1.7+
  • Knative Version: 0.8.0+
  • KFServing Version:
  • Kubeflow version: 0.7+
  • Minikube version:
  • Kubernetes version (kubectl version): 1.13.1+
  • OS (e.g. from /etc/os-release): Linux
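
When an InferenceService sticks at False, the status conditions usually say which layer is failing (the predictor pods, the Knative revision, or Istio routing). A hedged diagnostic sketch, assuming the default namespaces of a Kubeflow 0.7 install (kfserving-system, knative-serving):

    # READY column plus the reason the service is not ready
    kubectl get inferenceservices -n <namespace>
    kubectl describe inferenceservice <name> -n <namespace>

    # Drill into the layers KFServing builds on
    kubectl get revisions -n <namespace>
    kubectl get pods -n <namespace>

    # The controller log often contains the underlying error;
    # the pod name assumes the default StatefulSet install
    kubectl logs kfserving-controller-manager-0 -n kfserving-system -c manager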

Issue Analytics

  • State: closed
  • Created 4 years ago
  • Comments: 14 (7 by maintainers)

Top GitHub Comments

1 reaction
wronk commented, Dec 6, 2019

@capt2101akash, can you give any hints on which versions were compatible with each other? We’re also hitting this issue with the default CLI guide for running KF 0.7 on GKE. The config we used was kfctl_gcp_iap.0.7.0.yaml
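
(For anyone auditing their own stack against the versions listed above, the installed versions can be read off the running components. A sketch, not from the thread, assuming the default deployment names of this era's install; adjust namespaces and names to your cluster:)

    # Knative Serving version, from the release label on its namespace
    kubectl get namespace knative-serving -o yaml | grep serving.knative.dev/release

    # Istio version, from the pilot image tag
    kubectl get deployment istio-pilot -n istio-system -o yaml | grep image:

    # KFServing version, from the controller image tag
    kubectl get statefulset kfserving-controller-manager -n kfserving-system -o yaml | grep image: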

0 reactions
alexwennerberg commented, Jan 10, 2020

@wronk Thank you 😃

Read more comments on GitHub

Top Results From Across the Web

All the examples inference service status stays at False #527
The predictor inference service to be at True/Ready state. Anything else you would like to add: [Miscellaneous information that will assist in ...

Deploy ML models to Kubernetes Service with v1 - Azure ...
Use CLI (v1) or SDK (v1) to deploy your Azure Machine Learning models as a web service using Azure Kubernetes Service.

All models are wrong
All models are wrong is a common aphorism in statistics; it is often expanded as "All models are wrong, but some are useful"....

Machine Learning - Amazon SageMaker FAQs
SageMaker Serverless Inference can scale instantly from tens to thousands of inferences within seconds based on the usage patterns, making it ideal for...

Template type checking
In the most basic type-checking mode, with the fullTemplateTypeCheck flag set to false, Angular validates only top-level expressions in a template.
