Unable to retrieve SERVICE_HOSTNAME for flowers-sample example
/kind bug
What steps did you take and what happened:
I installed Kubeflow on GCP using https://github.com/kubeflow/manifests/blob/master/kfdef/kfctl_gcp_iap.yaml (which works with the manifests/master branch), as suggested in #991. I used this kfdef file because I already had the same issue with version 1.0.0, and installing with 1.0.1 fails with an error 500.
The installation with this kfdef file doesn’t generate any error.
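For reference, the install followed the standard kfctl flow; a minimal sketch (assuming kfctl v1.0.x and that ${KF_DIR} and the IAP OAuth CLIENT_ID/CLIENT_SECRET variables are already set up as in the Kubeflow GCP docs):

# Sketch of the kfctl flow used for this install; CONFIG_URI is the
# raw.githubusercontent.com form of the kfdef file linked above.
export CONFIG_URI="https://raw.githubusercontent.com/kubeflow/manifests/master/kfdef/kfctl_gcp_iap.yaml"
mkdir -p ${KF_DIR}
cd ${KF_DIR}
kfctl apply -V -f ${CONFIG_URI}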
When I deploy the flowers-sample InferenceService, it looks OK:
kubectl apply -f tensorflow.yaml
inferenceservice.serving.kubeflow.org/flowers-sample created
(according to the documentation, the message should be inferenceservice.serving.kubeflow.org/flowers-sample configured instead of created)
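For completeness, here is a sketch of what the applied spec amounts to, reconstructed from the describe output further down (the actual tensorflow.yaml from the KFServing samples may omit fields that the webhook defaults, such as runtimeVersion and resources):

# Equivalent apply via a heredoc (sketch; fields reconstructed from the Spec
# section shown in the describe output below).
cat <<'EOF' | kubectl apply -f -
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: flowers-sample
spec:
  default:
    predictor:
      tensorflow:
        storageUri: "gs://kfserving-samples/models/tensorflow/flowers"
        runtimeVersion: "1.14.0"
        resources:
          limits:
            cpu: "1"
            memory: 2Gi
          requests:
            cpu: "1"
            memory: 2Gi
EOF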
When trying to run a prediction, I am unable to get the CLUSTER_IP by following the instructions in the documentation. The command echo $CLUSTER_IP returns an empty value:
MODEL_NAME=flowers-sample
INPUT_PATH=@./input.json
CLUSTER_IP=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $CLUSTER_IP
If I try with another jsonpath, I get:
CLUSTER_IP=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.clusterIP}')
echo $CLUSTER_IP
10.27.243.240
Is {.spec.clusterIP} the right jsonpath for retrieving the CLUSTER_IP on GCP?
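A sketch of how I can check what the gateway Service actually exposes ({.spec.clusterIP} is only the cluster-internal IP, so I assume the empty result above just means there is no LoadBalancer ingress on this Service):

# Show the Service type; .status.loadBalancer.ingress[0].ip is only populated
# for type LoadBalancer, so a NodePort gateway (which I believe the GCP IAP
# setup uses, since external traffic goes through the IAP ingress) prints nothing.
kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.type}'
kubectl -n istio-system get service istio-ingressgateway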
Then I try to retrieve the SERVICE_HOSTNAME; the command echo $SERVICE_HOSTNAME also returns an empty value:
SERVICE_HOSTNAME=$(kubectl get inferenceservice ${MODEL_NAME} -o jsonpath='{.status.url}' | cut -d "/" -f 3)
echo $SERVICE_HOSTNAME
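For context, these variables are meant to feed the prediction request from the sample docs, roughly:

# Prediction call from the KFServing flowers sample (sketch); the Host header
# must carry the InferenceService hostname because the request goes through
# the shared Istio ingress gateway.
curl -v -H "Host: ${SERVICE_HOSTNAME}" http://${CLUSTER_IP}/v1/models/${MODEL_NAME}:predict -d ${INPUT_PATH}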
When describing the InferenceService flowers-sample, I see some errors: "Ingress has not yet been reconciled" (Reason: IngressNotConfigured, Status: Unknown) and "Failed to reconcile predictor" (Reason: PredictorHostnameUnknown). Did I miss something?
More details about the InferenceService flowers-sample are below. Please let me know if you need anything else:
kubectl describe inferenceservice flowers-sample
Name:         flowers-sample
Namespace:    kubeflow-jeanarmel-luce
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"serving.kubeflow.org/v1alpha2","kind":"InferenceService","metadata":{"annotations":{},"name":"flowers-sample","namespace":"…
API Version:  serving.kubeflow.org/v1alpha2
Kind:         InferenceService
Metadata:
  Creation Timestamp:  2020-03-13T06:22:10Z
  Generation:          4
  Resource Version:    22777
  Self Link:           /apis/serving.kubeflow.org/v1alpha2/namespaces/kubeflow-jeanarmel-luce/inferenceservices/flowers-sample
  UID:                 f8f39e4d-64f2-11ea-bde0-42010a840165
Spec:
  Default:
    Predictor:
      Tensorflow:
        Resources:
          Limits:
            Cpu:     1
            Memory:  2Gi
          Requests:
            Cpu:     1
            Memory:  2Gi
        Runtime Version:  1.14.0
        Storage Uri:      gs://kfserving-samples/models/tensorflow/flowers
Status:
  Canary:
  Conditions:
    Last Transition Time:  2020-03-13T06:22:53Z
    Message:               Ingress has not yet been reconciled.
    Reason:                IngressNotConfigured
    Status:                Unknown
    Type:                  DefaultPredictorReady
    Last Transition Time:  2020-03-13T06:22:11Z
    Message:               Failed to reconcile predictor
    Reason:                PredictorHostnameUnknown
    Status:                False
    Type:                  Ready
    Last Transition Time:  2020-03-13T06:22:11Z
    Message:               Failed to reconcile predictor
    Reason:                PredictorHostnameUnknown
    Status:                False
    Type:                  RoutesReady
  Default:
    Predictor:
      Name:  flowers-sample-predictor-default-8mpwh
Events:  <none>
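In case it helps, here is what I plan to check next (a sketch; the controller namespace and pod name are assumptions and may differ in a Kubeflow install):

# Check whether Knative created and reconciled the underlying Service for the
# predictor (ksvc is the short name for Knative Services).
kubectl get ksvc -n kubeflow-jeanarmel-luce

# KFServing controller logs (assumption: the controller runs as the
# kfserving-controller-manager StatefulSet in kfserving-system; in some
# Kubeflow installs it lives in the kubeflow namespace instead).
kubectl logs kfserving-controller-manager-0 -c manager -n kfserving-system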
What did you expect to happen:
Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.]
Environment:
- Istio Version: 1.1.6 (installed by kfctl)
- Knative Version: (installed by kfctl)
- KFServing Version:
- Kubeflow version: 1.0 and master branch
- Minikube version:
- Kubernetes version: (use kubectl version): v1.14.10-gke.27
- OS (e.g. from /etc/os-release):
Top GitHub Comments
path and methods are optional and default to *. I think I gave you the wrong spec, can you try the following instead?

I created a new instance with the patched Kubeflow v1.0.1. The good news is that I can now retrieve the CLUSTER_IP, but it looks like I am not able to retrieve the SERVICE_HOSTNAME:
~/kubeflow/kfserving/kfserving/docs/samples/tensorflow$ MODEL_NAME=flowers-sample
~/kubeflow/kfserving/kfserving/docs/samples/tensorflow$ INPUT_PATH=@./input.json
~/kubeflow/kfserving/kfserving/docs/samples/tensorflow$ CLUSTER_IP=$(kubectl -n istio-system get service kfserving-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
~/kubeflow/kfserving/kfserving/docs/samples/tensorflow$ echo $CLUSTER_IP
35.205.8.134
~/kubeflow/kfserving/kfserving/docs/samples/tensorflow$ SERVICE_HOSTNAME=$(kubectl get inferenceservice ${MODEL_NAME} -o jsonpath='{.status.url}' | cut -d "/" -f 3)
~/kubeflow/kfserving/kfserving/docs/samples/tensorflow$ echo $SERVICE_HOSTNAME
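I will keep watching whether .status.url gets populated once the predictor reconciles; a sketch of the checks (the example.com domain is Knative's default and is only an assumption about this cluster's config-domain setting):

# SERVICE_HOSTNAME comes from .status.url, which stays empty until the
# predictor reconciles; check it directly:
kubectl get inferenceservice flowers-sample -n kubeflow-jeanarmel-luce -o jsonpath='{.status.url}'

# Once Ready, the URL normally follows Knative's domain template, e.g.
# http://flowers-sample.kubeflow-jeanarmel-luce.example.com
# (assumption: the default example.com entry in config-domain is unchanged).
kubectl get configmap config-domain -n knative-serving -o yaml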