Making HTTP calls to local InferenceGraph
Hi folks,
I’m following this guide to set up the iris sequence inference graph. My graph is running and I can make requests to its inference services successfully, but I can’t figure out how to make an HTTP request to the graph itself. I am developing locally and would like to just use port forwarding.
The provided guide uses
curl http://model-chainer.default.10.166.15.29.sslip.io -d @./iris-input.json
but I don’t have DNS set up. Could someone point me to how to achieve the same using the Ingress gateway with a Host header?
I tried the below without success.
curl -v -H "Host: ${SERVICE_HOSTNAME}" http://${INGRESS_HOST}:${INGRESS_PORT} -d @./iris-input.json
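For context, INGRESS_HOST and INGRESS_PORT are typically populated from the Istio ingress gateway service, along the lines of the KServe docs. The service name and namespace below assume a default Istio install with a LoadBalancer service; adjust for your cluster:

```shell
# Assumes Istio's ingress gateway lives in the istio-system namespace
INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
```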
Many thanks! Appreciate it.
Issue Analytics
- Created a year ago
- Reactions: 1
- Comments: 5 (2 by maintainers)
Top GitHub Comments
First do port-forwarding like this:
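The port-forward command itself did not survive in this capture; a typical one for an Istio-based KServe install (service name and namespace assumed, not confirmed by the original comment) would be:

```shell
# Forward local port 8080 to the Istio ingress gateway's HTTP port
kubectl port-forward -n istio-system svc/istio-ingressgateway 8080:80
```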
Then, make a call:
curl -v -H "Host: ${IG_NAME}.${NAMESPACE}.svc.cluster.local" http://localhost:8080/v1/inferencegraphs/${IG_NAME} -d @./input.json
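With hypothetical concrete values filled in (a graph named model-chainer in the default namespace, matching the guide, and the iris input file from the question), that call would look like:

```shell
# Host header routes the request to the InferenceGraph through the gateway
curl -v -H "Host: model-chainer.default.svc.cluster.local" \
  http://localhost:8080/v1/inferencegraphs/model-chainer \
  -d @./iris-input.json
```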
@rachitchauhan43 Yes, closing!