
Making HTTP calls to local InferenceGraph

See original GitHub issue

Hi folks,

I’m following this guide to set up the iris sequence inference graph. My graph is running and I can make requests to its inference services successfully, but I can’t figure out how to make an HTTP request to the graph itself. I’m developing locally and would like to just use port forwarding.

The provided guide uses curl http://model-chainer.default.10.166.15.29.sslip.io -d @./iris-input.json, but I don’t have DNS set up. Could someone point me to how to achieve the same thing using the ingress gateway with a Host header?

I tried the following without success: curl -v -H "Host: ${SERVICE_HOSTNAME}" http://${INGRESS_HOST}:${INGRESS_PORT} -d @./iris-input.json

Many thanks! Appreciate it.
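For anyone hitting the same wall: the guide’s URL works without DNS setup only because sslip.io is a wildcard DNS service that answers any hostname embedding an IP with that IP. The Host value the ingress expects is the hostname KServe assigned to the graph, and it can be read from the resource’s status rather than guessed. A minimal sketch, assuming the guide’s model-chainer graph in the default namespace and that the InferenceGraph publishes its address in status.url the way an InferenceService does:

# Hedged sketch: derive the Host header value from the graph's status URL.
# model-chainer/default follow the guide's example; substitute your own names.
export IG_NAME=model-chainer
export NAMESPACE=default
export SERVICE_HOSTNAME=$(kubectl get inferencegraph ${IG_NAME} -n ${NAMESPACE} \
  -o jsonpath='{.status.url}' | cut -d '/' -f 3)
echo ${SERVICE_HOSTNAME}   # e.g. model-chainer.default.10.166.15.29.sslip.io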

Issue Analytics

  • State: closed
  • Created: a year ago
  • Reactions: 1
  • Comments: 5 (2 by maintainers)

Top GitHub Comments

1 reaction
rachitchauhan43 commented, Oct 31, 2022

First, do port-forwarding like this:

kubectl port-forward --namespace istio-system svc/${INGRESS_GATEWAY_SERVICE} 8080:80
# start another terminal
export INGRESS_HOST=localhost
export INGRESS_PORT=8080

Then, make a call: curl -v -H "Host: ${IG_NAME}.${NAMESPACE}.svc.cluster.local" http://localhost:8080/v1/inferencegraphs/${IG_NAME} -d @./input.json
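For completeness, ${INGRESS_GATEWAY_SERVICE} in the port-forward above is usually looked up from the Istio ingress gateway’s Service; a minimal sketch, assuming a stock istio-ingressgateway install (the label selector may differ in a customized mesh):

# Hedged sketch: resolve the ingress gateway Service name by label.
export INGRESS_GATEWAY_SERVICE=$(kubectl get svc --namespace istio-system \
  --selector="app=istio-ingressgateway" \
  --output jsonpath='{.items[0].metadata.name}')
echo ${INGRESS_GATEWAY_SERVICE}   # typically "istio-ingressgateway"

The Host header matters because the port-forward bypasses DNS entirely: the gateway only sees traffic addressed to localhost, so the HTTP Host value is the one signal it can use to route the request to the graph.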

0 reactions
MichalPitr commented, Nov 14, 2022

@rachitchauhan43 Yes, closing!


