Running inference
Hi, I am trying to run inference on an exported model. I am using TensorFlow Serving to deploy the model. After deploying, when I send the following request, I get an error:
out = requests.post('http://localhost:8501/v1/models/t5:predict', json=dict(inputs=['hello']))
Error message:
Can anyone help me figure out what I am doing wrong here? Thanks.
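For reference, here is the full client snippet; the port, model name, and payload shape are just what I assumed from the TF Serving REST API docs, so they may need adjusting for a different setup:

import requests

# Assumed REST endpoint: http://<host>:8501/v1/models/<model_name>:predict
# The serving signature is assumed to take a batch of strings under "inputs".
payload = {"inputs": ["hello"]}
resp = requests.post("http://localhost:8501/v1/models/t5:predict", json=payload)
resp.raise_for_status()  # surface HTTP errors instead of ignoring them
print(resp.json())       # predictions come back as JSON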
I’m working on an example at the end of the colab. I’ll add it in the next few hours.
Thanks a lot, it worked. I could not pass a batch size to model.export, but when I manually set model.batch_size = 1 and exported again, it worked on both CPU and GPU.
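In case it helps anyone else, this is roughly what the export step looked like; the model directory, checkpoint step, and constructor arguments are placeholders for my setup, so treat it as a sketch of the idea rather than the exact call:

import t5

# Rebuild the model wrapper; model_dir is a placeholder path.
model = t5.models.MtfModel(model_dir="/path/to/model_dir", tpu=None, batch_size=1)

# export() did not accept a batch size, so set it on the model object first,
# then export the SavedModel that TensorFlow Serving will load.
model.batch_size = 1
model.export("/path/to/export_dir", checkpoint_step=-1)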