
Running inference

See original GitHub issue

Hi, I am trying to run inference on an exported model. I am using TensorFlow Serving to deploy the model. After deploying, when I send the following request, I get an error:

import requests
out = requests.post('http://localhost:8501/v1/models/t5:predict', json=dict(inputs=['hello']))

Error message:

[screenshot of the error message; the image was not captured in this page]

Can anyone tell me what I am doing wrong here? Thanks.
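
A note for readers (not from the original thread): before debugging the :predict call itself, TensorFlow Serving's documented REST endpoints can confirm that the model loaded and show which signature it expects. A minimal sketch, reusing the model name and port from the question:

import requests

# Status endpoint: confirms the model version loaded (state should be AVAILABLE).
status = requests.get('http://localhost:8501/v1/models/t5')
print(status.json())

# Metadata endpoint: shows the exported signature, including input tensor
# names and shapes - useful when a fixed batch size was baked into the export.
meta = requests.get('http://localhost:8501/v1/models/t5/metadata')
print(meta.json())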

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 15

Top GitHub Comments

1 reaction
adarob commented on Feb 21, 2020

I’m working on an example at the end of the Colab. I’ll add it in the next few hours.

0 reactions
NaxAlpha commented on Feb 22, 2020

Thanks a lot, it worked. I could not pass a batch size to model.export, but when I manually set model.batch_size = 1 and exported again, it worked on both CPU and GPU.
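
For readers hitting the same error, a minimal sketch of the workaround described above, assuming the t5 library's MtfModel API as of early 2020 (the checkpoint path and sequence lengths are placeholders, and exact constructor and export arguments may differ between t5 versions):

import t5

# Hypothetical checkpoint location and sequence lengths; adjust to your setup.
model = t5.models.MtfModel(
    model_dir='gs://your-bucket/t5-small',
    tpu=None,
    batch_size=1,
    sequence_length={'inputs': 128, 'targets': 128},
)

# model.export did not accept a batch size directly, so the attribute is
# set on the model object before exporting:
model.batch_size = 1
model.export('/models/t5/1')  # numbered version directory, as TF Serving expects

The exported SavedModel then accepts single-example requests like the one in the question, on both CPU and GPU.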

Read more comments on GitHub >

Top Results From Across the Web

What is Machine Learning Inference? - Hazelcast
Machine learning inference is the process of running live data into a machine learning algorithm to calculate output such as a single numerical...
Read more >
Understanding Machine Learning Inference - Run:AI
Machine learning (ML) inference involves applying a machine learning model to a dataset and generating an output or “prediction”.
Read more >
Deep Learning Training vs. Inference: What's the Difference?
Machine learning inference is the ability of a system to make predictions from novel data. This can be helpful if you need to...
Read more >
Running inference - Infer.NET
Infer.NET is a framework for running Bayesian inference in graphical models. It can be used to solve many different kinds of machine learning...
Read more >
BigQuery ML model inference overview - Google Cloud
Machine learning inference is the process of running data points into a machine learning model to calculate an output such as a single...
Read more >
