Predict from stdin or via the programmatic API
As soon as my brand new model is ready, I would like ludwig to enable predictions via the command line, so that one could do something like
echo "call me at +3912345679 for offers" | ludwig predict --only_prediction --model_path /path/to/model -
where the - may indicate stdin (as an example) and get the model predictions directly. This would help to integrate the ludwig command in an inference pipeline.
If I have understood the API well (my model is not ready yet…), it should be possible via the programmatic API, like this:
import logging

from ludwig.api import LudwigModel

# load a pretrained model
model_path = '/path/to/model'
model = LudwigModel.load(model_path)

# obtain predictions for an in-memory sample
my_dict = {'text': ['call me at +3912345679 for offers']}
predictions = model.predict(data_dict=my_dict,
                            return_type=dict,
                            batch_size=128,
                            gpus=None,
                            gpu_fraction=1,
                            logging_level=logging.DEBUG)

# close the model (eventually) to free its resources
model.close()
Is that correct?
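In the meantime, here is a minimal sketch of how the piped invocation above could be emulated with the programmatic API (assuming the predict signature above is right; the script name predict_stdin.py and the 'text' feature name are placeholders for this example):

import sys
import logging

from ludwig.api import LudwigModel

# predict_stdin.py (hypothetical): read one sample per line from stdin,
# batch them into a single data_dict, and print the predictions to stdout
model = LudwigModel.load('/path/to/model')
samples = [line.strip() for line in sys.stdin if line.strip()]
predictions = model.predict(data_dict={'text': samples},
                            return_type=dict,
                            logging_level=logging.ERROR)
print(predictions)
model.close()

which would allow something like
echo "call me at +3912345679 for offers" | python predict_stdin.py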
Top GitHub Comments
I added it to the list of enhancements! Stay tuned.
Got it, that seems like a great idea. What I was planning was to have a ludwig serve command that would start a REST server; maybe I can either create another command or add an option for choosing whether to start the server or to listen to stdin. The reason for not using predict for this purpose is that it would be nice to also expose model.train() and model.train_online() and basically any other function of the API. The only difficulty that I can imagine with reading from stdin is that the input should be in a kind of tabular encoding, because the methods need to know which feature each value belongs to, so for instance your example should be stdin.write(text: call me at +3912345679 for offers, other_feature: other_value\r\n). Plus, more than one sample may be passed at the same time, which means you probably want to encode it in something like JSON: stdin.write({"text": ["call me at +3912345679 for offers", "text_2"], "other_feature": [other_value_1, other_value_2]}). I think that would work, what do you think? Maybe JSON will make escaping characters a bit of a mess…
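To make the JSON-over-stdin idea concrete, here is a rough sketch of what such a listener could look like (purely illustrative: no such mode exists in ludwig yet, and the newline-delimited JSON framing is my assumption):

import json
import logging
import sys

from ludwig.api import LudwigModel

# hypothetical stdin listener: each input line is one JSON object mapping
# feature names to lists of values, e.g.
# {"text": ["call me at +3912345679 for offers"], "other_feature": ["other_value"]}
model = LudwigModel.load('/path/to/model')
for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    data_dict = json.loads(line)  # JSON takes care of character escaping
    predictions = model.predict(data_dict=data_dict,
                                return_type=dict,
                                logging_level=logging.ERROR)
    # predictions may contain numpy values, hence default=str
    print(json.dumps(predictions, default=str))
model.close()

One upside of this framing is that the escaping worry mostly goes away: any JSON library on the writing side produces correctly escaped, one-object-per-line input.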