Predict from in-memory dataset
Hi, currently the `predict_input_fn` in the demo loads the file to be scored from disk (using the specified path); however, in a real-life scenario (e.g. web search) this is not realistic: the prediction must be made from a dataset already loaded into memory.
How could the demo code be modified to create a dataset on the fly from a context (query) and a set of examples (documents, which can be loaded from disk beforehand), and then use this in-memory dataset to make a prediction? Thanks.
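For concreteness, one way to build such a dataset is to construct the demo's ExampleListWithContext (ELWC) protos directly in memory rather than reading them from a file. A minimal sketch, assuming token features named `query_tokens` and `document_tokens` (adjust to the schema the model was trained with):

```python
import tensorflow as tf
from tensorflow_serving.apis import input_pb2

def _tokens_feature(tokens):
    # Encode a list of string tokens as a bytes_list Feature.
    return tf.train.Feature(bytes_list=tf.train.BytesList(
        value=[t.encode("utf-8") for t in tokens]))

def make_elwc(query_tokens, documents_tokens):
    # One ELWC = one context (the query) plus one Example per document.
    elwc = input_pb2.ExampleListWithContext()
    elwc.context.features.feature["query_tokens"].CopyFrom(
        _tokens_feature(query_tokens))
    for doc_tokens in documents_tokens:
        elwc.examples.add().features.feature["document_tokens"].CopyFrom(
            _tokens_feature(doc_tokens))
    return elwc.SerializeToString()

# Documents can be loaded from disk beforehand; only the query arrives live.
serialized_elwcs = [make_elwc(["cheap", "flights"],
                              [["book", "a", "flight"],
                               ["hotel", "deals"]])]
```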

Thanks for the tips. Does the second option (`predictions = self._estimator.predict(input_fn=lambda: (features, None))`) also require rewriting the tutorial's estimator? If so, how? And how can the features be converted into the correct format? The example shows a dictionary of arrays of floats, but in the ANTIQUE dataset the data (answers) are stored as protocol buffers.

@davidmosca, have you considered exporting the model to a SavedModel and using TensorFlow Serving to do the prediction?
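For what it's worth, the second option can handle proto-encoded data by parsing the serialized ELWCs inside the `input_fn` with TF-Ranking's parsing helpers, which turn them into the dict of tensors the model expects. A sketch, using the `serialized_elwcs` built above; the feature specs are assumptions and must match what the model was trained with:

```python
import tensorflow as tf
import tensorflow_ranking as tfr

context_feature_spec = {"query_tokens": tf.io.VarLenFeature(tf.string)}
example_feature_spec = {"document_tokens": tf.io.VarLenFeature(tf.string)}

def predict_input_fn():
    # Batch the in-memory serialized ELWCs and parse them into feature
    # tensors; no file path or disk read is involved.
    dataset = tf.data.Dataset.from_tensor_slices(serialized_elwcs).batch(1)
    return dataset.map(lambda serialized: tfr.data.parse_from_example_list(
        serialized,
        context_feature_spec=context_feature_spec,
        example_feature_spec=example_feature_spec))

# `ranker` is the trained tf.estimator.Estimator from the demo (name assumed).
scores = list(ranker.predict(input_fn=predict_input_fn))
```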
`estimator.predict` is mainly for offline analysis or debugging purposes, as far as I can tell. For production, you need to export the model and serve it outside of the estimator; see https://www.tensorflow.org/tfx/tutorials/serving/rest_simple#serve_your_model_with_tensorflow_serving.
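For reference, a sketch of that export-and-serve path using TF-Ranking's serving input receiver; the export path and model name below are illustrative, and the feature specs are the same assumptions as above:

```python
import tensorflow_ranking as tfr

# Build a receiver that accepts serialized ELWC protos at serving time.
serving_input_receiver_fn = tfr.data.build_ranking_serving_input_receiver_fn(
    data_format=tfr.data.ELWC,
    context_feature_spec=context_feature_spec,
    example_feature_spec=example_feature_spec)

# Export the trained estimator (`ranker`, name assumed) as a SavedModel.
export_dir = ranker.export_saved_model("/tmp/ranking_model",
                                       serving_input_receiver_fn)

# Then serve it with TensorFlow Serving, e.g.:
#   tensorflow_model_server --rest_api_port=8501 \
#       --model_name=ranker --model_base_path=/tmp/ranking_model
```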