TensorFlow model cannot be parsed within the memory limit
Hello,
I’m trying to upload a model generated with TF Ranking (32 MB) to BigQuery, which I saved like this:
import tensorflow as tf

signatures = {
    'serving_default':
        make_keras_tft_serving_fn(
            ranker,
            tf_transform_output,
            context_cols,
            example_cols
        ).get_concrete_function(
            tf.TensorSpec(
                shape=[None],
                dtype=tf.string,
                name='examples'
            )
        ),
}
ranker.load_weights(checkpoint)
ranker.save(model_dir, save_format='tf', signatures=signatures)
but I got this error:
Error while reading data, error message: TensorFlow model cannot be parsed within the memory limit; try reducing the model size

Previously I managed to upload a bigger model (>200 MB) created with regular TF 1.13 code, so I don’t understand the message.
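For reference, here is a minimal sketch (assuming only the model_dir used above, nothing else from my setup) that lists the size of each file in the exported SavedModel. My guess is that the saved_model.pb graph proto, rather than the variable shards, is what has to be parsed here, and a TF2/Keras export can produce a much larger graph proto than the old TF 1.x code did even when the weights are small:

    import os

    # Walk the SavedModel directory and print each file's size.
    # saved_model.pb holds the graph/functions; variables/ holds the weights.
    for root, _, files in os.walk(model_dir):
        for name in sorted(files):
            path = os.path.join(root, name)
            size_mb = os.path.getsize(path) / 1e6
            print(f'{os.path.relpath(path, model_dir)}: {size_mb:.1f} MB')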
Has anyone already encountered this?
Thanks
On Ubuntu 18.04, Python 3.7.3
tensorflow==2.4.1
tensorflow-addons==0.12.1
tensorflow-datasets==4.2.0
tensorflow-estimator==2.4.0
tensorflow-hub==0.11.0
tensorflow-metadata==0.29.0
tensorflow-model-optimization==0.5.0
tensorflow-ranking==0.3.3
tensorflow-serving-api==2.4.1
tensorflow-transform==0.29.0

We think we found a workaround, but we’re still not sure it’s viable: we convert our Keras models to the old TF1-style frozen graph (variables folded into constants) and then re-attach the result to a SavedModel. It seems to reduce the RAM needed to parse the model.
Take a look at this: https://leimao.github.io/blog/Save-Load-Inference-From-TF2-Frozen-Graph/
Then, once you have your frozen concrete function, re-attach it to a tf.Module and save that as a SavedModel again, as in the sketch below.
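Here is a minimal sketch of the idea, assuming the ranker, signatures, and model_dir names from the original post; frozen_model_dir is just an illustrative output path, not something from our code. It follows the blog post above: freeze the serving concrete function with convert_variables_to_constants_v2, attach the frozen function to a bare tf.Module, and export it as a SavedModel again:

    import tensorflow as tf
    from tensorflow.python.framework.convert_to_constants import (
        convert_variables_to_constants_v2,
    )

    # The serving signature from the original post is already a ConcreteFunction.
    serving_fn = signatures['serving_default']

    # Fold the variables into the graph as constants (old TF1-style frozen graph).
    frozen_fn = convert_variables_to_constants_v2(serving_fn)

    # Re-attach the frozen function to a plain tf.Module and export it again,
    # so the new SavedModel still exposes a 'serving_default' signature.
    module = tf.Module()
    module.serving_fn = frozen_fn
    frozen_model_dir = model_dir + '_frozen'  # illustrative output path
    tf.saved_model.save(
        module,
        frozen_model_dir,
        signatures={'serving_default': frozen_fn},
    )

Note that freezing only folds variables into constants; resources such as lookup tables (which tf.Transform models often carry) may not freeze cleanly, which is why we ask below whether your model can be frozen at all.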
Again, we’re still working on the topic, so we don’t have a long-term view of the solution: there might be an issue somewhere…
If you find an issue or have a better idea, please tell us!
Some more links: https://github.com/search?q=convert_variables_to_constants_v2&type=code
We still use the solution above; this, plus some Grappler optimizations, fixed the issue for most domains. Are you using a model that cannot be frozen?