Table not initialized when serving model
Posting for @awadalaa
We are blocked on experimenting with a new TensorFlow model in production because it fails at inference time with this error:
tensorflow.python.framework.errors_impl.FailedPreconditionError: Table not initialized.
We have narrowed the issue down to a piece of our code that applies a BM25 transformation in a TensorFlow Transform job. Applying that transformation learns and applies a vocabulary; however, when we run inference, the model fails to initialize the table from that vocabulary file. Here is the BM25 code we are using and the line where it fails: https://gist.github.com/awadalaa/e9290cf6674884d8e197fe315ed7d832#file-gistfile1-txt-L176-L177
More background: we run a TensorFlow Transform Beam/Dataflow job that executes this transformation and saves the transform graph. Later, when we train our model, we save it with a signature that applies the TFT layer: transformed_features = model.tft_layer(parsed_features). We noticed that the exported model's assets directory does not include the intermediate vocabulary used by the BM25 transformation, although it does include every other vocabulary file learned in the TFT job. Any ideas why this transformation would fail to export its vocabulary assets for the saved model?
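For context, the prediction signature described above is typically wired roughly like this (a minimal sketch; `make_serving_signature`, `RAW_FEATURE_SPEC`, and the feature names are assumptions for illustration, not the poster's actual code):

```python
import tensorflow as tf

# Hypothetical sketch of a serving signature that parses serialized
# tf.Examples, applies the TFT layer, then calls the trained model.
# RAW_FEATURE_SPEC is a placeholder feature spec, not the real one.
RAW_FEATURE_SPEC = {"query": tf.io.FixedLenFeature([], tf.string)}

def make_serving_signature(model):
    @tf.function(input_signature=[tf.TensorSpec([None], tf.string, name="examples")])
    def serve_fn(serialized_examples):
        parsed_features = tf.io.parse_example(serialized_examples, RAW_FEATURE_SPEC)
        # The lookup tables (and their vocabulary-file assets) live
        # inside this layer.
        transformed_features = model.tft_layer(parsed_features)
        return model(transformed_features)
    return serve_fn
```

The failure reported below happens when such a signature is saved but the object being saved does not itself track the layer's tables.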
Stack trace here:
Traceback (most recent call last):
File "/Users/aawad/Desktop/keras_predict.py", line 174, in <module>
print("prediction_output", predict(inference_data))
File "/usr/local/opt/python@3.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1655, in __call__
return self._call_impl(args, kwargs)
File "/usr/local/opt/python@3.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1673, in _call_impl
return self._call_with_flat_signature(args, kwargs, cancellation_manager)
File "/usr/local/opt/python@3.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1722, in _call_with_flat_signature
return self._call_flat(args, self.captured_inputs, cancellation_manager)
File "/usr/local/opt/python@3.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/saved_model/load.py", line 106, in _call_flat
cancellation_manager)
File "/usr/local/opt/python@3.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1924, in _call_flat
ctx, args, cancellation_manager=cancellation_manager))
File "/usr/local/opt/python@3.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 550, in call
ctx=ctx)
File "/usr/local/opt/python@3.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Table not initialized.
[[{{node StatefulPartitionedCall/StatefulPartitionedCall/transform_features_layer/StatefulPartitionedCall/transform/apply_haystack_vocabulary_query_ngram_substrings_tags_ngram_substrings/hash_table_Lookup/LookupTableFindV2}}]] [Op:__inference_signature_wrapper_23443]
Function call stack: signature_wrapper
Issue Analytics
- State:
- Created 2 years ago
- Comments: 11 (5 by maintainers)
My understanding is that Keras expects that all resources that need to be tracked are tracked by the main object that is being saved (in this case the full_model). I suspect it isn’t common that the signatures are on a model different from the one being saved. I will try and verify this and get back to you.
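This tracking rule can be demonstrated in isolation (a minimal sketch, independent of TFT and of the poster's code: a hash table's vocabulary file is only copied into the SavedModel's assets/ directory when the table is reachable from the object passed to tf.saved_model.save):

```python
import os
import tempfile
import tensorflow as tf

# Write a tiny vocabulary file to initialize a lookup table from.
vocab_path = os.path.join(tempfile.mkdtemp(), "vocab.txt")
with open(vocab_path, "w") as f:
    f.write("foo\nbar\nbaz\n")

class Lookup(tf.Module):
    def __init__(self, path):
        init = tf.lookup.TextFileInitializer(
            path,
            key_dtype=tf.string,
            key_index=tf.lookup.TextFileIndex.WHOLE_LINE,
            value_dtype=tf.int64,
            value_index=tf.lookup.TextFileIndex.LINE_NUMBER)
        # The table is an attribute of the module, so it is tracked.
        self.table = tf.lookup.StaticHashTable(init, default_value=-1)

    @tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
    def __call__(self, keys):
        return self.table.lookup(keys)

module = Lookup(vocab_path)
export_dir = tempfile.mkdtemp()
tf.saved_model.save(module, export_dir)

# Because the table is reachable from `module`, the vocabulary file is
# exported as an asset; an untracked table would be missing here and
# fail at load time with "Table not initialized".
assert os.listdir(os.path.join(export_dir, "assets"))
```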
Thank you @rcrowe-google and @varshaan! Attaching the tft_layer to the full_model does unblock us!
I'm not sure the issue should be closed, though. It was unexpected because the tft_layer was attached through the prediction signature, and the predictions failed when using that signature. I would have expected that failure mode if I had made the predictions using model.predict or model.__call__ explicitly, but not when using the prediction signature. Any reason why the full_model needs to track the tft_layer here, rather than relying on the prediction signature's tft_layer?
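The workaround mentioned above can be sketched as follows (a minimal illustration; `export_model`, `full_model`, `tft_layer`, and `serve_fn` are stand-ins, not the poster's actual code):

```python
import tensorflow as tf

# Hypothetical sketch: attach the TFT layer as an attribute of the
# object being saved so its hash tables (and their vocabulary-file
# assets) are tracked and exported alongside the signature.
def export_model(full_model, tft_layer, serve_fn, export_dir):
    full_model.tft_layer = tft_layer  # the fix: make the layer trackable
    tf.saved_model.save(
        full_model,
        export_dir,
        signatures={"serving_default": serve_fn})
```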