EfficientNetBx model.save() fails due to a serialization problem with TF 2.10.0
System information.
- Have I written custom code: derived from the Keras image_classification_efficientnet_fine_tuning example
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Win10Pro
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): tf2.10.0
- Python version: 3.10.2
- Bazel version (if compiling from source):
- GPU model and memory: NVIDIA RTX TITAN 24 GB
- Exact command to reproduce: model.save() (a minimal repro sketch follows this list)
Describe the current behavior: model.save() fails and reports a serialization problem.
Describe the expected behavior: the Keras model saves without error.
- Do you want to contribute a PR? (yes/no): no
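A minimal sketch of the failing sequence, assuming a stock EfficientNetB7 reproduces it (the reporter's model is fine-tuned, but per the analysis below the bug is in the application builder itself); the save path is copied from the log:

```python
import tensorflow as tf

# Hypothetical minimal repro; any EfficientNetBx built with imagenet
# weights on TF 2.10.0 should hit the same serialization path.
model = tf.keras.applications.EfficientNetB7(weights="imagenet")

# Fails with: TypeError: Unable to serialize [2.0897 2.1129 2.1082]
# to JSON. Unrecognized type <class '...EagerTensor'>.
model.save("./models/EfficientNetB7_Naiads.h5py")
```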
Source code / logs.
```
WARNING:absl:Found untraced functions such as _jit_compiled_convolution_op, _jit_compiled_convolution_op, _jit_compiled_convolution_op, _jit_compiled_convolution_op, _jit_compiled_convolution_op while saving (showing 5 of 273). These functions will not be directly callable after loading.
INFO:tensorflow:Assets written to: ./models/EfficientNetB7_Naiads.h5py\assets

TypeError                                 Traceback (most recent call last)
Cell In [31], line 1
----> 1 model.save('./models/EfficientNetB7_Naiads.h5py')

File e:\02- Vision Projects\01- Naiads Projects\notebooks.venv\lib\site-packages\keras\utils\traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
     67 filtered_tb = _process_traceback_frames(e.__traceback__)
     68 # To get the full stack trace, call:
     69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
     71 finally:
     72 del filtered_tb

File C:\Python310\lib\json\encoder.py:199, in JSONEncoder.encode(self, o)
    195 return encode_basestring(o)
    196 # This doesn't pass the iterator directly to ''.join() because the
    197 # exceptions aren't as detailed.  The list call should be roughly
    198 # equivalent to the PySequence_Fast that ''.join() would do.
--> 199 chunks = self.iterencode(o, _one_shot=True)
    200 if not isinstance(chunks, (list, tuple)):
    201     chunks = list(chunks)

File C:\Python310\lib\json\encoder.py:257, in JSONEncoder.iterencode(self, o, _one_shot)
    252 else:
    253     _iterencode = _make_iterencode(
    ...
    255     self.key_separator, self.item_separator, self.sort_keys,
    256     self.skipkeys, _one_shot)
--> 257 return _iterencode(o, 0)

TypeError: Unable to serialize [2.0897 2.1129 2.1082] to JSON. Unrecognized type <class 'tensorflow.python.framework.ops.EagerTensor'>.
```
Top GitHub Comments
NB if useful:
This issue seems to be triggered by the fact that the hard-baked normalisation constants get evaluated to an EagerTensor before the scaling layer is built - see here
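The mechanism can be demonstrated in isolation; a minimal sketch, assuming the constants and the Rescaling call mirror what keras/applications/efficientnet.py does in TF 2.10.0:

```python
import tensorflow as tf
from tensorflow.keras import layers

IMAGENET_STDDEV_RGB = [0.229, 0.224, 0.225]

# Computing the scale with TensorFlow ops evaluates it eagerly to an
# EagerTensor -- the [2.0897 2.1129 2.1082] seen in the error above.
scale = 1.0 / tf.math.sqrt(IMAGENET_STDDEV_RGB)
print(type(scale))  # <class 'tensorflow.python.framework.ops.EagerTensor'>

# The tensor ends up in the layer's config, which then fails to
# JSON-serialize during model.save().
rescale = layers.Rescaling(scale)
```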
I fixed this locally by moving the logic into plain Python:
At the top:
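The snippet itself isn't shown in the comment; a minimal sketch of what "at the top" plausibly looks like, assuming the constant is redefined as the precomputed inverse-sqrt scale via Python's math module so it stays a plain list:

```python
import math

# Precompute the per-channel scale in pure Python so it remains a plain
# list instead of an EagerTensor; values match the error message above.
IMAGENET_STDDEV_RGB = [1.0 / math.sqrt(s) for s in (0.229, 0.224, 0.225)]
# -> [2.0897..., 2.1129..., 2.1082...]
```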
Then on build just do:
```python
x = layers.Rescaling(IMAGENET_STDDEV_RGB)(x)
```
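With the scale held as a plain Python list, the Rescaling layer's config is JSON-serializable and model.save() completes without the TypeError.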
Don't have time to raise a PR right now with failing test cases (also suspect this isn't the most elegant solution), but thought at least a guide to a hotfix might help anyone that does!
Same problem here.

Here is a Dockerfile to easily reproduce it:
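The linked Dockerfile isn't reproduced here; a hypothetical minimal equivalent, assuming TF 2.10.0 and imagenet weights are all that's needed to trigger the failure:

```dockerfile
# Hypothetical repro: build an EfficientNet with imagenet weights on
# TF 2.10.0, then attempt model.save().
FROM python:3.10-slim
RUN pip install --no-cache-dir tensorflow==2.10.0
CMD python -c "import tensorflow as tf; \
    model = tf.keras.applications.EfficientNetB0(weights='imagenet'); \
    model.save('/tmp/effnet')"
```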