Getting an error when creating the .tflite file
Hello,
I am getting an error while creating the .tflite file at the following line of code:

```python
tflite_model = converter.convert()
```

The error is:

```
tensorflow/lite/kernels/quantize.cc:110 affine_quantization->scale->size == 1 was not true.
Node number 0 (QUANTIZE) failed to prepare.
```

When the Keras model is saved, an input layer is created that does not have the scale and offset needed for quantizing. I have tried different methods, but it is generated automatically. Any help is much appreciated.
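For comparison, this `QUANTIZE` prepare failure does not occur when the converter is given a way to compute per-tensor input scales itself. Below is a minimal sketch of post-training full-integer quantization, which is an alternative path to the quantization-aware training used in this issue; the model is a hypothetical stand-in using the input shape (55, 40, 1) described later in the thread, not the issue author's exact code.

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in model with the input shape mentioned in this issue.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, 1, input_shape=(55, 40, 1), activation="relu"),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2),
])

# The representative dataset supplies calibration samples from which the
# converter derives per-tensor scale and zero-point values, including for
# the quantize op inserted at the model input.
def representative_dataset():
    for _ in range(10):
        yield [np.random.rand(1, 55, 40, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
tflite_model = converter.convert()  # serialized .tflite flatbuffer (bytes)
```

With calibration data available, the first QUANTIZE node gets a single per-tensor scale, which is exactly the condition the error message checks for.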
TensorFlow Model Optimization version (installed from source or binary): 2.3.0-dev20200528
Python version: 3
Best, Marjan
Issue Analytics
- State: closed
- Created 3 years ago
- Comments: 10 (4 by maintainers)

@nutsiepully I am just using the Keras model with the TFLite converter as:

```python
converter = tf.lite.TFLiteConverter.from_keras_model(model)
```

The following shows the Keras model; as you can see, there are two cells, InputLayer and QuantizeLayer, which do not have the quantization values. I know the issue is coming from here, but I am not sure how to prevent it. The model seems right to me, and I don't see why the first two cells are being created. The model is built with the following code:

```python
model = Sequential([
    quantize.quantize_annotate_layer(
        Conv2D(64, 1, input_shape=(55, 40, 1), use_bias=True, activation="relu")),
    MaxPooling2D(2, 2),
    Conv2D(64, (1, 1), use_bias=True, activation="relu"),
    Flatten(),
    Dense(2, use_bias=True),
    Softmax(),
])
model = quantize.quantize_apply(model)
```

@alanchiao As I mentioned before, my TF version is 2.3.0.
Hi @marjanemd, closing this as it seems to be resolved. Please reopen or file a new issue if this is still a problem.