
Convert keras model to quantized tflite lost precision

See original GitHub issue

I’m trying to convert my Keras model into a quantized TFLite model so that I can run it on a Coral TPU, but the outputs of the Keras model and the TFLite model are significantly different.

In the plot attached to the original issue, the red points are the quantized TFLite model’s output and the blue points are the original Keras model’s output.

Here is my code to convert the Keras model to a quantized TFLite model:

import gc

import numpy as np
import tensorflow as tf

quant = True
gc.collect()
print(tf.__version__)

converter = tf.lite.TFLiteConverter.from_keras_model(model)

if quant:
    print("Converting quant....")
    # Build a representative dataset of sample_size training images,
    # scaled to [0, 1] exactly as at training time.
    sample_size = 200
    rdm_idx = np.random.choice(len(X_train), sample_size)
    rep_data = tf.cast(X_train[rdm_idx], tf.float32) / 255.0
    dataset = tf.data.Dataset.from_tensor_slices(rep_data).batch(1)

    def representative_data_gen():
        for input_value in dataset.take(sample_size):
            yield [input_value]

    converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
    converter.representative_dataset = representative_data_gen
    # Full integer quantization with uint8 input/output for the Edge TPU.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8

    tflite_model_quant = converter.convert()
    with open("MaskedLandMarkDetction_MobileNetV2_quant_fromKeras_v5.tflite", "wb") as f:
        f.write(tflite_model_quant)
    print("Write quantization tflite done.")
else:
    print("Converting normal....")
    tflite_model = converter.convert()
    with open("MaskedLandMarkDetction_MobileNetV2_fromKeras.tflite", "wb") as f:
        f.write(tflite_model)
    print("Write tflite done.")

X_train is my training data. I scale the input image values from 0 to 1 by dividing by 255, so I do the same in the representative_data_gen function.

Any assistance you can provide would be greatly appreciated.
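
One thing worth checking when comparing the two models: with inference_input_type and inference_output_type set to tf.uint8, the converted model expects quantized uint8 inputs and returns quantized uint8 outputs, so a like-for-like comparison against model.predict() has to quantize the inputs and dequantize the outputs using the scale and zero point reported by the interpreter. Below is a minimal sketch of such a comparison, not from the original issue; it assumes the .tflite file written above and that X_test holds raw 0–255 images like X_train.

# Minimal sketch (not from the original issue): run the quantized model and
# convert between float and uint8 using the interpreter's quantization
# parameters, so its predictions are directly comparable to model.predict().
# Assumes X_test holds raw 0-255 images, like X_train above.
interpreter = tf.lite.Interpreter(
    model_path="MaskedLandMarkDetction_MobileNetV2_quant_fromKeras_v5.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
in_scale, in_zero_point = input_details["quantization"]    # real = scale * (q - zero_point)
out_scale, out_zero_point = output_details["quantization"]

def predict_quantized(image):
    # Same 0-1 scaling as the representative dataset, then map into uint8 space.
    real_input = image.astype(np.float32) / 255.0
    q_input = np.clip(np.round(real_input / in_scale + in_zero_point), 0, 255).astype(np.uint8)
    interpreter.set_tensor(input_details["index"], q_input[np.newaxis, ...])
    interpreter.invoke()
    q_output = interpreter.get_tensor(output_details["index"])
    # Dequantize back to floats before comparing with the Keras output.
    return (q_output.astype(np.float32) - out_zero_point) * out_scale

tflite_pred = predict_quantized(X_test[0])
keras_pred = model.predict(X_test[:1].astype(np.float32) / 255.0)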

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 5 (1 by maintainers)

Top GitHub Comments

1 reaction
Namburger commented, May 28, 2020

@ChiHangChen Your usage definitely looks correct, so I believe this is a bug with the TFLite conversion. Unfortunately, this issue is out of our hands; please open an issue here. My suggestion would also be to add some extra calibration steps:

    calibration_steps = 200
    def representative_data_gen():
        for i in range(calibration_steps):
            for input_value in dataset.take(sample_size):
                yield [input_value]
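
If the goal is simply to calibrate on more data, a variation on the suggestion above (a sketch, not from the thread, reusing X_train and the converter from the question) is to draw the calibration examples directly from the training set:

    # Sketch of a calibration generator that feeds calibration_steps
    # individual training images, scaled to [0, 1] as at inference time.
    # Reuses X_train and `converter` from the question above.
    calibration_steps = 500

    def representative_data_gen():
        idx = np.random.choice(len(X_train), calibration_steps)
        for i in idx:
            yield [tf.cast(X_train[i:i + 1], tf.float32) / 255.0]

    converter.representative_dataset = representative_data_gen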

0 reactions
manoj7410 commented, Jul 5, 2021

Feel free to reopen if this issue still persists.


