Quantization not supported for tensorflow.python.keras.layers.wrappers.Bidirectional
Describe the bug
When quantizing the model in order to run it on the Edge TPU, an error is raised.
System information
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 2.1
- TensorFlow Model Optimization version: 0.3.0 (according to pip; I could not run tfmot.__version__)
- Python version: 3.7.7
Describe the expected behavior
The full quantization should proceed, allowing the model to be executed on the Edge TPU.
Describe the current behavior
The quantization raises a RuntimeError (see below).
Code to reproduce the issue
import sys

import numpy as np
import tensorflow as tf
import tensorflow.keras as keras
import tensorflow_model_optimization as tfmot
from tensorflow.keras import regularizers
from tensorflow.keras.layers import (LSTM, Bidirectional, Conv1D, Dense,
                                     Flatten, Input, Reshape, TimeDistributed)
from tensorflow.keras.models import Model
print(sys.version)
print("Tensor Flow:", tf.__version__)
print("Keras: ", keras.__version__)
print("Numpy: ", np.__version__)
tf.random.set_seed(12345)
def define_model(length_of_sequences, batch_size=None, neurons=5, modelName="nonamegiven"):
    inp = Input(batch_shape=(batch_size, length_of_sequences, 1), name="inputs")

    # Recurrent branch: the Bidirectional wrapper here is what later
    # triggers the quantization error.
    lstmM = Bidirectional(LSTM(50, name="lstm_m", return_sequences=True))(inp)
    flat = Flatten()(lstmM)

    # Convolutional branch, concatenated with the recurrent branch.
    convM = Conv1D(25, 5, activation="relu")(inp)
    flatc = Flatten()(convM)
    firstflat = tf.keras.layers.concatenate([flat, flatc])

    # First output head ("om").
    denseM = Dense(2048, kernel_regularizer=regularizers.l2(0.0001))(firstflat)
    denseM = Dense(1024, kernel_regularizer=regularizers.l2(0.0001))(denseM)
    denseM = Dense(512, kernel_regularizer=regularizers.l2(0.0001))(denseM)
    denseM = Dense(256, kernel_regularizer=regularizers.l2(0.0001))(denseM)
    denseM = Dense(128, kernel_regularizer=regularizers.l2(0.0001))(denseM)
    denseM = Dense(50, kernel_regularizer=regularizers.l2(0.0001))(denseM)
    reshapeM = Reshape((50, 1))(denseM)
    denseM = TimeDistributed(
        Dense(1, kernel_regularizer=regularizers.l2(0.0001),
              bias_initializer='zeros'),
        input_shape=(50, 1))(reshapeM)
    out_M = Reshape((50, 1), name="om")(denseM)

    # Second output head ("of"), which also consumes the first head's output.
    denseF = Dense(2048, kernel_regularizer=regularizers.l2(0.0001))(firstflat)
    denseF = Dense(1024, kernel_regularizer=regularizers.l2(0.0001))(denseF)
    denseF = Dense(512, kernel_regularizer=regularizers.l2(0.0001))(denseF)
    denseF = Dense(256, kernel_regularizer=regularizers.l2(0.0001))(denseF)
    denseF = Dense(128, kernel_regularizer=regularizers.l2(0.0001))(denseF)
    denseF = Dense(50, kernel_regularizer=regularizers.l2(0.0001))(denseF)
    denseF = Reshape((50, 1))(denseF)
    merger = tf.keras.layers.concatenate([out_M, denseF])
    flatfi = Flatten()(merger)
    denseF = Dense(100, kernel_regularizer=regularizers.l2(0.0001))(flatfi)
    denseF = Dense(100, kernel_regularizer=regularizers.l2(0.0001))(denseF)
    denseF = Dense(50, kernel_regularizer=regularizers.l2(0.0001))(denseF)
    denseF = Dense(50, kernel_regularizer=regularizers.l2(0.0001),
                   bias_initializer='zeros')(denseF)
    reshapeF = Reshape((50, 1))(denseF)
    denseF = TimeDistributed(
        Dense(1, kernel_regularizer=regularizers.l2(0.0001),
              bias_initializer='zeros'),
        input_shape=(50, 1))(reshapeF)
    out_F = Reshape((50, 1), name="of")(denseF)

    model = Model(inputs=[inp], outputs=[out_M, out_F], name=modelName)
    model.compile(
        loss={"om": "mean_squared_error", "of": "mean_squared_error"},
        optimizer=keras.optimizers.RMSprop(learning_rate=0.001),
        metrics=['mae'])
    return model
k_model = define_model(
    length_of_sequences=250, neurons=5, modelName="TFSupport02")
k_model.summary()

# This call raises the RuntimeError shown below.
q_model = tfmot.quantization.keras.quantize_model(k_model)
q_model.summary()
I obtain the following error:
RuntimeError: Layer bidirectional_1:<class 'tensorflow.python.keras.layers.wrappers.Bidirectional'> is not supported. You can quantize this layer by passing a tfmot.quantization.keras.QuantizeConfig instance to the quantize_annotate_layer API.
Additional context
Please let me know if any further information is needed.
While reading the comprehensive guide, it was not clear to me how to pass an adequate QuantizeConfig for this layer; if possible, please give some more indication.
It seems similar to issue #372.
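For anyone else hitting this, below is a minimal sketch of the annotate-and-apply pattern from the comprehensive guide. NoOpQuantizeConfig is my own name and my own assumption about behaviour: it reports nothing to quantize, which should let quantize_apply accept the Bidirectional layer, but it leaves the LSTM weights in float, so the result is not the fully int8 model the Edge TPU requires.

import tensorflow as tf
import tensorflow_model_optimization as tfmot

quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_apply = tfmot.quantization.keras.quantize_apply

class NoOpQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
    # Hypothetical config that quantizes nothing inside the annotated
    # layer: no weights, no activations, no outputs.
    def get_weights_and_quantizers(self, layer):
        return []
    def get_activations_and_quantizers(self, layer):
        return []
    def set_quantize_weights(self, layer, quantize_weights):
        pass
    def set_quantize_activations(self, layer, quantize_activations):
        pass
    def get_output_quantizers(self, layer):
        return []
    def get_config(self):
        return {}

# Inside define_model, annotate the problematic layer explicitly:
#   lstmM = quantize_annotate_layer(
#       Bidirectional(LSTM(50, name="lstm_m", return_sequences=True)),
#       quantize_config=NoOpQuantizeConfig())(inp)

# Annotate the remaining layers, then apply quantization inside a
# quantize_scope so the custom config can be deserialized:
with tfmot.quantization.keras.quantize_scope(
        {'NoOpQuantizeConfig': NoOpQuantizeConfig}):
    q_model = quantize_apply(quantize_annotate_model(k_model))
q_model.summary()

Writing a config that really quantizes the wrapped LSTM (its kernel, recurrent kernel, and per-step activations) is considerably more involved; the six methods above are the whole QuantizeConfig interface, and get_weights_and_quantizers would have to return (weight, quantizer) pairs built from tfmot.quantization.keras.quantizers (e.g. LastValueQuantizer).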
Even I have the same issue with TimeDistributed:

RuntimeError: Layer time_distributed:<class 'tensorflow.python.keras.layers.wrappers.TimeDistributed'> is not supported. You can quantize this layer by passing a tfmot.quantization.keras.QuantizeConfig instance to the quantize_annotate_layer API.

@ericqu no. I am having the same issue.
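A workaround that sidesteps both unsupported wrappers is to annotate only the layer types the default scheme handles and leave Bidirectional and TimeDistributed in float. This is a sketch of the clone_model pattern from the comprehensive guide's "quantize some layers" section; note the resulting model is mixed float/int8, so it still won't fully convert for the Edge TPU:

import tensorflow as tf
import tensorflow_model_optimization as tfmot

def annotate_supported(layer):
    # Annotate only Dense and Conv1D layers; leave Bidirectional,
    # TimeDistributed, and everything else unannotated (float).
    if isinstance(layer, (tf.keras.layers.Dense, tf.keras.layers.Conv1D)):
        return tfmot.quantization.keras.quantize_annotate_layer(layer)
    return layer

annotated = tf.keras.models.clone_model(
    k_model, clone_function=annotate_supported)
q_model = tfmot.quantization.keras.quantize_apply(annotated)
q_model.summary()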