
Quantization not supported for tensorflow.python.keras.layers.wrappers.Bidirectional

See original GitHub issue

Describe the bug
When quantizing the model in order to run it on the Edge TPU, an error message is raised.

System information

TensorFlow installed from (source or binary): binary

TensorFlow version: 2.1

TensorFlow Model Optimization version: 0.3.0 (according to pip, I could not run tfmot.__version__)

Python version: 3.7.7

Describe the expected behavior
Full quantization should complete so that the model can be executed on the Edge TPU.

Describe the current behavior
The quantization raises a RuntimeError (see below).

Code to reproduce the issue
The following is the bare minimum necessary to reproduce the problem.

import sys

import numpy as np
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras import metrics, regularizers
from tensorflow.keras.layers import (GRU, LSTM, Activation, Bidirectional,
                                     Conv1D, Dense, Dropout, Flatten, Input,
                                     Reshape, SimpleRNN, TimeDistributed)
from tensorflow.keras.models import Model
import tensorflow_model_optimization as tfmot

print(sys.version)
print("TensorFlow:", tf.__version__)
print("Keras:", keras.__version__)
print("NumPy:", np.__version__)

tf.random.set_seed(12345)


def define_model(length_of_sequences, batch_size=None, neurons=5, modelName="nonamegiven"):
    # Two parallel branches (a Bidirectional LSTM and a Conv1D) share the input
    # and are merged, then fed into two dense towers producing outputs "om" and "of".
    inp = Input(batch_shape=(
        batch_size, length_of_sequences, 1), name="inputs")
    lstmM = Bidirectional(LSTM(50, name="lstm_m", return_sequences=True))(inp)
    flat = Flatten()(lstmM)
    convM = Conv1D(25, 5, activation="relu")(inp)
    flatc = Flatten()(convM)
    firstflat = tf.keras.layers.concatenate([flat, flatc])
    denseM = Dense(2048, kernel_regularizer=regularizers.l2(0.0001))(firstflat)
    denseM = Dense(1024, kernel_regularizer=regularizers.l2(0.0001))(denseM)
    denseM = Dense(512, kernel_regularizer=regularizers.l2(0.0001))(denseM)
    denseM = Dense(256, kernel_regularizer=regularizers.l2(0.0001))(denseM)
    denseM = Dense(128, kernel_regularizer=regularizers.l2(0.0001))(denseM)
    denseM = Dense(50, kernel_regularizer=regularizers.l2(0.0001))(denseM)
    reshapeM = Reshape((50, 1))(denseM)
    denseM = TimeDistributed(
        Dense(1, kernel_regularizer=regularizers.l2(
            0.0001), bias_initializer='zeros'),
        input_shape=(50, 1))(reshapeM)
    out_M = Reshape((50, 1), name="om")(denseM)
    denseF = Dense(2048, kernel_regularizer=regularizers.l2(0.0001))(firstflat)
    denseF = Dense(1024, kernel_regularizer=regularizers.l2(0.0001))(denseF)
    denseF = Dense(512, kernel_regularizer=regularizers.l2(0.0001))(denseF)
    denseF = Dense(256, kernel_regularizer=regularizers.l2(0.0001))(denseF)
    denseF = Dense(128, kernel_regularizer=regularizers.l2(0.0001))(denseF)
    denseF = Dense(50, kernel_regularizer=regularizers.l2(0.0001))(denseF)
    denseF = Reshape((50, 1))(denseF)
    merger = tf.keras.layers.concatenate([out_M, denseF])
    flatfi = Flatten()(merger)
    denseF = Dense(100, kernel_regularizer=regularizers.l2(0.0001))(flatfi)
    denseF = Dense(100, kernel_regularizer=regularizers.l2(0.0001))(denseF)
    denseF = Dense(50, kernel_regularizer=regularizers.l2(0.0001))(denseF)
    denseF = Dense(50, kernel_regularizer=regularizers.l2(
        0.0001), bias_initializer='zeros')(denseF)
    reshapeF = Reshape((50, 1))(denseF)
    denseF = TimeDistributed(
        Dense(1, kernel_regularizer=regularizers.l2(
            0.0001), bias_initializer='zeros'),
        input_shape=(50, 1))(reshapeF)
    out_F = Reshape((50, 1), name="of")(denseF)

    model = Model(inputs=[inp], outputs=[out_M, out_F], name=modelName)

    model.compile(
        loss={"om": "mean_squared_error", "of": "mean_squared_error"},
        optimizer=keras.optimizers.RMSprop(learning_rate=0.001),
        metrics=['mae'])
    return model


k_model = define_model(
    length_of_sequences=250, neurons=5, modelName="TFSupport02")
k_model.summary()

q_model = tfmot.quantization.keras.quantize_model(k_model)
q_model.summary()

I obtain the following error:

RuntimeError: Layer bidirectional_1:<class 'tensorflow.python.keras.layers.wrappers.Bidirectional'> is not supported. You can quantize this layer by passing a tfmot.quantization.keras.QuantizeConfig instance to the quantize_annotate_layer API.

Additional context
Please let me know if any further information is needed.

While reading the comprehensive guide, it was not clear to me how to pass an adequate QuantizeConfig for the quantization; if possible, please give some more guidance.
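For anyone else hitting this, the "quantize custom Keras layer" pattern from the comprehensive guide suggests one possible starting point. The sketch below is only an assumption of how that pattern would look here: NoOpQuantizeConfig is a hypothetical name, and since it quantizes nothing, the Bidirectional layer stays in float, so the resulting model would not be fully integer and could still be rejected by the Edge TPU compiler.

import tensorflow_model_optimization as tfmot

quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_apply = tfmot.quantization.keras.quantize_apply
quantize_scope = tfmot.quantization.keras.quantize_scope


class NoOpQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
    """Hypothetical config: quantizes nothing, leaving the annotated layer in float."""

    def get_weights_and_quantizers(self, layer):
        return []   # no weights are quantized

    def get_activations_and_quantizers(self, layer):
        return []   # no activations are quantized

    def set_quantize_weights(self, layer, quantize_weights):
        pass        # nothing to set

    def set_quantize_activations(self, layer, quantize_activations):
        pass        # nothing to set

    def get_output_quantizers(self, layer):
        return []   # outputs stay unquantized

    def get_config(self):
        return {}


# Inside define_model(), annotate the unsupported wrapper explicitly:
lstmM = quantize_annotate_layer(
    Bidirectional(LSTM(50, name="lstm_m", return_sequences=True)),
    quantize_config=NoOpQuantizeConfig())(inp)

# Then annotate the rest of the model with the defaults and apply:
annotated_model = quantize_annotate_model(k_model)
with quantize_scope({"NoOpQuantizeConfig": NoOpQuantizeConfig}):
    q_model = quantize_apply(annotated_model)

Whether quantize_apply actually accepts a wrapper layer with such a config is untested here; the error message only states that a QuantizeConfig can be supplied.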

It seems similar to issue #372.
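For context, the Edge TPU ultimately needs a fully-integer TFLite model, so even a successful quantize_model call would be followed by a conversion step roughly like this sketch (assuming a newer TF release than the 2.1 reported here, and a hypothetical representative_data generator yielding sample inputs):

converter = tf.lite.TFLiteConverter.from_keras_model(q_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data  # hypothetical generator of sample inputs
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8   # int8 I/O requires TF >= 2.3
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)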

Issue Analytics

  • State: open
  • Created: 3 years ago
  • Comments: 15 (2 by maintainers)

Top GitHub Comments

2 reactions
OriAlpha commented, May 10, 2020

I have the same issue, only with TimeDistributed:

RuntimeError: Layer time_distributed:<class 'tensorflow.python.keras.layers.wrappers.TimeDistributed'> is not supported. You can quantize this layer by passing a tfmot.quantization.keras.QuantizeConfig instance to the quantize_annotate_layer API.
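The same annotation pattern sketched in the issue above would presumably apply to TimeDistributed as well, for example (again assuming the hypothetical NoOpQuantizeConfig):

denseM = tfmot.quantization.keras.quantize_annotate_layer(
    TimeDistributed(Dense(1)),
    quantize_config=NoOpQuantizeConfig())(reshapeM)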

0 reactions
marbortoli commented, Jul 22, 2021

@ericqu no. I am having the same issue.

Read more comments on GitHub >

Top Results From Across the Web

tf.keras.layers.Bidirectional | TensorFlow v2.11.0
Layer instance to be used to handle backwards input processing. If backward_layer is not provided, the layer instance passed as the layer ......
Read more >
tf.quantization.quantize | TensorFlow v2.11.0
The 'mode' attribute controls exactly which calculations are used to convert the float values to their quantized equivalents.
Read more >
tf.keras.layers.Layer | TensorFlow v2.11.0
A layer is a callable object that takes as input one or more tensors and that outputs one or more tensors. It involves...
Read more >
tf.keras.layers.Embedding | TensorFlow v2.11.0
You may be using an optimizer that does not support sparse GPU kernels. In this case you will see an error upon training...
Read more >
tf.keras.layers.Wrapper | TensorFlow v2.11.0
Do not use this class as a layer, it is only an abstract base class. Two usable wrappers are the TimeDistributed and Bidirectional...
Read more >
