
Add QAT support for Concatenate layer

See original GitHub issue

Referencing @nutsiepully in #372

We are ramping up support for layers based on feedback from users, so thank you for that. Most of these layers are quite simple, and haven’t been added only because conversion support was missing.

We’re using the NoOpQuantizeConfig workaround for now, but in the future we would love to see native support for the Keras Concatenate layer. Thanks!

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 6 (4 by maintainers)

Top GitHub Comments

1 reaction
nutsiepully commented, Jun 12, 2020

@willbattel - Since “support for Concatenate” is too broad, please add the specific use case that doesn’t work once you get a chance, and re-open the bug. I’ll close it for now.

0 reactions
HOLYlmx commented, Oct 28, 2021

@nutsiepully Hi, I have also hit the error

“to_annotate can only be a tf.keras.layers.Layer instance. You passed an instance of type: Tensor”,

even though my code follows your instructions, namely that:

tf.keras.layers.concatenate([conv2, up1]) returns a tensor. tf.keras.layers.Concatenate() returns a layer.
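To illustrate the distinction above, here is a small sketch (the layer names `a`, `b`, etc. are made up for illustration):

```python
import tensorflow as tf

inp = tf.keras.Input(shape=(8,))
a = tf.keras.layers.Dense(4)(inp)
b = tf.keras.layers.Dense(4)(inp)

# The lowercase functional helper builds and *calls* a Concatenate
# layer internally, so it returns a (symbolic) tensor:
t = tf.keras.layers.concatenate([a, b])

# The Concatenate class returns a Layer instance, which is what
# quantize_annotate_layer expects for `to_annotate`:
concat_layer = tf.keras.layers.Concatenate()

print(isinstance(t, tf.keras.layers.Layer))             # False
print(isinstance(concat_layer, tf.keras.layers.Layer))  # True
```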

The code is as follows:

import tensorflow as tf
import tensorflow_model_optimization as tfmot
from tensorflow.keras.models import Model

class NoOpQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
    """QuantizeConfig which does not quantize any part of the layer."""
    def get_weights_and_quantizers(self, layer):
        return []
    def get_activations_and_quantizers(self, layer):
        return []
    def set_quantize_weights(self, layer, quantize_weights):
        pass
    def set_quantize_activations(self, layer, quantize_activations):
        pass
    def get_output_quantizers(self, layer):
        return []
    def get_config(self):
        return {}

noop_config = NoOpQuantizeConfig()

x2 = tf.keras.Input(shape=(28, 28))
uu = tf.keras.layers.Reshape((28, 28, 1))(x2)
uu = tf.keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation='relu')(uu)
uu = tf.keras.layers.Conv2D(filters=12, kernel_size=(3, 3))(uu)

# This line raises the error above: tf.keras.layers.concatenate(...)
# returns a Tensor, but quantize_annotate_layer expects a Layer.
z = tfmot.quantization.keras.quantize_annotate_layer(
    to_annotate=tf.keras.layers.concatenate([uu, tf.zeros((0, 24, 24, 12))]),
    quantize_config=noop_config)

uu = tf.keras.layers.ReLU()(z)
uu = tfmot.quantization.keras.quantize_annotate_layer(
    tf.keras.layers.UpSampling2D((2, 2), interpolation='bilinear'),
    quantize_config=noop_config)(uu)
uu = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(uu)
uu = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(uu)
uu = tf.keras.layers.Flatten()(uu)
out = tf.keras.layers.Dense(10)(uu)

model = Model(x2, out)

The code ran fine when I left out the tf.keras.layers.concatenate() layer. Meanwhile, I haven’t found the “concatenate_config” parameter mentioned above. Hoping for your help, thank you so much^^

Read more comments on GitHub >

Top Results From Across the Web

Add New Layer Support — TensorFlow 2.x Quantization ...
This toolkit uses a TensorFlow Keras wrapper layer to insert QDQ nodes before quantizable layers. Supported Layers. The following matrix shows the layers...
Read more >
tfmot.quantization.keras.default_8bit.default_8bit_transforms
Module containing 8bit default transforms. Classes. class ConcatTransform : Transform for Concatenate. Quantize only after concatenation.
Read more >
Concatenate layer - Keras
Layer that concatenates a list of inputs. It takes as input a list of tensors, all of the same shape except for the...
Read more >
vai_q_tensorflow2 Supported Operations and APIs - 2.0 English
The following table lists the supported operations and APIs for vai_q_tensorflow2. Table 1. vai_q_tensorflow2 Supported Layers Layer Types Supported Layers ...
Read more >
AI:Deep Quantized Neural Network support - stm32mcu
This article provides the documentation related to the support for Deep Quantized Neural Network (DQNN) in X-CUBE-AI. The documentation is also provided ...
Read more >
