
Layer up_sampling2d_36:<class 'tensorflow.python.keras.layers.convolutional.UpSampling2D'> is not supported. You can quantize this layer by passing a `tfmot.quantization.keras.QuantizeConfig` instance to the `quantize_annotate_layer` API.

See original GitHub issue

Describe the bug

I am trying to use quantize_model() to optimize a U-Net model. The model contains UpSampling2D layers, and tensorflow_model_optimization does not currently support converting that layer.

System information

  • OS: macOS Catalina 10.15.2
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version: 2.1.0
  • TensorFlow Model Optimization version: 0.3.0
  • Python version: 3.7.4

Describe the expected behavior

The UpSampling2D layer is quantized successfully.

Describe the current behavior

Quantizing the UpSampling2D layer is not supported.

Code to reproduce the issue

The snippet below is a minimal reproducible example. The imports and the Dice loss/metric helpers were omitted from the original report and are filled in here so the snippet runs as-is.

# Imports (not included in the original report):
import tensorflow as tf
import tensorflow_model_optimization as tfmot
from tensorflow.keras.models import Model
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D, Dropout,
                                     UpSampling2D, concatenate)
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import backend as K

# Standard Dice metric/loss, added as stand-ins for the helpers the
# original snippet assumed but did not show.
def dice_coef(y_true, y_pred, smooth=1.0):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_coef_loss(y_true, y_pred):
    return 1.0 - dice_coef(y_true, y_pred)

def unet(pretrained_weights=None, input_size=(256, 256, 1)):

    inputs = Input(shape=input_size)
    conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(inputs)
    conv1 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv1)

    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool1)
    conv2 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv2)

    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
    conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool2)
    conv3 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv3)

    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
    conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool3)
    conv4 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv4)
    drop4 = Dropout(0.5)(conv4)

    pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)
    conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(pool4)
    conv5 = Conv2D(1024, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv5)
    drop5 = Dropout(0.5)(conv5)

    up6 = Conv2D(512, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(drop5))
    merge6 = concatenate([drop4,up6], axis = 3)
    conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge6)
    conv6 = Conv2D(512, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv6)

    up7 = Conv2D(256, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv6))
    merge7 = concatenate([conv3,up7], axis = 3)
    conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge7)
    conv7 = Conv2D(256, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv7)

    up8 = Conv2D(128, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv7))
    merge8 = concatenate([conv2,up8], axis = 3)
    conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge8)
    conv8 = Conv2D(128, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv8)

    up9 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(conv8))
    merge9 = concatenate([conv1,up9], axis = 3)
    conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge9)
    conv9 = Conv2D(64, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
    conv9 = Conv2D(2, 3, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
    conv10 = Conv2D(1, 1, activation = 'sigmoid')(conv9)

    model = Model(inputs = inputs, outputs = conv10)
    model.compile(optimizer=Adam(lr=1e-4), loss=dice_coef_loss, metrics=[dice_coef])
    
    #model.summary()

    if pretrained_weights:
        model.load_weights(pretrained_weights)

    return model

model = unet()
quantize_model = tfmot.quantization.keras.quantize_model
q_aware_model = quantize_model(model)  # raises the RuntimeError shown below

Additional context

The error is produced at line 372 of the file linked in the original issue.

RuntimeError: Layer up_sampling2d_40:<class 'tensorflow.python.keras.layers.convolutional.UpSampling2D'> is not supported. You can quantize this layer by passing a `tfmot.quantization.keras.QuantizeConfig` instance to the `quantize_annotate_layer` API.
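
As the error message suggests, there is a workaround: attach a custom QuantizeConfig to the unsupported layer through the quantize_annotate_layer API. The sketch below follows that pattern; it assumes tfmot 0.3.0 or later, and the names NoOpQuantizeConfig and annotate_upsampling are illustrative, not part of the library. Because UpSampling2D has no trainable weights, an empty config simply tells the quantizer there is nothing inside the layer to quantize.

import tensorflow as tf
import tensorflow_model_optimization as tfmot

quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope

# Illustrative name, not part of tfmot: a config that quantizes nothing.
# UpSampling2D has no weights, so the empty lists are sufficient.
class NoOpQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):
    def get_weights_and_quantizers(self, layer):
        return []

    def get_activations_and_quantizers(self, layer):
        return []

    def set_quantize_weights(self, layer, quantize_weights):
        pass

    def set_quantize_activations(self, layer, quantize_activations):
        pass

    def get_output_quantizers(self, layer):
        return []

    def get_config(self):
        return {}

def annotate_upsampling(layer):
    # Attach the custom config only to the unsupported layer type.
    if isinstance(layer, tf.keras.layers.UpSampling2D):
        return quantize_annotate_layer(layer, quantize_config=NoOpQuantizeConfig())
    return layer

model = unet()
annotated = tf.keras.models.clone_model(model, clone_function=annotate_upsampling)
annotated = quantize_annotate_model(annotated)  # annotate the remaining layers

# quantize_scope makes the custom config class visible while quantize_apply
# clones and rebuilds the model.
with quantize_scope({'NoOpQuantizeConfig': NoOpQuantizeConfig}):
    q_aware_model = tfmot.quantization.keras.quantize_apply(annotated)

Note that this only sidesteps the error: with an empty config the UpSampling2D outputs stay in float, so the converted model may keep float ops around those layers. Returning a quantizer such as tfmot.quantization.keras.quantizers.MovingAverageQuantizer from get_output_quantizers would quantize the layer's output as well. Whether every other layer in this U-Net (Concatenate, Dropout, etc.) has a default QuantizeConfig also depends on the tfmot version.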

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Reactions: 8
  • Comments: 37 (7 by maintainers)

Top GitHub Comments

6 reactions
sunzhe09 commented on Oct 14, 2020

I met the same problem.

6 reactions
Lotte1990 commented on Aug 19, 2020

I would also like to quantize the UpSampling2D layer. @nutsiepully Any updates on this?

