
MaxPooling conversion


Hi,

I tried converting the following model to PrivateModel using the secure_model method:

input_img = tf.keras.Input(shape=(32, 32, 3))
x = tf.keras.layers.Conv2D(16, (3,3), padding='same')(input_img)
x = tf.keras.layers.MaxPool2D(pool_size=(2,2))(x)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(10)(x)
model = tf.keras.Model(inputs=input_img, outputs=x)
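
(The conversion call itself isn't shown above; presumably it was something along these lines, with no protocol set explicitly, so whatever protocol is active by default gets used:)

import tf_encrypted as tfe

# Assumed reproduction of the call, continuing from the model defined above;
# no protocol is set explicitly here, so the default is used.
private_model = tfe.private_model.secure_model(model)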

And got the following error:

File "/cs/labs/peleg/avitalsh/tools/temp/tf-encrypted/tf_encrypted/private_model.py", line 79, in secure_model
    y = c.convert(remove_training_nodes(graph_def), tfe.convert.registry(), 'input-provider', inputs)
File "/cs/labs/peleg/avitalsh/tools/temp/tf-encrypted/tf_encrypted/convert/convert.py", line 92, in convert
    outs = op_handler(self, nodes, input_list)
File "/cs/labs/peleg/avitalsh/tools/temp/tf-encrypted/tf_encrypted/convert/register.py", line 508, in flatten
    input = converter.outputs[inputs[0]]
KeyError: 'max_pooling2d/MaxPool'

When I remove the MaxPool layer, secure_model works fine.

I am using tfe version 0.5.2. With an older version (0.4.0) it works fine (with the max pool layer).

Is this a bug, or did the API change between versions and I'm not using it correctly?

Thanks

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 7 (7 by maintainers)

Top GitHub Comments

1 reaction
jvmncs commented, Jun 14, 2019

Hi @avitalsh, I’ve found two bugs related to this issue.

The problem with the for-loop was that it was registering all special ops as soon as it reached the first one, even though the dependencies of those later special ops may not have been registered yet. I've solved this by filtering out special ops not associated with the current node/subgraph being converted.
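
A rough sketch of that filtering idea (the data layout and names below are made up for illustration and do not correspond to tf-encrypted internals):

special_ops = [
    {"name": "conv2d/Conv2D", "scope": "conv2d"},
    {"name": "max_pooling2d/MaxPool", "scope": "max_pooling2d"},
    {"name": "dense/MatMul", "scope": "dense"},
]

def specops_for_subgraph(ops, current_scope):
    # Register only the special ops that belong to the subgraph currently
    # being converted, instead of registering every special op up front.
    return [op for op in ops if op["scope"] == current_scope]

print(specops_for_subgraph(special_ops, "max_pooling2d"))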

Another bug was in the function match_numbered_specop in convert.py. The regex built to match numbered scopes for special ops was only recognizing unnumbered ones: model/conv2d/... would match on conv2d, but model/conv2d_1/... would not match on conv2d_1, which again meant that certain ops/layers that other layers depended on would go unregistered. I fixed this by modifying the function to capture the right groups from the original regex.
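
A toy illustration of the scope-matching problem (the pattern below is illustrative only, not the actual regex from convert.py):

import re

# Illustrative only: the numbered suffix ("_1", "_2", ...) must be part of the
# captured scope name, otherwise layers like conv2d_1 go unregistered.
layer_scope = re.compile(r"(conv2d(?:_\d+)?)/")

print(layer_scope.search("model/conv2d/Conv2D").group(1))    # conv2d
print(layer_scope.search("model/conv2d_1/Conv2D").group(1))  # conv2d_1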

I’m adding these fixes in an upcoming PR, which also has some new functionality around inspecting Keras models that should be helpful when checking for convertibility.

I also realized that depending on how you're calling secure_model, it could try to convert into Pond by default, in which case the pooling layer would throw an error. The script below is how I recreated and solved these issues; it can also serve as an example of how to use the new tfe.convert functionality.

import tensorflow as tf
import tf_encrypted as tfe

shape = (10, 10, 3)
x = tf.keras.layers.Input(shape=shape)
y = tf.keras.layers.Conv2D(16, (3, 3))(x)
y = tf.keras.layers.MaxPooling2D((2, 2), (2, 2))(y)
y = tf.keras.layers.Flatten()(y)
y = tf.keras.layers.Dense(10)(y)

model = tf.keras.Model(inputs=x, outputs=y)

# Helper to inspect the incoming graph, to ensure that TFE has conversion
# functions for everything you're requesting.
sess = tf.keras.backend.get_session()
tfe.convert.inspect_subgraph(model, shape, sess)

# Idiomatic way of converting in a specific protocol
with tfe.protocol.SecureNN():
  s_model = tfe.private_model.secure_model(model)

# This one should work as well
prot = tfe.protocol.SecureNN()
s_model = tfe.private_model.secure_model(model, protocol=prot)
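
Once converted, the secured model can be queried for predictions; the call below is only a sketch, assuming PrivateModel exposes a private_predict method that accepts a plaintext array matching the model's input shape.

import numpy as np

# Sketch only: assumes s_model (a PrivateModel) provides private_predict and
# that a single (10, 10, 3) input matches the model defined above.
x_test = np.random.uniform(size=(1, 10, 10, 3)).astype(np.float32)
print(s_model.private_predict(x_test))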
0 reactions
jvmncs commented, Jun 13, 2019

Hi @avitalsh, I looked into it briefly before having to jump on something else. I'll reinvestigate today and plan to get back to you by tomorrow!


