SecureNN Integration
We (Dropout Labs) are ready to start merging our internal SecureNN prototype into tf-encrypted.
Integration plan
The plan is for SecureNN to be a separate Protocol class that inherits from Pond, but switches out Int100Tensor for a new BackingTensor. As a result, we may not be able to directly subclass Pond as it stands in the code right now, although nearly all of its code will need to be reused in the SecureNN Protocol. More concrete plans and issues for each of these steps are in the works.
- Our first step should be to figure out precisely what we need to do to be able to inherit the Pond Protocol with our new BackingTensor.
- Subclass Pond directly and create a SecureNN class (see the first sketch after this list).
- Once we’ve done that, we can write out the new BackingTensor class, i.e. NativeTensor.
- Integrate the Select Share protocol (#154)
- Integrate the bit decomposition operation (aka `binarize`) in TensorFlow, using `tf.bitwise.right_shift` or repeated division/modulus as a fallback (see the second sketch after this list). This is required for all further work on the SecureNN protocols. Do this on the NativeTensor to start.
- Integrate the bit decomposition operation (aka `binarize`) in TensorFlow for the CrtTensor.
- Integrate the Private Compare protocol (#153)
- Integrate the Share Convert protocol (first as a noop for Int100Tensor, then progressively more general for other BackingTensors; see note below)
- Integrate the Compute MSB protocol
- Integrate the dReLU & ReLU protocols (see the third sketch after this list)
- ~~Integrate the Division protocol~~
  - this is not complete, but will not be done as part of this issue: #258
- Integrate the MaxPool protocol
- Figure out which uses of Mul/MatMul need to be modified to include the zero masking that hides results from the crypto provider (conforming to the SecureNN MatMul protocol). (cc @mortendahl)
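As a rough illustration of the first two steps, the SecureNN class might subclass Pond along these lines. This is a sketch only: the import paths, the `tensor_factory` keyword, and everything besides Pond itself are assumptions, not the final API.

```python
# Sketch only: assumes Pond's constructor accepts a pluggable tensor
# factory; import paths and keyword names are hypothetical.
from tf_encrypted.protocol.pond import Pond
from tf_encrypted.tensor import native_factory  # hypothetical NativeTensor factory


class SecureNN(Pond):
    """Pond with SecureNN's comparison-based protocols layered on top."""

    def __init__(self, *args, tensor_factory=native_factory, **kwargs):
        # Swap Pond's default Int100Tensor backing out for NativeTensor.
        super().__init__(*args, tensor_factory=tensor_factory, **kwargs)

    def select_share(self, choice_bit, x, y):
        raise NotImplementedError  # Select Share (#154)

    def private_compare(self, x_bits, r, beta):
        raise NotImplementedError  # Private Compare (#153)
```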
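For the bit decomposition step, a minimal TensorFlow sketch of `binarize` on a native integer tensor could look like the following; the function name and shape convention are illustrative.

```python
import tensorflow as tf


def binarize(x, bitlength=64):
    """Decompose an integer tensor into its bits.

    Adds a trailing dimension of size `bitlength`, where index i holds
    the i-th least significant bit of each element.
    """
    shifts = tf.range(bitlength, dtype=x.dtype)  # [0, 1, ..., bitlength - 1]
    shifted = tf.bitwise.right_shift(tf.expand_dims(x, -1), shifts)
    return tf.bitwise.bitwise_and(shifted, tf.ones_like(shifted))
```

For backing tensors without native bitwise ops (e.g. the CRT-based Int100Tensor), the same decomposition can fall back to repeated floor division and modulus by 2.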
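And once Compute MSB and Select Share are in place, dReLU and ReLU compose on top of them as in the paper. In this sketch `msb`, `sub`, `zeros_like`, and `select_share` are assumed protocol methods, with `select_share(bit, a, b)` returning `a` when the bit is 0 and `b` when it is 1.

```python
def drelu(prot, x):
    # ReLU'(x) = 1 - MSB(x): the sign bit is 1 exactly when x is negative.
    return prot.sub(1, prot.msb(x))


def relu(prot, x):
    # ReLU(x) = ReLU'(x) * x, realized via Select Share so the selection
    # bit itself stays hidden: pick 0 when x < 0, and x otherwise.
    return prot.select_share(drelu(prot, x), prot.zeros_like(x), x)
```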
Here is a convenient reference for each protocol’s dependencies. Squares represent Main Protocols, circles represent Supporting Protocols. Development can be parallelized accordingly.
Note that the bit decomposition operation is not pictured, as it is a prerequisite for Private Compare and is not itself a SecureNN protocol.
Changes from the original SecureNN paper
Based on our internal experimentation with numpy, we plan on changing a few things from the original paper.
- It’s highly likely, pending the result of #100, that we’ll be using a BackingTensor based on tf.int64 (as opposed to uint64 from the paper). This is largely motivated by what’s currently/easily enabled in TensorFlow. For the moment, we’ll be developing with Int100Tensor in mind, to avoid complications, although we’ll also try to test with the planned Int32 and Int64 Tensors by the end. This is possible because the BackingTensor is abstracted away from Pond, so we can subclass Pond without too much complication.
- As a result, we’ll need to switch the ring SecureNN uses from `L = 2 ** 64` to whatever matches the BackingTensor we’re using. Again, this is all abstracted away in the BackingTensor implementation, and all of SecureNN’s results continue to hold.
- We want to abstract a few parts of SecureNN a bit differently. In particular, there’s a bitwise XOR method that gets used implicitly quite a bit in the protocols that deal with bits. We’ll want that, and any other recurring functionality, abstracted into a standalone protocol method (see the sketch after this list).
- We hope to have our implementation of Share Convert work for converting to arbitrary rings, so that it can be reused in future protocols (should it become useful on its own). We’ll have similar hopes for the rest of the protocols, although this is the most obvious case.
- We’ll be using our prototype repo to guide and motivate the integration, to take into account any other adjustments we had to make to the published pseudocode in order to get all protocols working together in numpy.
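As an example of the XOR abstraction mentioned above, the identity x ⊕ y = x + y − 2xy works directly on shared bits. A minimal sketch, assuming Pond-style `add`/`sub`/`mul` methods; `bitwise_xor` is a hypothetical name for the standalone protocol method.

```python
def bitwise_xor(prot, x_bit, y_bit):
    """XOR on values in {0, 1}: x ^ y == x + y - 2*x*y over the ring.

    Costs one (masked) multiplication when both inputs are private, and
    degenerates to local arithmetic when either input is public.
    """
    xy = prot.mul(x_bit, y_bit)
    return prot.sub(prot.add(x_bit, y_bit), prot.add(xy, xy))
```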
Notes from discussion with @mortendahl and @jvmancuso:
I will update the top level comment to reflect these decisions.
Turns out we don’t really need Share Convert; skipping it only introduces a tiny error on the same scale as that of (either) truncation protocol. Let’s keep the code around for now in case other uses come up.