
SecureNN Integration


We (Dropout Labs) are ready to start merging our internal SecureNN prototype into tf-encrypted.

Integration plan

The plan is for SecureNN to be a separate Protocol class that inherits from Pond but swaps Int100Tensor out for a new BackingTensor. As a result, we may not be able to subclass Pond directly as the code stands right now, even though nearly all of its code will be reused in the SecureNN Protocol. More concrete plans and issues for each of these steps are in the works.
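As a rough illustration of the intended shape (all class and attribute names below are hypothetical stand-ins, not tf-encrypted's actual API), the backing tensor would be a single swappable hook on the protocol class:

```python
# Hypothetical sketch only: Pond, tensor_factory, Int100Tensor and
# NativeTensor are illustrative stand-ins, not tf-encrypted's actual API.

class Int100Tensor:
    """Stand-in for the CRT-backed tensor Pond currently uses."""
    modulus = 2 ** 100  # illustrative value

class NativeTensor:
    """Stand-in for the planned single-modulus backing tensor."""
    modulus = 2 ** 64  # illustrative value

class Pond:
    # The backing tensor is exposed as one swappable hook ...
    tensor_factory = Int100Tensor

    def add(self, x, y):
        # All arithmetic is reduced modulo the backing tensor's ring.
        return (x + y) % self.tensor_factory.modulus

class SecureNN(Pond):
    # ... so SecureNN inherits nearly all of Pond and only swaps the hook.
    tensor_factory = NativeTensor
```

The point of the sketch is that everything except `tensor_factory` is inherited unchanged, which is what the first two steps below aim to make true in the real code.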

  • Our first step should be to figure out precisely what we need to do to be able to inherit the Pond Protocol with our new BackingTensor.
  • Subclass Pond directly and create a SecureNN class.
  • Once we’ve done that, we can write out the new BackingTensor class (i.e. NativeTensor).
  • Integrate the Select Share protocol (#154)
  • Integrate the bit decomposition operation (aka binarize) in TensorFlow (using tf.bitwise.right_shift, or repeated division/modulus as a fallback). This is required for all further work on the SecureNN protocols. Do this on the NativeTensor to start.
  • Integrate the bit decomposition operation (aka binarize) in TensorFlow for the CrtTensor.
  • Integrate the Private Compare protocol (#153)
  • Integrate the Share Convert protocol (first as noop for Int100Tensor, then progressively more general for other BackingTensors, see note below)
  • Integrate the Compute MSB protocol
  • Integrate the dReLU & ReLU protocols
  • ~Integrate the Division protocol~
    • this is not complete, but will not be done as part of this issue: #258
  • Integrate the MaxPool protocol
  • Figure out which uses of Mul/MatMul need to include the zero masking that hides results from the crypto provider (conforming to the SecureNN MatMul protocol). (cc @mortendahl)
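For the binarize step above, the right-shift approach can be sketched in NumPy (where our prototype lives); a TensorFlow version would use tf.bitwise.right_shift the same way:

```python
import numpy as np

def binarize(x, bits=32):
    """Decompose an integer tensor into its little-endian bit representation.

    Mirrors the right-shift approach described above, but in NumPy:
    shifting by k and masking with 1 extracts bit k.
    """
    x = np.asarray(x, dtype=np.int64)
    shifts = np.arange(bits, dtype=np.int64)
    # Result shape: x.shape + (bits,); entry [..., k] is bit k of x.
    return (x[..., None] >> shifts) & 1
```

For example, `binarize(5, bits=4)` yields the bits `[1, 0, 1, 0]`, and summing the bits against powers of two recovers the input.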

Here is a convenient reference for each protocol’s dependencies: squares represent Main Protocols, circles represent Supporting Protocols, and development can be parallelized accordingly.

[Figure: SecureNN: Operation Dependency Graph]

Note that the bit decomposition operation is not pictured, as it is a prerequisite for Private Compare and is not itself a SecureNN protocol.

Changes from the original SecureNN paper

Based on our internal experimentation with numpy, we plan on changing a few things from the original paper.

  • It’s highly likely, pending the result of #100, that we’ll use a BackingTensor based on tf.int64 (as opposed to the uint64 from the paper). This is largely motivated by what’s currently/easily enabled in TensorFlow. For the moment we’ll be developing with Int100Tensor in mind, to avoid complications, although we’ll also try to test with the planned Int32 and Int64 Tensors by the end. This is possible because the BackingTensor is abstracted away from Pond, so we can subclass Pond without too much complication.
  • As a result, we’ll need to switch the ring SecureNN uses away from L = 2 ** 64, choosing it according to which BackingTensor we’re using. Again, this is all abstracted away in the BackingTensor implementation, and all of SecureNN’s results still hold.
  • We want to abstract a few parts of SecureNN a bit differently. In particular, there’s a bitwise XOR method that gets used implicitly quite a bit in the protocols that deal with bits. We’ll want that, and any other recurring functionality, abstracted into a standalone protocol method.
  • We hope to have our implementation of Share Convert work for converting to arbitrary rings, so that it can be reused in future protocols (should it become useful on its own). We’ll have similar hopes for the rest of the protocols, although this is the most obvious case.
  • We’ll be using our prototype repo to guide and motivate the integration, to take into account any other adjustments we had to make to the published pseudocode in order to get all protocols working together in numpy.
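To make the bit-XOR abstraction concrete: on bit-valued inputs, XOR can be written purely arithmetically, which is what lets the same formula run on additive shares. A minimal sketch (the default modulus of 67 is an illustrative small prime for a bit field, not a decision recorded in this issue):

```python
def bit_xor(x, y, modulus=67):
    """XOR of bit-valued inputs written arithmetically, so the same
    formula works on additive secret shares: for x, y in {0, 1},
    x XOR y == x + y - 2*x*y.

    The modulus only matters once the inputs are shares rather than
    plaintext bits; 67 here is an illustrative choice.
    """
    return (x + y - 2 * x * y) % modulus
```

Abstracting this into a named protocol method, as proposed above, keeps the bit-handling protocols readable instead of repeating the `x + y - 2xy` pattern inline.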

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Reactions: 2
  • Comments: 17 (17 by maintainers)

Top GitHub Comments

2 reactions
justin1121 commented, Sep 13, 2018

Notes from discussion with @mortendahl and @jvmancuso:

  • Try to abstract certain equations from SecureNN out into nameable functions, e.g. the bit XOR in Private Compare.
  • Create a native tensor, represented underneath by an int32 tensor, into which we can pass any modulus and which does all the mod reductions internally.
  • Subclass Pond and create a SecureNN protocol. We will start by using the native tensor in this new protocol.
  • Fill in all the functions SecureNN needs from the top down, then begin filling in the more complicated functions.
  • Start work on CRT tensor bit extraction.
  • Skip the zero masking in matmul for now and just remask shares before sending to the crypto provider.
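The "native tensor" bullet above can be sketched as a thin ndarray wrapper (names are illustrative; int64 rather than int32 is used here only so the sketch itself cannot overflow when multiplying small values):

```python
import numpy as np

class NativeTensor:
    """Illustrative sketch of the 'native tensor' idea: a plain integer
    ndarray plus an arbitrary modulus, with every mod reduction kept
    inside the class so callers never reduce manually."""

    def __init__(self, values, modulus):
        self.modulus = modulus
        # Reduce on construction so stored values are always canonical.
        self.values = np.asarray(values, dtype=np.int64) % modulus

    def __add__(self, other):
        assert self.modulus == other.modulus
        return NativeTensor(self.values + other.values, self.modulus)

    def __mul__(self, other):
        assert self.modulus == other.modulus
        return NativeTensor(self.values * other.values, self.modulus)
```

Because the modulus is a constructor argument rather than a class constant, the same class can back rings of different sizes, which is what the Share Convert discussion below relies on.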

I will update the top level comment to reflect these decisions.

1 reaction
mortendahl commented, Nov 5, 2018

Turns out we don’t really need Share Convert; skipping it only introduces a tiny error on the same scale as (either) truncation protocol. Let’s keep the code around for now in case some uses come up.
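A quick check of the "tiny error" claim, assuming two-party additive sharing and SecureML-style local truncation (an assumption about which truncation protocol is meant): reconstructing locally truncated shares matches the truncated secret up to ±1, except with small probability when the shares wrap around the ring.

```python
import random

random.seed(0)
L = 2 ** 32   # illustrative ring size
k = 8         # truncate by 2 ** k

max_err = 0
for _ in range(1000):
    x = random.randrange(2 ** 8)   # secret kept small relative to L
    s0 = random.randrange(L)       # two-party additive sharing of x
    s1 = (x - s0) % L

    # Each party truncates its own share locally, with no Share Convert
    # (SecureML-style local truncation; an assumption, see lead-in).
    t0 = s0 >> k
    t1 = (L - ((L - s1) >> k)) % L

    recon = (t0 + t1) % L
    # Centered difference from the true truncation x >> k.
    err = ((recon - (x >> k) + L // 2) % L) - L // 2
    max_err = max(max_err, abs(err))

# Except with small probability (a wrap-around of the shares),
# max_err comes out as at most 1 in the last place.
```

That off-by-one is on the same scale as the truncation protocol's own error, which matches the observation above that Share Convert can be skipped.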
