
Refactoring into kernels


Objectives:

  • Improve performance through specialised protocols. For instance, secure comparison may be performed via generic bit decomposition but can in certain cases be made more efficient through specialised protocols such as those given in e.g. SecureNN.

  • Allow easy mixing and matching of protocols to benchmark performance, potentially having several variants of an operation in order to observe how they compare on a set of tasks. For instance, polynomial approximation can be done in a constant number of rounds but may not be worth it due to the compute/communication tradeoff. Similarly, testing model performance with different approximation coefficients and degrees would be useful.

  • Make it easy to express mixed-protocol solutions. Concretely, this is needed for ABY (different types of sharing) and Gazelle (HE and GC).

  • Allow protocol code to be isolated or modular based on e.g. type, making it easier to verify a particular operation from e.g. a security point of view. Specifically, kernels should somewhat represent reactive functionalities as used in cryptographic literature.

  • Minimise code duplication by providing high-level kernels that work across data types. For instance, sigmoid polynomial approximation is often independent of the cryptographic technique used for private fixedpoints, as long as multiplication and addition are supported. Another example is that many operations (e.g. reduce_sum or cumsum) simply delegate to the underlying ring tensors and it would be nice to have a generic kernel for this.
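
As a concrete illustration of the last point, here is a minimal sketch (plain Python, hypothetical names, using the common degree-3 coefficients of 1/2 + x/4 − x³/48) of a sigmoid kernel that only assumes its argument supports `*` and `+`:

```python
def sigmoid_kernel(x, coeffs=(0.5, 0.25, 0.0, -1.0 / 48.0)):
    # Polynomial sigmoid approximation via Horner's rule:
    # c0 + x*(c1 + x*(c2 + x*c3)).
    # Only `*` and `+` on x are used, so the same kernel serves any
    # tensor type that overloads them (public, additive, masked, ...).
    result = coeffs[-1]
    for c in reversed(coeffs[:-1]):
        result = x * result + c
    return result
```

With plain floats, `sigmoid_kernel(0.0)` evaluates to `0.5`, matching the true sigmoid at zero; a private fixedpoint tensor with overloaded operators would flow through the same code path.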

Derived objectives:

  • As a consequence of supporting operation overloading, tensors should be decoupled from any specific protocol and instead be simple data containers. One should still be able to perform, say x + y, and have it mapped to a current protocol for addition given the type of x and y.

  • As a consequence of minimising code duplication, we should support kernel composition, where e.g. a generic multiplication kernel for fixedpoints could simply delegate to multiplication for ring elements and apply a truncation.

  • As a consequence of having a modular code base it should be easy to add support for new data types such as quantised values or SecureNN’s odd ring tensors. Doing so should simply mean extending the code base and not require any update to existing kernels.
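
The dispatch and composition points above can be sketched together; this is a hypothetical registry (all names invented here) where tensors are plain containers, `x * y` is resolved through a kernel table keyed on operand types, and the fixedpoint kernel composes the ring kernel with a truncation:

```python
# Hypothetical kernel registry: kernels are looked up by
# (operation, operand types), and higher-level kernels compose lower ones.
KERNELS = {}

def register(op, *types):
    def wrap(kernel):
        KERNELS[(op,) + types] = kernel
        return kernel
    return wrap

def dispatch(op, *args):
    return KERNELS[(op,) + tuple(type(a) for a in args)](*args)

class RingTensor:
    # Plain data container for ring elements; no protocol logic inside.
    MODULUS = 2**64
    def __init__(self, value):
        self.value = value % self.MODULUS

class FixedTensor:
    # Fixedpoint value encoded as a ring value scaled by SCALE.
    SCALE = 2**16
    def __init__(self, ring):
        self.ring = ring
    def __mul__(self, other):
        # Operator overloading maps `x * y` to the registered kernel.
        return dispatch('mul', self, other)

@register('mul', RingTensor, RingTensor)
def mul_ring(x, y):
    return RingTensor(x.value * y.value)

@register('mul', FixedTensor, FixedTensor)
def mul_fixed(x, y):
    # Composition: delegate to the ring kernel, then truncate the
    # doubled scale back down.
    z = dispatch('mul', x.ring, y.ring)
    return FixedTensor(RingTensor(z.value // FixedTensor.SCALE))
```

Adding a new data type then means registering new kernels against its type, without touching existing ones.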

Given the above, it only seems necessary to put the kernel abstraction in place for protocol and data-type tensors, and not for backing tensors. While there is still room for improvement and experimentation around the latter, there currently seems to be no need for it from a high-level perspective.

Operations (as currently implemented in Pond and SecureNN):

  • cache
  • truncate
  • reveal
  • add
  • reduce
  • cumsum
  • sub
  • mul: (private, private), (private, public), (public, private), (public, public)
  • square (think this could/should be removed)
  • matmul
  • conv2d
  • avgpool2d
  • batch_to_space_nd
  • indexer
  • transpose
  • strided_slice
  • split
  • stack
  • concat
  • mask
  • reshape
  • expand
  • squeeze
  • equal
  • zeros
  • bitwise_not
  • bitwise_and
  • bitwise_or
  • msb
  • lsb
  • bits
  • negative
  • non_negative
  • less
  • less_equal
  • greater
  • greater_equal
  • select
  • equal_zero
  • relu
  • maxpool2d
  • maximum
  • reduce_max
  • argmax
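
Several of the shape-manipulating operations above (transpose, reshape, split, stack, reduce, cumsum, ...) are linear and can be applied share-wise, which is exactly the "generic delegating kernel" case. A toy sketch, assuming a two-share additive representation with NumPy backing tensors (hypothetical names):

```python
import numpy as np

MODULUS = 2**32

class AdditiveTensor:
    # Toy two-party additive sharing: shares sum to the secret mod MODULUS.
    def __init__(self, share0, share1):
        self.shares = (share0, share1)
    def reveal(self):
        return (self.shares[0] + self.shares[1]) % MODULUS

def lift(backing_op):
    # Generic delegating kernel: apply a linear backing-tensor
    # operation to each share independently.
    def kernel(x, *args, **kwargs):
        return AdditiveTensor(*(backing_op(s, *args, **kwargs) for s in x.shares))
    return kernel

transpose = lift(np.transpose)
reduce_sum = lift(np.sum)
```

One `lift` then covers the whole family of linear ops instead of one hand-written kernel per operation per data type.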

Types:

  • public: int, fixed
  • private: additive_int, additive_fixed
  • masked: masked_int, masked_fixed ??
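
The public/private split above is also what drives kernel selection: as the mul signatures note, a (private, public) multiplication can be done locally by each party scaling its share, whereas (private, private) requires interaction (e.g. a Beaver triple). A toy sketch with a deterministic two-party additive sharing (illustration only, hypothetical names):

```python
MODULUS = 2**32

def share(secret, r=12345):
    # Toy deterministic sharing for illustration; a real sharing uses
    # fresh randomness per value.
    return ((secret - r) % MODULUS, r)

def reveal(shares):
    return sum(shares) % MODULUS

def mul_private_public(x_shares, k):
    # (private, public): each party scales its own share locally;
    # no communication and no precomputed triples are needed.
    return tuple((s * k) % MODULUS for s in x_shares)
```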

Metatypes:

  • constant
  • public placeholder
  • private placeholder
  • public variable
  • private variable
  • cached public
  • cached private
  • cached masked

Issue Analytics

  • State: open
  • Created: 5 years ago
  • Reactions: 2
  • Comments: 24 (24 by maintainers)

Top GitHub Comments

4 reactions
mortendahl commented, Mar 1, 2019

Some thoughts around how high level users can use different kernels.

Constraints:

  • model specification should be free of cryptographic jargon (keras only specifies the structure)

Pond only

model = tfe.keras.Sequential([
  tfe.keras.layers.Dense(activation='relu'),  # approximate when used with Pond
  tfe.keras.layers.Dense(activation='sigmoid'),  # approximate when used with Pond
])

model.compile(protocol='pond')

Pond and SecureNN

We are going to invent a new activation function here, relu_exact, to differentiate it from the built-in relu.

model = tfe.keras.Sequential([
  tfe.keras.layers.Dense(activation='relu'),  # will use Pond when compiled below
  tfe.keras.layers.Dense(activation='relu_exact'),  # will use SecureNN when compiled below
  tfe.keras.layers.Dense(activation='sigmoid'),  # will use Pond when compiled below
])

my_protocol = tfe.protocol.pond + {
  'relu_exact': tfe.protocol.securenn.relu
}

model.compile(protocol=my_protocol)
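
One way the `tfe.protocol.pond + {…}` override above could be realised is as a plain kernel-table merge; a minimal sketch (hypothetical design, with strings standing in for actual kernels):

```python
class Protocol:
    # A protocol as a name -> kernel table; `+` overlays overrides on
    # top of a base protocol without mutating it.
    def __init__(self, kernels):
        self.kernels = dict(kernels)
    def __add__(self, overrides):
        merged = dict(self.kernels)
        merged.update(overrides)
        return Protocol(merged)

pond = Protocol({
    'relu': 'pond.relu_approx',
    'sigmoid': 'pond.sigmoid_approx',
})
my_protocol = pond + {'relu_exact': 'securenn.relu'}
```

The Gazelle-style dict below then fits the same shape: a bare mapping from operation names to kernels, with no base protocol underneath.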

Gazelle

model = tfe.keras.Sequential([
  tfe.keras.layers.Dense(activation='relu'),
])

gazelle = {
  'dense': tfe.protocol.ahe.dense,
  'relu': tfe.protocol.aby.relu,
}

model.compile(protocol=gazelle)

Plaintext training, encrypted prediction

model = tfe.keras.Sequential([
  tfe.keras.layers.Dense(activation='relu'),
  tfe.keras.layers.Dense(activation='sigmoid'),
])

model.compile()
model.fit(..)

model.predict(x, protocol='pond')

1 reaction
justin1121 commented, Mar 1, 2019

Also just realized that aby would be the protocol when you use the Replicated numbers. Perhaps there are other protocols that could share kernels with aby.
