Refactoring into kernels
Objectives:
- Improve performance through specialised protocols. For instance, secure comparison may be performed via generic bit decomposition, but can in certain cases be made more efficient through specialised protocols such as those given in e.g. SecureNN.
- Allow easy mixing and matching of protocols to benchmark performance, potentially having several variants of an operation in order to observe how they compare on a set of tasks. For instance, polynomial approximation can be done in a constant number of rounds but may not be worth it due to the compute/communication tradeoff. Similarly, testing model performance with different approximation coefficients and degrees would be useful.
- Make it easy to express mixed-protocol solutions. Concretely, this is needed for ABY (different types of sharing) and Gazelle (HE and GC).
- Allow protocol code to be isolated or modular based on e.g. type, making it easier to verify a particular operation from e.g. a security point of view. Specifically, kernels should roughly correspond to the reactive functionalities used in the cryptographic literature.
- Minimise code duplication by providing high-level kernels that work across data types. For instance, sigmoid polynomial approximation is often independent of the cryptographic technique used for private fixedpoints, as long as multiplication and addition are supported. Another example is that many operations (e.g. reduce_sum or cumsum) simply delegate to the underlying ring tensors, and it would be nice to have a generic kernel for this; see the sketch after this list.
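As a rough sketch of the kind of type-generic kernel meant here (the classes and the `backing`/`with_backing` accessors are illustrative assumptions, not existing code), a sigmoid approximation can be written purely against `+` and `*`, and a reduce_sum kernel can simply forward to the backing ring tensors:

```python
# Illustrative sketch only: the tensor classes and the backing/with_backing
# accessors are assumptions, not existing tf-encrypted code.

class SigmoidKernel:
    """Polynomial approximation of sigmoid; works for any tensor type
    supporting addition and multiplication (with floats and with itself)."""

    # example degree-3 coefficients (Taylor expansion of sigmoid at 0);
    # in practice these would be configurable per benchmark
    COEFFS = [0.5, 0.25, 0.0, -1.0 / 48]

    def __call__(self, x):
        y = x * self.COEFFS[1] + self.COEFFS[0]
        power = x
        for c in self.COEFFS[2:]:
            power = power * x  # each step costs one secure multiplication
            if c != 0.0:
                y = y + power * c
        return y


class ReduceSumKernel:
    """Generic delegating kernel: reduce_sum on e.g. an additively shared
    tensor is just reduce_sum on each backing ring tensor."""

    def __call__(self, x, axis=None):
        return x.with_backing([b.reduce_sum(axis) for b in x.backing])
```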
Derived objectives:
- As a consequence of supporting operator overloading, tensors should be decoupled from any specific protocol and instead be simple data containers. One should still be able to perform, say, `x + y`, and have it mapped to the current protocol for addition given the types of x and y.
- As a consequence of minimising code duplication we should support kernel composition, where e.g. a generic multiplication kernel for fixedpoints could simply delegate to multiplication for ring elements and apply a truncation.
- As a consequence of having a modular code base it should be easy to add support for new data types such as quantised values or SecureNN's odd ring tensors. Doing so should simply mean extending the code base, without requiring any update to existing kernels. The sketch after this list illustrates all three points.
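A minimal sketch of how these three points could fit together (everything here is hypothetical: the registry, the class names, and the stubbed protocol bodies):

```python
# Hypothetical sketch, not existing code: a global kernel registry keyed on
# the operation name and the operand types.

KERNELS = {}

def register(op, *types):
    def wrap(kernel):
        KERNELS[(op,) + types] = kernel
        return kernel
    return wrap

def dispatch(op, *args):
    return KERNELS[(op,) + tuple(type(a) for a in args)](*args)

class PrivateRing:
    """Plain data container for additively shared ring elements."""
    def __init__(self, shares):
        self.shares = shares

class PrivateFixed:
    """Plain data container for fixedpoints encoded into a PrivateRing;
    operator overloading maps straight to the registry."""
    def __init__(self, ring, scale=2**16):
        self.ring, self.scale = ring, scale
    def __add__(self, other):
        return dispatch("add", self, other)
    def __mul__(self, other):
        return dispatch("mul", self, other)

@register("mul", PrivateRing, PrivateRing)
def mul_ring(x, y):
    # stub: would run e.g. a Beaver-triple multiplication protocol
    return PrivateRing(shares=None)

@register("truncate", PrivateRing)
def truncate_ring(x):
    # stub: would remove the doubled fixedpoint scale after a mul
    return x

# composition: the fixedpoint kernel delegates to ring mul plus truncation
@register("mul", PrivateFixed, PrivateFixed)
def mul_fixed(x, y):
    z = dispatch("mul", x.ring, y.ring)
    return PrivateFixed(dispatch("truncate", z), x.scale)

# extensibility: a new data type (e.g. SecureNN's odd ring) means adding a
# container and registering kernels for it, leaving existing kernels alone
class OddRing:
    def __init__(self, shares):
        self.shares = shares

@register("mul", OddRing, OddRing)
def mul_odd_ring(x, y):
    # stub: would run multiplication over the odd ring
    return OddRing(shares=None)

# usage: `x * y` on two PrivateFixed containers ends up in mul_fixed
z = PrivateFixed(PrivateRing([0])) * PrivateFixed(PrivateRing([0]))
```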
Given the above, it only seems necessary to put the kernel abstraction in place for protocol and data type tensors, and not for backing tensors. While there is still room for improvements and experiments around the latter, there currently seems to be no need to expose that at the higher levels.
Operations (as currently implemented in Pond and SecureNN):
- cache
- truncate
- reveal
- add
- reduce
- cumsum
- sub
- mul: (private, private), (private, public), (public, private), (public, public)
- square (think this could/should be removed)
- matmul
- conv2d
- avgpool2d
- batch_to_space_nd
- indexer
- transpose
- strided_slice
- split
- stack
- concat
- mask
- reshape
- expand
- squeeze
- equal
- zeros
- bitwise_not
- bitwise_and
- bitwise_or
- msb
- lsb
- bits
- negative
- non_negative
- less
- less_equal
- greater
- greater_equal
- select
- equal_zero
- relu
- maxpool2d
- maximum
- reduce_max
- argmax
Types:
- public: int, fixed
- private: additive_int, additive_fixed
- masked: masked_int, masked_fixed ??
Metatypes:
- constant
- public placeholder
- private placeholder
- public variable
- private variable
- cached public
- cached private
- cached masked
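One way to read the two lists above is as orthogonal axes, which could be modelled along these lines (purely illustrative; none of these names exist in the code base):

```python
# Illustrative only: types and metatypes as two orthogonal axes.

from dataclasses import dataclass
from enum import Enum

class Visibility(Enum):
    PUBLIC = "public"    # int, fixed
    PRIVATE = "private"  # additive_int, additive_fixed
    MASKED = "masked"    # masked_int, masked_fixed

class Metatype(Enum):
    CONSTANT = "constant"
    PLACEHOLDER = "placeholder"
    VARIABLE = "variable"
    CACHED = "cached"

@dataclass
class TensorType:
    visibility: Visibility
    metatype: Metatype
    dtype: str  # "int" or "fixed"

# e.g. a "cached private" fixedpoint tensor:
cached_private_fixed = TensorType(Visibility.PRIVATE, Metatype.CACHED, "fixed")
```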
Some thoughts around how high-level users can use different kernels.
Constraints:
- Pond only
- Pond and SecureNN: we are going to invent a new activation function here, `relu_exact`, to differentiate from the built-in `relu`
- Gazelle
- Plaintext training, encrypted prediction
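A hypothetical sketch of what such constraints could look like from the user's side (`set_protocols` and the kernel names are illustrative assumptions, not an existing API):

```python
# Hypothetical user-facing sketch; none of these names are existing API.

import contextlib

_active_protocols = ["pond"]  # default constraint: Pond only

@contextlib.contextmanager
def set_protocols(*names):
    """Constrain which protocols kernels may be selected from."""
    global _active_protocols
    previous, _active_protocols = _active_protocols, list(names)
    try:
        yield
    finally:
        _active_protocols = previous

# Pond only: relu resolves to the built-in polynomial approximation
with set_protocols("pond"):
    ...  # y = relu(x)

# Pond and SecureNN: specialised comparison kernels become available,
# so relu_exact can be offered next to the approximate built-in relu
with set_protocols("pond", "securenn"):
    ...  # y = relu_exact(x)
```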
Also just realized that `aby` would be the protocol when you use the `Replicated` numbers. Perhaps there are other protocols that could share kernels with `aby`.
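Continuing the registry sketch from earlier (names still illustrative), sharing kernels between `aby` and other protocols could then just mean registering one kernel for the shared `Replicated` container:

```python
# Illustrative continuation of the registry sketch above.

class Replicated:
    """Plain data container for replicated secret shares."""
    def __init__(self, shares):
        self.shares = shares

@register("mul", Replicated, Replicated)
def mul_replicated(x, y):
    # stub: local cross terms followed by a resharing round; any protocol
    # operating on Replicated containers (e.g. aby) can reuse this kernel
    return Replicated(shares=None)
```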