Dmytro Zakharov, Lasha Antadze, Oleksandr Kurbatov
The security of current neural networks is comparable to the security of the Internet in its early days. While all major companies are focused on performance, there is a massive gap in AI safety. We can't be sure there aren't hundreds of low-paid workers behind the model. We can't be sure the model doesn't lie. We can't verify the model's output. We can't be sure it doesn't collect (actually, it does) personal data about clients and doesn't share it with anyone.
While AI is expanding across the world, we need a fundamentally different approach to how it operates. We introduce Bionetta🌿 -- a zkML framework designed to resolve some of these security concerns and to achieve the following set of properties:
Provability. The verifier can check the correctness of the model execution.
Zero-knowledge. The verifier cannot extract any information except that the statement is correct. For example, a user can scan their face and create a proof of likeness, but no biometric data is shared with the verifier.
Client-side. We avoid any data leakage to a trusted provider: the framework allows for creating models whose proofs are computationally cheap to generate on the user's device.
Simply explained, a neural network is a function parameterized by a large number of parameters (which we call $\mathbf{w}$). The neural network takes the input $\mathbf{x}$ and outputs the prediction $y$. Formally, we write that as $y = f(\mathbf{x}; \mathbf{w})$. The inference process is illustrated below in Figure 1.
As an example, consider the linear regression model, taking three inputs $x_1, x_2, x_3$. The model can be parameterized by weights $\mathbf{w} = (w_1, w_2, w_3)$ and the function itself looks as:

$$f(\mathbf{x}; \mathbf{w}) = w_1 x_1 + w_2 x_2 + w_3 x_3.$$

When we train the model (aka the neural network), we adjust the parameters $\mathbf{w}$ to minimize the so-called loss value, representing how poorly our model performs on the training data. After the training, we fix the values $\mathbf{w}$, and whenever we need to gain some information from the incoming input $\mathbf{x}$, we simply compute $y = f(\mathbf{x}; \mathbf{w})$.
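For instance, with hypothetical fitted weights $\mathbf{w} = (0.5, -0.2, 0.3)$ and input $\mathbf{x} = (1.0, 2.0, 3.0)$, the prediction is simply

$$y = f(\mathbf{x}; \mathbf{w}) = 0.5 \cdot 1.0 - 0.2 \cdot 2.0 + 0.3 \cdot 3.0 = 1.0.$$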
Note. The reason why, all of a sudden, we decided to recall machine learning theory (although "quite" superficially) is that one of the key optimizations we came up with is closely related to how we interpret the parameters $\mathbf{w}$. Keep that in mind!
Now, having the neural network $f(\mathbf{x}; \mathbf{w})$, how do we wrap it in a zero-knowledge system? More importantly, what are the goals we are pursuing? In ZKML, we have two primary use-cases:
The model parameters $\mathbf{w}$ are private (the architecture is still public!), but the input $\mathbf{x}$ is public. For example, OpenAI might want to hide the weights of their new model (which they do, by the way), but in order to prove to the user that it is indeed this neural network that generated the output, they might use some existing ZKML framework (such as EZKL).
The model parameters $\mathbf{w}$ are public, but the input $\mathbf{x}$ is private. This is the case we are primarily interested in, which is called client-side proving. For instance, the neural network might be a face recognition system, while the input is the person's photo.
This way, to implement the neural network circuit, we need to provide two sets of signals: the input values (formatted in a specific way to fit into the circuits, which we call the "quantized values" and denote with a hat, $\widehat{\mathbf{x}}$) and the quantized weights $\widehat{\mathbf{w}}$. Depending on the use case, we either conceal $\widehat{\mathbf{x}}$ or $\widehat{\mathbf{w}}$. Then, we represent all the calculations of the original neural network inside the circuit $\widehat{f}$ so that $\widehat{f}(\widehat{\mathbf{x}}; \widehat{\mathbf{w}})$ and $f(\mathbf{x}; \mathbf{w})$ represent approximately the same value. That is pretty much it.
If the only goal is to check the output correctness, we might add another public signal: the claimed quantized output $\widehat{y}$. The circuit can then check whether $\widehat{f}(\widehat{\mathbf{x}}; \widehat{\mathbf{w}}) \approx \widehat{y}$ (for example, by checking that the distance (norm) $\lVert \widehat{f}(\widehat{\mathbf{x}}; \widehat{\mathbf{w}}) - \widehat{y} \rVert$ is below a certain small threshold $\varepsilon$).
The whole process is depicted below, in Figure 2. First, we calculate the neural network output $y = f(\mathbf{x}; \mathbf{w})$ and quantize this value to get the claimed output $\widehat{y}$. Then, given the neural network circuit $\widehat{f}$ and the quantized input $\widehat{\mathbf{x}}$, we compute $\widehat{f}(\widehat{\mathbf{x}}; \widehat{\mathbf{w}})$ and, inside the zero-knowledge prover, assert that $\widehat{f}(\widehat{\mathbf{x}}; \widehat{\mathbf{w}}) \approx \widehat{y}$, yielding the proof $\pi$. Finally, the user can publish $(\widehat{y}, \pi)$.
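In other words (a compact restatement, with the concrete norm and threshold left abstract), the relation being proven in the client-side setting is:

$$\text{public: } \widehat{\mathbf{w}},\ \widehat{y},\ \varepsilon; \qquad \text{private: } \widehat{\mathbf{x}}; \qquad \text{statement: } \big\lVert \widehat{f}(\widehat{\mathbf{x}}; \widehat{\mathbf{w}}) - \widehat{y} \big\rVert \le \varepsilon.$$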
So what is so special about Bionetta🌿? As said earlier, we specifically target the case where the input is private while the weights are public, which leads to very neat optimization tricks when implemented in an R1CS proving system. To see the trick, suppose you want to implement, in zero knowledge, the calculation of the following simple model, consisting of a single matrix-vector multiplication taking three elements and outputting two (in fancy terms, a multidimensional linear regression):

$$f(\mathbf{x}; W) = W\mathbf{x} = \begin{pmatrix} w_{1,1} x_1 + w_{1,2} x_2 + w_{1,3} x_3 \\ w_{2,1} x_1 + w_{2,2} x_2 + w_{2,3} x_3 \end{pmatrix}.$$

For simplicity, assume that all parameters are already properly quantized, so we are working over elements of some prime field. So, what is the number of gates/constraints needed to implement the verification of the function $f(\mathbf{x}; W)$?
All current approaches (including EZKL or keras2circom) do the following: we introduce public weight signals $w_{i,j}$, private input signals $x_j$, and compute the six products $w_{i,j} \cdot x_j$ for $i \in \{1, 2\}$ and $j \in \{1, 2, 3\}$. Finally, we compute the corresponding sums to get the output vector elements. The example implementation in Circom might look like:
template MatVecMul2x3() {
    // Public input: weights matrix W of size 2x3
    signal input W[2][3];
    // Private input: three input neurons x
    signal input x[3];
    // Output: matrix-vector product result y = W*x
    signal output y[2];

    // One signal (and thus one constraint) per product W[i][j] * x[j]
    signal prod[2][3];

    for (var i = 0; i < 2; i++) {
        var sum = 0;
        for (var j = 0; j < 3; j++) {
            prod[i][j] <== W[i][j] * x[j];
            sum += prod[i][j];
        }
        // The output is a linear combination of the product signals
        y[i] <== sum;
    }
}
Halo2/$\mathcal{P}\text{lon}\mathcal{K}$-ish circuits would have a similar structure. Seems like the end of the story: we have successfully implemented the circuit.
However, what if we tell you that calculating such $f(\mathbf{x};W)$ costs zero constraints in Bionetta🌿? The trick is almost obvious and lies on the surface, but we have not seen it anywhere in the ZKML jungle! Behold:
All neural network weights must not be signals, but must be embedded into the circuit as constants. Such construction would save LOADS of signals.
Indeed, when implemented in R1CS, any multiplication by a constant and any addition are essentially free. Consider the general example of $f(\mathbf{x}; W) = W\mathbf{x}$, where the weight matrix $W$ is of size $m \times n$ (and thus $\mathbf{x}$ is of size $n$).
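To illustrate the idea, here is a minimal sketch of the same $2 \times 3$ matrix-vector product in the Bionetta style; the weight values below are made up for the example, while a real build would hardcode the trained weights (as the autogenerated weights.circom shown later does).

template MatVecMul2x3Const() {
    // Hardcoded (hypothetical) weights: constants, not signals
    var W[2][3];
    W = [[3, 1, 4], [1, 5, 9]];

    // Private input: three input neurons x
    signal input x[3];
    // Output: y = W*x
    signal output y[2];

    for (var i = 0; i < 2; i++) {
        var sum = 0;
        for (var j = 0; j < 3; j++) {
            // Constant-by-signal products fold into a linear combination
            sum += W[i][j] * x[j];
        }
        // A purely linear assignment: with constraint simplification
        // enabled, it is substituted away and adds no R1CS constraints
        y[i] <== sum;
    }
}

Compared to the previous template, there are no weight signals and no multiplication constraints: each output is just a linear combination of the input signals with constant coefficients.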
Note that such an optimization might, to some extent, be exploited in $\mathcal{P}\text{lon}\mathcal{K}$ as well, but the direct implementation would not give any results: additions and multiplications by constants in $\mathcal{P}\text{lon}\mathcal{K}$ still require separate gates and thus cannot be implemented as efficiently as in R1CS.
With such an approach, classical machine learning methods such as linear regression, PCA/LDA, or even logistic regression cost roughly zero constraints.
Free linearities are great news, but here is the bad news: the number of non-linear activations in modern neural networks is still overwhelming. Indeed, consider, for example, the simplest fully connected neural network with a single layer:

$$f(\mathbf{x}; W) = \phi(W\mathbf{x}),$$

where $\phi$ is a non-linear activation function applied element-wise.
And the corresponding illustration of this execution:
Note that this is just the previous matrix-vector product example, but here we simply apply the non-linear function $\phi$ to each output value.
While $\mathcal{P}\text{lon}\mathcal{K}$-ish approaches can handle such a problem quite efficiently by utilizing lookup tables, R1CS in its vanilla form does not possess such power. For this very reason, verifying even the most basic neural network activations, such as ReLU, requires roughly 255 constraints per call (corresponding to roughly the bitsize of the prime field). In the MLP above, for example, verifying the computation of $\phi(W\mathbf{x})$ would cost zero constraints for computing $W\mathbf{x}$, but applying $\phi$ element-wise would add roughly $255 \cdot m$ constraints for an $m$-dimensional output. How do we solve this problem?
As of now, we propose the following two solutions:
Build a custom neural network architecture that uses an insignificant number of non-linearity calls, but can use a high degree of linearity (e.g., large matrix products and such).
Implement the custom look-up logic for R1CS.
Both approaches deserve blog posts of their own, so we will skip the details here and move on.
All linear operations inside our framework are free.
Every non-linear call costs 255 constraints. However, this number will be severely reduced after introducing the upcoming optimization tricks.
If the neural network is built in a specific manner, that is, it utilizes many linear calls with a relatively low number of activations, we can build efficient circuits requiring a negligible amount of resources for both proving and verification.
Nonetheless, our benchmarks show that Bionetta🌿 (V1) is faster than existing approaches even when assessed on the models with a significant number of activations.
Now, the previous discussion completely omitted the fact that we need to work not over the prime field, but over floats/doubles. In other words, proving that $z = x \times y$ is very straightforward in the circuit if $x$, $y$, and $z$ are assumed to be field elements, but how do we prove statements like $z = x \times y$ when $x$ and $y$ are arbitrary real numbers (say, 64-bit doubles)?
While there are many amazing papers out there explaining how to prove such statements (see "Zero-Knowledge Location Privacy via Accurate Floating-Point SNARKs" by Jens Ernstberger et al., for instance), we are not looking for perfect accuracy. For that reason, we are going to use a standard technique: instead of multiplying $x \times y$ directly, we are going to quantize them first: that is, find $\widehat{x} = \lfloor 2^n x \rceil$ and $\widehat{y} = \lfloor 2^n y \rceil$ (where $\lfloor \cdot \rceil$ denotes the rounding operation), and then find the product $\widehat{x} \times \widehat{y}$. We call $n$ the precision coefficient, which specifies the degree of accuracy we want from the computation. The question, though, is how to interpret the result of such a product: in other words, how do we get the quantized value $\widehat{xy}$ of the product? The answer is very simple: simply divide by $2^n$! That being said, $\widehat{xy} \approx \lfloor \widehat{x}\widehat{y} / 2^n \rfloor$.
Example. Suppose we want to multiply $x = 1.25$ by $y = 2.4$ (the numbers here are purely illustrative). We choose the precision coefficient $n = 4$. This way, the quantization is performed as follows: $\widehat{x} = \lfloor 2^4 \cdot 1.25 \rceil = 20$ and $\widehat{y} = \lfloor 2^4 \cdot 2.4 \rceil = 38$. Then, one multiplies the two values to get $\widehat{x}\widehat{y} = 760$. To get the "real" value of $xy$, we divide the result by $2^{2n} = 256$, so we get approximately $2.969$. As one might see, this is close to the expected value of $1.25 \times 2.4 = 3.0$.
Now, the idea that we need to divide by $2^n$ is not random: notice that $\widehat{xy}$ is precisely $\lfloor 2^n \cdot xy \rceil$, but since we expect both $\widehat{x}$ and $\widehat{y}$ to be large enough (since $n$ is in practice larger than 20), we can drop the rounding operation: $\widehat{x}\widehat{y} = \lfloor 2^n x \rceil \cdot \lfloor 2^n y \rceil \approx 2^{2n} \cdot xy$. So all in all, $\widehat{xy} \approx \widehat{x}\widehat{y} / 2^n$, as specified above.
Now, since we need to encode numbers into finite field elements, we need to somehow handle negative integers. The idea here is simple: during the quantization step, if the integer $\widehat{x}$ is negative, we simply set $\widehat{x} \leftarrow q + \widehat{x}$, that is, we reduce it modulo the prime field order $q$.
In this scenario, to check whether the quantized value is positive or negative, typically one checks whether it falls in the range $(\lfloor q/2 \rfloor, q)$ or $[0, \lfloor q/2 \rfloor]$, so that we allocate roughly $q/2$ "slots" equally for both positive and negative integers. In the latter case, we assume the integer is positive, while in the former that it is negative.
Note. However, we noticed that the range check on $[0, \lfloor q/2 \rfloor]$ can sometimes be suboptimal, so we decided to change it to $[0, 2^{k-1})$, where $k$ is the bitsize of the prime field order $q$. This way, to check the sign of some integer, we simply check its $(k-1)$-th bit. This still does not save us from the roughly $k$ constraints per range check, but certain complex operations (such as computing LeakyReLU) might be better optimized.
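For instance, assuming a precision coefficient $n = 20$ (picked just for illustration), the value $x = -1.5$ is quantized and encoded as

$$\widehat{x} = \lfloor 2^{20} \cdot (-1.5) \rceil = -1572864 \;\longmapsto\; q - 1572864,$$

which sits just below $q$ and therefore has its $(k-1)$-th bit set, marking it as negative.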
This sounds awesome, but we encounter the same issue that keras2circom once faced. Suppose our goal is to compute $x^{10}$ with the precision coefficient $n = 35$. In this case, if we compute the quantization $\widehat{x} = \lfloor 2^{35} x \rceil$ and calculate the result as $\widehat{x}^{10}$, we would be screwed: multiplying a (roughly) 35-bit integer 10 times results in a 350-bit integer, which exceeds the allowed prime field order. Needless to say, the result of such an operation would be completely incorrect (as it would in fact be reduced modulo $q$).
One way to fix this problem is to use a smaller $n$. For example, setting $n = 20$ would suffice: in that case, $\widehat{x}^{10}$ would result in a (roughly) 200-bit integer, which is lower than the allowed 253 bits. However, what if we told you to compute a higher power, say $x^{15}$? You would need to lower $n$ even further (already for $n = 20$, the result would have 300 precision bits), resulting in a drastic drop of accuracy.
Example. To see why, suppose we compute such a power in-circuit with a small $n$. You would proceed as follows: you quantize the value of $x$ as $\widehat{x} = \lfloor 2^n x \rceil$, compute the power of $\widehat{x}$, and divide the result by the corresponding power of $2^n$. So the result is 🥁… noticeably off: since $\widehat{x}$ now carries only a handful of precision bits, the rounding error gets amplified by the exponentiation, and the in-circuit output visibly deviates from the real result (approximately 0.366).
To resolve the issue, we propose to simply cut the precision at certain intermediate steps. Now, again, suppose we are computing $x^{10}$ with $n = 35$. We can split the function execution into two parts: $x^{10} = x^6 \cdot x^4$. Then, we compute the quantization $\widehat{x} = \lfloor 2^{35} x \rceil$ and compute the first part: $\widehat{x}^6$. If we were to extract the "real" value of this intermediate result, we would divide it by $2^{6n}$ (note that we have no issues here, since we have 210 bits of precision, significantly less than the allowed 253 bits). However, instead of dividing by $2^{6n}$, we are going to divide by $2^{5n}$. Why? Because such a division would correspond (again, approximately) to the quantized value of this intermediate result: $\widehat{z} := \lfloor \widehat{x}^6 / 2^{5n} \rfloor \approx 2^n x^6$. This way, we can compute the final result as $\widehat{z} \cdot \widehat{x}^4$, which would have $5n = 175$ bits of precision. To get the final result, we divide by $2^{5n}$.
Example. Suppose we are computing $x^{10} = x^6 \cdot x^4$ in-circuit with $n = 35$. The quantization of $x = 0.85$ yields $\widehat{x} = \lfloor 2^{35} \cdot 0.85 \rceil$. Then, we compute $\widehat{x}^6$ and cut its precision, yielding the intermediate result $\widehat{z} \approx 2^{35} \cdot 0.85^6$. We finally multiply this value by $\widehat{x}^4$ to get $\widehat{z} \cdot \widehat{x}^4$. To interpret this result back to floats, we divide it by $2^{5n} = 2^{175}$, getting roughly 0.196874, which almost precisely coincides with the "real" value of $0.85^{10}$.
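In general, the rule is: whenever an intermediate value $\widehat{z}$ accumulates $k$ factors of $2^n$ (that is, $\widehat{z} \approx 2^{kn} z$ for the "real" value $z$), we shift away all but one of them:

$$\mathrm{cut}(\widehat{z}) = \left\lfloor \widehat{z} / 2^{(k-1)n} \right\rfloor \approx 2^n z,$$

which is precisely the quantized form of $z$ at the working precision $n$, ready to enter further multiplications.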
The whole idea of our arithmetization scheme is depicted below.
The biggest issue, though, is that the shift operation is very costly to compute inside the circuit (in fact, such an operation costs roughly 255 constraints, i.e., the bitsize of the prime field). We solve this issue by noticing the following:
Computing ReLU and then cutting the precision costs 255 constraints, in contrast to the expected 510. This comes from the fact that both ReLU and the precision cut require the bit decomposition of an integer, and thus we can compute it once and reuse it for both.
This way, we can naturally cut the precision after each activation applied during the neural network inference: we simultaneously apply the non-linearity and take care of the accumulated precision.
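To make this concrete, here is a minimal sketch of the idea (not Bionetta's actual VectorLeakyReLUwithCutPrecision): a single bit decomposition serves both the sign check and the division by $2^n$. It assumes circomlib's Num2Bits is available and glosses over the aliasing subtleties of a near-full-width decomposition that a production circuit must handle.

pragma circom 2.1.6;

include "circomlib/circuits/bitify.circom";

template ReLUwithCutPrecision(n, FIELD_BITS) {
    signal input in;
    signal output out;

    // The single bit decomposition: ~FIELD_BITS constraints.
    component bits = Num2Bits(FIELD_BITS);
    bits.in <== in;

    // Under the "negatives live just below q" convention,
    // the top bit acts as the sign bit.
    signal isNegative <== bits.out[FIELD_BITS - 1];

    // Reassemble only the high bits: this is the floor division by 2^n
    // (the precision cut), reusing the same decomposition for free.
    var shifted = 0;
    for (var i = n; i < FIELD_BITS; i++) {
        shifted += bits.out[i] * (2 ** (i - n));
    }

    // ReLU: keep the shifted value for non-negative inputs, zero otherwise.
    out <== (1 - isNegative) * shifted;
}

A LeakyReLU variant would, instead of zeroing the negative branch, scale it by the constant slope, which is where picking the slope as a negative power of two pays off.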
In fact, there are even more tricks under the hood (for example, you can compute the LeakyReLU function in roughly 256 constraints when the slope $\alpha$ is a negative power of two). Yet, we will describe them more formally in our upcoming paper.
The proposed quantization scheme provides almost perfect accuracy for a large enough $n$ (in practice, values of around 20-35 suffice).
The quantization scheme does not impose any additional constraints besides the precision cuts, which can be applied together with the non-linearity calls.
The quantization scheme allows using a wide range of modern activation functions: ReLU, LeakyReLU, and even functions like hard sigmoid or hard swish, without losing a single percent of accuracy.
Suppose we want to implement a very simple neural network recognizing digits from an image of size $28 \times 28$. For that reason, we specify the following architecture: we flatten the input into 784 neurons, connect them to 64 neurons with LeakyReLU used as the non-linearity (with the slope $\alpha$ being a negative power of two), and finally use 10 output neurons. Simply speaking, each output neuron represents the model's confidence that the corresponding digit is depicted (for example, if the largest output were the one at index three, it would imply that the neural network is most confident that the depicted digit is a three). When this neural network is trained and built in Bionetta🌿 with precision $n = 20$, we get the following Circom circuit:
// AUTOGENERATED
pragma circom 2.1.6;
include "./weights.circom";
include "../bionetta-circom/circuits/zkml/includes.circom";
include "../bionetta-circom/circuits/hash/poseidonAny.circom";
include "../bionetta-circom/circuits/matrix/matrix.circom";
template Core(FIELD_BITS) {
    var p = 20;
    signal input image[1][28][28];
    signal input address;
    signal addressSquare <== address * address;
    signal input threshold;
    signal input nonce;
    signal nonceSquare <== nonce * nonce;
    signal input features[10];
    component hash_features = PoseidonAny(10);
    hash_features.inp <== features;
    signal output featuresHash <== hash_features.out;
    component layer1Comp = Flatten(28,28,1,64,get_dense_W(),get_dense_b());
    layer1Comp.inp <== image;
    component layer2Comp = VectorLeakyReLUwithCutPrecision(64,p,2*p,5,FIELD_BITS);
    layer2Comp.inp <== layer1Comp.out;
    component layer3Comp = Dense(64,10,get_dense_1_W(),get_dense_1_b());
    layer3Comp.inp <== layer2Comp.out;
    component lastLayerComp = VectorLeakyReLUwithCutPrecision(10,p,2*p,0,FIELD_BITS);
    lastLayerComp.inp <== layer3Comp.out;
    component verifier = VerifyDist(10, FIELD_BITS);
    verifier.in1 <== lastLayerComp.out;
    verifier.in2 <== features;
    verifier.threshold <== threshold;
    signal output isCompleted <== verifier.out;
}
component main{public [threshold, nonce, address]} = Core(254);
Notice that instead of simply executing the layer logic (as keras2circom does, for instance), we also cut the precision to handle overflows (see, for example, the VectorLeakyReLUwithCutPrecision template). Our automatic tools spot the places where an overflow might occur, so you do not need to worry much about handling precision yourself when working with Bionetta🌿.
Note also that, in contrast to frameworks requiring weight matrices and biases to be specified as signals, we use them as constants, which are hardcoded in the file weights.circom, which looks… well… as is…
function get_dense_W(){
var dense_W[64][784];
dense_W = [[
0xb75d, 0xad2f, 0x2ebd, 0x5e67,
0xa981, -0xeaba, -0x60e6, 0x16bc,
0x126b, -0x7c81, -0x13ba9, -0xb458,
0x11a1, 0x511b, 0xb117, 0x5c92,
-0xf179, -0x4855, 0x15a9, -0x10b14,
0x2e42, 0x146d, -0x541c,...
]];
return dense_W;
}
// ...Endless parameters specified here...
function get_dense_1_b(){
var dense_1_b[10];
dense_1_b = [
0x12ff2dc000,0x137ab6e00,0x13d35b2000,
0xf43e25000,0xfec98a000,0x14e6c40000,
0x125e2fa000,0x144bc40000,
0x18c2c4c000,0x12979f8000
];
return dense_1_b;
} // Oofff, that's the last one
Now, to pass the input to this neural network, we multiply the input image (formally, a tensor of shape $(28,28,1)$ filled with float32 entries) by $2^{20}$ element-wise and round the result. For example, suppose I pass the following quantized image:
The result of such an operation, if we check the witness, is the following (to make the notation smaller, if an integer exceeded $2^{253}$, we subtracted the prime field order $q$ to get the "sign-aware" format):
To get the "real" result, we divide all values by $2^{20}$:
As can be seen, the neural network successfully identified the digit, predicting a two. In fact, the number of constraints turned out to be just slightly above 19500: such a circuit is very small and can be proven and verified cheaply.