ep5 Ingonyama | Making ZKP computation more seamless and democratized

Guest: Omer Shlomovits, CEO and Co-founder of Ingonyama, X (Twitter): @OmerShlomovits

Co-host: Miles, Chinese Ambassador of Ingonyama, X (Twitter): @Sixmiles0825

Host: Franci

Contents

Intro

Why we need hardware for ZKP computation

How to make hardware that caters to different requirements

Related resources

Timeline

00:25 Podcast intro

01:42 Omer and Miles' self-intro

04:49 What is Ingonyama and its primary mission

07:29 Why ZKP hardware acceleration, and how it serves the whole system in terms of software, algorithms and hardware

11:48 What makes ZKP computation different from past PoW mining?

15:25 Talking about ZKP hardware adoption from the perspective of necessity beyond blockchain

22:15 The fusion of academia and industry

27:05 Ingonyama's main focus at the moment

30:06 Intro to the primary solutions for ZKP acceleration and the path that Ingonyama takes

35:35 What is the ICICLE library used for?

40:18 Which underlying ZK primitives and elliptic curves does ICICLE support

42:52 How will Ingonyama stay aligned with zkRollups' developments

49:29 What’s the vision for Ingonyama

Intro

While we all care about how ZK can make our lives better, the tools that make it easier are just as crucial. In past episodes, we talked about proving systems, which are still evolving quickly and make proof generation more efficient. We also talked about ZKML from the perspective of applications and use cases.

In this episode, we move from software and algorithms to hardware: how hardware makes all these ZK things more seamless and democratized. Our guest is Omer, co-founder and CEO of Ingonyama, joined by co-host Miles, the Chinese Ambassador of Ingonyama, who helps dive deep into some of the questions.

Why we need hardware for ZKP computation

Why ZKP hardware acceleration, and how it serves the whole system in terms of software, algorithms and hardware

First, I guess as a general rule in history, for a technology to break into the mainstream and become successful, you usually need software, hardware and algorithms.

If you look at AI today, which is much more mature than we are, you can see that the way we measure or even talk about AI is in terms of a system.

We start from the hardware: you need an NVIDIA GPU, you measure performance on that GPU, and you know how to run basically all AI models on the GPU. Therefore you need to build software for that, and obviously, to improve and optimize, you need better algorithms.

Now, the fact that this is what we eventually need doesn't mean that this is how it starts. With many of these technologies, like zero knowledge, it starts with algorithms: a few people in academia, 40 years ago, managed to come up with this idea.

From that point, algorithms kept improving, showing us that the complexity can get lower and lower, until we reached the point where we can understand which tradeoffs we want to make.

When we actually approach industry and try to make the first breakthrough in a production system, the first thing you do is implement it naively on whatever hardware you have, and the most accessible one is the CPU. But the CPU is meant for general-purpose computation, right?

Most of this cryptography is finite field arithmetic, which is a different kind of beast. There are simply better options.

It's not that you cannot run it on a CPU, but you can do much better on hardware that was actually built and designed to support this type of mathematics.
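To make that "different kind of beast" concrete, here is a minimal, illustrative Rust sketch (not Ingonyama code) of a single prime-field multiplication, the operation ZK provers repeat billions of times. A CPU handles it fine, but hardware with dedicated wide-multiply-and-reduce pipelines can keep many of these operations in flight at once.

```rust
// A minimal sketch: one 64-bit prime-field multiplication.
// ZKP workloads are dominated by huge numbers of operations like this,
// which is why hardware built for modular arithmetic pays off.

const P: u64 = 0xffffffff00000001; // the "Goldilocks" prime, 2^64 - 2^32 + 1

fn mul_mod(a: u64, b: u64) -> u64 {
    // Widen to 128 bits, multiply, then reduce modulo P.
    ((a as u128 * b as u128) % P as u128) as u64
}

fn main() {
    println!("{}", mul_mod(123_456_789, 987_654_321));
}
```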

In terms of where the industry is: we started with software based on the CPU, but now we've already been able to observe and measure results on different types of hardware, so we have a better understanding of how hardware can actually get you more performance.

That's maybe more of the technical side, but I guess part of the question here is also: why do we even need fast ZKPs? It connects in a way to what I said before, which is about removing trust. In essence, technologies like zero-knowledge proofs and other privacy-enhancing technologies are meant to be used by end users, by us, to make statements without centralized parties and to preserve privacy.

For that, you want to achieve the same user experience that you have with Web2 and the Internet. If I need to make a transaction and I want it to be private, but it's going to cost me a lot of money, take a lot of time, or require some very complex hardware and setup, it's something I'll probably avoid doing. So part of the reason we use hardware is to bring these requirements to levels very similar to what we have today with Web2.

What makes ZKP computation different from past PoW mining?

At least for this type of cryptography, it's very clear that there are bottlenecks. Some of them are compute bottlenecks, others are memory or network bottlenecks, but they can be solved using hardware; this is the extension of Moore's law. If we want to keep improving the technology, at some point we need to align with this kind of Moore's law and start improving the hardware so we can run ZKPs more efficiently.

Now, you can actually use zero-knowledge proofs as a way of doing proof of work. There are papers showing that, and there have even been experiments with blockchains like Aleo that have done something similar.

In terms of exploration, some mining companies have a ton of GPUs from Ethereum, for example (not Bitcoin miners, because their hardware is very specific), and they can already explore what is possible today with ZKPs. They might also be early adopters, and maybe the first use case for zero knowledge will come from this type of proof-of-work consensus.

Eventually it will lead to even better hardware designs. If you think about it more broadly, zero-knowledge proofs do not stop there; they can be used for many other things. For example, there might be a big market around simply adding ZKPs as features in AI products. Imagine an AI company that now wants this kind of verifiability using zero-knowledge proofs.

So in the future we might see different types of data centers tailored for ZKPs, and part of what we do is try to enable this: first, to think and ideate about these data centers and what they would look like, and also how they can be put to good use by end users, so that they are accessible to anyone in a trustless manner.

So I guess the short answer is that there are similarities; you can probably replace PoW with ZKPs, but ZKPs have far bigger potential than that.

Talking about ZKP hardware adoption from the perspective of necessity beyond blockchain

We've been doing that in the company since very early on, exploring different verticals inside and outside the space, because what we can do is take any type of hardware, run zero-knowledge proofs on that hardware, and bring you the full solution.

When you look at it through that lens, it has allowed us to have conversations with different types of industries. To give an example, we went into the camera and IoT industry.

You have some IoT devices that are meant for surveillance; you want to monitor some perimeter. Usually one way to do it is by collecting data points from these IoT devices, doing some processing on the device, and then sending everything to some centralized server. Obviously, there is an issue here, because you are sending a lot of data that is basically noise, or nothing, to this data center.

Now, you still need this logic running in the data center, because there's no other way, so you do need to get this data. But what if we could have this zero knowledge happening at the edge? Let's say the device is running some machine learning models that detect events. What if we run this ZK at the edge, basically showing that nothing happened for some period of time? Then, instead of sending all the raw data, you just send the proof, which as you know is very small; it's a mathematical proof that nothing happened during that time.

It's something we actually explored with companies building this type of camera and chips doing machine learning on IoT cameras. So I think that saving money will probably be one of the leading factors in how this technology gets adopted.

I can give you another example that is similar but different. As you know, there is now a war between Israel and Gaza, and I think people realize that much of the actual war is a psychological war: how do you shape public opinion on which side is winning, whether the war can continue, who gets criticized, and so on.

So where does ZKP fit in? I think that in the future mathematical proofs are going to be crucial, because you want sensors to be able to prove that the data they send is authentic. When I try to make a claim about something that happened or did not happen, I now have a way to back it up mathematically: yes, this is an image, it was taken from that sensor, and it was not altered. Again, ZKP would emerge out of necessity, because eventually this is, in a way, a weapon that helps you defend yourself in a war.

There are many other very good examples we've covered in different fields since we started the company, around gaming, the metaverse and AI. There are a few others in the pipeline: banking, identity, and privacy in advertising. Basically, we could talk on and on about use cases, but I want to stop here with these two examples, which obviously involve hardware.

You cannot escape the fact that eventually ZKP needs to run on some hardware. Something is going to be running on some type of hardware, and eventually we need to guide which hardware is better and more suitable, and how we develop hardware that supports this type of computation and makes it a feasible use case.

How to make hardware that caters to different requirements

Ingonyama's main focus at the moment

I do think that our industry is now focused on building what we call middleware. It might sound like a bad thing (where are the applications?), but it's also very important and critical to build tools that are accessible to developers and allow them to, as you say, remove complexity and have a developer experience that is very intuitive.

I see quite a lot of effort around that; for example, Starkware open-sourced their prover. That doesn't make it easy for others by itself, but it opens things up for many dev teams to build on top and create the tooling that allows others to enjoy this sophisticated prover. This stuff becomes easier and easier with time, and it's very nice to see.

I guess our focus is, in a way, aligned with all of that, meaning that we need some time to understand where exactly we stand as an industry in terms of hardware adoption.

It was very clear that today, when a new team of developers is building a protocol, usually they take an existing framework, start tweaking it and optimizing algorithms, and there is still a lot of room to optimize in software. But they're still missing this part of simply running on the right hardware; it's not even about optimizing. Imagine writing an AI paper today, publishing it, and using a CPU for your benchmarks and performance numbers.

You can't imagine it; no one does that. We need to do the same with ZK right now. I think at this point it's just about getting this standardization, having a good way even to measure. What does that mean? I see even hardware companies making these mistakes: how do I compare between the different appliances and solutions? So having these best practices in place, allowing teams to deploy on the right hardware, is our main focus at the moment.

Intro to the primary solutions for ZKP acceleration and the path that Ingonyama takes

I would say that today, we still do not have ASICs for zero knowledge. There are companies in the world that are really good at taking an algorithm, taping out an ASIC, packaging it, and so on. Eventually, this is probably going to be one type of business.

What we do have today is access to quite a lot of different hardware devices, and for some of them we are already starting to understand how they can work with zero-knowledge proofs.

The CPU is obviously the most battle-tested, but what happens when you try to take ZKPs into the cloud? Today, with cloud instances, you have all sorts of setups and configurations. Can ZKPs run efficiently in the cloud? That's one path you might explore, and we talk with cloud providers, trying to better optimize the types of instances that can actually support ZKPs. Take AWS: it's challenging, and it's not going to give you the best possible results. But we do want to eventually make this accessible to anyone, so having it on a cloud instance might be a good solution.

The same goes, by the way, for data centers, which can be configured in different ways, and it's still a good question what exactly that configuration should be. Going down to the chip level, obviously the two most accessible categories are GPUs and FPGAs, and here we do start to have a sense of how they work.

At least in our company, we started with building IP for hardware, for FPGAs and ASICs, and started to play with how MSM, NTT, and all of these problems actually play out on hardware. That was one important insight and would eventually lead to a design we think would be relevant for a variety of ZK use cases. When we started, the idea was to put an MSM and an NTT as IP blocks in the chip, maybe some hash function, and you should be covered in many ways; but that's not good enough.

So we've shifted our design toward GPUs. We are also exploring GPUs running SNARKs, STARKs, and some kinds of hybrid solutions.

What happens when you want to run it on the GPU in your Mac? What happens when you want to run it in the browser, or on mobile? There are still quite a lot of open hardware questions here and directions to go.

We're somewhere on this spectrum, trying to find what's going to work best for most of the use cases today and to support the companies building ZKPs.
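As a concrete illustration of one of the kernels mentioned above, here is a minimal, textbook recursive NTT over the Goldilocks field in Rust. It is a sketch for intuition only, not Ingonyama's hardware or GPU implementation; the regular, highly parallel butterfly structure is exactly what GPUs, FPGAs, and ASICs exploit.

```rust
// A minimal sketch: recursive radix-2 NTT over the Goldilocks field,
// one of the core ZK prover kernels alongside MSM.

const P: u128 = 0xffffffff00000001; // Goldilocks prime: 2^64 - 2^32 + 1

fn pow_mod(mut base: u128, mut exp: u128) -> u128 {
    let mut acc = 1u128;
    base %= P;
    while exp > 0 {
        if exp & 1 == 1 { acc = acc * base % P; }
        base = base * base % P;
        exp >>= 1;
    }
    acc
}

/// Cooley-Tukey NTT: `a.len()` must be a power of two and
/// `omega` a primitive `a.len()`-th root of unity mod P.
fn ntt(a: &[u128], omega: u128) -> Vec<u128> {
    let n = a.len();
    if n == 1 { return a.to_vec(); }
    let even: Vec<u128> = a.iter().step_by(2).copied().collect();
    let odd: Vec<u128> = a.iter().skip(1).step_by(2).copied().collect();
    let e = ntt(&even, omega * omega % P);
    let o = ntt(&odd, omega * omega % P);
    let mut out = vec![0u128; n];
    let mut w = 1u128;
    for k in 0..n / 2 {
        let t = w * o[k] % P;                  // butterfly: twiddle times odd part
        out[k] = (e[k] + t) % P;
        out[k + n / 2] = (e[k] + P - t) % P;
        w = w * omega % P;
    }
    out
}

fn main() {
    let n = 8u128;
    // 7 is commonly used as a generator of the Goldilocks multiplicative group,
    // so 7^((P-1)/n) should be a primitive n-th root of unity; the asserts verify it.
    let omega = pow_mod(7, (P - 1) / n);
    assert_eq!(pow_mod(omega, n), 1);
    assert_ne!(pow_mod(omega, n / 2), 1);
    println!("{:?}", ntt(&[1, 2, 3, 4, 5, 6, 7, 8], omega));
}
```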

What is the ICICLE library used for?

Basically, let's say you have your own ZK-based product and it runs on a CPU. You have your customer base, your users, and eventually you're going to have a plan to scale. You want to make it faster, so you need to move to hardware: you basically need to let go of the CPU and start running on specific hardware.

ICICLE today supports NVIDIA GPUs, and if you already have an existing ZK protocol working, it allows you as a developer to port it to the GPU. You basically move one part at a time to run on the GPU instead of the CPU, until you can run everything on the GPU, resulting in an immediate performance boost.
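A rough sketch of this "port one part at a time" idea is below. This is not ICICLE's real API; the GPU arm is a hypothetical placeholder marking where a call into a GPU library such as ICICLE would go once that kernel has been ported, while the CPU path keeps working as before.

```rust
// Illustrative only: incremental migration of prover stages to a GPU backend.

#[derive(Clone, Copy)]
enum Backend {
    Cpu,
    Gpu, // e.g. an NVIDIA card driven through a GPU acceleration library
}

/// One prover stage (a toy stand-in for an MSM or NTT call), dispatched per backend.
fn msm_stage(scalars: &[u64], backend: Backend) -> u64 {
    match backend {
        // The existing CPU code path stays as the reference implementation.
        Backend::Cpu => scalars.iter().copied().fold(0u64, u64::wrapping_add),
        // Hypothetical GPU path: copy inputs to device memory, launch the
        // kernel, copy the result back. Left unimplemented in this sketch.
        Backend::Gpu => unimplemented!("GPU kernel not wired up in this sketch"),
    }
}

fn main() {
    // Start with everything on the CPU; flip stages to Backend::Gpu one by one
    // as each kernel is ported, benchmarking after every step.
    let witness = vec![1u64, 2, 3, 4];
    println!("{}", msm_stage(&witness, Backend::Cpu));
}
```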

Eventually it will also allow you to work with other types of hardware, but today it's just this GPU, which is very accessible and easy to program. Also, if you are a researcher, you don't want to repeat the situation of the AI paper benchmarked on a CPU; I do expect that papers will start to build on hardware and benchmark on hardware.

That's kind of the way forward. You can take components from ICICLE, combine them like Lego bricks, build a new protocol out of that, and then benchmark it. The API should be very intuitive and simple.

Again, it's meant for developers working in Rust, Go, and so on, who should feel at home with it without dealing with the hassle under the hood. It should be very simple. This library is aimed at any developer that uses ZK and basically allows you to run on GPUs.

We started by supporting SNARKs, in a sense, and even before that, just the primitives like MSM, NTT, and hash functions for Merkle trees.

Going forward, we are going to support more primitives, STARK provers, and more protocols, and make the API better. For us, the API is the most important part.

How will Ingonyama stay aligned with zkRollups' developments

zkRollups have been one of our main focus points for a long time. If you look at our investors, we have Polygon, Scroll, Starkware, and zkSync, and obviously we're also on very good terms and good friends with Taiko and other L2s.

Therefore, we've been tracking these questions and trying to figure out or answer them ourselves for quite some time. zkRollups often differ from one another, but there are so many similarities, so we do have projects with most of them at different levels of maturity.

Scroll, as you mentioned, launched mainnet. Now, you need to separate launching mainnet from launching an incentivized, decentralized prover network.

Today, proving with Scroll is still centralized. However, it is something we are also participating in, so we do work with them on better understanding the performance their own native prover provides, and how much we differentiate from that.

We started developing our own Scroll prover some time ago, and we have some promising results. That's one approach: just by releasing our prover, we can extend who can actually use and run Scroll provers.

We might also have some ideas around prover optimization that Scroll can use in their eventually open-sourced prover, or that we can simply deploy with some of our partners, like mining companies.

We need to see how it evolves and which stakeholders are actually interested in participating in this network, and eventually understand their setup requirements, because I can build a solution and an algorithm, but it won't be accessible to anyone if the requirements are too high.

That's kind of our approach: we first partner with people that have access to hardware.

At this point they are more like idealists that want to participate in the network, and we do our best to support them by whatever means necessary, all the way from tailoring solutions to their hardware to open-sourcing and contributing to the overall improvement of prover efficiency.


Related resources

Ingonyama official website

Ingonyama Chinese Community

ICICLE API - Behind the Scenes of GPU and ZK Provers, by Jeremy Felder

Silicon Safari: A Survey from the Humble CPU to exotic silicon, by Tony Wu

ZK9 Hardware Acceleration & ZK Panel


Wen Building aims to engage in in-depth conversations with Web3 builders. It explores the cutting-edge technological developments in Web3, analyzes the mechanics of blockchain products, and observes industry trends and challenges.

Homepage: https://labs.antalpha.com/podcast/

Notion: https://www.notion.so/antalpha/Wen-Building-Homepage-e052297983e944f8a80f9e00f1871093

Youtube: https://www.youtube.com/@Antalpha_Labs/podcasts

Contacts:

Donation!

ETH Address: 0x18226b84677a7a59D0A498d428feE9208105D0F7
