Fully Homomorphic Encryption: Looking Forward

Fully Homomorphic Encryption (FHE) is a form of encryption that enables computations to be run directly on encrypted data, meaning third parties can work with private data without ever being trusted to see it in the clear. Please find a general motivation and introduction to the topic here.

Today, FHE is rarely discussed without an accompanying lament of its downsides: it is painfully slow and unwieldy. That being said, we wish to share some interesting developments we’ve come across that mitigate these pains and broaden adoption, while also introducing some other cool facets of this family of encryption schemes.

We broadly categorize the problem space into four major areas of development: developer experience, compilers, hardware, and cryptography. We’ll describe all four but spend most of our time on developer experience and compilers, as they’re of more interest to us. Before we start, we’ll loosely introduce the cryptography that underpins FHE as a basis for a few of the upcoming points.

Learning-With-Errors (LWE)

Encryption schemes rely on computationally hard problems: problems that computers can’t solve efficiently once they’re big enough. For a familiar example, the security of RSA (the asymmetric encryption scheme we use to protect ourselves online) relies on the assumption that we don’t have efficient ways to factor big numbers. The bigger these numbers get, the more secure the messages encoded with them.

There are multiple FHE schemes, but most modern ones are based on LWE. LWE is a problem that relies on the hardness of solving systems of linear equations that have been perturbed by small errors (often called noise). Unless you already know the solution, it’s hard to isolate the noise and recover the content. Under the hood, it’s a rephrasing of lattice problems, which, in case you were wondering, are believed to remain hard even for quantum computers. We recommend this video for a short intro.

So, for the RSA scheme, we owe our security to the size of the number that has to be factored. What are the parameters for LWE? Here, we not only have to worry about the dimension of our system of equations, we also need to define how large the small errors are allowed to be and how many possible values we want to be able to encrypt per equation in our system. Put together, these parameters present a tradeoff between security, efficiency, and correctness.
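To make these parameters a bit more tangible, here is a toy (and deliberately insecure) Python sketch of LWE-style encryption of a single bit. The parameter values are illustrative placeholders, not real security choices.

```python
import numpy as np

n = 16          # dimension of the system of equations (security grows with n)
q = 3329        # ciphertext modulus
sigma = 1.0     # standard deviation of the "small errors" (noise)
p = 2           # plaintext modulus: how many values each equation can encode

rng = np.random.default_rng(0)
s = rng.integers(0, q, size=n)                 # secret key

def encrypt(bit):
    a = rng.integers(0, q, size=n)             # public random coefficients
    e = int(np.rint(rng.normal(0, sigma)))     # small error term
    b = (int(a @ s) + e + bit * (q // p)) % q  # noisy linear equation in s
    return a, b

def decrypt(ct):
    a, b = ct
    # Remove the a·s part; what remains is bit*(q//p) plus a small error,
    # which rounding away recovers the bit.
    noisy = (b - int(a @ s)) % q
    return int(np.rint(noisy / (q // p))) % p

ciphertext = encrypt(1)
assert decrypt(ciphertext) == 1
```

Bigger dimensions and smaller noise-to-modulus ratios buy more security and correctness, at the cost of larger ciphertexts and slower operations.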

Now, let’s dive back into the areas of R&D:

Developer Experience (DevX):

FHE schemes face adoption barriers on the DevX front because they introduce a new set of development limitations. For example, developers have to be careful when using branching structures with encrypted information: to avoid any information leakage, both branches of an if-else statement have to be evaluated on separate copies of the state and then reconciled homomorphically at the end. There are similar concerns with loops: when does the program stop iterating so as not to leak information about the encrypted condition? Existing approaches include running the loop for a fixed, sufficient number of iterations, storing all the intermediate states, and merging them homomorphically at the end. This can lead to massive increases in computation requirements.
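As a rough illustration of that “evaluate both branches, then blend” pattern, here is a minimal sketch in plain Python. The integers stand in for ciphertexts; a real FHE library would perform the same arithmetic homomorphically on encrypted values.

```python
def encrypted_if(cond_bit, then_value, else_value):
    # cond_bit is an (encrypted) 0/1 flag. Both branches are computed, and the
    # result is blended arithmetically so nothing about cond_bit is revealed.
    return cond_bit * then_value + (1 - cond_bit) * else_value

def bounded_loop(state, keep_going_bits, body):
    # The loop always runs for a fixed number of iterations; the encrypted
    # "keep going" flag decides whether each round's update actually sticks.
    for bit in keep_going_bits:
        state = encrypted_if(bit, body(state), state)
    return state

# With plaintext stand-ins: the loop "really" runs 3 of its 5 fixed iterations.
print(bounded_loop(0, [1, 1, 1, 0, 0], lambda x: x + 1))  # 3
```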

Naive solutions crafted without a deep understanding of these constraints are generally orders of magnitude less efficient than those crafted by experts. On this front, we see DevX improvements coming from libraries that ship high-level, optimized functions for the most common algorithms and computations. The analogy is Python libraries like numpy: numpy provides out-of-the-box, optimized, high-level mathematical functions, and no one needs to care about their underlying C implementation or efficiency tricks like contiguous memory allocation and homogeneous data types; they work and they make code go zoom.
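As a hedged sketch of what that numpy-like experience can look like, here is roughly how Zama’s concrete-python exposes FHE behind an ordinary Python function (exact API details may differ between versions):

```python
from concrete import fhe

@fhe.compiler({"x": "encrypted"})
def times_three_plus_one(x):
    return 3 * x + 1

# The compiler uses representative inputs to pick bit widths and parameters.
circuit = times_three_plus_one.compile(range(16))

# Encrypt the input, run the circuit homomorphically, decrypt the result.
assert circuit.encrypt_run_decrypt(5) == 16
```

The developer writes ordinary-looking code; the heavy lifting of turning it into an efficient encrypted circuit happens underneath.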

Most FHE libraries have bounty programs or partnerships with universities to crowdsource the development of their offerings, but these remain small and fragmented communities.

Compiler Improvements:

Exciting developments to look forward to! One notable area of discussion is the potential of hybrid compilation. In this approach, a single program could be compiled into different cryptographic schemes, depending on which scheme offers the best performance for a given operation. For example, TFHE is well suited to exact computations and heavy use of logic, while CKKS is a better choice for large datasets and approximate arithmetic (i.e. data science tasks).
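A toy sketch of the idea, with a hypothetical and heavily simplified rule for routing each operation to the scheme that suits it best:

```python
def assign_scheme(op_kind):
    # Hypothetical routing rule: exact, logic-heavy operations go to TFHE;
    # bulk approximate arithmetic over large tensors goes to CKKS.
    if op_kind in {"comparison", "table_lookup", "branch_select"}:
        return "TFHE"
    return "CKKS"

program = [("threshold_check", "comparison"), ("feature_matmul", "matmul")]
plan = [(name, assign_scheme(kind)) for name, kind in program]
# [('threshold_check', 'TFHE'), ('feature_matmul', 'CKKS')]
```

A real compiler would base this decision on a cost model rather than a hard-coded rule, but the shape of the problem is the same.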

This type of compilation hinges on the development of two things: efficient transciphering and appropriate Intermediate Representations (IRs).

Transciphering is translating from one scheme to another without decrypting the data in the process. Because each scheme has its own set of parameters and noise structure, it’s complicated to map every characteristic over correctly, and every pairing of schemes has to be handled individually. Transciphering also introduces a lot of complexity and overhead, particularly with big datasets. Finally, the keys used in the process have to be managed securely. It’s early days on this topic, but you can start with existing research such as Hermes, Pegasus, and Chimera.

Secondly, IRs form a stack of lower-level languages that can progressively specialize or generalize, letting the compiler optimize different parts of the codebase into the most suitable schemes. Coupled with transciphering, this gives compilers the potential to orchestrate the transitions and to optimize at different levels and for different schemes. The image below shows how a high-level language program could first be lowered to a generic FHE IR, then into specialized IRs that handle the different schemes, before going down into the hardware-level concerns of the compiler.

How compilers can lower high level languages into optimised hybrid compilations

However, these IRs require standardization across the ecosystem, and while it’s still early, there are development and standardization efforts led by HomomorphicEncryption.org that include most of the key players in the space. You can also check out the HEIR paper, which describes the implementation of a two-level IR stack like the one sketched out above.
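To make the lowering idea more tangible, here is a deliberately simplified two-level IR in Python; the op names and structure are illustrative only, not HEIR’s actual dialects.

```python
from dataclasses import dataclass

@dataclass
class GenericOp:       # level 1: scheme-agnostic FHE IR
    name: str
    operands: tuple

@dataclass
class SchemeOp:        # level 2: scheme-specific IR, ready for hardware lowering
    scheme: str
    name: str
    operands: tuple

def lower(op):
    # Each generic op is rewritten into ops of whichever scheme handles it best.
    if op.name == "fhe.compare":
        return [SchemeOp("tfhe", "bootstrap_compare", op.operands)]
    if op.name == "fhe.mul":
        return [SchemeOp("ckks", "mul", op.operands),
                SchemeOp("ckks", "rescale", op.operands[:1])]
    raise NotImplementedError(op.name)

program = [GenericOp("fhe.mul", ("a", "b")), GenericOp("fhe.compare", ("c", "t"))]
lowered = [scheme_op for op in program for scheme_op in lower(op)]
```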

Another interesting space, though not as significant as optimizing programs and compilers in general, is the selection of cryptographic parameters for an FHE program.

As far as we know, finding optimal parameter combinations is an open problem; experts have developed more of an art than a science for figuring these parameters out. Most compilers take care of this for users, and there’s ongoing work on parameter picking: the Zama compiler, for instance, turns circuits into subgraphs with better-defined requirements and then finds the best aggregate secure-and-efficient set of parameters. Interestingly, these subgraphs can also be used as benchmarks to find more efficient ways to structure circuits. This is described in detail in their blogpost.
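To give a feel for the shape of the problem (this is not Zama’s actual algorithm, just a toy illustration), per-subgraph parameter selection can be thought of as picking the cheapest parameter set that still satisfies each subgraph’s noise and security requirements:

```python
# Candidate LWE parameter sets, ordered from cheapest to most expensive.
# The values and the "correctness" rule below are placeholders for the real
# noise and security analysis a compiler would perform.
CANDIDATE_PARAMS = [
    {"n": 512,  "log_q": 27},   # cheap, little noise headroom
    {"n": 1024, "log_q": 54},
    {"n": 2048, "log_q": 109},  # expensive, lots of headroom
]

def is_secure_and_correct(params, subgraph_depth):
    # Placeholder rule: deeper subgraphs accumulate more noise,
    # so they need a larger ciphertext modulus.
    return params["log_q"] >= 27 * subgraph_depth

def pick_params(subgraph_depths):
    chosen = []
    for depth in subgraph_depths:
        # Pick the cheapest candidate that still satisfies this subgraph.
        params = next(p for p in CANDIDATE_PARAMS
                      if is_secure_and_correct(p, depth))
        chosen.append(params)
    return chosen

# Three subgraphs with different multiplicative depths get different parameters.
print(pick_params([1, 2, 4]))
```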

Hardware:

Hardware is another dimension where speedups are anticipated, owing to the promise of FHE itself, but also benefiting from the work being poured into Zero-Knowledge (ZK) hardware, as there is some overlap in the computations being optimized.

While advancements in hardware can provide performance boosts, they won’t address the foundational inefficiencies that arise from non-expertly designed solutions. Additionally, cryptographic standards for FHE are still evolving, which adds another layer of complexity to the development of dedicated hardware. Without fixed standards, there’s added risk in committing to hardware designs that might become obsolete as the cryptographic landscape changes. Fixing standards will also create trade-offs in program efficiency, as compilers won’t have as much freedom to optimize parameters for each program.

We refer you to HashKey’s article for more on Hardware.

Cryptographic Improvements to schemes:

The world of cryptography has witnessed significant advancements in FHE since its conceptualization in 2009. State-of-the-art schemes gained orders of magnitude in efficiency nearly every year until around 2016 (Gm, CKKS and TFHE!). However, there have been diminishing returns on these improvements, so we’re focusing on the other areas for more immediate mega speed gainz.

--------------------------------------------

In short, for the better part of the last two decades, FHE research and development was theoretical, in large part due to its limitations. We’re starting to see application-focused research and tooling, increasingly accessible content, and more mindshare. At Polymorphic Capital, we're excited about the future of FHE and its applications. We're eager to collaborate with those pushing this frontier. If you're building in this space, reach out to @romdespoel on X/tg!
