Introducing Orion: A verifiable, extensible framework to superpower web3 & AI

As the integration of machine learning (ML) models across industries grows more intricate, firms increasingly rely on APIs and services from providers like Amazon, Google, and Microsoft to deploy complex ML systems. While such trust assumptions suffice for some use cases, this modus operandi raises a critical concern about the integrity and validity of ML inference. As the capabilities and use cases of ML models in production grow, authenticating the source of an inference will become ever more relevant. We therefore need mechanisms to trustlessly prove and verify the sources and traces of inference. We call this Validity ML.

Validity ML leverages validity proofs such as STARKs, which enable verification of the correctness of a computation. By deploying such proof systems in machine learning applications, we gain the ability to validate an ML model's inference, i.e., to confirm that a specific input produced a certain output with a given model.

Specifically, Validity ML addresses this by enabling the validation of private data with public models or verifying private models with public data. Concurrently, the necessity of creating and maintaining open-source frameworks that empower model inference, like HuggingFace, becomes increasingly crucial. Giza aims to develop a transparent and trustless ML ecosystem by enabling a lean protocol for the deployment and use of verifiable ML models.

As we build towards an open ecosystem of AI, we are guided by the ethos of open source and the coordination vision of Web3. Hence, we are thrilled to present Orion, a Cairo library for Validity ML. Leveraging Cairo and ONNX's capabilities, Orion establishes a transparent, verifiable, and wholly open-source inference framework, readily accessible for community contribution and use.

Orion: a new ONNX Runtime built in Cairo 1.0

What is ONNX Runtime? - ONNX Runtime is a cross-platform machine-learning model accelerator, with a flexible interface to integrate hardware-specific libraries. ONNX Runtime can be used with models from PyTorch, TensorFlow/Keras, TFLite, scikit-learn, and other frameworks.

Orion offers a new ONNX runtime built in Cairo 1.0. Its purpose is to provide a runtime for verifiable ML model inference using STARKs.

Orion leverages Cairo to guarantee the reliability of an inference, providing developers with a user-friendly framework to build complex and verifiable machine learning models. We invite the community to join us in shaping a future where trustworthy AI becomes a reliable resource for all.

Orion APIs

Orion provides three APIs: Operators, Numbers, and Performance.


Operators

The Operators API provides a comprehensive set of standard mathematical functions and operations designed for computing neural network models, compatible with the ONNX standard.

It introduces the Tensor type, which includes essential linear algebra operations such as matrix multiplication (matmul), as well as neural network functions such as softmax and linear layers. With this API, developers have access to a wide range of functions for efficient and effective neural network computation.
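To make the building blocks concrete, here is a minimal Python sketch of the two operations named above, a linear layer followed by softmax. This is a conceptual illustration only; Orion's actual API is written in Cairo and its function signatures differ.

```python
import math

def softmax(logits):
    """Numerically stable softmax: shift by the max before exponentiating."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def linear(x, weights, bias):
    """Linear layer y = Wx + b, with W given row by row."""
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b
            for row, b in zip(weights, bias)]

# A tiny 2-input, 2-output layer followed by softmax.
x = [1.0, 2.0]
W = [[0.5, -0.25],
     [0.1,  0.3]]
b = [0.0, 0.1]
probs = softmax(linear(x, W, b))
print(probs)  # two probabilities that sum to 1
```

In a verifiable setting, running this same computation inside a STARK-provable runtime is what lets anyone check that the claimed output really came from these inputs and weights.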

You can check the full list of implemented operators here.


Numbers

This API extends Cairo's built-in number capabilities with signed-integer and fixed-point implementations. These additions give developers a wider range of numeric data types, allowing for more accurate calculations in their applications.
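Fixed-point numbers represent fractional values using integers by reserving a fixed number of bits for the fractional part. The sketch below illustrates the idea in Python with a Q16.16-style layout (16 fractional bits); the format name and helper functions are illustrative, not Orion's API.

```python
FRAC_BITS = 16
ONE = 1 << FRAC_BITS  # 65536: the fixed-point representation of 1.0

def to_fixed(x: float) -> int:
    """Encode a float as a Q16.16 fixed-point integer."""
    return round(x * ONE)

def to_float(f: int) -> float:
    """Decode a Q16.16 fixed-point integer back to a float."""
    return f / ONE

def fp_mul(a: int, b: int) -> int:
    # The raw product carries 32 fractional bits; shift back down to 16.
    # (Right shift truncates; fine for the non-negative values used here.)
    return (a * b) >> FRAC_BITS

a = to_fixed(1.5)   # 98304
b = to_fixed(2.25)  # 147456
print(to_float(fp_mul(a, b)))  # 3.375
```

Integer-only arithmetic like this matters in a proving context, where computations over field elements are far cheaper to prove than emulated floating point.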


Performance

This API contains a set of functions to increase the performance of your model. Memory efficiency and computational speed matter in machine learning applications, which is why our initial release supports 8-bit quantization. This feature reduces the memory footprint and accelerates the computation of your models, letting you build faster, more efficient ML applications with minimal loss of accuracy.
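The post does not specify which quantization scheme Orion uses; as an illustrative sketch, symmetric per-tensor 8-bit quantization works like this (function names are hypothetical, not Orion's API):

```python
def quantize(xs, num_bits=8):
    """Symmetric linear quantization: map floats to signed ints via one scale."""
    qmax = 2 ** (num_bits - 1) - 1  # 127 for int8
    scale = max(abs(x) for x in xs) / qmax
    q = [max(-qmax, min(qmax, round(x / scale))) for x in xs]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the quantized integers."""
    return [qi * scale for qi in q]

weights = [0.9, -1.27, 0.05, 0.0]
q, scale = quantize(weights)
print(q)                     # small integers in [-127, 127]
print(dequantize(q, scale))  # approximately the original weights
```

Each float is replaced by a small integer plus one shared scale factor, which is what shrinks the memory footprint; the trade-off is a bounded rounding error per value.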

Emphasizing Community and Open Source Principles

At Giza, developing our AI framework is not just product creation; it is a stride towards a grand vision of transparent, reliable, and community-driven artificial intelligence. We are devoted to turning our belief in a shared, inclusive, and universally accessible AI future into reality.

We believe the collective intelligence of a community surpasses that of any single entity, and we strive to create an environment that encourages participation from everyone, from novices to experts. This commitment extends beyond code-writing to cultivating a diverse community where every voice and idea matters.

We've chosen an open-source model for our framework, reflecting our commitment to transparency, collaboration, and shared ownership. We invite all to partake in the developmental process, review our work, and contribute unique insights. Providing support to all contributors is an essential part of our approach. We appreciate all contributions, whether coding, reporting bugs, proposing new features, or enhancing documentation. We recognize that building this future is a collective task, and we are committed to supporting everyone toward this shared aim.

To start contributing, you can visit our website, where you'll find a wealth of resources, including our official documentation and repository. Join our Discord channel, where you can connect with like-minded individuals, exchange ideas, ask questions, and keep up-to-date with the latest developments. We warmly invite you to become part of our journey toward a more transparent, reliable, and community-driven AI. At Giza, we envision a future where AI extends beyond mere algorithms and computation, one that fundamentally revolves around community and collective learning. Join us to build the future of machine intelligence together, in the open.
