Blockchain technology has been heralded as the foundation of a new decentralized internet, now referred to as Web3. But as this "world computer" grows, it’s becoming increasingly clear that something fundamental is missing: a high-performance memory layer.
Memory, the component that enables computers to store, access, and update data efficiently, is a key element of every computer on earth, from your laptop to a supercomputer. The basic design, first outlined by John von Neumann, has been the backbone of computing for decades: a memory bus facilitates data exchange between the CPU and RAM, while RAM provides temporary storage for the operating system, software, and data in use, enabling programs to run efficiently.
On the surface, blockchains do resemble traditional computers. We have operating systems like the EVM and SVM, running on decentralized nodes and powering a growing ecosystem of applications. But dig deeper, and the gaps start to show: most of the top-level computing parts are recognizable, while the memory unit is not only unrecognizable but also inefficient.
Instead of a proper memory architecture, blockchains rely on a mashup of different best-effort approaches, creating critical bottlenecks and costly operations:
Redundancy: Gossip networks have replaced the memory bus, but they propagate the same data to multiple nodes redundantly, wasting bandwidth, slowing block confirmation, and increasing overall system cost (the toy simulation after this list makes the waste concrete).
Congestion: Inefficient networking stacks and state access cause unpredictable delays and cost surges in transaction processing.
State Bloat: Full nodes must store all state data permanently, making retrieval costly and complex.
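To see how quickly that redundancy adds up, here is a small, purely illustrative Python simulation of flooding gossip, with no relation to any specific chain's networking code. Every node forwards a message to all of its peers the first time it receives it, so the network pays for far more transmissions than the minimum needed to reach everyone.

```python
# Toy model of flooding gossip: each node forwards a message to all peers
# on first receipt. Illustrative only; parameters are arbitrary assumptions.
import random
from collections import deque

def simulate_gossip(num_nodes=100, peers_per_node=8, seed=1):
    random.seed(seed)
    # Each node picks a random set of peers to forward to.
    peers = {
        n: random.sample([m for m in range(num_nodes) if m != n], peers_per_node)
        for n in range(num_nodes)
    }
    received = {0}              # node 0 originates the message
    transmissions = 0
    queue = deque([0])
    while queue:
        node = queue.popleft()
        for peer in peers[node]:
            transmissions += 1          # every forward consumes bandwidth...
            if peer not in received:    # ...but only the first copy is useful
                received.add(peer)
                queue.append(peer)
    return transmissions, len(received)

sent, reached = simulate_gossip()
print(f"{sent} transmissions to reach {reached} nodes "
      f"(minimum needed: {reached - 1})")
```

With these arbitrary parameters, roughly eight hundred messages are sent to do the work of ninety-nine deliveries, and the overhead grows with the fanout each node uses.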
Today, most blockchain networks use gossip protocols to propagate data, broadcasting everything to everyone, redundantly. Bitcoin and Ethereum 1.0 relied on this approach, but it didn't scale. Ethereum 2.0 had to rethink it with new wire protocols to cut down on message overload. Avalanche likewise pushed a software update to reduce "excessive gossip" between validator nodes that was straining its network. Solana, for its part, sidestepped full history retention by storing only recent state and relying on professional "warehouse nodes" to archive older data.
Today's workarounds improve data accessibility, but they do not solve the fundamental challenge: a memory architecture that combines fast retrieval, atomic updates, and real-time interaction. Existing technologies offer glimpses of this potential, yet none delivers the comprehensive foundation required for a truly scalable world computer.
Despite years of R&D, this fundamental piece of architecture — memory — has yet to be solved. Until now.
Optimum is building the missing memory layer for Web3.
It’s the world’s first decentralized, high-performance memory infrastructure for any blockchain — designed to scale data access, reduce network strain, and power the next generation of dApps. Powered by Random Linear Network Coding (RLNC) — a proven, MIT-developed data encoding technique — Optimum turns sluggish, redundant networks into fast, efficient, scalable systems.
With Optimum, blockchains gain a memory bus and RAM that rival the performance of modern computing. At its core, Optimum is building a provably optimal memory infrastructure that transforms blockchains into high-speed, scalable computing networks. The architecture is modular, permissionless, and easy to integrate via API.
OptimumP2P - A pub-sub protocol designed to replace outdated gossip networks with smart, coding-based data propagation. It dramatically cuts down redundancy and improves throughput — leading to higher APY for validators, faster transactions, and smoother user experiences for dApps, DEXs and more.
Optimum deRAM - A decentralized RAM layer that ensures Atomicity, Consistency, and Durability (ACD). deRAM gives applications real-time read/write access to blockchain state, enabling fast, cheap storage and access. This is what unlocks the next wave of latency-sensitive, on-chain use cases: trading, gaming, AI, and social.
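As a rough intuition for those guarantees, here is a deliberately simplified, hypothetical sketch, not Optimum's actual deRAM interface: a versioned key-value store in which a write applies atomically only if the caller saw the latest version, and is persisted to a log before it becomes visible.

```python
# Hypothetical illustration of Atomicity, Consistency, and Durability (ACD)
# in a memory layer. This is a single-process toy, NOT Optimum's deRAM API.
import json

class VersionedStore:
    def __init__(self, log_path="state.log"):
        self.log_path = log_path
        self.state = {}                       # key -> (version, value)

    def read(self, key):
        return self.state.get(key, (0, None))

    def write(self, key, value, expected_version):
        """Atomic compare-and-swap: apply the write only if no one else
        updated the key since we read it."""
        version, _ = self.state.get(key, (0, None))
        if version != expected_version:
            return False                      # reject: state changed underneath us
        # Durability: persist the update to an append-only log before applying.
        with open(self.log_path, "a") as log:
            log.write(json.dumps({"key": key, "version": version + 1,
                                  "value": value}) + "\n")
        self.state[key] = (version + 1, value)
        return True

store = VersionedStore()
version, _ = store.read("balance")
assert store.write("balance", 100, expected_version=version)
```

A decentralized memory layer must deliver the same guarantees across many independent nodes, which is the hard problem deRAM is built to solve.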
At the heart of Optimum lies Random Linear Network Coding (RLNC), a breakthrough in data coding developed by MIT Professor Muriel Médard, Optimum’s co-founder and CEO.
Refined over two decades, RLNC has garnered significant recognition for its transformative impact on data networks: the work was honored with the IEEE Koji Kobayashi Computers and Communications Award in 2022 and contributed to Muriel Médard's election to the U.S. National Academy of Engineering.
Data coding has been around for decades, and many iterations of it are in use in networks today. RLNC is a modern approach to data coding that is well suited to decentralized computing. The scheme transforms data into coded packets for transmission across a network of nodes, ensuring high speed and efficiency.
It’s a mathematical, provably optimal way to handle memory for distributed systems — and Optimum is the first to bring it on-chain.
High-performance Web3 memory powered by RLNC enables faster data propagation, efficient storage, and real-time access, making it a key building block for scalability and efficiency.
Blockchains rely on data moving efficiently across a network of nodes. RLNC is an advanced encoding technique that transforms data into encoded fragments, allowing nodes to recover information efficiently from a subset rather than receiving everything.
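For readers who want to see the mechanics, the sketch below is a minimal, self-contained illustration of RLNC in Python. It shows the general technique rather than Optimum's implementation: production systems typically operate over GF(2^8) with optimized finite-field arithmetic, whereas this toy uses the prime field GF(257) so the math stays readable. The sender emits random linear combinations of the original chunks, and a receiver can reconstruct the message from any k linearly independent coded packets, regardless of which ones arrive or in what order.

```python
# Minimal RLNC sketch over the prime field GF(257). Illustrative only;
# real systems use GF(2^8) and optimized encoders/decoders.
import random

P = 257  # field size; data symbols are integers in [0, P)

def encode(chunks, num_packets):
    """Emit coded packets, each a random linear combination of all chunks."""
    k, length = len(chunks), len(chunks[0])
    packets = []
    for _ in range(num_packets):
        coeffs = [random.randrange(P) for _ in range(k)]
        symbols = [sum(c * chunk[i] for c, chunk in zip(coeffs, chunks)) % P
                   for i in range(length)]
        packets.append((coeffs, symbols))
    return packets

def decode(packets, k):
    """Recover the k original chunks from k linearly independent packets
    via Gaussian elimination over GF(P)."""
    rows = [list(coeffs) + list(symbols) for coeffs, symbols in packets]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][col] != 0), None)
        if pivot is None:
            raise ValueError("packets are linearly dependent; fetch one more")
        rows[col], rows[pivot] = rows[pivot], rows[col]
        inv = pow(rows[col][col], P - 2, P)      # modular inverse (Fermat)
        rows[col] = [(v * inv) % P for v in rows[col]]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [row[k:] for row in rows[:k]]

# Split a message into k = 4 chunks and generate 8 coded packets.
chunks = [[random.randrange(P) for _ in range(16)] for _ in range(4)]
packets = encode(chunks, num_packets=8)
# Any 4 independent packets suffice; we take 5 so dependence is negligible.
recovered = decode(random.sample(packets, 5), k=4)
assert recovered == chunks
```

Because any sufficiently large subset of coded packets will do, a node no longer needs every packet from every peer, which is what lets coding-based propagation shed the redundancy of flooding gossip.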
Optimum's products, OptimumP2P and decentralized Random Access Memory (deRAM), deliver benefits across the entire blockchain ecosystem:
For L1 and L2 blockchains: Faster block propagation, reduced bandwidth consumption, and optimized storage
For validators: Accelerated data propagation, lower operational costs, higher APY and MEV income
For dApp developers: Improved transaction relay and prioritization, enabling latency-, throughput-, and cost-sensitive apps
For end users: Faster transactions and more responsive interfaces, improving the overall user experience
Behind Optimum is a world-class founding team with deep expertise in distributed systems, cryptography, and high-performance computing.
Prof. Muriel Médard (Co-Founder) Twitter | LinkedIn
NEC Chair of Software Science and Engineering at MIT
Co-inventor of Random Linear Network Coding (RLNC)
Ranked #1 globally in network coding citations (Google Scholar)
Member of the U.S. National Academy of Inventors, the U.S. National Academy of Engineering, the American Academy of Arts and Sciences, and the German National Academy of Sciences
Dr. Kishori Konwar (Co-Founder) LinkedIn
Former Senior Engineer & Scientist at Meta, MIT PostDoc
Previously a Quant Developer at Goldman Sachs
Deep expertise in distributed systems and fault-tolerant computing
Kent Lin (Co-Founder) Twitter | LinkedIn
Former Partner at GSR Ventures, a $4B global VC
Harvard MBA, President of Harvard Blockchain, Co-founder of Plug and Play Crypto
Founder of McKinsey Crypto DAO, with a focus on blockchain infrastructure and strategy
Prof. Sriram Viswanath
B.Tech. from IIT Madras, M.S. from Caltech, and Ph.D. from Stanford, all in electrical engineering
Recipient of the NSF CAREER Award and IEEE IT/ComSoc Best Paper Award
Renowned for his work on coding theory, data compression, and distributed algorithms
Prof. Nancy Lynch
Former NEC Chair of Software Science and Engineering at MIT (predecessor to Muriel Médard)
Co-authored the foundational FLP impossibility result on distributed consensus in 1985
Co-author of the DLS algorithm (1988), a foundational precursor to modern consensus systems like Tendermint
Optimum recently announced the successful closure of its $11M seed round, led by 1kx, with participation from top-tier investors including Robot Ventures, CMT Digital, Spartan, Finality Capital, SNZ, Triton Capital, Big Brain, CMS, LongHash, NGC, Animoca, GSR, Caladan, Reforge, and more.
Optimum's angel investors include many renowned builders and investors, such as Abhijeet Mahagaonkar (CTO, Polychain), Arthur Cheong (Founder CEO CIO, DeFiance Capital), Gracy Chen (CEO, Bitget), Robinson Burkey (Co-founder CCO, Wormhole), Sandeep Nailwal (Co-founder, Polygon), Sankha Banerjee (Chief Economist, Babylon), Saurabh Sharma (GP, Jump Crypto), Tal Tchwella (Head of Product, Solana), and Zaki Manian (Co-founder, Sommelier), as well as co-founders of Aethir, Aztec, Espresso, Magna, Pyth, Quantstamp, Taiko, Zama, ZkCloud and more.
Web3 doesn’t just need more block space or cheaper gas. It needs architecture that supports real-time data access, minimal latency, and scalable throughput—without compromising decentralization.
“If you think of Web3 as a world computer, what we’re building is the critical component every computer needs—memory,” says Muriel Médard, co-founder and CEO of Optimum. “With a high-performance memory layer, our goal is to scale every blockchain.”
By applying the core database guarantees of Atomicity, Consistency, and Durability (ACD) in a decentralized context, Optimum enables a new paradigm: decentralized systems that can finally scale. This unlocks real-time, cost-sensitive applications in trading, gaming, AI, and beyond, previously bottlenecked by legacy blockchain architecture.
Optimum isn’t just patching symptoms—it’s solving the root problem. By introducing a true memory layer to Web3, Optimum has the potential to redefine decentralized computing from the ground up. We’re not just making blockchains faster. We’re making them smarter, more responsive, and ready for what’s next.
The world computer was never complete—until now.
Optimum is the first decentralized high-performance memory layer for the world computer, designed to eliminate scalability bottlenecks by enabling fast data propagation, efficient storage, and real-time access.
Optimum is now live on private testnet with OptimumP2P, actively onboarding L1s, L2s, validators, and node operators to experience the world computer’s missing memory layer in action.
Learn more at getoptimum.xyz
Follow Optimum on x.com/get_optimum.
Check our open roles at jobs.ashbyhq.com/optimum