ICP = Web 3.0 (EN version)

About the Author

Advisor @Moledao @Web3Geeks, prev Tech Lead @Bybit

Twitter@0xkookoo, DM Open

Telegram: @web3kookoo

IC Research DAO: @neutronstardao

Feel free to connect and discuss.

TL;DR

Note: This article reflects the author's personal views at this stage. Some thoughts may contain factual errors or biases; they are shared solely for discussion, and I look forward to corrections from readers.

  • BTC proposed electronic cash, pioneering the blockchain industry from 0 to 1.

  • ETH introduced smart contracts, leading the blockchain industry from 1 to 100.

  • ICP proposed Chain-key technology, driving the blockchain industry from 100 to 100,000,000.

Introduction

On January 3, 2009, the first BTC block was mined, marking the start of a tumultuous 14-year development in the blockchain industry.

Looking back over the past 14 years, the ingeniousness and greatness of BTC, the meteoric rise of Ethereum, the passionate crowdfunding of EOS, the inevitable conflict between PoS & PoW, the interconnection of multiple chains by Polkadot - all of these awe-inspiring technologies and intriguing stories have made countless people in the industry bow in admiration.

Currently, in the year 2023, what does the landscape of the blockchain industry look like? Here are my thoughts; see also the public-chain landscape interpretation later in this article.

  • BTC, by introducing electronic cash, stands unshakeable as the legitimate cornerstone of the industry.

  • ETH, by introducing the programmability of smart contracts and the composability of the L2 ecosystem, has let a hundred flowers bloom and established itself as the leader of the industry.

  • Cosmos, Polkadot, and others, with their cross-chain interoperability, are attempting to unite the world under one banner.

  • Various "Ethereum killers" keep emerging, each dominating in their small domains.

But how will the entire blockchain industry develop in the next 10 years? Here are my thoughts:

  • Sovereignty is the only issue that blockchain needs to address, including asset sovereignty, data sovereignty, and speech sovereignty. Otherwise, there is no need for blockchain;

  • Immutability is a sufficient condition, but not a necessary one. As long as you can ensure that my sovereignty is not damaged, I don't care whether you tamper. What difference does it make if everyone's assets in the world are doubled in the same proportion?

  • Complete decentralization is impossible to achieve. No matter how it is designed, there will always be "gifted" individuals or interest groups holding more say, and there will always be people who choose not to participate. "Decentralization with multiple centers" is the final pattern;

  • Transparency is a must. Isn't the point of this all-of-humanity social experiment to give everyone a voice and the right to protect their own sovereignty? There will always be lazy people, people who prefer to trust more professional individuals, and people who give up voting for maximum efficiency, but these are choices they make actively: they have the right and choose not to exercise it. As long as everything is transparent and there are no underhanded maneuvers, I am willing to accept the outcome; if I lose, it's because my skills are inferior. Survival of the fittest is also in line with a market economy;

  • Decentralized control over code execution is the core; otherwise the rest is an unnecessary fuss. If a proposal is voted on publicly for a week, yet the project team can still deploy a malicious version of the code in the end — and even if it's not malicious — the vote is a mockery of everyone. Half the world now runs on code; if decentralization does not include control over code execution, how would people, including governments, dare to let the blockchain industry grow?

  • Infinite scalability at linear cost. As blockchain integrates more closely with real life, more people participate and demand keeps growing. If the infrastructure cannot scale infinitely, or if scaling is too expensive, that is unacceptable.

Why ICP

Let's start with a story. In 2009, Alibaba proposed the "Remove IOE" strategy, which later became a major milestone in the success of Alibaba's "Double 11" event.

Remove IOE

The core content of the "Remove IOE" strategy was to get rid of IBM minicomputers, Oracle databases, and EMC storage devices, and to embed the essence of "cloud computing" into Alibaba's IT DNA. Specifically:

  • I stands for IBM p series minicomputers, running the AIX operating system (IBM's proprietary Unix system);

  • O stands for Oracle databases (RDBMS);

  • E stands for EMC mid-to-high-end SAN storage.

The reasons for removing IOE mainly include the following three points, but the first is the fundamental reason, and the latter two are more indirect:

  • Unable to meet demand, traditional IOE systems struggle to adapt to the high concurrency demands of internet companies and cannot support large-scale distributed computing architectures;

  • Costs are too high: maintaining IOE is expensive — an IBM minicomputer cost some 500,000 RMB, Oracle's annual service guarantee ran to several hundred thousand, and so on;

  • Strong dependency: IOE systems created heavy vendor lock-in; Alibaba was "held hostage" by manufacturers such as IBM and Oracle, making it difficult to adapt flexibly to its own needs.

Why was the "Remove IOE" strategy proposed in 2009 and not earlier?

  • Before that,

    • Alibaba's business scale and data volume had not yet reached a level that made the traditional IOE system difficult to adapt to, so there was no urgent need to remove IOE;

    • Chinese domestic database products were not yet mature enough in terms of technology and quality to effectively replace IOE;

    • Internet thinking and cloud computing concepts had not yet become widespread in China, and distributed architecture had not become a popular direction;

    • Management and technical personnel may have needed a period of practical accumulation before they realized the problems that existed and the measures that had to be taken.

  • In 2009,

    • As Alibaba rapidly expanded its business, the IOE system struggled to support the scale, and cost issues became more apparent;

    • Some open-source database products, such as MySQL, had reached a high level of maturity and could serve as replacements;

    • Internet-level thinking and cloud computing began to spread widely and be applied in China, which facilitated the promotion of the "Remove IOE" concept;

    • Wang Jian, a former Microsoft technical guru with a global tech perspective, joined Alibaba in 2008; deeply trusted by Jack Ma, he proposed "Remove IOE".

However, "Remove IOE" is not simply about changing the software and hardware itself, replacing old software and hardware with new ones, but replacing old methods with new ones, and using cloud computing to completely change the IT infrastructure. In other words, this was driven by industry changes, not just a simple technology upgrade.

Three stages of a business

The development of a business can be divided into 3 stages:

  1. Shaping the DNA and building the organizational culture, also called the Start-up stage, is about going from 0 to 1.

  2. Rapid growth, or "small steps, fast running", is called the Scale-up stage, about going from 1 to 100.

  3. Infinite expansion, or broadening boundaries, is called the Scale-out stage, about going from 100 to 100,000,000.

Now, let's analyze the entire blockchain industry as if it is a single business.

Start-up / Blockchain 1.0 / BTC

The innovation of Bitcoin lies in its solution to a problem that has perplexed computer scientists for decades: how to create a digital payment system that can operate without the need to trust any central authority.

However, there are indeed some limitations in the design and development of BTC, which have provided market opportunities for subsequent blockchain projects such as Ethereum (ETH). Here are some of the main limitations:

  • Transaction Throughput and Speed: The block generation time of Bitcoin is about 10 minutes, and the size limit of each block leads to an upper limit on its transaction processing capacity. This means that during busy network times, transaction confirmation may take a long time, and higher transaction fees may be required.

  • Limited Smart Contract Functionality: Bitcoin is designed primarily as a digital currency, and the types of transactions it supports and the functionality of its scripting language are relatively limited. This limits Bitcoin's application in complex financial transactions and decentralized applications (DApps).

  • Difficulty in Upgrading and Improving: Due to Bitcoin's decentralized and conservative design principles, major upgrades and improvements usually require broad consensus from the community, which is difficult to achieve in practice. This also means that Bitcoin's progress is relatively slow.

  • Energy Consumption: Bitcoin's consensus mechanism is based on Proof of Work (PoW), which means large amounts of computing resources are consumed in the competition among miners, resulting in substantial energy consumption. This has been criticized on environmental and sustainability grounds. On this point, you can also look into EcoPoW, which somewhat alleviates this limitation.

Scale-up / Blockchain 2.0 / ETH

Indeed, the current Layer 2 scaling solutions for Ethereum can be seen as a form of "vertical scaling", relying on the security and data availability guarantees of the underlying Layer 1. Although it looks like a 2-layer structure, it is ultimately bounded by the processing capacity of Layer 1. Even moving to a multi-layer structure — building Layer 3 and Layer 4 — only adds complexity to the whole system and delays the moment the core issue is exposed. Moreover, by the law of diminishing marginal returns, each layer added brings more overhead, sharply reducing the scaling benefit. This layered vertical expansion can be seen as a single-machine hardware upgrade, where the "single machine" is the entire ETH ecosystem.

Moreover, as usage increases, user demand for low fees and high performance will also increase. As an application on Layer 1, the cost of Layer 2 can only be reduced to a certain extent and is ultimately subject to the basic cost and throughput of Layer 1. This is similar to the demand curve in economics: as prices fall, total demand increases. Vertical scaling is unlikely to fundamentally solve the scalability problem.

Ethereum is a towering tree, and everyone relies on the same root. Once the root can no longer draw nutrients fast enough, people's needs will not be met.

Therefore, only horizontal expansion can plausibly be infinite.

Some people think that multi-chain and cross-chain designs are also a form of horizontal expansion:

  • Take Polkadot as an example: it is a heterogeneous kingdom. Each country looks different, but you have to build an entire kingdom for everything you do (e.g. a DEX);

  • Cosmos is a homogeneous kingdom — the meridians and bones of each country look the same — but likewise, you have to establish a new kingdom every time you build an application;

However, from an infrastructure perspective, both of these models are a bit strange. Every time you build an application, you have to construct a whole kingdom? An example shows how strange this is:

  • I bought a Mac 3 months ago and developed a Gmail application on it;

  • Now I want to develop a YouTube application, but I have to buy a new Mac to develop it on — which is very strange.

And both of these methods face the problem of high complexity of cross-chain communication when adding new chains, so they are not my first choice.

Scale-out / Blockchain 3.0 / ICP

Indeed, to achieve scale-out, a complete set of underlying infrastructure is needed to support rapid horizontal expansion without reinventing the wheel.

A typical example of supporting scale-out is cloud computing. The underlying templates such as "VPC + subnet + network ACL + security group" are exactly the same for everyone. All machines carry labels and types, and core components such as RDS, MQ, etc. at the upper layer support unlimited expansion. If more resources are needed, they can be quickly initiated with a click of a button.

A former leader of mine once shared this with me: if you want to understand what infrastructure and components Internet companies need, just go to AWS and look at all the services they provide — that is the most complete and powerful combination.
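
As a concrete illustration of that "click of a button" scale-out, here is a minimal sketch using boto3 (the AWS SDK for Python) to stamp out the standard "VPC + subnet + security group" template. The region, CIDR blocks, and names are arbitrary placeholders, and error handling is omitted.

```python
import boto3

# One standard template, identical for every team: VPC + subnets + security group.
ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24"
)["Subnet"]["SubnetId"]

sg_id = ec2.create_security_group(
    GroupName="app-sg",                    # placeholder name
    Description="default app template",
    VpcId=vpc_id,
)["GroupId"]

# "Scale-out" is then just repeating the same template on demand.
for i in range(2, 5):
    ec2.create_subnet(VpcId=vpc_id, CidrBlock=f"10.0.{i}.0/24")

print(vpc_id, subnet_id, sg_id)
```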

Similarly, let's take a high-level look at ICP to see why it meets the requirements for Scale-out.

Here are a few concepts to clarify first:

  • Dfinity Foundation: It is a non-profit organization dedicated to promoting the development and application of decentralized computing technology. It is the developer and maintainer of the Internet Computer Protocol, aiming to realize the comprehensive development of decentralized applications through innovative technology and an open ecosystem.

  • Internet Computer (IC): A high-speed blockchain network developed by the Dfinity Foundation, designed specifically for decentralized applications. It uses a novel consensus algorithm that achieves high-throughput, low-latency transaction processing while supporting the development and deployment of smart contracts and decentralized applications.

  • Internet Computer Protocol (ICP): The native token of the Internet Computer Protocol — a digital currency used to pay network usage fees and reward nodes.

What’s ICP

Let's dive into the more complex aspects of the topic. I'll do my best to keep the explanations as understandable as possible. If you want to discuss more detailed issues or have any questions, feel free to contact me. I'm here to make this complex topic more digestible for everyone.

Architecture Overview

  • Let's break down the architecture of the Internet Computer (IC) into its various layers from the bottom up:

    • P2P Layer: This layer is responsible for collecting and sending messages from users, other replicas within the subnet, and other subnets. It ensures that messages can be delivered to all nodes within the subnet, ensuring security, reliability, and resilience.

    • Consensus Layer: The main task of this layer is to order the inputs, ensuring that all nodes within the same subnet process tasks in the same order. To achieve this, the consensus layer uses a new consensus protocol designed to ensure security and liveness and to resist DoS/spam attacks. After consensus is reached on the order of messages within a subnet, these blocks are passed to the message routing layer.

    • Message Routing Layer: Based on the tasks delivered from the consensus layer, it prepares the input queues for each Canister. After execution, it is also responsible for receiving the output generated by the Canister and forwarding it to local or other Canisters as needed. In addition, it is responsible for recording and verifying responses to user requests.

    • Execution Layer: This layer provides the runtime environment for Canisters. It reads inputs in order according to the scheduling mechanism, calls the corresponding Canister to complete each task, and returns the updated state and generated outputs to the message routing layer. Protocol-supplied random numbers are used where unpredictability is required — for example, cryptographic operations use randomness for security, and unpredictable behavior prevents attackers from discovering vulnerabilities or predicting Canister behavior by analyzing execution results — while execution itself stays deterministic and auditable across replicas. (A toy sketch of this ordered, deterministic replication pipeline follows the figure below.)

4-layers of ICP
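
To make the layering concrete, here is a minimal sketch — plain Python, not the IC's actual APIs — of the replicated-state-machine idea these layers implement: once consensus fixes one input order for everyone, deterministic execution makes every replica's state (and state hash) identical.

```python
import hashlib
import json

class Replica:
    """Toy replica: deterministic execution over consensus-ordered inputs."""

    def __init__(self):
        self.state = {}  # arbitrary key-value state (stand-in for canister state)

    def execute(self, message):
        # Deterministic transition: same message + same prior state => same next state.
        key, delta = message
        self.state[key] = self.state.get(key, 0) + delta

    def state_hash(self):
        # Digest of the replicated state, comparable across replicas.
        blob = json.dumps(self.state, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

# The consensus layer's one job in this sketch: fix a single order for all replicas.
ordered_inputs = [("alice", +10), ("bob", +5), ("alice", -3)]

subnet = [Replica() for _ in range(4)]  # four replicas in one subnet
for replica in subnet:
    for msg in ordered_inputs:          # same inputs, same order, on every replica
        replica.execute(msg)

assert len({r.state_hash() for r in subnet}) == 1  # all replicas agree
print(subnet[0].state)  # {'alice': 7, 'bob': 5}
```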

Key Components

  • Let's take a look at the components of the Internet Computer (IC) architecture:

    • Subnet (Subnetwork): Supports infinite expansion, each subnet is a small blockchain. Subnets communicate with each other through Chain Key technology. Since consensus has already been reached within the subnet, it only needs to be verified through Chain Key.

    • Replica: There can be many nodes in each subnet, and each node is a replica. The consensus mechanism in IC ensures that every replica in the same subnet will process the same input in the same order, making the final state of each replica the same. This mechanism is called Replicated State Machine.

    • Canister: A Canister is a type of smart contract — a computational unit that runs on the ICP network, can store data and code, and can communicate with other Canisters or external users. ICP provides a runtime environment for executing Wasm programs inside Canisters and for communicating with other Canisters and external users via message passing. You can simply regard it as a Docker container for running code: you inject your own Wasm code image and it runs inside. (A toy sketch of this model follows the list below.)

    • Node: An independent server. Canisters still need a physical machine to run, and these physical machines are the actual machines in the server room.

    • Data Center: The nodes in the data center are virtualized into a replica (Replica) through the node software IC-OS, and some replicas are randomly selected from multiple data centers to form a subnet (Subnet). This ensures that even if a data center is hacked or hit by a natural disaster, the entire ICP network can still run normally. It's kind of like an upgraded version of Alibaba's "Two Places, Three Centers" disaster recovery high-availability solution. Data centers can be distributed all over the world, and even a data center could be set up on Mars in the future.

    • Boundary Nodes: Provide entrances and exits between the external network and IC subnets, and verify responses.

    • Principal: External user identifier, derived from the public key, used for access control.

    • Network Nervous System (NNS): An algorithmic DAO that governs the IC using staked ICP.

    • Registry: A database maintained by the NNS, containing mappings between entities (such as Replicas, Canisters, and Subnets) — a bit like how DNS works today.

    • Cycles: The gas token, representing CPU and resource quotas, used to pay for resources consumed while a canister runs.
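
As a mental model for Canisters and Cycles, here is a toy sketch in plain Python — not the IC SDK; all names are illustrative. A canister is an isolated actor: private state, update/query entry points, and a cycles balance that every call draws down (the reverse gas model described later).

```python
class Canister:
    """Toy canister: isolated state, message-based interface, metered by cycles."""

    def __init__(self, canister_id, cycles=1_000_000):
        self.id = canister_id
        self._state = {}        # state is private to this canister
        self._cycles = cycles   # resource quota, pre-paid by the project side

    def update(self, key, value):
        # Update calls mutate state (on the real IC they go through consensus).
        self._charge(1_000)
        self._state[key] = value

    def query(self, key):
        # Query calls are read-only (on the real IC, served fast by one replica).
        self._charge(10)
        return self._state.get(key)

    def _charge(self, amount):
        if self._cycles < amount:
            raise RuntimeError(f"canister {self.id} is out of cycles")
        self._cycles -= amount

# A "subnet" here is just a set of canisters reachable by id via message routing.
subnet = {c.id: c for c in (Canister("counter"), Canister("ledger"))}
subnet["counter"].update("hits", 1)
print(subnet["counter"].query("hits"))  # 1
```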

Key Innovative Technologies

  • Let's explore the Chain-key technology, a key component of the Internet Computer (IC) protocol:

    • Threshold BLS signatures: ICP implements a threshold signature scheme. Each subnet has a publicly verifiable public key, and the corresponding private key is split into multiple shares; each share is held by one replica in the subnet, and a message signature is valid only when more than a threshold number of replicas in the same subnet sign it. This way, messages passed between subnets and replicas can be verified quickly and cheaply against a single subnet public key, ensuring both authenticity and security. The BLS algorithm is a well-known signature scheme that yields simple and efficient threshold signature protocols, and its signatures are unique: for a given public key and message, there is only one valid signature. (A toy sketch of the underlying t-of-n math follows this list.)

    • Non-interactive Distributed Key Generation (NIDKG): To securely deploy the threshold signature scheme, Dfinity designed, analyzed, and implemented a new DKG protocol. This protocol runs on an asynchronous network and has high robustness (even if up to a third of the nodes in the subnet crash or are damaged, it can still succeed), while still providing acceptable performance. In addition to generating new keys, this protocol can also be used to reshare existing keys. This feature is crucial for the autonomous evolution of the IC topology, as subnet memberships change over time.

      • Publicly Verifiable Secret Sharing scheme (PVSS scheme): The PVSS scheme is used in the Internet Computer protocol (as described in its white paper) to implement the distributed key generation (DKG) protocol, ensuring that node private keys are not leaked during generation.

      • Forward-secure public-key encryption scheme: A forward-secure public-key encryption scheme ensures that even if the private key is leaked, previous messages cannot be decrypted, thereby enhancing system security.

      • Key resharing protocol: A threshold-signature-based key sharing scheme used for key management in the Internet Computer protocol. Its main advantage is that existing keys can be shared with new nodes without creating new keys, reducing the complexity of key management, while threshold signatures secure the resharing itself, enhancing the system's security and fault tolerance.

    • PoUW: Proof of Useful Work — the U stands for Useful. It improves performance and reduces wasted computation by nodes. Unlike PoW, which artificially creates difficult hash computations, PoUW focuses computational power as much as possible on serving users: the majority of resources (CPU, memory) go into executing the code inside actual canisters.

    • Chain-evolution technology: A technology used for maintaining blockchain state machines, consisting of a series of techniques to ensure blockchain security and reliability. In the Internet Computer protocol, chain-evolution technology primarily includes the following two core technologies:

      1. Summary blocks: The first block of each epoch is a summary block, which contains some special data for managing different threshold signature schemes. A low-threshold scheme is used to generate random numbers, while a high-threshold scheme is used to authenticate the replicated state of subnets.

      2. Catch-up packages (CUPs): A technique for quickly synchronizing node state. CUPs allow newly joined nodes to get up to speed with the current state without rerunning the consensus protocol.
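
The t-of-n math underneath the threshold scheme is easiest to see with plain Shamir secret sharing. The sketch below is toy Python over a small prime field — no real BLS pairings, and in the real protocol partial signatures are combined rather than the key itself — but it shows why any 3 of 4 shares act as one key while fewer reveal nothing.

```python
import random

P = 2**127 - 1  # toy prime field; real BLS works over a pairing-friendly curve

def make_shares(secret, t, n):
    """Shamir: hide `secret` as f(0) of a random degree-(t-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]  # share i is the point (i, f(i))

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers f(0) from any t shares."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = random.randrange(P)               # the subnet's "signing key"
shares = make_shares(key, t=3, n=4)     # four replicas, threshold three
assert reconstruct(shares[:3]) == key   # any three shares suffice...
assert reconstruct(shares[1:4]) == key
# ...while any two shares alone reveal nothing about `key`.
```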

My logical deduction for the entire IC (Internet Computer) underlying technology is as follows:

  • In traditional public key cryptography, each node has its own pair of public and private keys. This means that if a node's private key is leaked or attacked, the security of the entire system will be threatened. However, the threshold signature scheme divides a key into multiple parts, which are distributed to different nodes. A signature can only be generated when a sufficient number of nodes cooperate. This way, even if some nodes are attacked or leaked, it will not pose a significant threat to the security of the entire system. Moreover, the threshold signature scheme can also improve the decentralization of the system, because it does not require a centralized institution to manage the keys, but instead distributes the keys among multiple nodes, thus avoiding single point of failure and centralization risks. Therefore, IC uses the threshold signature scheme to enhance the security and decentralization of the system, hoping to use threshold signatures to achieve a highly secure, scalable, and quickly verifiable general-purpose blockchain.

  • BLS is a famous signature algorithm, and it is the only scheme that yields very simple and efficient threshold signature protocols. One advantage of BLS signatures is that no signature state needs to be kept: as long as the message content is the same, the signature is fixed — for a given public key and message there is only one valid signature. This gives extremely high scalability, which is why ICP chose BLS.

  • Because a threshold signature is used, someone must distribute the key fragments to the participants. But whoever distributes the fragments is a single point, which easily leads to a single point of failure. Therefore, Dfinity designed a distributed key generation technology, NIDKG. During the initialization of a subnet, all participating Replicas non-interactively generate a public key A; for the corresponding private key B, each participant mathematically computes and holds one of the derived secret shares.

  • To implement NIDKG, it is necessary to ensure that every participant in the distribution has not cheated. Therefore, each participant can not only get their own secret share but can also let others publicly verify whether their secret share is correct. This is a very important point in realizing distributed key generation.

  • After NIDKG, if a given secret share is held by the same node for a long time, then as nodes are gradually eroded by hackers, the whole network may run into problems. Therefore, the keys must be continuously updated — but key updates must not require all participating Replicas to gather for interactive communication; they too must be non-interactive. Because the public key A has already been registered in the NNS, and other subnets use this public key A for verification, the subnet public key preferably should not change. But if the subnet public key does not change, how can the secret shares among the nodes be updated? For this, Dfinity designed a Key Resharing Protocol: without creating a new public key, all Replicas holding the current version of the secret share non-interactively generate a new round of derived secret shares for the holders of the new version (a toy sketch of this resharing math follows this list). This ensures that:

    • The new version of secret share is certified by all current legal secret share holders.

    • The old version of secret share is no longer legal.

    • Even if the new version of the secret share is leaked in the future, the old version will not be leaked, because the polynomials behind the two are completely unrelated and cannot be reverse-engineered from one another. This is the forward security introduced above.

    • It also guarantees efficient redistribution at any time when trusted nodes or access control changes, allowing access policies and controllers to be modified without the need to restart the system. This greatly simplifies the key management mechanism in many scenarios. For example, it is very useful in scenarios where subnet members change, because resharing will ensure that any new member has the appropriate secret share, and any replica that is no longer a member will no longer have a secret share. Moreover, if a small amount of secret share is leaked to the attacker in any period or even every period, these secret shares are also of no benefit to the attacker.

  • What if a subnet key at some moment in history is leaked? How can the immutability of historical data be guaranteed? Dfinity adopted a forward-secure signature scheme, which ensures that even if a historical subnet key leaks, an attacker cannot alter the data of historical blocks, preventing late-stage corruption attacks on the blockchain's history. If this guarantee is strengthened further, it can also ensure that information in transit cannot be retroactively decrypted: if the timestamps do not match, then even a key cracked within a short window cannot unlock past communications.

  • Traditional blockchain protocols require storing all block information from the genesis block onward. As the chain grows, this causes scalability problems, which is why many public chains find it troublesome even to build a light client. To solve this, the Internet Computer developed Chain-evolution technology: at the end of each epoch, all processed inputs and consensus-required information can be safely cleared from each replica's memory, greatly reducing per-replica storage requirements and enabling IC to scale to support large numbers of users and applications. Chain-evolution also includes the Catch-up Package (CUP) technology, which allows newly joined nodes to quickly obtain the current state without rerunning the consensus protocol, significantly lowering the barrier and synchronization time for new nodes joining the IC network.

  • In summary, all of IC's underlying technologies are interconnected, based on cryptography (from theory), and also fully consider industry challenges such as fast node synchronization (from practice). Truly, it is a comprehensive solution!
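
To ground the resharing step described above, here is a toy continuation of the Shamir sketch (plain Python again — not Dfinity's actual NIDKG/resharing protocol): each old share-holder deals a fresh sub-sharing of its Lagrange-weighted share to the new group, and the new shares sum up to shares of the same secret. The group key, and hence the registered subnet public key, never changes, while the old shares become worthless.

```python
# Continues the Shamir sketch above (reuses P, make_shares, reconstruct).

def lagrange_weight(xi, xs):
    """Weight of the share at xi when interpolating f(0) over the points xs."""
    num, den = 1, 1
    for xj in xs:
        if xj != xi:
            num = num * (-xj) % P
            den = den * (xi - xj) % P
    return num * pow(den, -1, P) % P

def reshare(old_shares, t, n_new):
    """Old holders re-share their weighted shares; new holders sum the pieces."""
    xs = [x for x, _ in old_shares]
    new_shares = {x: 0 for x in range(1, n_new + 1)}
    for xi, yi in old_shares:
        w = lagrange_weight(xi, xs)
        # Each old holder deals a sub-sharing of (w * its share) to the new group.
        for x, piece in make_shares(w * yi % P, t, n_new):
            new_shares[x] = (new_shares[x] + piece) % P
    return sorted(new_shares.items())

key = random.randrange(P)
old = make_shares(key, t=3, n=4)
new = reshare(old[:3], t=3, n_new=5)   # hand off to five new holders
assert reconstruct(new[:3]) == key     # same secret, brand-new unrelated shares
```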

Key Features

  • In terms of features:

    • Reverse Gas Model: Traditional blockchain systems generally require users to hold native tokens first, such as ETH or BTC, and then consume these tokens to pay transaction fees. This raises the barrier to entry for new users and does not match people's usage habits — why should I have to own TikTok stock just to use the app? The Internet Computer uses a reverse gas model: users can use the ICP network directly, with the project side covering the transaction fees. This lowers the usage threshold, aligns more closely with internet service habits, favors a broader network effect, and thus supports more users joining.

    • Stable Gas: On other public chains, to secure the chain and make transfers convenient, people buy native tokens: miners mine as much as possible, or holders hoard tokens, contributing hash power to the chain (as with Bitcoin) or staking-based economic security (as with Ethereum). In effect, our demand for BTC/ETH comes from the chains' own requirements for hash power or stake, which are ultimately security requirements. Therefore, any chain that pays gas directly in its native token will become expensive: even if the token is cheap now, it becomes expensive once the ecosystem takes off. The Internet Computer is different. Gas consumed on the ICP blockchain is called Cycles, which are obtained by burning ICP. Cycles are kept stable by algorithmic adjustment, pegged to 1 SDR (Special Drawing Rights, a stable unit derived from a basket of national fiat currencies). So no matter how much the price of ICP rises, the cost of doing anything on ICP stays the same as today (inflation aside). A back-of-envelope sketch follows the figure below.

    • Wasm: WebAssembly (Wasm) is the standard for code execution. Developers can write code in a variety of popular programming languages (such as Rust, Java, C++, Motoko, etc.), which opens the door to many more developers.

  • Support for Running AI Models: Python can also be compiled to Wasm. Python has one of the largest user bases in the world and is the primary language of AI, e.g. for vector and large-number computation. Some people have already run the Llama2 model on IC; if the AI + Web3 concept takes off on ICP in the future, I wouldn't be surprised at all.

  • Web2 Speed Experience: Many applications on ICP have achieved astonishing results with millisecond-level queries and second-level updates. If you don't believe it, you can directly use OpenChat, a decentralized chat application that is entirely on-chain.

  • On-Chain Frontend: You've probably only heard of writing parts of the backend as simple smart contracts and running them on-chain, so that data assets and core logic cannot be tampered with. But the frontend also needs to run fully on-chain to be safe, because frontend attacks are a very typical and frequent problem. Imagine: you might think Uniswap's code is very safe — the smart contracts have been verified by so many people over so many years, the code is simple, surely nothing can go wrong. But what if Uniswap's frontend is hijacked one day? The contract you're interacting with is actually a malicious contract deployed by a hacker, and you could go bankrupt in an instant. If instead all frontend code is stored and deployed in an IC Canister, then at minimum IC's consensus guarantees that the frontend code cannot be tampered with by hackers. The protection is more comprehensive, and IC can run and render the frontend directly without affecting the application's normal operation. On IC, developers can build applications directly without traditional cloud services, databases, or payment interfaces; there is no need to buy a frontend server or worry about databases, load balancing, content distribution, firewalls, and so on. Users can directly access frontend pages deployed on ICP through a browser or mobile app — like the personal blog I deployed on IC.

  • DAO-Controlled Code Upgrades: Many DeFi protocols now allow project owners to have complete control, initiating significant decisions such as suspending operations or selling funds without going through community voting and discussion. I believe everyone has witnessed or heard of such cases. In contrast, DAPP code in the ICP ecosystem runs in DAO-controlled containers. Even if a project party occupies a large proportion in voting, it still implements a public voting process, meeting the necessary conditions for blockchain transparency described at the beginning of this article. This process guarantee mechanism better reflects the will of the community, and relatively speaking, achieves a higher degree of governance compared to other public chain projects.

  • Automatic Protocol Upgrades: When the protocol needs upgrading, a new threshold signature scheme can be added in the summary block, enabling automatic protocol upgrades. This ensures the network's security and reliability while avoiding the inconvenience and risks of hard forks. Specifically, the Chain Key technology in ICP maintains the blockchain state machine through a special signature scheme: at the start of each epoch, the network uses a low-threshold signature scheme to generate a random number, and a high-threshold signature scheme to authenticate the replicated state of the subnet.

Proposal Voting
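
Here is the back-of-envelope sketch promised in the Stable Gas bullet above. The one-trillion-cycles-per-SDR peg is the commonly cited IC figure, but treat the constants as assumptions; the ICP prices are invented for illustration.

```python
CYCLES_PER_SDR = 1_000_000_000_000  # commonly cited peg: 1 SDR ~ 1T cycles

def cycles_minted(icp_burned, icp_price_in_sdr):
    """More cycles per ICP when ICP is expensive, fewer when it is cheap."""
    return int(icp_burned * icp_price_in_sdr * CYCLES_PER_SDR)

job_cost_cycles = 5 * CYCLES_PER_SDR  # a workload that always costs 5 SDR of cycles

for icp_price in (3.0, 30.0, 300.0):  # hypothetical ICP/SDR exchange rates
    icp_needed = job_cost_cycles / (icp_price * CYCLES_PER_SDR)
    print(f"ICP at {icp_price:>5} SDR -> burn {icp_needed:.4f} ICP for the same job")

# The fiat-denominated cost of the job never moves; only the amount of
# ICP burned per job changes with the token price.
```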
  • Fast Forwarding: This is a technology in the Internet Computer Protocol that quickly synchronizes the state of nodes. It allows newly joined nodes to quickly access the current state without having to rerun the consensus protocol. Specifically, the process of Fast Forwarding is as follows:

    1. The newly joined node obtains the Catch-up package (CUP) of the current epoch, which contains the Merkle tree root, summary block, and random number of the current epoch.

    2. The new node uses the state sync subprotocol to obtain the full state of the current epoch from other nodes, and uses the Merkle tree root in the CUP to verify the correctness of the state.

    3. The new node uses the random number in the CUP and protocol messages from other nodes to resume the consensus protocol from the current point, quickly catching up to the current state.

    The advantage of Fast Forwarding is that newly joined nodes can obtain the current state quickly, without replaying from genesis as on some other public chains. This speeds up network synchronization and expansion, and also reduces inter-node communication, improving the network's efficiency and reliability. (A toy sketch of the Merkle-root check in step 2 follows the figure below.)

fast forwarding
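
Below is a minimal sketch of the verification in step 2: checking downloaded state chunks against the Merkle root pinned in the CUP. Toy Python over a flat chunk list — the real IC state-sync protocol and state tree are considerably more involved.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(chunks):
    """Toy Merkle tree over state chunks (odd levels duplicate the last node)."""
    level = [h(c) for c in chunks]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Step 1: the new node fetches the CUP, which pins the expected root for the epoch.
canonical_state = [b"chunk-0", b"chunk-1", b"chunk-2"]
cup = {"epoch": 412, "state_root": merkle_root(canonical_state)}  # toy values

# Step 2: chunks downloaded from peers are verified against the CUP's root.
downloaded = [b"chunk-0", b"chunk-1", b"chunk-2"]
assert merkle_root(downloaded) == cup["state_root"]    # state accepted

tampered = [b"chunk-0", b"evil-chunk", b"chunk-2"]
assert merkle_root(tampered) != cup["state_root"]      # tampering detected
```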
  • Decentralized Internet Identity: The identity system on IC genuinely makes me feel that the DID problem can be completely solved — in terms of both scalability and privacy. The IC identity system currently has an implementation called Internet Identity, plus a more powerful NFID built on top of it.

    Its principle is as follows:

    1. At the time of registration, it generates a pair of public and private keys for the user. The private key is stored in the TPM security chip inside the user's device and will never be leaked, while the public key will be shared with services on the network.

    2. When the user wants to log in to a dapp, the dapp creates a temporary session key for the user. The user authorizes this session key with an electronic signature — a delegation — which gives the dapp permission to authenticate as the user (sketched in code after this list).

    3. After the session key is signed, the dapp can use this key to access network services on behalf of the user, and the user does not need to sign electronically every time. This is similar to representative authorization login in Web2.

    4. The session key has a short validity period. After it expires, the user needs to reauthorize the signature through biometric recognition to obtain a new session key.

    5. The user's private key is always stored in the local TPM security chip and will not leave the device. This ensures the security of the private key and the anonymity of the user.

    6. Because the session keys are temporary, different dapps cannot correlate user identities with one another, achieving true anonymity and private access.

    7. Users can easily sync and manage their Internet Identity across multiple devices, but the devices themselves also require corresponding biometric or hardware key authorization.
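
Here is the sketch promised in step 2: a toy version of the device-key-signs-session-key delegation, using Ed25519 from Python's `cryptography` package. It mimics the shape of the mechanism only; the field names and encoding are invented, not the actual Internet Identity wire format.

```python
import json, time
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: a long-lived device key (on real hardware it never leaves the chip).
device_key = Ed25519PrivateKey.generate()

# Login: a fresh, short-lived session key is created for the dapp session.
session_key = Ed25519PrivateKey.generate()
session_pub = session_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)

# The user authorizes it: the device key signs (session pubkey + expiry).
delegation = {"session_pub": session_pub.hex(),
              "expires_at": time.time() + 30 * 60}     # 30-minute validity
delegation_bytes = json.dumps(delegation).encode()
delegation_sig = device_key.sign(delegation_bytes)

# During the session, the dapp signs requests with the session key alone.
request = b"post message to channel"
request_sig = session_key.sign(request)

# A verifier checks the chain: delegation genuine, unexpired, request properly signed.
device_key.public_key().verify(delegation_sig, delegation_bytes)  # raises if forged
assert time.time() < delegation["expires_at"], "session expired: re-authorize"
session_key.public_key().verify(request_sig, request)
print("request accepted on behalf of the device key holder")
```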

  • The advantages of Internet Identity are as follows:

    1. No need to remember passwords. Use biometric features such as fingerprint recognition to log in directly, without having to set and remember complex passwords.

    2. Private keys do not leave the device, making it more secure. The private key is stored in the TPM security chip and cannot be stolen, solving the problem of username and password theft in Web2.

    3. Anonymous login, cannot be tracked. Unlike Web2, which uses email as a username that can be tracked across platforms, Internet Identity removes this tracking.

    4. Multi-device management is more convenient. You can log into the same account on any device that supports biometrics, rather than being restricted to a single device.

    5. Independence from central service providers, achieving true decentralization. This is unlike the Web2 model where usernames correspond to email service providers.

    6. Use a delegated authentication process, no need to sign again every time you log in, providing a better user experience.

    7. Supports logging in using dedicated security devices such as Ledger or Yubikey, enhancing security.

    8. Hides the user's actual public key, so transaction records cannot be looked up via the public key, preserving privacy.

    9. Seamlessly compatible with Web3 blockchain, it can securely and efficiently log in and sign blockchain DApps or transactions.

    10. An advanced architecture that organically integrates the strengths of Web2 and Web3 — a template for future network accounts and logins.

  • In addition to providing a new user experience, the following technical measures are also taken to ensure its security:

    1. Use the TPM security chip to store the private key. This chip is designed so that even developers cannot touch or extract the private key, preventing the private key from being stolen.

    2. Biometric authentication such as fingerprint or facial recognition and other two-factor authentication mechanisms require verification with the device, so only the user who holds the device can use this identity.

    3. The session key adopts a short-term expiration design to limit the time window for theft, and forces the destruction of the related ciphertext at the end of the session to reduce risk.

    4. Public key encryption technology encrypts data during transmission, and external listeners cannot know the user's private information.

    5. Does not rely on third-party identity providers. The private key is generated and controlled by the user, trusting no third party.

    6. Combined with the immutability brought by the IC blockchain consensus mechanism, it ensures the reliability of the entire system operation.

    7. Continuous updating and upgrading of the relevant cryptographic algorithms and security processes, e.g. adding more secure mechanisms such as multi-signatures.

    8. Open source code and decentralized design optimize transparency, which is conducive to community cooperation to enhance security.

Internet Identity

Core Team

  • The team consists of 200+ employees, all of them elite talent. The team has published 1,600+ papers, been cited 100k+ times, and holds 250+ patents.

    • The founder, Dominic Williams, is a crypto-theorist and a serial entrepreneur.

      • Academically speaking, his recent mathematical theories include Threshold Relay and PSC chains, Validation Towers and Trees, and USCID.

      • From a technical background, he has a deep background in technology R&D and has been involved in the field of big data and distributed computing in his early years. This has laid a technical foundation for building the complex ICP network.

      • From an entrepreneurial perspective, he previously operated an MMO game using his distributed system, which hosted millions of users. In 2015, Dominic started Dfinity, and he is also the president and CTO of String Labs.

      • From a visionary perspective, he proposed the concept of a decentralized Internet more than 10 years ago. It is not easy to promote this grand project for a long time, and his design ideas are very forward-looking.

    • In terms of the technical team, Dfinity's strength is formidable. The Dfinity Foundation has gathered a large number of top cryptography and distributed-systems experts, such as Jan Camenisch, Timothy Roscoe, Andreas Rossberg, Maria D., Victor Shoup, etc. Even Ben Lynn — the "L" in the BLS signature algorithm — is at Dfinity. This provides strong support for ICP's technical innovation. The success of a blockchain project is inseparable from its technology, and a gathering of top talent can produce technological breakthroughs, which is a key advantage of ICP.

Dfinity Foundation Team

Fund-raising & Tokenomics

If I discuss this part as well, the article will be too long. Therefore, I've decided to write a separate article later to analyze this in detail for everyone. This article focuses more on why ICP has great potential from the perspective of the development direction of the blockchain industry.

Applications

  • All types of applications can be developed on ICP, including social platforms, content creator platforms, chat tools, games, and even metaverse games.

  • Many people say that because global state consistency is difficult to achieve on IC, it is naturally unsuited to DeFi. But the framing is wrong: it is not global state consistency that is hard, it is global state consistency under low latency. If you can accept a minute of delay, 10,000 machines around the world can reach global consistency too. With so many nodes, Ethereum and BTC have effectively been forced into global state consistency under high latency — which is exactly why they cannot scale out horizontally. IC solves horizontal, infinite scale-out by splitting into subnets first. As for global state consistency under low latency, it can be approached with strongly consistent distributed consensus, well-designed network topology, high-performance distributed data synchronization, effective timestamp verification, and mature fault-tolerance mechanisms. Frankly, building a trading platform at the application layer on IC is harder than the high-performance trading platforms Wall Street runs today — it is not just a matter of consistency across multiple data centers. But difficult does not mean impossible: many technical problems must be solved first, and eventually a middle ground will be found that preserves security while keeping the user experience acceptable. For example, ICLightHouse below.

  • ICLightHouse is an order-book DEX running fully on-chain. What does fully on-chain mean? How many technical difficulties must be solved? On other public chains people don't even dare to think about this, but on IC it is at least doable, and it gives us hope.

  • OpenChat is a decentralized chat application with an excellent user experience. I haven't seen another product like it anywhere in the blockchain industry. Many other teams have attempted this direction before, but all ultimately failed over one technical problem or another; at root, users felt the experience was poor — for example, sending a message took 10 seconds, and receiving one took another 10. Yet a small team of three people on ICP has made such a successful product. You have to experience for yourself just how smooth it is. You're welcome to join the organization, where you can enjoy the collision of ideas and, to a certain extent, the pleasure of free speech.

  • Mora is a platform for super creators, where everyone can create a planet and build a personal brand. The content you produce is always yours, and it can even support paid reading. It could be described as a decentralized knowledge planet; I now read articles on it every day.
Mora - 0xkookoo
  • OpenChat and Mora are products that I truly use almost every day. They give me a sense of comfort that is hard to leave, and if I were to describe it in two words, it would be freedom and fulfillment.

  • There are already some teams developing gaming applications on IC. I think the narrative of fully-on-chain games may eventually be taken over by IC. As I said in the GameFi section of the article I wrote earlier, playability and fun are things that the project team needs to consider, and playability is easier to achieve on IC. Looking forward to the masterpiece from Dragginz.

Summary

  • ICP is like the Earth, with Chain-key technology as the Earth's core. Its relationship to ICP is akin to that of the TCP/IP protocol to today's entire internet industry. Each Subnet is like a continent — Asia, Africa, Latin America — or, if you like, a Pacific or Atlantic Ocean. Within these continents and oceans there are various buildings and regions (Replicas and Nodes), and each area and building can grow plants (Canisters), where different animals live happily.

  • ICP supports horizontal expansion. Each Subnet is autonomous and can communicate with other Subnets. No matter what type of application you have—be it social media, finance, or even the metaverse—you can achieve eventual consistency through this distributed network. It's easy to achieve a global ledger under synchronous conditions, but it's a significant challenge to achieve "global state consistency" under asynchronous conditions. Currently, only ICP has a chance to do this.

  • Note that I'm not referring to "worldwide state consistency," but "global state consistency." "Global state consistency" requires all participating nodes to reach consensus on the order of all operations, ensure the final result is consistent, objectively consistent regardless of whether nodes encounter faults, ensure clock consistency, and provide immediate consistency as all operations are processed synchronously. This can be ensured within a single IC Subnet. However, if you want to guarantee "worldwide state consistency," all Subnets physically all over the world as a whole need to achieve "global state consistency" regarding the same data and state. In practical implementation, this is impossible to achieve with low latency, which is also the bottleneck preventing public chains like ETH from scaling horizontally. Therefore, IC opts to reach consensus within a single Subnet, and other Subnets quickly verify the results to ensure no fraud has occurred, thereby achieving "eventual global state consistency." This is essentially a combination of the decentralization of large public chains and the high throughput and low latency of consortium chains, all while enabling infinite horizontal expansion of Subnets through mathematically and cryptographically proven methods.

In summary, according to my initial thoughts on the ultimate development direction of blockchain — Sovereignty, Decentralized Multipoint Centralization, Transparency, Control over Code Execution, and Infinite Scalability with Linear Cost:

  • Sovereignty: This is the only problem blockchain needs to solve, including asset sovereignty, data sovereignty, and speech sovereignty. Otherwise, there's no need for blockchain.

    • IC has completely achieved this.

  • Immutability: This is a sufficient condition, but not a necessary one. As long as you can ensure that my sovereignty is not compromised, I don't care about tampering. If everyone's assets in the world are tampered with and doubled proportionally, what's the difference?

    • IC has also achieved this.

  • Decentralization: Complete decentralization is impossible. No matter how it's designed, there will always be "gifted" individuals or stakeholders with greater say, and there will always be people who voluntarily choose not to participate. Decentralized multipoint centralization is the ultimate pattern.

    • IC is currently the best among all public chains. It manages to maintain a certain degree of decentralization while fully utilizing the advantages of centralized entities, thereby better facilitating network governance and operation.

  • Transparency: This is a must. Isn't this grand social experiment involving all of humanity about giving everyone a voice and the ability to protect their own sovereignty? Some people may be lazy, some may prefer to trust professionals, and some may choose to give up voting for maximum efficiency. However, these are choices they actively make. They have the right but voluntarily choose not to exercise it. As long as everything is transparent and there's no underhanded manipulation, I'm willing to accept the outcomes. If I lose, it's because my skills were inferior. Survival of the fittest aligns with the market economy.

    • IC has completely achieved this.

  • Control over Code Execution: This is the core. Without it, the rest is unnecessary. If a vote is publicly announced for a week, and in the end the project team still deploys a malicious version of the code — or even a version that isn't malicious — it's still a mockery of everyone.

    • Currently, only IC has achieved this.

  • Infinite Scalability with Linear Cost: As blockchain becomes more and more intertwined with real life, more and more people are participating, and the demand is growing. If the infrastructure cannot support unlimited scalability, or if it's too expensive to expand, it's unacceptable.

    • Currently, only IC has achieved this.

Based on these facts and my analytical thinking, I believe that ICP = Web 3.0.

This article is just to discuss why ICP might be the innovation driver for Web/Blockchain 3.0 from the perspective of the future development direction of the blockchain industry. Admittedly, there are some issues with ICP's tokenomics design, and its ecosystem has not yet exploded. At present, ICP still needs to continue its efforts to reach the ultimate Blockchain 3.0 I envision.

However, don't worry, this task is inherently difficult. Even the Dfinity Foundation has prepared a 20-year roadmap. Only two years after the mainnet launch, it has already achieved great accomplishments. Currently, it's also bridging the BTC and ETH ecosystems using cryptographic methods. I believe that it will reach greater heights in three years.

Future

  • ICP has already completed the infrastructure construction from bottom to top, and applications from top to bottom are beginning to emerge. My recent direct impression is that ICP has more and more cards to play, preparing for the next bull market.

  • ICP represents a paradigm shift, not just a simple technical upgrade. It signifies the transition from standalone computing to distributed computing, and even more so, from standalone systems to distributed systems. The concept of decentralized cloud computing can provide many small companies with a one-stop development experience right from the start.

  • According to the product value formula by Yu Jun — Product Value = (New Experience - Old Experience) - Migration Cost — in the future, as long as some people find that the experience gain of joining the ICP ecosystem outweighs the migration cost, more people, including project teams and users, will join. This will make the scale effect of "cloud computing" more evident, and once the "chicken or egg" problem is solved, ICP's positive flywheel will be established.

  • Of course, everyone's definition of experience is subjective, so some people will choose to join early, while others will join later. Those who join earlier bear greater risks, but they usually also receive greater average benefits.
