
Disclaimer: This article is intended to convey more market information and does not constitute any investment advice. The article only represents the author's views and does not represent the official position of MarsBit.


Source: Kyle

Original title: Why is modular blockchain design the future?

Ethereum's development is reaching a new level of maturity. The gap between where Ethereum stands today and the roadmap it has defined is narrowing rapidly. It is clear that Ethereum is converging on a modular design architecture. The appeal of modular design is that optimizing each module amplifies the optimization of the others. So what does a modular blockchain look like, and how does it work?

What is a modular blockchain?

A modular blockchain is a blockchain that focuses on handling a few responsibilities and outsources the rest to one or more independent layers. To understand how modular blockchains work, we first need to look at the "responsibilities" of a conventional, monolithic blockchain: consensus, execution, data availability, and settlement.

1. Consensus

Consensus refers to the mechanism by which nodes agree on which data on the blockchain is verified as true and accurate. The consensus protocol determines how transactions are ordered and how new blocks are added to the chain.

2. Execution

Execution is how nodes on the blockchain process transactions to transition the blockchain from one state to the next. Nodes participating in consensus must execute the transactions against their copy of the blockchain before validating a block.

3. Data Availability

A blockchain enforces rules requiring transaction data to be available. This means block producers must publish the data for every block so that network peers can download and store it, and serve it on request.

4. Settlement

Finally, the blockchain provides "finality": the assurance that transactions committed to the chain's history are irreversible (or "immutable"). For this, the blockchain must be confident in the validity of transactions. The settlement function therefore requires the chain to verify transactions, verify proofs, and arbitrate disputes.

How does a modular blockchain work?

Modular blockchains work on the principle of modularity: separating a system into distinct components that can be combined in various ways to achieve specific goals. Modularity depends on specialization: each component does only a few things, but it does them well. You can think of modular components as LEGO bricks that can be combined into different structures. Modular chains are components in a larger blockchain "modular stack" that can be combined for different purposes. Modular blockchains act as "pluggable modules" that can be swapped or combined with one another depending on the use case.

A modular blockchain can be designed to handle one or more of the following tasks (a rough interface sketch of this composition follows the list):

  • Execution: processes transactions and enables the deployment of and interaction with smart contracts.
  • Data availability: guarantees that transaction data is available.
  • Consensus: reaches agreement on the content and ordering of transactions.
  • Settlement: provides a layer for finalizing transactions, resolving disputes, verifying proofs, and bridging between different execution layers.
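
As a rough illustration of the "pluggable modules" idea, here is a minimal sketch in Python. The interfaces and names (ExecutionLayer, ModularChain, and so on) are hypothetical, not taken from any real client; the point is only that a chain can be assembled by composing independent layers:

```python
# A minimal sketch of modular layers as swappable components.
# All names here are hypothetical, chosen only for illustration.

from dataclasses import dataclass
from typing import Protocol


class ExecutionLayer(Protocol):
    def execute(self, transactions: list) -> bytes: ...        # returns the new state root


class DataAvailabilityLayer(Protocol):
    def publish(self, block_data: bytes) -> str: ...           # returns a data commitment


class ConsensusLayer(Protocol):
    def order(self, transactions: list) -> list: ...           # agrees on transaction ordering


class SettlementLayer(Protocol):
    def verify(self, state_root: bytes, proof: bytes) -> bool: ...


@dataclass
class ModularChain:
    """A chain assembled from independent, swappable layers, LEGO-style."""
    execution: ExecutionLayer
    consensus: ConsensusLayer
    data_availability: DataAvailabilityLayer
    settlement: SettlementLayer

    def process(self, transactions: list, proof: bytes) -> bool:
        ordered = self.consensus.order(transactions)        # agree on the ordering
        state_root = self.execution.execute(ordered)        # compute the new state
        self.data_availability.publish(b"".join(ordered))   # make the block data retrievable
        return self.settlement.verify(state_root, proof)    # finalize / arbitrate
```

Swapping which implementation sits behind each field is what changes a chain's trust and scaling properties; the composition logic itself stays the same.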

Rollups are an example of modular blockchains. A rollup chain handles transactions (execution) but outsources consensus, data availability, and settlement to a parent chain. Modular chains can typically perform two or more functions, especially when those functions are interdependent. For example, a data availability layer must also reach consensus on the ordering of its data; otherwise there is no way to know which data represents the correct version of history.
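
Continuing the hypothetical sketch above, a rollup can be pictured as a ModularChain that keeps execution for itself while pointing consensus, data availability, and settlement at its parent chain (the stub classes below are illustrative only):

```python
# Hypothetical stubs (illustrative only), reusing the ModularChain sketch above:
# the rollup supplies its own execution layer, while consensus, data availability,
# and settlement are all delegated to the parent L1.

class RollupVM:
    def execute(self, transactions):
        return b"new-state-root"        # stand-in for the rollup's state-transition logic


class ParentL1:
    def order(self, transactions):
        return sorted(transactions)     # stand-in for the parent chain's ordering
    def publish(self, block_data):
        return "da-commitment"          # stand-in for posting data to the parent chain
    def verify(self, state_root, proof):
        return True                     # stand-in for on-chain verification / dispute resolution


parent = ParentL1()
rollup = ModularChain(
    execution=RollupVM(),
    consensus=parent,
    data_availability=parent,
    settlement=parent,
)
print(rollup.process([b"tx2", b"tx1"], proof=b""))   # True
```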

A modular blockchain takes the components that today's monolithic L1 blockchains bundle together and separates them. It is like a division of labor: once each component is split out, each can be optimized on its own, producing a whole that is greater than the sum of its parts. There are three synergies here:

  • Modular PoS security can spread validators across more shards: the more validators are online, the more data can be safely supported, the more decentralized the system, and the larger its scale.
  • More shards on L1 have an amplified impact on rollups' execution capacity. A rollup compresses large amounts of data before posting it to an L1 shard, so any extra space in a shard has an outsized effect on the capacity available to rollups. The larger the scale, the faster the execution.
  • The more transaction activity takes place on rollups, the more total fees are paid to buy L1 block space. The more that is paid for block space, the more L1 validators earn. The more validators earn, the greater the incentive to run validators. More validators on L1 means more compute resources and more shards.

The advantage of modular design is that the optimization of each module amplifies the optimization of the other modules:

  • PoS increases decentralization and increases the number of shards on Ethereum.
  • More shards on Ethereum L1 increase the capacity of L2 rollups by orders of magnitude.
  • The larger scale of L2 rollups unlocks newly viable economic activity and ultimately increases the collective fees rollups pay to L1.
  • More collective fees paid to L1 strengthen the incentive to run validators, grow the validator pool, and allow more shards to be created.
  • And the loop repeats.

Unlocking the Future of Modular Blockchain

Although fraud proofs are a useful tool for distributed block verification, full nodes depend on block data being available in order to generate them. A malicious block producer may publish only the block header and withhold some or all of the corresponding data, preventing full nodes from verifying the block, identifying invalid transactions, and producing a fraud proof.
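
As a rough sketch of why the data itself is indispensable, the fraud-proof flow boils down to re-executing published block data and comparing the result with the header's claim. The functions and structures below are hypothetical simplifications (real systems prove a single disputed step rather than re-running a whole block):

```python
# A minimal sketch of the fraud-proof idea. The state-transition function is a
# stand-in; the point is that without the block data, the check cannot run at all.

from dataclasses import dataclass
import hashlib


@dataclass
class BlockHeader:
    parent_state_root: bytes
    claimed_state_root: bytes


def apply_transactions(state_root: bytes, transactions: list) -> bytes:
    """Stand-in state transition: hash-chain the transactions onto the old root."""
    acc = state_root
    for tx in transactions:
        acc = hashlib.sha256(acc + tx).digest()
    return acc


def check_block(header: BlockHeader, transactions):
    """Re-execute if the data is available; emit a 'fraud proof' on a mismatch."""
    if transactions is None:
        # Data withheld: no re-execution, hence no fraud proof -- this is
        # exactly the data availability problem described above.
        return "data unavailable: cannot verify"
    recomputed = apply_transactions(header.parent_state_root, transactions)
    if recomputed != header.claimed_state_root:
        return ("fraud proof", recomputed)   # evidence that the header's claim is wrong
    return "valid"
```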

Such a data-withholding attack is trivial for full nodes to handle: they can simply download the entire block and fork away from the invalid chain as soon as they notice that data is inconsistent or withheld. Light clients, however, will keep following the headers of the potentially invalid chain and fork away from the full nodes.

This is the essence of the data availability problem as it relates to fraud proofs: light clients must be sure that all transaction data for a block has been published before verifying it, so that full nodes and light clients automatically agree on the same canonical chain headers. Essentially, game theory dictates that a fraud-proof-based verification system without this guarantee will be exploited, leaving honest participants on the losing side.

How can a light client ensure that all the transaction data in a block has been published without downloading the entire block, which would centralize hardware requirements and defeat the purpose of a light client? One way is through a mathematical primitive called erasure coding. By adding redundant bytes to the block, an erasure code allows the entire block to be reconstructed even if a certain percentage of the data is missing.
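
The sketch below illustrates the erasure-coding idea with a toy Reed-Solomon-style code over a prime field; the field, chunk sizes, and parameters are simplified assumptions chosen for readability, not what production systems use:

```python
# Toy erasure code: k data chunks define a degree-(k-1) polynomial, which is
# evaluated at n > k points, so ANY k of the n coded chunks reconstruct the data.

P = 2**31 - 1  # a prime modulus; all arithmetic below is mod P


def _interpolate_at(points, x):
    """Evaluate, at x, the unique polynomial through the given (xi, yi) points."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # den^-1 via Fermat
    return total


def encode(data_chunks, n):
    """Systematically extend k data chunks to n coded chunks (x, value)."""
    base = list(enumerate(data_chunks))            # chunks 0..k-1 are the data itself
    return [(x, _interpolate_at(base, x)) for x in range(n)]


def reconstruct(surviving_chunks, k):
    """Recover the k original chunks from ANY k surviving coded chunks."""
    points = surviving_chunks[:k]
    return [_interpolate_at(points, x) for x in range(k)]


# Example: 4 data chunks extended to 8; half of the coded chunks go missing.
data = [101, 202, 303, 404]
coded = encode(data, n=8)
survivors = [coded[1], coded[4], coded[6], coded[7]]   # any 4 of the 8 will do
assert reconstruct(survivors, k=len(data)) == data
```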

This technique underpins data availability sampling, which allows light clients to determine probabilistically that an entire block has been published by randomly sampling small portions of it. Light clients can thus be confident that all the transaction data for a block has been made available before accepting it and following the corresponding block header.
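
A back-of-the-envelope calculation shows why a handful of random samples is enough. Assuming (as a simplification) that the block is 2x erasure-coded, a producer must withhold roughly half of the chunks to make the block unrecoverable, and each sample then has about a 50% chance of exposing the withholding:

```python
# If a fraction `withheld` of the coded chunks is missing, the chance that s
# independent uniform samples ALL miss the withheld portion (i.e., the light
# client is fooled) decays exponentially in s.

def prob_fooled(withheld: float, samples: int) -> float:
    """Probability that every sample lands on an available chunk even though
    a `withheld` fraction of the block is actually missing."""
    return (1.0 - withheld) ** samples


# Simplifying assumption: 0.5 withheld is the worst case a light client must detect.
for s in (10, 20, 30):
    print(f"{s} samples -> fooled with probability {prob_fooled(0.5, s):.2e}")
# roughly 9.8e-04, 9.5e-07, and 9.3e-10 respectively
```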

There are caveats to this technique: data availability sampling has high latency, and, much like an honest-minority assumption, its security guarantee relies on there being enough light clients performing samples for block availability to be determined probabilistically.

Validity Proofs and Zero-Knowledge Rollups

Another solution to distributed block verification is to remove the need for transaction data when verifying state transitions. Validity proofs take a more pessimistic view than fraud proofs: by eliminating the dispute process, they guarantee the atomicity of every state transition, requiring a proof for each one. This is achieved by leveraging the new zero-knowledge technologies SNARKs and STARKs.

Compared with fraud proofs, validity proofs demand more computation in exchange for stronger state guarantees, which affects scalability.

Zero-knowledge rollups always use validity proofs rather than fraud proofs for state verification. They follow a computation and verification model similar to optimistic rollups (although validity proofs are used by default rather than fraud proofs), via a sequencer/prover model in which the sequencer processes the computation and the prover generates the corresponding proofs.
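
The division of labor can be sketched as follows. The "proof" below is a mock placeholder with none of the cryptographic guarantees of a real SNARK or STARK; it only illustrates how the sequencer, prover, and on-chain verifier split the work:

```python
# Conceptual sketch of the sequencer/prover split: the sequencer orders and
# executes off-chain, the prover attests to the state transition, and the
# on-chain verifier accepts the new root without re-executing transactions.

import hashlib


def sequence_and_execute(prev_root: bytes, transactions: list) -> bytes:
    """Sequencer: order the transactions and compute the new state root off-chain."""
    acc = prev_root
    for tx in sorted(transactions):
        acc = hashlib.sha256(acc + tx).digest()
    return acc


def prove(prev_root: bytes, new_root: bytes) -> bytes:
    """Prover: stand-in for SNARK/STARK generation over the state transition."""
    return hashlib.sha256(b"mock-proof" + prev_root + new_root).digest()


def verify_on_chain(prev_root: bytes, new_root: bytes, proof: bytes) -> bool:
    """On-chain verifier: checks the proof, never the transactions themselves."""
    return proof == hashlib.sha256(b"mock-proof" + prev_root + new_root).digest()


prev = b"\x00" * 32
new = sequence_and_execute(prev, [b"tx-a", b"tx-b"])
assert verify_on_chain(prev, new, prove(prev, new))
```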

For example, Starknet starts with a centralized sequencer for bootstrapping and plans to gradually decentralize the sequencer and prover along its roadmap. Since execution happens off-chain on the sequencer, computation itself is effectively unbounded on a ZK rollup.

However, since the proofs of these computations must be verified on-chain, finality remains bottlenecked by proof generation. It is important to note that the light-client state-verification technique described above only applies to fraud-proof architectures. Because validity proofs guarantee that state transitions are valid, nodes no longer need transaction data to verify blocks. Yet the data availability problem persists for validity proofs, in a subtler form: even with state guaranteed, the transaction data is still needed so that nodes can update their state and serve it to end users. Rollups that use validity proofs therefore remain subject to data availability concerns.

Where we are now

The world of cryptocurrency is full of tribalism and politics. How a person behaves in crypto is shaped by which tribe they come from; motivations are driven by pre-existing beliefs and biases. Fortunately, code and math are immune to all of that. The entire article above could be rewritten without using the word "Ethereum", replacing it with the anonymous roadmap of any holistically optimized modular blockchain.

In fact, this architecture is not being pursued by Ethereum alone. Rollups are not just an Ethereum concern: Tezos has also adopted a rollup-centric roadmap, NEAR is designing sharded data availability, and Celestia is building a security and DA layer dedicated to rollups. The point is that if we went back in time, or jumped to a parallel universe and rolled the dice 10,000 more times, the cryptocurrency industry would arrive at the same modular-design conclusion 99.9% of the time.

This is the most logical conclusion of blockchain technology's development. The only reason it has "political ties" to Ethereum is that Ethereum has so far been the only ecosystem able to fully fund the R&D that brought us to this point. Over time, we will see all L1 blockchains either converge on a modular design structure (limiting L1 block space, pushing execution out to rollups, increasing the number of nodes) and compete to become a global non-sovereign currency, or shed the burden of consensus and data availability and simply port their execution environments onto a more decentralized chain.

Modular blockchain design also underscores why decentralization is the key attribute of a blockchain: it is what enables every other function. Ethereum solves the scalability trilemma by increasing decentralization rather than sacrificing it. Only by optimizing for decentralization do you get the benefits of modularity explained above.

Editor in charge: Felix