Disclaimer: This article is intended to convey market information only and does not constitute investment advice. It represents only the author's views, not the official position of MarsBit.
Source: MarsBit
Recently, Vitalik published a detailed post about layer 3s.
So, do I have anything to add? I always thought I didn't, which is why I've never written about the topic -- it's been 10 months since I first discussed it with someone at StarkWare. But today I think I can approach it from a different angle.
The first thing to know is that "web2" runs on roughly 100 million servers around the world. "Web3" is a pretty silly meme, because it is obviously a niche subset of "web2". But let's assume blockchains can carve out a small, sustainable and profitable market, attracting the use cases that genuinely require a distributed trust model and relatively modest compute (that is, nothing like supercomputers encoding millions of videos in real time on custom hardware). Suppose we only need 0.1% of "web2"'s computing power. That works out to roughly 100,000 servers for this small target market.
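For concreteness, here is the back-of-the-envelope arithmetic behind that figure; the server count and the 0.1% share are the assumptions stated above, not measurements:

```python
# Back-of-the-envelope sizing for the figures quoted above (assumptions, not data).
WEB2_SERVERS = 100_000_000   # assumed rough count of "web2" servers worldwide
BLOCKCHAIN_SHARE = 0.001     # assume blockchains capture 0.1% of that compute

target_servers = int(WEB2_SERVERS * BLOCKCHAIN_SHARE)
print(f"Target market: ~{target_servers:,} servers")  # -> ~100,000 servers
```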
Now, consider a high-TPS monolithic chain like BNB Chain or Solana. Next to a chain that prioritizes security and decentralization, like Bitcoin, it looks impressive, but it still has to fit on a mid-range server, because you need to keep hundreds of entities in sync. Today a higher-end server has 128 cores, not 12; 1 TB of RAM, not 128 GB; and so on. The idea that a single ordinary server could meet all of this demand seems ridiculous. In fact, a truly successful on-chain game might need multiple high-end servers with 10x Solana's compute.
The next step is rollups. Although the design space for dedicated execution layers is large and constantly evolving, I'm talking here about rollups with a 1-of-N trust assumption. Because of that 1-of-N assumption (versus 51% of some large M), you no longer need thousands of nodes. So, all else being equal, rollups can move up to higher-performance servers. ZK rollups have a particular advantage: most nodes can simply verify a validity proof, so you only need a small number of full nodes on high-performance servers. Yes, you need provers, but each proof only has to be generated once, and proving times keep shrinking as software and hardware improve.
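To make that asymmetry concrete, here is a minimal sketch of why verification is so much cheaper than execution plus proving; every function is a hypothetical placeholder, not a real proving system:

```python
# Minimal sketch of why ZK rollups relax node requirements: execution and proving
# happen once on a powerful machine, while everyone else runs a cheap verifier.
# Every function here is a hypothetical placeholder, not a real proving system.

def execute_block(transactions):
    # Heavy: the sequencer/full node needs a high-performance server for this.
    return hash(tuple(transactions))          # stands in for the real state root

def prove_block(transactions, state_root):
    # Heavy too, but it only has to happen once, by the prover.
    return ("validity-proof", state_root)     # stands in for a real SNARK/STARK

def verify_proof(proof, claimed_root):
    # Cheap: any node (even a phone) can do this instead of re-executing.
    return proof == ("validity-proof", claimed_root)

txs = ["tx1", "tx2", "tx3"]
root = execute_block(txs)
proof = prove_block(txs, root)
assert verify_proof(proof, root)   # thousands of light verifiers, one heavy prover
```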
At some point, however, the rollup's node becomes the bottleneck. Today the biggest bottleneck is state growth. Assume state growth gets solved; the next bottleneck is blurrier, some mix of bandwidth/latency and computation, depending on the scenario. According to Dragonfly's benchmark, even for computationally light transactions like AMM swaps, BNB Chain tops out at 195 TPS and Solana at 273 TPS. As mentioned earlier, with far fewer nodes to keep in sync, rollups can push the bandwidth bottleneck further out, but they soon hit the compute bottleneck. Solana's devnet, which runs a configuration closer to a rollup's, demonstrates this: it manages 425 TPS instead of 273 TPS.
Then there's parallelization. Rollups like StarkNet and Fuel V2 are focused on parallel execution, and it's on the roadmap of other teams like Optimism as well. In theory you can run different dapps with different users on different cores, but in practice the gains here are likely to be fairly limited. MEV bots will touch all state all the time, and fees are set by the financial activity on the chain. So in reality you end up bottlenecked on a single core. This is a fundamental limitation of smart contract chains. That's not to say parallelization won't help -- it will. StarkNet's optimistic parallelization approach, for instance, is a clear net positive, because if a transaction can't be parallelized it simply falls back to the main core.
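As an illustration of the general idea (not StarkNet's actual engine), here is a sketch of optimistic scheduling: transactions with disjoint state access run concurrently, while anything touching contested state falls back to the single main core.

```python
# Illustrative sketch of optimistic parallel execution: transactions whose state
# access doesn't conflict run concurrently; anything that touches contested state
# (e.g. an MEV bot reading everything) is executed serially on the main core.

from concurrent.futures import ThreadPoolExecutor

def touched_keys(tx):
    return set(tx["reads"]) | set(tx["writes"])

def schedule(txs):
    parallel, serial, claimed = [], [], set()
    for tx in txs:
        keys = touched_keys(tx)
        if keys & claimed:
            serial.append(tx)      # conflict: must run on the single main core
        else:
            claimed |= keys
            parallel.append(tx)    # no conflict: safe to run concurrently
    return parallel, serial

txs = [
    {"id": 1, "reads": {"poolA"}, "writes": {"poolA"}},
    {"id": 2, "reads": {"poolB"}, "writes": {"poolB"}},
    {"id": 3, "reads": {"poolA", "poolB"}, "writes": {"poolA"}},  # MEV-style tx
]
parallel, serial = schedule(txs)
with ThreadPoolExecutor() as pool:
    list(pool.map(lambda tx: tx["id"], parallel))  # placeholder for real execution
print("parallel:", [t["id"] for t in parallel], "serial:", [t["id"] for t in serial])
```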
The idea that a 64-core CPU means 64x the potential throughput is badly wrong. First, as noted above, parallel execution only helps in certain scenarios. The bigger problem is that a 64-core CPU has significantly lower single-threaded performance. For example, the 64-core EPYC runs at a 2.20 GHz base clock, boosting to 3.35 GHz, while a 16-core Ryzen 9 on the same architecture runs at 3.4 GHz, boosting to 4.9 GHz. So the 64-core CPU is actually significantly slower for many transactions. Incidentally, the latest seventh-generation Ryzen 9, releasing a few weeks from now, boosts each core to 5.7 GHz, a 15% increase -- so compute does improve for everyone over time. But far more slowly than many people assume: it now takes 4-5 years to double.
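A rough Amdahl's-law style comparison makes the point; the 20% parallelizable fraction is purely an assumption for illustration, and the throughput units are arbitrary:

```python
# Amdahl's-law style comparison using the clock speeds quoted above and an
# assumed workload where only 20% of the work parallelizes cleanly.

def effective_throughput(cores, clock_ghz, parallel_fraction=0.2):
    serial = 1 - parallel_fraction          # bound by a single core's clock
    parallel = parallel_fraction / cores    # spread across all cores
    return clock_ghz / (serial + parallel)  # higher is better (arbitrary units)

epyc_64 = effective_throughput(cores=64, clock_ghz=3.35)   # 64-core EPYC, boost clock
ryzen_16 = effective_throughput(cores=16, clock_ghz=4.9)   # 16-core Ryzen 9, boost clock
print(f"64-core EPYC: {epyc_64:.2f}, 16-core Ryzen 9: {ryzen_16:.2f}")
# With a mostly serial workload, the higher-clocked 16-core part comes out ahead
# despite having a quarter of the cores.
```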
So, because a fast lead core matters so much, you realistically top out at something like the largest 16-core parts (incidentally, this is why a cheap Ryzen 5 gets twice the FPS of a 64-core EPYC in games). And even those cores are unlikely to be fully utilized, so at best we can expect a 2-5x improvement. For anything compute-intensive, we're looking at a few hundred TPS at most, even on the fastest execution layers.
One tempting solution is an ASIC virtual machine -- essentially one huge single core, 100x faster than a regular CPU core. A hardware engineer told me that turning the EVM into a lightning-fast ASIC is trivial, but it would cost hundreds of millions of dollars. Maybe that's worth it for something as financially significant as the EVM? The catch is that state management and validity proofs (i.e. zkEVM) would need to be rigidly specified first -- but perhaps it's something to consider in the 2030s.
Back to the present: what if we take the concept of parallelism to the next level? Instead of trying to stuff everything into one server, why not spread the work across many servers? This is where layer 3 comes in. For any compute-intensive application, an app-specific rollup becomes essential. The benefits:
· Optimized for a single application, with zero virtual machine overhead
· No MEV, or tightly limited MEV, with harmful MEV mitigated
· A specialized fee market also helps; you can additionally offer novel fee models for the best UX
· Hardware fine-tuned for the specific use case (a general smart contract chain will always have some bottleneck that doesn't fit your application)
· A solution to the transaction-quality trilemma: you can charge nothing, or trivial fees, and still fight spam with targeted DDoS mitigation, while censorship resistance is retained because users can always exit to the settlement layer (L2 or L1)
So why not an app-specific L1, such as a Cosmos zone, an Avalanche subnet or a Polygon supernet? The answer is simple: socioeconomic and security fragmentation. Revisit the problem statement: if we have 100,000 servers and each has its own validator set, that obviously doesn't work. If the validator sets overlap, every validator has to run multiple supercomputers; if every chain has its own validator set, security is minimal. Fraud proofs or validity proofs are currently the only way out. Why not sharding, like Polkadot or NEAR? There are hard limits -- each Polkadot shard can only do a few dozen TPS, and there are only 100 shards. Of course, they could pivot fully to the fractal scaling approach, and I hope they do -- among alt-L1s, Tezos is leading that charge.
It's worth noting that the design space for fraud-proven and validity-proven execution layers is very wide -- not everything needs to be a rollup. Validiums are a great solution for most low-value consumer transactions, or for chains run by an application or company for commercial use. Really, only high-value decentralized finance needs full Ethereum security and a rollup. And as data-layer ideas like adamantium and eigenDA develop, these alternatives can become nearly as secure as rollups in the long run.
I'll skip the part about how it all works, because StarkWare's Gidi and Vitalik have already covered it better than I could. But the point is this: you can have 1,000 layer 3s, layer 4s or whatever on top of a layer 2, all settled with a single succinct recursive validity proof, and only that needs to be settled on layer 1. So you can get effectively unlimited TPS (with varying properties, as discussed above), all verified by one succinct validity proof. The term "layer" is therefore quite limiting; if we ever reach that 100,000-server target, there will be all kinds of wild structures. Better to think of them as rollups, validiums, volitions or whatever else, and discuss the security properties of each.
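Conceptually, the settlement flow looks something like the sketch below; every function is a hypothetical placeholder, not any real proving stack. Each L3 produces its own proof, the L2 folds them into one recursive proof, and L1 verifies only that single proof.

```python
# Conceptual sketch of recursive proof aggregation across many L3s (placeholders,
# not a real proving stack).

def prove_l3_block(l3_id, block):
    return {"layer3": l3_id, "claim": hash(block)}     # per-L3 validity proof

def recursively_aggregate(proofs):
    # The L2 folds all L3 proofs (plus its own execution) into one recursive proof.
    return {"covers": len(proofs), "claim": hash(tuple(p["claim"] for p in proofs))}

def l1_verify(recursive_proof):
    return "claim" in recursive_proof                  # stand-in for a real verifier

l3_proofs = [prove_l3_block(i, f"block-{i}") for i in range(1000)]  # 1,000 L3s
single_proof = recursively_aggregate(l3_proofs)
assert l1_verify(single_proof)   # one succinct check on L1 covers all 1,000 L3s
```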
The obvious question now is composability. Interestingly, a validity-proven execution layer can compose atomically with its settlement layer, in one direction. The requirement is that a proof is generated for every block -- we're obviously not there yet, but it's achievable, since proof generation parallelizes. So you can compose atomically from layer 3 into layer 2. The catch is that you have to wait for the next block for the composition to land. For many applications this is no problem at all, and such applications can happily stay on the smart contract chain. And if the layer 2 offers some form of pre-confirmation, transactions between L3 and L2 can indeed be atomically composable.
The holy grail arrives when multiple sequencers/nodes can form a single unified state. I know that teams at StarkWare, Optimism and Polygon Zero, at least, are working on related solutions. I don't know much about the engineering required to get there, but it does seem very plausible. Indeed, Geometry has already made progress on this with Slush.
This is what real parallelism looks like. Once this is solved, you really can scale fractally with minimal compromises to security and composability. To recap: you have 1,000 sequencers forming one unified state, and one succinct validity proof is all you need to verify all 1,000 of them. So you inherit Ethereum's trust and retain full composability; some chains inherit full security, some partial security, but in every case the net gains over running 1,000 wildly fragmented monolithic L1s are enormous -- in scalability, security, composability and decentralization.
I expect the first batch of app-specific L3s to go live on the L2 StarkNet later this year. Of course, we'll first see existing L2s make the move. But the real potential will be unlocked by new applications we haven't seen before, applications that only become truly possible with fractal scaling. On-chain games and similar projects, such as Topology's Isaac or Briq, may be the first to deploy their own L3s.
The fractal scaling discussed here gives us abundant capacity: rollup fees today are already down to US cents (in fact, Immutable X, Sorare and others are at $0.00), yet that capacity goes largely unused. Which brings me back to the real bottleneck in the blockchain space: new applications. This is no longer a chicken-and-egg problem -- we had plenty of empty block space yesterday and we have plenty today, but no demand for it. It's time to focus on building novel applications that uniquely leverage the strengths of blockchains and serve real consumer and enterprise needs. I haven't seen enough commitment from the industry or from investors -- application-layer innovation has been almost non-existent since 2020. Needless to say, without those applications, any kind of scaling, fractal or otherwise, is a complete waste.
Editor in charge: Felix