
Disclaimer: This article is intended to convey market information and does not constitute investment advice. It represents only the author's views, not the official position of MarsBit.
Source: Lee Bousfield
Original title: Comparing Optimism Bedrock to Arbitrum Nitro: why we at Arbitrum made the choices we did
An Optimism developer, @norswap, published a great blog post comparing Optimism Bedrock with Arbitrum Nitro! I thought I'd explain in a bit more detail here why we at @Arbitrum made these choices. I recommend reading that blog post first!
(A) Fixed vs. variable block time
We don't quite achieve one transaction per block. In practice, we currently create blocks up to four times per second, but if there are no new transactions, we skip creating a block entirely. This minimizes the delay before users get their transaction receipts.
How do we handle smart contracts that use block.number for timing? On Arbitrum, block.number actually returns the *L1* block number. If needed, you can get the L2 block number from the ArbSys precompile.
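To make the scheme above concrete, here is a toy Python model of a sequencer loop that skips empty slots and tags each block with the current L1 block number. All names and numbers are illustrative, not Nitro's actual code; the real sequencer is far more involved.

```python
def produce_blocks(slots):
    """Toy sequencer loop: `slots` is a list of (pending_txs, l1_block)
    pairs, one per 250ms slot ("up to 4 times per second"). A block is
    only produced when there are pending transactions."""
    blocks = []
    for pending_txs, l1_block in slots:
        if not pending_txs:
            continue  # no new transactions: skip creating a block
        blocks.append({
            "txs": pending_txs,
            # block.number on Arbitrum reports the L1 block number;
            # the true L2 block number comes from the ArbSys precompile.
            "l1_block_number": l1_block,
            "l2_block_number": len(blocks) + 1,
        })
    return blocks

# Four 250ms slots: two with transactions, two empty (skipped).
slots = [(["tx1"], 100), ([], 100), (["tx2", "tx3"], 100), ([], 101)]
blocks = produce_blocks(slots)
```

Note how two of the four slots produce no block at all, so a contract reading block.number in both produced blocks would see the same L1 block number 100.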
(B) Geth as a library vs. as an execution engine + state storage
As the blog post mentions, this is because Arbitrum has more L2-specific state, such as L1 and L2 gas pricing, as well as the retryables system. More on those below!
(C) Delayed inclusion of L1-to-L2 messages
We can reorg L2 when necessary (delayed_seq_reorg_test.go demonstrates this), but we really want to avoid it as much as possible because it hurts the user experience. Our goal is to never reorg L2, giving users a good, stable experience.
(D) L1-to-L2 Message Retry Mechanism
Retryables are definitely more complex, but the reason is that they don't rely on a trusted L2 gas price oracle on L1 to compute how much to charge. Instead, they charge at the L2 gas price directly, and if the price is too high, the ticket can simply be retried later.
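The retry logic can be sketched as a toy Python model. The class and method names here are illustrative, not Nitro's actual retryables API; the point is only that redemption is checked against the live L2 gas price, so no trusted L1 oracle is needed.

```python
class RetryableTicket:
    """Toy model of an L1-to-L2 retryable ticket: it carries a maximum
    fee per gas, and redemption only succeeds when the current L2 gas
    price fits within that maximum."""

    def __init__(self, max_fee_per_gas, gas_limit):
        self.max_fee_per_gas = max_fee_per_gas
        self.gas_limit = gas_limit
        self.redeemed = False

    def try_redeem(self, current_l2_gas_price):
        # The price check happens on L2 against the live gas price,
        # not against some L1-side estimate of L2 prices.
        if current_l2_gas_price > self.max_fee_per_gas:
            return False  # too expensive right now; retry later
        self.redeemed = True
        return True

ticket = RetryableTicket(max_fee_per_gas=50, gas_limit=100_000)
first = ticket.try_redeem(current_l2_gas_price=80)   # price spike: fails
second = ticket.try_redeem(current_l2_gas_price=30)  # retried later: succeeds
```

The first redemption attempt fails during a price spike, and the same ticket succeeds once the L2 gas price falls back within its budget.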
(E) L2 fee algorithm
With variable block times, a more complex gas pricing scheme is required. We were still deeply inspired by EIP-1559, though!
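One way an EIP-1559-style rule can be adapted to variable block times is to measure gas used against a per-second target scaled by the actual elapsed time, rather than a fixed per-block target. The sketch below illustrates that idea; the constants and the exact adjustment formula are made up for the example and are not Nitro's real pricing algorithm.

```python
def update_base_fee(base_fee, gas_used, elapsed_seconds,
                    speed_limit=7_000_000, adjustment_rate=0.125):
    """Toy EIP-1559 variant for variable block times: compare gas used
    against a per-second speed limit scaled by real elapsed time."""
    target = speed_limit * elapsed_seconds
    # Fractional over/under-utilisation, clamped to [-1, 1].
    utilisation = max(-1.0, min(1.0, (gas_used - target) / target))
    new_fee = base_fee * (1 + adjustment_rate * utilisation)
    return max(new_fee, 0.1)  # keep a small floor, like a minimum base fee

fee = 100.0
fee_busy = update_base_fee(fee, gas_used=14_000_000, elapsed_seconds=1)
fee_idle = update_base_fee(fee, gas_used=0, elapsed_seconds=1)
```

Under congestion the fee rises, and during idle periods it decays, just as with EIP-1559, but the target adapts to however much wall-clock time passed since the last block.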
(F) L1 fee algorithm
What we want to avoid is the sequencer overcharging: posting batches only when gas prices are low while charging users a moving average of all L1 gas prices. Our L1 pricer tracks the fees actually paid by batch posters to prevent this.
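The accounting idea can be sketched as follows. This is a toy model with made-up names and a deliberately crude price adjustment, not Nitro's real L1 pricer: the point is that users are charged a per-byte rate that trends toward repaying what batch posters actually spent, instead of a moving average of all L1 gas prices.

```python
class L1Pricer:
    """Toy L1 fee accounting: track the deficit between what batch
    posters actually paid on L1 and what users have been charged."""

    def __init__(self, price_per_byte=10):
        self.price_per_byte = price_per_byte
        self.owed_to_posters = 0  # posters' actual costs minus user fees

    def record_batch_posted(self, l1_cost):
        self.owed_to_posters += l1_cost

    def charge_user(self, calldata_bytes):
        fee = calldata_bytes * self.price_per_byte
        self.owed_to_posters -= fee
        return fee

    def adjust_price(self):
        # Nudge the per-byte price toward closing the deficit/surplus.
        if self.owed_to_posters > 0:
            self.price_per_byte += 1
        elif self.owed_to_posters < 0 and self.price_per_byte > 1:
            self.price_per_byte -= 1

pricer = L1Pricer()
pricer.record_batch_posted(1_000)  # a poster actually spent 1000 on L1
fee = pricer.charge_user(50)       # a user posts 50 bytes of calldata
pricer.adjust_price()              # still owed money, so the price rises
```

Because the pricer reconciles against real posting costs, a sequencer that only posts batches during cheap L1 periods ends up charging users correspondingly less.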
(G) Fraud-proof instruction set
Because WASM without extensions has no threads, we don't need to do any extra pruning for concurrency: the Go compiler runs everything on a single WASM thread using green threads. This difference is partly what allows us to prove the complete geth, rather than a slimmed-down minigeth like Optimism's Cannon. We just need to implement the API that Go expects from its WASM host, which we do in the WASM module here:
https://github.com/OffchainLabs/nitro/blob/master/arbitrator/wasm-libraries/go-stub/src/lib.rs
(H) Bisection Game Structure
Not only is there less hashing, you don't actually need to execute any undisputed blocks in WASM! We still execute them anyway to be safe, but the rollup protocol doesn't require it.
Speaking of which, one advantage of staying close to WASM is that for re-executing blocks as a safety check, we can use an off-the-shelf WASM JIT instead of our custom WAVM interpreter. This makes checking that a block executed correctly very fast!
JIT validator: https://github.com/OffchainLabs/nitro/pull/1079 adds a faster, JIT-accelerated validator to confirm block correctness on limited hardware.
(I) Preimage Oracle
We don't actually use the preimage oracle explicitly to resolve any L1 data. Instead, because we record a hash for each batch in the bridge, we have a ReadInboxMessage opcode that both retrieves that hash and resolves it to its preimage in a single instruction.
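A toy model of that single-instruction lookup is below. The class and method names are illustrative, not Nitro's actual code: the point is that the bridge stores one hash per batch, and a single operation maps a batch index to the hash and then to its full preimage.

```python
import hashlib

def h(data):
    return hashlib.sha256(data).hexdigest()

class Inbox:
    """Toy inbox: the bridge records a hash per batch, and a single
    ReadInboxMessage-style operation resolves an index to the full
    message via the recorded hash."""

    def __init__(self):
        self.batch_hashes = []  # what the bridge records on L1
        self.preimages = {}     # hash -> full message data

    def post_batch(self, data):
        digest = h(data)
        self.batch_hashes.append(digest)
        self.preimages[digest] = data

    def read_inbox_message(self, index):
        # One "opcode": fetch the recorded hash, then its preimage.
        digest = self.batch_hashes[index]
        return self.preimages[digest]

inbox = Inbox()
inbox.post_batch(b"batch-0")
inbox.post_batch(b"batch-1")
msg = inbox.read_inbox_message(1)
```

In the real fraud-proof setting, the preimage store is the oracle the prover consults; here it is just a dictionary.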
(J) Large Preimages
As the blog post mentions, we avoid this problem by simply making sure our preimages are small enough. For example, we Merkleize data availability batches to ensure that any given preimage is small, even if the batch as a whole isn't.
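The Merkleization trick works because a prover only ever has to supply one small chunk plus hashes, never the whole batch. Here is a minimal sketch; the 32-byte chunk size and tree construction are illustrative choices for the example, not Nitro's actual parameters.

```python
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def merkle_root(chunks):
    """Toy Merkleization: hash each small chunk into a leaf, then pair
    up hashes level by level until a single root remains."""
    level = [h(c) for c in chunks]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

CHUNK = 32                    # small, fixed-size leaves
batch = b"x" * 200            # a "large" data availability batch
chunks = [batch[i:i + CHUNK] for i in range(0, len(batch), CHUNK)]
root = merkle_root(chunks)
```

Every preimage the fraud prover might need (a leaf chunk or a pair of child hashes) is bounded in size, even though the batch itself can grow arbitrarily large.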
(K) Batches and State Roots
We don't actually tie our sequencer batches to state roots. The sequencer posts each batch to the sequencer inbox, which stores them in the bridge; validators later post an RBlock asserting the state after some number of batches.
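The decoupling described above can be modeled in a few lines. All names here are illustrative, not the actual contracts: batches carry no state root at all, and a single validator assertion can cover several batches at once.

```python
class Bridge:
    """Toy model of the batch/assertion split: the sequencer posts
    batches (no state roots attached), and a validator separately
    posts an RBlock asserting the state after some number of them."""

    def __init__(self):
        self.batches = []
        self.rblocks = []

    def sequencer_post_batch(self, batch):
        self.batches.append(batch)  # note: no state root attached here

    def validator_post_rblock(self, batches_covered, state_root):
        assert batches_covered <= len(self.batches)
        self.rblocks.append((batches_covered, state_root))

bridge = Bridge()
bridge.sequencer_post_batch("batch-0")
bridge.sequencer_post_batch("batch-1")
# One assertion covers both batches at once.
bridge.validator_post_rblock(batches_covered=2, state_root="0xabc")
```

Because assertions are posted independently of batches, the sequencer's job stays cheap and frequent while state claims can be batched up and disputed separately.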
(L) Miscellaneous
(i) I'm curious how Optimism knows whether a batch contains any garbage before processing it. I assume "garbage" includes a tx whose sender can't pay, which depends on the prior state. In practice, though, nobody has actually posted garbage.
(ii) As mentioned in (B), we have more state, so we need precompiles to let users easily access that state.
(iii) I'm not sure that's how we estimated the gas cost of bisection, but hardhat-gas-reporter is helpful for reporting gas costs.
Anyway: this is a great blog post and a very interesting comparison. Along the way I learned a lot about Optimism Bedrock, and I'm very excited about the future of rollups! If you're as excited as I am and you've read this far, we're hiring!
Editor in charge: Kate