Polygon zkEVM And Scalability: A Controversial Topic Of Interest
- Polygon zkEVM launched, and its marketing sparked controversy among public-chain founders.
- Solana announced plans to ramp up network upgrades, but a fresh mainnet outage undercut the message.
- The scalability of zkEVM will remain a topic of debate.
On February 26, Polygon zkEVM put up a billboard for its mainnet opening, igniting a debate over its use of the phrase "Ethereum equivalence." According to Ryan Wyatt, CEO of Polygon Labs, the wording was more likely the result of poor communication between the team's marketing and technical staff.
The wording itself is a minor issue; what ultimately matters is actual L1/L2 performance and scalability. And on that front, critics argue that ZK L2s are not the best answer to the scalability problem.
The Polygon engineering team, the protagonist of the incident, responded in depth, explaining how zkRollups work and where their scalability bottleneck really lies.
Solana, meanwhile, announced plans to step up network upgrades and once again raised the "Ethereum killer" flag. Unfortunately, after several outages, its underlying mechanism has not been fundamentally redesigned; the changes are more post-crisis optimization, casting a shadow over the vision of a high-performance L1 replacing Ethereum.
This article will synthesize the opinions of many L1/L2 founders to explain in detail the important debate regarding public chain performance.
The EVM equivalence debate’s origins
Strictly speaking, Polygon zkEVM is a zkVM: an L2 that stays compatible with the Ethereum mainnet by mapping EVM execution into its own VM. It is called a zkVM because of the zk technology it introduces, but in a looser sense, as long as synchronization with the EVM is maintained, there is no harm in calling it a zkEVM.
Nevertheless, according to Ye Zhang, the founder of Scroll, EVM equivalence is not the same as Ethereum equivalence: the latter must also be compatible with the Ethereum mainnet at least at the level of data storage.
In general, none of the current L2s or Ethereum cross-chain bridges can claim to be equivalent to Ethereum. As Ethereum becomes more modular in the future, ever more complicated stacks will make the problem harder still.
zkEVM equivalence = zk + EVM
Ethereum equivalence = EVM + Storage (at least)
A simple example illustrates the difference: Optimism draws exactly this distinction in its product planning. According to Optimism:
- EVM equivalence: from the standpoint of dApp developers, building dApps on the OVM feels the same as building them on the Ethereum mainnet, so the OVM has EVM equivalence.
- Ethereum equivalence: from the standpoint of protocol developers, it is critical to stay highly consistent with Ethereum at the client, networking, consensus, and execution layers.
Beyond the question of EVM equivalence, it should be highlighted that the scalability debate is also at the heart of the current contest between Ethereum-based L2s and high-performance L1s.
Solana’s reservations regarding ZK’s scalability
On February 24, just two days before Polygon zkEVM went live, Solana co-founder Anatoly Yakovenko (Toly) tweeted to dispute ZK L2s' ability to solve the public-chain scaling problem, and to dispel public doubts about Solana's own working mechanism.
The following are his primary points:
- Real-time on-chain data requires consistent, sequential execution.
- A ZK proof scheme is viable only if proof generation time plus verification time is less than the real-time execution time it replaces.
- The ZK approach cannot handle large batches of data in real time; it can only produce intermittent summaries, so it cannot preserve the chain's real-time nature.
In his view, the ZK approach is therefore better suited to one-shot, low-frequency scenarios such as batch settlement, while a high-performance L1 like Solana is still needed for public-chain scaling.
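Toly's viability condition can be sketched in a few lines. The numbers below are purely illustrative assumptions, not measurements of any real prover; the point is only that proving cost is roughly fixed per batch, so it can be amortized over many transactions but dominates a single one.

```python
# Toy model of the viability condition: ZK "keeps up" only if proving
# plus verifying a batch takes less time than executing it in real time.

def zk_keeps_up(exec_time_per_tx: float,
                batch_size: int,
                prove_time_per_batch: float,
                verify_time_per_batch: float) -> bool:
    """True if the batch can be proven and verified faster than
    the chain produces it."""
    realtime_budget = exec_time_per_tx * batch_size
    return prove_time_per_batch + verify_time_per_batch < realtime_budget

# A single transaction: the fixed proving cost dominates, ZK loses.
print(zk_keeps_up(0.001, 1, prove_time_per_batch=30.0,
                  verify_time_per_batch=0.01))          # False

# A large settlement batch: proving cost is amortized, ZK wins.
print(zk_keeps_up(0.001, 100_000, prove_time_per_batch=30.0,
                  verify_time_per_batch=0.01))          # True
```

This is exactly why the argument concludes that ZK fits low-frequency batch settlement: amortization works in its favor there, while per-transaction real-time execution does not.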
Nevertheless, on February 25, Solana suffered a significant outage; the community, engineers, and validators restored the mainnet only after two restarts. DBCryptoX, a Twitter user critical of Solana's design, commented on a breakdown of Solana block contents that "the teal part is validator messages/votes and typically makes up 90-95%."
Solana runs PoS consensus augmented by a mechanism it calls "Proof of History" (PoH). PoH lets validators establish precisely when an event happened, and since nodes do not need to communicate to order a block, the network can run faster.
On the one hand, this technique provides a high degree of consistency and a TPS far beyond the Ethereum mainnet's; on the other, the flood of validator messages consumes significant block space, and when a problem develops it is difficult to reach agreement to restore the network. Solana has suffered at least one mainnet outage every year from 2021 through 2023.
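The PoH mechanism described above can be sketched as a sequential hash chain. Because each hash depends on the previous one, the chain cannot be computed in parallel, so the number of hashes between two events acts as a verifiable clock. This is a minimal illustration of the idea, not Solana's actual implementation.

```python
import hashlib

def poh_tick(state: bytes) -> bytes:
    # One "tick" of time: a single sequential SHA-256 step.
    return hashlib.sha256(state).digest()

def poh_record(state: bytes, event: bytes) -> bytes:
    # Mixing an event into the chain timestamps it: the event must
    # have existed before this point in the sequence.
    return hashlib.sha256(state + event).digest()

state = b"genesis"
for _ in range(1000):                      # 1000 ticks of "time" pass
    state = poh_tick(state)
state = poh_record(state, b"tx: alice pays bob")
# Any validator can re-run the chain from genesis and confirm
# exactly where in the sequence the event occurred.
```

The trade-off the article describes follows directly: verification needs no communication, but the stream of ticks and votes that proves the passage of time is itself data that fills block space.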
Downtime taught Solana hard lessons about scalability. Solana co-founder Toly argues that a ZK L2 prover cannot run in real time, so ZK L2 execution on chain will deviate from the prescribed order. Users must then either run full nodes, increasing network strain, or rely on a few honest nodes for efficiency, drifting toward centralization.
Unfortunately, the evidence so far suggests that the high-performance L1 Solana cannot solve the real-time execution challenge either. After all, a crashed network quickly loses its established ordering, and the data after a forced recovery reflects an artificially agreed "consensus" rather than the network's spontaneous state.
Polygon's rebuttal: the real key to zkRollup scalability
Polygon zkEVM was first questioned by Solana and then admitted the billboard blunder, drawing accusations of being unprofessional and misleading. Its lead developer, Jordi Baylina, therefore stepped up to defend the project's professionalism, focusing on the argument that the prover is not the limiting factor of ZK L2s: the real obstacle is DA (data availability).
The first point is zkRollup's operation flow, as shown in the figure below, which can be roughly divided into three steps:
- To keep the network synchronized, any Rollup architecture that touches L2 data must prove the validity of the batched messages so they can be confirmed when finally posted back to L1.
- Generating the aggregated proof requires a ZK proof scheme (ZKP). Parallel processing can speed this up, but batch-proof generation does take time; a dynamic mechanism could even scale the number of provers up or down with network demand.
- Once the ZKP runs, a complete tree-like proof network is generated over the data, allowing different servers to verify pieces in parallel and send the result to the L1 chain.
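The tree-shaped proof network can be illustrated as a pairwise reduction: batch proofs are combined layer by layer until a single root proof remains, and every pair within a layer is independent, so different servers can work in parallel. Real zkRollups use recursive SNARK/STARK aggregation; the plain hashing below only stands in for the shape of the computation, not its cryptography.

```python
import hashlib

def aggregate_pair(left: bytes, right: bytes) -> bytes:
    # Placeholder for recursively proving two child proofs at once.
    return hashlib.sha256(left + right).digest()

def aggregate_tree(proofs: list[bytes]) -> bytes:
    """Reduce a list of batch proofs to one root proof, layer by layer."""
    while len(proofs) > 1:
        if len(proofs) % 2:                 # carry an odd proof up by duplication
            proofs.append(proofs[-1])
        # Each pair in this comprehension is independent, so in a real
        # system separate servers can aggregate pairs concurrently.
        proofs = [aggregate_pair(proofs[i], proofs[i + 1])
                  for i in range(0, len(proofs), 2)]
    return proofs[0]

batch_proofs = [hashlib.sha256(f"batch-{i}".encode()).digest() for i in range(8)]
root = aggregate_tree(batch_proofs)   # the single proof posted to L1
```

The design choice matters for the debate: because aggregation is a tree rather than a chain, adding provers cuts wall-clock proving time, which is why Polygon argues the prover is not the bottleneck.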
The second point is cost, above all proving cost. The software, hardware, and time consumed by ZK operations are factored into transaction (TX) fees and ultimately reflected in the network's gas fee.
Different proof systems, such as STARKs, SNARKs, and PLONK, can greatly optimize the cost of using the network. The key point is that the data does not need to be processed in strict sequence, which makes parallel processing possible.
Therefore, the prover that Solana's Toly worries about does not hinder the operation of a ZK proof scheme; the real obstacle is data availability, which calls for solutions such as ETH 2.0, Danksharding, and EIP-4844.
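A back-of-the-envelope calculation shows why data availability dominates. The calldata prices (16 gas per nonzero byte, 4 per zero byte, per EIP-2028) are real Ethereum parameters; the batch shape is an illustrative assumption.

```python
# Rough cost of publishing a rollup batch's data as L1 calldata.
GAS_PER_NONZERO_BYTE = 16   # EIP-2028 pricing
GAS_PER_ZERO_BYTE = 4

def calldata_gas(nonzero_bytes: int, zero_bytes: int) -> int:
    return (nonzero_bytes * GAS_PER_NONZERO_BYTE
            + zero_bytes * GAS_PER_ZERO_BYTE)

# A hypothetical batch posting ~100 KB of mostly nonzero data:
batch_gas = calldata_gas(nonzero_bytes=90_000, zero_bytes=10_000)
print(batch_gas)   # 1480000 gas just to publish the data on L1
```

Since the L1 verifies one succinct proof but must still store all this data, cheapening the data layer, which is what EIP-4844's blob space and Danksharding aim to do, moves the needle far more than a faster prover.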
The debate around Polygon zkEVM will go on, and the key question is how effectively zkEVM handles today's scalability problem. As large-scale dApps and user volumes grow, many L1s and L2s will confront the same difficulty.
DISCLAIMER: The Information on this website is provided as general market commentary and does not constitute investment advice. We encourage you to do your own research before investing.