Key Points:
In the past year or two, zk tools have improved tremendously. Developers of ordinary software can take advantage of the powerful properties of zk without needing a deep understanding of the daunting underlying mathematics and engineering. On the other hand, there has been a proliferation of tools for advanced users, giving zk experts extremely fine-grained control over the zk stack.
Modern software is built on countless layers of abstraction to maximize the productivity of experts. Abstraction in engineering has many advantages that are somewhat intuitive — web developers don’t need a deep understanding of how an operating system works.
The key to building good, reusable abstraction layers is to encapsulate the complexity of one layer and then provide simple but expressive interfaces to higher layers in the stack. Done right, this enables developers with different areas of expertise and knowledge to build useful tools across the stack.
Not surprisingly, these principles also apply to zk systems, and these abstraction layers are becoming mature enough that a zk novice can start using them and building applications today.
Arkworks-rs is an ecosystem of Rust libraries that provide efficient and secure implementations of the subcomponents of a zkSNARK application. Arkworks gives developers the interfaces they need to customize the zk software stack without reimplementing functionality that other libraries already provide.
Before Arkworks, the only way to create a new zk application was to build everything from scratch. The main advantages of Arkworks-rs are its flexibility, reduced re-engineering, and reduced audit effort compared to custom vertically integrated tools. Well-defined interfaces between Arkworks components let the stack be upgraded quickly enough to keep pace with rapid innovation in zk technology, without forcing teams to rebuild everything from scratch.
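To make that component layout concrete, here is a minimal sketch (not production code) of an arkworks-style circuit proving knowledge of two factors of a public product, using Groth16 over BLS12-381. Crate and method names follow recent arkworks releases, but exact APIs shift between versions, so treat the details as assumptions rather than a definitive implementation.

```rust
// Hedged sketch: prove knowledge of witnesses a, b such that a * b = c (public),
// wiring together arkworks components (ark-relations, ark-r1cs-std, ark-groth16).
use ark_bls12_381::{Bls12_381, Fr};
use ark_groth16::Groth16;
use ark_r1cs_std::{alloc::AllocVar, eq::EqGadget, fields::fp::FpVar};
use ark_relations::r1cs::{ConstraintSynthesizer, ConstraintSystemRef, SynthesisError};
use ark_snark::SNARK;

#[derive(Clone)]
struct MulCircuit {
    a: Option<Fr>, // private witness
    b: Option<Fr>, // private witness
    c: Option<Fr>, // public input
}

impl ConstraintSynthesizer<Fr> for MulCircuit {
    fn generate_constraints(self, cs: ConstraintSystemRef<Fr>) -> Result<(), SynthesisError> {
        let a = FpVar::new_witness(cs.clone(), || self.a.ok_or(SynthesisError::AssignmentMissing))?;
        let b = FpVar::new_witness(cs.clone(), || self.b.ok_or(SynthesisError::AssignmentMissing))?;
        let c = FpVar::new_input(cs, || self.c.ok_or(SynthesisError::AssignmentMissing))?;
        // Enforce a * b == c inside the constraint system.
        (&a * &b).enforce_equal(&c)
    }
}

fn main() {
    let mut rng = ark_std::test_rng();
    let (a, b) = (Fr::from(3u64), Fr::from(5u64));
    let c = a * b;
    let circuit = MulCircuit { a: Some(a), b: Some(b), c: Some(c) };

    // Setup, prove, verify: each step is backed by swappable arkworks components.
    let (pk, vk) = Groth16::<Bls12_381>::circuit_specific_setup(circuit.clone(), &mut rng).unwrap();
    let proof = Groth16::<Bls12_381>::prove(&pk, circuit, &mut rng).unwrap();
    assert!(Groth16::<Bls12_381>::verify(&vk, &[c], &proof).unwrap());
}
```

Swapping the curve or the proof system here is mostly a matter of changing the type parameters, which is the kind of flexibility the prose above describes.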
In order to create a proof about some computation, the computation must first be expressed in a form that a zkSNARK system can understand. Several projects have created domain-specific languages (DSLs) that allow application developers to express their computations in this way. These languages include Aztec Noir, Starknet’s Cairo, Circom, ZoKrates, and Aleo’s Leo, to name a few. The underlying proof system and mathematical details are generally not exposed to application developers.
Developers of zkApps must be proficient in writing programs in domain-specific languages. Some of these languages look a lot like familiar programming languages, while others can be quite difficult to learn. Let’s analyze a few of them.
Cairo – StarkWare’s DSL, required for building applications on Starknet. It compiles to a Cairo-specific assembly language, which is interpreted by the Cairo zkVM.
ZoKrates – ZoKrates is a toolkit for the common needs of SNARKs, including a high-level language for writing circuits. ZoKrates also has some flexibility in terms of curves, proof schemes, and backends, allowing developers to hot-swap via simple CLI parameters.
Circom – Circom is a specialized language for building arithmetic circuits. It is currently the de facto language for producing circuits. The language is not particularly ergonomic, making the developer acutely aware that a circuit is being written.
Leo – Leo was developed as the language of the Aleo blockchain. It has Rust-like syntax and was designed specifically for expressing state transitions inside a blockchain.
Noir – Rust-inspired syntax. The toolchain is built around an intermediate representation (IR) rather than the language itself, which means it can support arbitrary front ends.
These languages are aimed at any application developer who wants to leverage zk’s unique properties in their applications.
Some of these languages have been battle-tested with billions of dollars flowing through chains like Zcash and Starknet. While some of the projects we’ll discuss aren’t quite ready for production, writing circuits in one of these languages is by far the best strategy unless you need the finer control that a toolkit like Arkworks offers.
The main goal of zkEVM is to take Ethereum state transitions and prove their validity using succinct zero-knowledge correctness proofs. As mentioned in Vitalik’s post, there are many ways to do this, with subtle differences and corresponding tradeoffs.
The main technical difference between all these approaches is where exactly in the language stack the computation is transformed into a form the proof system can consume (arithmetization). In some zkEVMs, this happens at the level of a high-level language (Solidity, Vyper, Yul), while others try to prove the EVM all the way down to the opcode level. The tradeoffs between these approaches are covered in depth in Vitalik’s post, but I’ll summarize them in one sentence: the lower in the stack the arithmetization happens, the bigger the performance penalty.
The main challenge in creating proofs for virtual machines is that, for every instruction actually executed, the circuit grows in proportion to the set of all possible instructions. This is because the circuit does not know in advance which instructions a given program will execute, so it has to support all of them.
In general-purpose circuits, the cost of each instruction executed is proportional to the sum of all supported instructions.
What this means in practice is that you pay (in performance) for the most expensive instruction even when you are only executing the simplest one. This leads to a direct tradeoff between generality and performance: as you add instructions for generality, every step you prove gets more expensive. This is the fundamental problem with general-purpose circuits.
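As a toy illustration of that cost model, the sketch below treats the per-step cost as the sum of the costs of all supported instructions. The numbers are made up for illustration and are not measurements of any real zkVM.

```rust
// Toy model of the general-purpose-circuit cost described above: every proved
// step pays for every supported instruction, whether or not it was executed.
fn general_circuit_cost(per_instruction_cost: &[u64], steps: u64) -> u64 {
    let cost_per_step: u64 = per_instruction_cost.iter().sum();
    steps * cost_per_step
}

fn main() {
    // Hypothetical ISA: ADD = 1, MUL = 2, KECCAK = 5_000 constraints per step.
    let costs = [1, 2, 5_000];
    // Even a program that only ever runs ADD pays the KECCAK cost at each step.
    println!("{}", general_circuit_cost(&costs, 1_000)); // 5_003_000 constraints
}
```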
But with new developments in techniques like IVC (Incrementally Verifiable Computation), this limitation can be mitigated by breaking the computation into smaller chunks, each handled by a dedicated, smaller subcircuit.
Today’s zkEVM implementations use different strategies to mitigate this problem. For example, zkSync removes the more expensive operations (mainly cryptographic precompiles such as hashes, and a few others) from the general-purpose circuit.
The ideal customers for a zkEVM are smart contract applications that need to be orders of magnitude cheaper than transactions on L1 Ethereum. These developers don’t necessarily have the expertise or bandwidth to write zk applications from scratch, so they prefer to write applications in a familiar high-level language such as Solidity.
Scaling Ethereum is currently the most in-demand application of zk technology.
zkEVM is an Ethereum scaling solution that frictionlessly alleviates the congestion that limits L1 dApp developers.
The goal of zkEVM is to support a developer experience as close as possible to current Ethereum development. Full support for Solidity means teams don’t have to build and maintain multiple codebases. In practice this is somewhat idealistic, since a zkEVM has to trade away some compatibility to generate proofs of reasonable size in a reasonable amount of time.
The main difference between zkSync and Scroll is where in the stack they arithmetize – that is, where they transition from normal EVM constructs to SNARK-friendly representations. For zkSync, this happens when the Yul intermediate representation is compiled down to their own custom zk instruction set. For Scroll, it happens at the very end, when the execution trace is generated from the actual EVM opcodes.
So with zkSync, everything looks the same as the EVM until the zk bytecode is generated; with Scroll, everything is the same until the actual bytecode is executed. This is a subtle difference that trades performance for compatibility. For example, zkSync won’t support EVM bytecode tooling (such as debuggers) out of the box, because it runs a completely different bytecode, while Scroll will have a hard time getting good performance out of an instruction set that was never designed for zk. Both strategies have pros and cons, and ultimately many exogenous factors will affect their relative success.
zkLLVM is designed as an extension of the existing LLVM infrastructure, an industry-standard toolchain that supports many high-level languages such as Rust, C, C++, and more.
A user who wants to prove some computation can simply implement it in C++. zkLLVM takes the high-level source code (currently C++, via its modified clang compiler) and produces an intermediate representation of the circuit. At this point the circuit is ready, but the user will typically want to prove it against some dynamic inputs.
To handle dynamic inputs, zkLLVM has an additional component called the assigner, which generates an assignment table containing all inputs and witnesses, fully preprocessed and ready to be proved along with the circuit.
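To make the pipeline concrete, here is a heavily hedged sketch: the function below is the kind of ordinary code such a toolchain consumes (written in Rust for consistency with the other examples in this article, even though zkLLVM’s primary frontend today is C++), and the comments simply restate the stages described above. The real compiler invocation and circuit entry-point annotations are toolchain-specific and are deliberately omitted rather than guessed at.

```rust
// 1. The developer writes a plain function with no zk-specific code in it.
fn balance_after_fees(balance: u64, fee_bps: u64) -> u64 {
    balance - (balance * fee_bps) / 10_000
}

// 2. The modified compiler lowers this function to a circuit IR.
// 3. The assigner fills in an assignment table from the concrete inputs
//    (e.g. balance = 1_000_000, fee_bps = 30) and the resulting witnesses.
// 4. Circuit plus assignment table are handed to a prover, which produces
//    a proof anyone can verify.

fn main() {
    println!("{}", balance_after_fees(1_000_000, 30));
}
```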
These two components are all that is needed to generate proofs. In theory, users could generate proofs themselves, but since this is a fairly specialized computational task, it can make sense to pay someone else with the right hardware to do it. For this counterparty-discovery mechanism, =nil; Foundation is also building a “proof market” where provers compete to prove computations for users who pay them. This free-market dynamic should lead provers to optimize the most valuable proof tasks.
Since each computational task to be proved is unique and generates a different circuit, the number of circuits a prover needs to handle is unbounded. This forced generality makes optimizing individual circuits difficult. The proof market allows for specialization on the circuits the market deems valuable; without it, convincing provers to optimize any given circuit would run into a natural cold-start problem.
Another tradeoff is the classic one between abstraction and control. Users who adopt this easy-to-use interface cede control of the underlying cryptographic primitives. For many users this is a perfectly reasonable trade, as it is often better to let a cryptography expert make these decisions for you.
Advantages:
Users can write code in a familiar high-level language.
All zk internals are abstracted away and hidden from the user.
Does not rely on a specific “virtual machine” circuit that adds extra overhead.
Shortcomings:
Every program produces a different circuit, which is difficult to optimize (the proof market partly addresses this).
Swapping or upgrading the internal zk libraries is not trivial (it requires forking).
The term zkVM covers all zk virtual machines, while zkEVM is a specific type of zkVM that deserves separate treatment because of its popularity today. In addition to custom crypto-native VMs, several other projects are working on more general ISA-based zkVMs.
Rather than proving the EVM, such a system proves a different instruction set architecture (ISA), such as RISC-V or WASM. Two projects working on these general-purpose zkVMs are RISC Zero and zkWASM.
Let’s take a deep dive into RISC Zero here to demonstrate how this strategy works and some of its advantages/disadvantages.
RISC Zero is able to prove any computation performed on the RISC-V architecture. RISC-V is an open-source instruction set architecture (ISA) standard that has been growing in popularity. The idea of RISC (Reduced Instruction Set Computer) is to build an extremely simple instruction set with minimal complexity. This pushes more of the work onto compilers and developers higher up the stack, while keeping the hardware implementation simple.
This philosophy also applies to computing in general, and ARM chips have been taking advantage of RISC-style instruction sets and are beginning to dominate the market for mobile chips. It turns out that simpler instruction sets are also more energy and chip-area efficient.
This analogy carries over quite well to the efficiency of generating zk proofs. As discussed earlier, when proving an execution trace you pay the summed cost of all supported instructions for every step in the trace, so fewer and simpler instructions are better.
From a developer’s perspective, using RISC Zero to handle zk proofs is a lot like using AWS Lambda functions to handle backend server architecture. Developers interact with RISC Zero or AWS Lambda by simply writing code, and the service handles all backend complexities.
For RISC Zero, developers write Rust or C++ (eventually anything targeting RISC-V). The system then takes the ELF file generated during compilation and uses it as the input program for the virtual machine circuit. Developers simply call `prove`, which returns a receipt object (a zk proof attesting to the execution trace), and anyone, anywhere, can call `verify` on it. From the developer’s point of view, there is no need to understand how zk works; the underlying system handles all of that complexity.
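As a rough illustration of that flow, here is a hedged sketch of a RISC Zero host program, with the guest shown in comments. The `my_methods`, `MY_GUEST_ELF`, and `MY_GUEST_ID` names are placeholders for the constants that risc0’s build scripts generate for a guest crate, and the prover API has changed across risc0 releases, so treat this as an outline rather than a definitive implementation.

```rust
// Guest program (its own crate, compiled by risc0's build tooling to a RISC-V ELF):
//
//     use risc0_zkvm::guest::env;
//     fn main() {
//         let n: u64 = env::read();   // input sent from the host
//         env::commit(&(n * n));      // public output written to the journal
//     }

// Host program. MY_GUEST_ELF / MY_GUEST_ID stand in for the generated constants.
use my_methods::{MY_GUEST_ELF, MY_GUEST_ID}; // hypothetical generated crate
use risc0_zkvm::{default_prover, ExecutorEnv};

fn main() {
    let input: u64 = 7;

    // Run the guest inside the zkVM and produce a receipt (the execution proof).
    let env = ExecutorEnv::builder().write(&input).unwrap().build().unwrap();
    let receipt = default_prover().prove(env, MY_GUEST_ELF).unwrap().receipt;

    // Anyone holding the receipt and the guest's image ID can verify it
    // and read the committed output from the journal.
    receipt.verify(MY_GUEST_ID).unwrap();
    let squared: u64 = receipt.journal.decode().unwrap();
    println!("the guest provably computed {squared}");
}
```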
Supporting such a general interface requires a lot of overhead in terms of proof size and generation speed.
Significant improvements to proof generation techniques are needed to enable broad support for existing libraries.
For some basic, reusable circuits that are particularly useful in blockchain applications (or elsewhere), a team may already have built and optimized the circuit for you; you just provide the inputs for your specific use case. For example, Merkle inclusion proofs are commonly needed in cryptocurrency applications (airdrop lists, Tornado Cash, etc.). As an application developer, you can reuse these battle-tested circuits and just modify some layers on top to create a unique application.
For example, Tornado Cash’s circuits could be repurposed for a private airdrop application or a private voting application. Manta and Semaphore are building complete toolkits of general-purpose circuit gadgets like these that can be used from Solidity contracts with little or no knowledge of the underlying zk moon math.
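For concreteness, here is a plain-Rust sketch of the Merkle inclusion check that such gadgets implement, using the `sha2` crate. Real zk gadgets express the same logic as circuit constraints and usually prefer SNARK-friendly hashes such as Poseidon; the function names here are illustrative and not taken from any particular library.

```rust
use sha2::{Digest, Sha256};

// Hash an ordered pair of 32-byte nodes into their parent node.
fn hash_pair(left: &[u8; 32], right: &[u8; 32]) -> [u8; 32] {
    let mut h = Sha256::new();
    h.update(left);
    h.update(right);
    h.finalize().into()
}

/// Walk from the leaf to the root; `path` holds (sibling, sibling_is_left) pairs.
fn verify_inclusion(leaf: [u8; 32], path: &[([u8; 32], bool)], root: [u8; 32]) -> bool {
    let mut node = leaf;
    for (sibling, sibling_is_left) in path {
        node = if *sibling_is_left {
            hash_pair(sibling, &node)
        } else {
            hash_pair(&node, sibling)
        };
    }
    node == root
}
```

Inside a circuit, the same walk becomes a fixed-depth chain of hash gadgets and equality constraints, with the leaf and path as private witnesses and the root as a public input.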
As discussed in detail, there are countless different options for developing zk applications, all with their own unique tradeoffs.
This chart will help summarize this decision matrix for choosing the best tool for the job based on your level of zk expertise and performance needs. This is not a complete list and will be updated as zk develops.
[Decision matrix chart: each tooling option is summarized by its applicable scenarios, the scenarios where it is not applicable, and the suggested tools. Most cells did not survive extraction; one remaining entry reads: “Requires fine-grained control over proof backends (currently, backends can be swapped for some DSLs).”]
zk is at the cutting edge of several technologies, and building it requires a deep understanding of mathematics, cryptography, computer science, and hardware engineering. However, with more and more layers of abstraction becoming available every day, application developers don’t need a Ph.D. to take advantage of the power of zk. Over time, by optimizing all levels of the stack, the proof-time constraints will gradually lift, and we may see simpler tools for the average developer.