Reconstructing Smart Contracts, Part II: Parallel Universes and Unlimited Scalability

The Reconstructing Smart Contracts series is the brainchild of the founders of Antshares, representing the insights they gained while designing Antshares smart contracts. The three-part series analyzes existing smart contract systems from three angles: determinism and resource control, scalability and decoupling, and universality and ecosystem compatibility. The series proposes a new philosophy for designing smart contracts.

In “Reconstructing Smart Contracts, Part I: The Ghost of Undeterminism,” we analyzed the importance of determinism, resource control, and the isolation of smart contracts. We concluded that virtual machines (VMs), when used as the runtime environment, have obvious advantages over the container platform Docker. In this second part, we will continue to compare the advantages and disadvantages of different smart contract systems, this time from the perspective of scalability, and give our insights for improving them.

The importance of runtime environment

The runtime environment is vital to the performance of a smart contract. Two major designs exist in blockchain architecture: virtual machines and Docker. Both function as a sandbox that executes smart contract code, isolating the execution and confining the resources it may use.

Virtual machines

A virtual machine usually refers to software that can execute programs as a physical machine would. Some VMs, such as VMware and Hyper-V, simulate a complete physical computer on which operating systems and applications can be installed. Others, such as the Java VM (JVM), only simulate functions at the application level, without touching the underlying hardware.

Smart contract platforms are seldom designed to simulate a complete physical computer, as such a design would consume enormous resources and would hardly be compatible with different hardware architectures, seriously impacting performance. Most blockchain networks therefore choose lighter VM designs. For example, Ethereum uses its own VM called the EVM, R3's Corda uses the JVM, and some blockchains choose V8, the JavaScript engine.

Two indicators are vital when analyzing the performance of a runtime environment: (1) the speed of instruction execution and (2) the start-up time. For smart contracts, start-up time is usually more important than instruction execution speed: execution is easier to optimize, because contract instructions seldom involve complex logic or I/O. Besides, as we discussed in “Reconstructing Smart Contracts, Part I: The Ghost of Undeterminism,” smart contracts have to run in isolated sandboxes for the sake of system security. This means that every time a smart contract is invoked, a new VM or Docker container needs to be started. The start-up time of the runtime environment is therefore the more important factor in smart contract performance.

The above-mentioned lightweight VMs — the EVM, the JVM, and the V8 JavaScript engine — are extraordinarily advantageous for smart contract performance. They start very quickly and use few resources, making them suitable for lightweight programs such as smart contracts. Their disadvantage is less efficient execution. Luckily, smart contracts are lightweight and thus care more about start-up time than about instruction execution speed. Furthermore, just-in-time compilation, which can compile and cache hot-spot smart contracts, will significantly increase the efficiency of these VMs.


Docker

Fabric, a Hyperledger project, uniquely uses Docker as the runtime environment for its smart contracts, a design different from other mainstream blockchains. Docker containers isolate resources, though not to the extent that VMs do. As Docker does not apply any virtualization technology, and programs run directly on the underlying operating system, it offers high code execution speed. However, Docker containers are still too heavy compared to lightweight VMs: starting one consumes considerable time and resources, which becomes a drawback constraining smart contract performance. In performance tests, Fabric does not display high throughput, even on IBM's LinuxONE mainframe.

Code execution speed can be compared to the top speed of a car, while the start-up time of the runtime can be compared to the car's acceleration from 0 to 100 km/h. Since smart contracts are lightweight programs, they usually run in a “start-stop-start-stop” pattern and seldom need to reach top speed. Start-up time should therefore be seen as the key factor influencing their performance.

Parallel execution, sharding and unlimited scalability

When it comes to system scalability, scaling up and scaling out are the two common approaches. A typical case of scaling up is the development of the single-core CPU: to enhance its performance, one can only increase the clock rate. Scaling up quickly hits a ceiling of engineering difficulty. As further improving single-core performance became increasingly difficult, scaling out — using a multi-core architecture to process several tasks simultaneously — became an important method for improving CPU performance.

Scaling up will soon reach this ceiling due to the foreseeable limits of cost and technology. The scalability of a sequential system, whose tasks cannot be split, is therefore very limited: it depends on the processing power of a single piece of computing equipment. If we can reconstruct a sequential system into a parallel one, we can theoretically achieve unlimited scalability. Could we then achieve unlimited scalability for blockchain networks? In other words, could blockchains handle tasks in parallel?

A blockchain, as a distributed global ledger, records not only states but also the rules for changing states; smart contracts are simply vehicles for recording these rules. Therefore, the ability of a blockchain to handle tasks in parallel depends on whether multiple smart contracts can be executed simultaneously — that is, whether the result of execution depends on the order of execution.

For example, suppose two contracts want to edit an RMB account with a balance of 10 yuan. Contract A wants to add 5 yuan to the account, while contract B wants to deduct 11 yuan. If we execute contract A first, followed by contract B, the final balance will be 4 yuan. However, if we execute contract B before contract A, contract B will fail due to insufficient funds, and the final balance will be 15 yuan, as only contract A is executed. Here the order of operations leads to different results, so parallel execution is not applicable.

On the other hand, if the two contracts handle two different accounts, the final results will be the same regardless of execution order, and parallel execution becomes feasible. As the example shows, whether two smart contracts can be executed in parallel depends on whether the result is independent of the execution order — and the result depends on the order exactly when the two contracts can edit the same state record.
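To make the order-dependence concrete, here is a minimal Python sketch. The `transfer` helper and the dict-based ledger are our own illustrative stand-ins, not any real contract API:

```python
# A toy ledger: account balances as a plain dict standing in for on-chain state.
def transfer(balances, account, amount):
    """Credit (amount > 0) or debit (amount < 0) an account.
    A debit that would overdraw fails and leaves the state unchanged."""
    if balances[account] + amount < 0:
        return False  # contract execution fails
    balances[account] += amount
    return True

# Order 1: contract A (+5) then contract B (-11) -> final balance 4
ledger1 = {"acct": 10}
transfer(ledger1, "acct", +5)
transfer(ledger1, "acct", -11)

# Order 2: contract B (-11) fails first, then contract A (+5) -> final balance 15
ledger2 = {"acct": 10}
transfer(ledger2, "acct", -11)
transfer(ledger2, "acct", +5)

print(ledger1["acct"], ledger2["acct"])  # 4 15
```

Because the two orders diverge, these calls conflict and must be serialized; run against two different accounts, both orders would converge on the same state and the calls could execute in parallel.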

Based on the above analysis, we can easily design a smart contract system with unlimited scalability. We simply set the following rules: (1) a smart contract can only edit state records belonging to itself; (2) within the same block, each smart contract can only be run once. The order of execution becomes irrelevant. Nicely done!

But wait. “A smart contract can only edit the state records belonging to itself” means that each contract becomes an information silo. And “within the same block, each smart contract can only be run once” means that only one transaction per block can be processed for a digital asset managed by that contract. This deviates from the usual objectives of smart contracts: calling data between different contracts, and calling one contract repeatedly, are exactly what smart contracts are designed for.

Now things have become complicated, especially for Ethereum's smart contract system, which supports dynamic calls. It is impossible to predict the behavior and call paths of an Ethereum smart contract before executing its code, and thus impossible to know which state records it will edit. Scalability therefore becomes Ethereum's big disadvantage, and its current architecture can hardly uphold its great vision of becoming a global computing platform. As a matter of fact, Ethereum has proposed sharding as the solution to this scalability drawback.

Sharding works rather like China's hukou household registration system. Taking a smart contract's 256-bit hash modulo 256 and distributing contracts into 256 sections is like giving each contract a household registration certificate: calls can be made freely within the “Shanghai” section or the “Beijing” section, but not between different sections directly.

Within each of the 256 sections, contracts can be executed in parallel — roughly like increasing execution throughput 256-fold. In this mode, however, a cross-section call requires the calling request to be written to the whole network, and requires confirmation from the other section before the first section executes the call, which must again be written to the whole network. This greatly decreases efficiency, as a cross-section call cannot be completed within one block.
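As a rough sketch of the section-assignment idea (the function names are ours, and a real sharding scheme would be considerably more involved), contracts can be mapped to one of 256 sections by their hash, and a call can settle in a single block only when caller and callee land in the same section:

```python
import hashlib

N_SECTIONS = 256

def section_of(contract_id: str) -> int:
    """Map a contract to one of 256 sections via its 256-bit hash."""
    digest = hashlib.sha256(contract_id.encode()).digest()
    return digest[0] % N_SECTIONS  # the first byte already spans 0..255

def same_block_call(caller: str, callee: str) -> bool:
    """A call completes within one block only inside a single section;
    cross-section calls need the multi-block request/confirm protocol."""
    return section_of(caller) == section_of(callee)
```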

In practice, a likely result of sharding is that people will crowd into one “prosperous” section: staying within one section spares the need for cross-section calls, so calling remains efficient. Traffic congestion in a downtown area will not be solved, no matter how many roads are built on the outskirts.

The way smart contract code is loaded also influences scalability. Current mainstream blockchain designs require smart contract code to be published on-chain before it can be loaded and executed. Some code will only ever be used once but will be recorded on-chain permanently. If such obsolete code continues to accumulate, it becomes a huge burden for every node in the network, negatively impacting scalability.

Here is another solution: record only the hash of a smart contract on-chain, and use a hash-based distributed storage network such as IPFS to store the complete contract code. When a smart contract is to be executed, its code is loaded from off-chain storage. Since the contract's hash has already been recorded on-chain, there is no need to worry about the code being altered, even though it is loaded off-chain. This method saves nodes a large amount of storage space and provides some protection for the privacy of the contracts' contents.
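A minimal sketch of this scheme in Python (the function names and the raw-bytes “storage” are illustrative; a real deployment would fetch the code from IPFS by its content hash):

```python
import hashlib

def publish(code: bytes) -> str:
    """On-chain step: record only the contract's hash, not the code itself."""
    return hashlib.sha256(code).hexdigest()

def load_and_verify(code_from_storage: bytes, onchain_hash: str) -> bytes:
    """Off-chain step: fetch the full code from distributed storage and
    verify it against the on-chain hash before execution."""
    if hashlib.sha256(code_from_storage).hexdigest() != onchain_hash:
        raise ValueError("contract code does not match its on-chain hash")
    return code_from_storage
```

Since any tampering changes the hash, a verified off-chain copy is as trustworthy as code stored on-chain.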


Coupling and decoupling

Coupling refers to the extent to which two or more parties depend on each other. Two extreme examples illustrate how different designs of blockchains and smart contract systems control coupling:

1) Ethereum

The design of the Ethereum smart contract system is a typical case of high-degree coupling. Interdependencies between the blockchain and the EVM are ubiquitous. For example:

  • The calculation of fees is mixed into the execution logic of the EVM.
  • The EVM instruction set contains a number of instructions used to access the ledger's data.
  • The EVM directly provides persistent storage instructions based on the blockchain ledger.

It is not a wise design to mix the execution logic of the blockchain into the VM; the mixture brings more problems. To modify or upgrade a function of the blockchain, you will have to upgrade the EVM, usually by adding more instructions to it. Besides, the EVM can hardly be transferred to other blockchain systems, unless that blockchain has an architecture highly similar to Ethereum's or develops an adaptation mechanism for the EVM.

The high-degree coupling of Ethereum greatly limits its applicability. We will further elaborate this point in “Reconstructing Smart Contracts, Part III: Compatibility and Ecosystem.”

2) Fabric

Fabric’s design uses low coupling: there is almost no interdependence between the blockchain ledger and Docker, and Docker itself is already used in many scenarios besides blockchain. Smart contracts running in Docker can only exchange information with other nodes via the gRPC protocol, which covers the functions of ledger access and persistent storage. When there is a need to improve or upgrade a function of the blockchain, one only needs to modify the gRPC protocol. Fabric’s ultra-low coupling provides a valuable lesson for other blockchain developers.

High cohesion and low coupling are common goals when designing any system architecture.

Fabric’s objective is to build a general-purpose blockchain technology framework, with a module-based structure as its philosophy from the beginning. The initial objective of Ethereum, on the other hand, was to provide a concrete public chain rather than a technology framework. Ethereum therefore naturally has a high degree of coupling, which will impede its application in consortium chains and private chains.


In this article, we have discussed: (1) the relationship between the runtime environment and smart contract performance; (2) parallel execution; (3) coupling; (4) the way contract code is loaded. We have pointed out some drawbacks in Ethereum’s scalability, owing to its coupling and abstract design, and have put forward a design philosophy for building a highly parallel smart contract system with “unlimited scalability.” Finally, we have explained why we believe a sound network for creating and executing smart contracts should have the following features:

  • Lightweight runtime: The runtime should have a short start-up time and relatively high execution efficiency.
  • A pluggable runtime framework: The default runtime should not provide persistent storage, so that smart contracts behave like stateless functions — like microservices — capable of parallel execution. Only when a contract needs to store state does the framework provide a pluggable persistent-storage module. A VM of this type has only one “CPU” and one stack by default, and provides a “hard disk” and other I/O equipment only when needed.
  • Explicit calling: The runtime should provide only static calls, so that a contract’s behavior and call paths are clear before execution — and therefore so is the state data the contract may edit. Based on this explicit calling information, dynamic sharding can be achieved, making parallel execution of smart contracts more feasible and effective.
  • Code stored off-chain: The hash recorded on-chain, together with the complete code stored off-chain, increases storage scalability.
  • Decoupling: Low coupling between the contract language, the runtime, and the blockchain makes smart contracts more adaptable.
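To illustrate how explicit calling enables parallel scheduling, here is a sketch (our own illustrative code, not any concrete implementation) that groups a block’s transactions by their statically declared write-sets: transactions whose write-sets are disjoint can run in parallel, and the groups then run one after another.

```python
def parallel_groups(txs):
    """Greedily partition transactions into parallel-executable groups.
    txs: list of (tx_name, set_of_state_keys_it_may_write); thanks to
    static calls, each write-set is known before execution."""
    groups = []  # each group: {"txs": [...], "writes": set()}
    for name, writes in txs:
        for group in groups:
            if group["writes"].isdisjoint(writes):
                group["txs"].append(name)
                group["writes"] |= writes
                break
        else:  # conflicts with every existing group: start a new one
            groups.append({"txs": [name], "writes": set(writes)})
    return groups

block = [("A", {"acct1"}), ("B", {"acct1"}), ("C", {"acct2"})]
print([g["txs"] for g in parallel_groups(block)])  # [['A', 'C'], ['B']]
```

A and C touch different state records and land in the same parallel group, while B, which writes the same account as A, is deferred to the next group — exactly the order-independence criterion discussed above.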

In the next and last article of the Reconstructing Smart Contracts series, we will analyze the programming languages used by smart contract systems, and we will propose a new model that enables developers from sectors unrelated to blockchain to quickly develop smart contracts.

About the authors

The writers of this article series are the founders of the Antshares blockchain project. Antshares is an open-source public blockchain exploring the frontiers of a programmable smart economy. The project dates back to 2014, making it the first blockchain project in China. In 2017, the technological experience accumulated since Antshares’ early days will come to fruition: breakthroughs are expected in smart contracts, cross-chain interoperability, new consensus mechanisms, and cutting-edge cryptography.