Ming Wu from Conflux: Use DAG structure to enhance the throughput rate of Nakamoto Consensus Mechanism

22 February, 2019

On Christmas Eve (December 25), Ming Wu, Ph.D., CTO of Conflux, was invited to the “Mars Financial Founding Study Group” and gave a public lecture titled “Conflux: Use DAG structure to enhance the throughput rate of Nakamoto Consensus Mechanism”.


Conflux CTO: Ming Wu, PhD

Dr. Wu received his undergraduate education from the University of Science and Technology of China, and received his Ph.D. in computer architecture from the Chinese Academy of Sciences. He previously worked as a senior researcher in the systems research group of MSRA, studying distributed transaction processing systems, graph computation engines, artificial intelligence platforms, and more. In recent years, he has published papers at many top conferences in the systems field (e.g., SOSP, OSDI, NSDI, ATC), and has served on the program committees of conferences such as OSDI, ASPLOS, and HotDep.

A glimpse of the key points:

1. Based on the idea of Nakamoto Consensus, a DAG-based redesign can greatly enhance the throughput rate without sacrificing decentralization or safety.

2. Why does the conventional PoW-based Nakamoto Consensus Mechanism have such a low throughput? In short: for the sake of safety.

3. Ensuring safety is the core requirement of a public blockchain system. Although PoS reduces the resource consumption of PoW mining, it introduces many new flaws that can be easily attacked, and therefore many threats to safety. To date, there is no effective solution to this problem.

Dr. Wu frankly acknowledged that currently available PoW-based blockchain systems all have a very low transaction throughput. This causes many problems, such as poor user experience and high transaction fees, and in turn restricts the possibility of building more meaningful applications on top of blockchain systems.

Against this background, by combining DAG technology with Nakamoto Consensus (the consensus mechanism used by Bitcoin), Conflux successfully enhances the throughput rate of PoW-based blockchain systems without sacrificing decentralization or safety.

Below is the detailed content of Dr. Ming Wu’s broadcast (summarized and organized by Conflux and huoxing24).

Part 1: Industry background

As we all know, one of the biggest bottlenecks in the current blockchain field is low efficiency. The currently available PoW-based blockchain systems (e.g., Bitcoin, Ethereum) all have a very low transaction throughput: the limit for Bitcoin is ~7 transactions per second, and for Ethereum about 30, far below centralized payment services like Visa that support throughput of >1,000 transactions per second.
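Where the ~7 TPS figure for Bitcoin comes from can be sketched with a back-of-the-envelope calculation. The block size, interval, and average transaction size below are illustrative assumptions, not protocol constants:

```python
# Rough upper bound on throughput for a Nakamoto-style chain that fills
# every block. All parameters here are illustrative assumptions.

def max_tps(block_size_bytes: int, block_interval_s: float, avg_tx_bytes: int) -> float:
    """Transactions per second if every block is packed full."""
    txs_per_block = block_size_bytes // avg_tx_bytes
    return txs_per_block / block_interval_s

# Bitcoin-like parameters: 1 MB blocks every 10 minutes, ~250-byte transactions.
print(round(max_tps(1_000_000, 600, 250), 1))  # ~6.7 tx/s, i.e. the oft-quoted "~7 TPS"
```

The same formula shows why Litecoin-style parameter tweaks (shorter interval, bigger blocks) raise throughput linearly, and Part 2 explains why that linear gain is paid for in safety.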

Low efficiency brings in many problems:

First of all, it makes the user experience of current blockchain systems very painful. For example, a lot of time is wasted waiting for transactions to be added to blocks.

Secondly, the limited throughput induces high transaction fees.

In turn, high transaction fees limit the potential of building more meaningful applications on top of blockchain systems.

Generally speaking, there are several theoretical approaches to enhancing the throughput rate:

1. Sticking to the Nakamoto Consensus, but adjusting the protocol parameters.

It is possible to enhance the throughput rate by adjusting the block generation interval and the block size, as in Litecoin (2.5 min / 1 MB blocks) and BCH (10 min / 32 MB blocks). But according to some research reports, no matter how the two parameters are adjusted, an enhancement of throughput is always accompanied by a compromise of safety. In Part 2, I will explain the detailed reason.

2. Based on the idea of Nakamoto Consensus, modifying the system using DAG techniques.

This approach is able to greatly enhance the throughput rate without sacrificing decentralization or safety.

3. Adopting a PoS or dPoS mechanism

dPoS algorithms, as used by EOS, substantially sacrifice decentralization, so their advantage remains controversial; PoS algorithms also have problems with decentralization and safety, such as concentrated ownership and the inability to handle long-range attacks.

4. Using side-chain or off-chain solutions like sharding and state channels

Some of these solutions are called Layer 2. These solutions, being orthogonal to Conflux’s approach, aim to solve the throughput problem from a different perspective. Therefore, their throughput gains can complement and be layered on top of those achieved by Conflux. For example, a technology that enhances the throughput of full nodes can help sharded systems overcome the performance bottleneck caused by cross-shard transactions, and can settle more state-channel transactions as well.

Part 2: Why does the Nakamoto Consensus Mechanism have a low throughput rate?

Why does the conventional PoW-based Nakamoto Consensus Mechanism have such a low throughput? In short: for the sake of safety.

In public blockchains, in order for all machine nodes participating in the network protocol to reach a consensus on transaction execution, miners need to broadcast newly mined blocks over the P2P network, which helps ensure that all machines maintain a consistent ledger structure. Following the Longest Chain Rule, each miner puts its newly mined block at the end of the longest chain it sees. This means all honest nodes together extend the longest (pivot) chain, so a malicious node cannot reverse it unless it controls >50% of the computing power.

However, if network latency is serious enough that more blocks are generated before the broadcast of a new block reaches the whole network, a large number of forks will appear in the blockchain.

Below I will use a picture to demonstrate what a blockchain’s ledger looks like after forks appear.

[Figure: a blockchain ledger with forks]

Forks bring two problems: 1. Forked blocks will eventually be abandoned, which wastes network and processing resources. 2. Forks compromise safety: if a large number of blocks produced by honest nodes are abandoned, attackers need less than 50% of the computing power to generate a malicious longest chain.

Therefore, to ensure safety, Bitcoin and Ethereum need to maintain a very long block generation interval (i.e., a very low block generation rate), or very small blocks, to minimize broadcast latency and reduce the likelihood of forks. But this greatly reduces the throughput of the system.

Increasing the throughput rate leads to the generation of forks. Blocks on different forks are in competition: they compete for the blocks mined after them, each trying to make its own branch part of the longest chain while the other branches are abandoned. This meaningless competition between blocks mined by honest nodes gives attackers exploitable opportunities. To ensure safety, therefore, Bitcoin chooses to reduce the block generation rate to avoid this situation.

Yet if we take another approach and allow each block to select multiple blocks as its parents or ancestors, the meaningless competition between blocks mined by honest nodes can be avoided entirely. In this way, the “safety vs. efficiency” dilemma of Nakamoto Consensus can be bypassed.

With this approach, the whole network adopts a directed acyclic graph (DAG) structure. What needs to be considered next is: how should we decide the execution order of the transactions in different blocks? To solve this problem, we need to design a safe DAG topological sorting algorithm, so that each node can generate an execution order for the blocks by running this algorithm once on its local DAG. This algorithm needs to ensure that:

1. The order produced by all mining nodes is consistent with one another.

2. The order cannot be modified even by double-spend attacks.

Later, I will use Conflux as an example to demonstrate the design of this high-safety topological sorting algorithm.

There are many DAG techniques available nowadays. For example, the Spectre and Phantom algorithms, the two with the design idea closest to ours, both optimize the efficiency of PoW blockchains by taking advantage of DAGs. But unfortunately, Spectre is incapable of providing a total order for all blocks, which prevents the execution of smart contracts, whereas Phantom, we found, contains serious flaws.

There are also other DAG-based algorithms that adopt only the form, but not the idea, of the approach we discuss today, like Avalanche Consensus and Hashgraph. These algorithms are in principle consortium-chain algorithms. By way of staked deposits (“equity mortgage”), they convert a consortium-chain algorithm into a PoS public blockchain algorithm. However, this conversion carries certain safety issues; among attacks aimed specifically at PoS, the Nothing-at-Stake Attack and the Long-Range Attack are two well-known ones.

Part 3: Using PoW + DAG to enhance blockchain throughput: Case analysis — the mechanism design of Conflux

Next, I will use Conflux as an example to demonstrate, from a technical perspective, how the throughput rate can be enhanced using DAG.

In Conflux, each block contains two kinds of edges: parent edges and reference edges, which together form a DAG. If the reference edges are removed, the parent edges form a tree. All miners need to listen for block-generation broadcasts in the P2P network and maintain a local block ledger with DAG structure. When mining, the selection of the parent edge and reference edges for a new block needs to obey the following rules:

[Figure: parent and reference edges in the Conflux DAG]

  1. In the tree formed by parent edges, a pivot chain needs to be selected based on the GHOST protocol. Specifically, starting from the genesis block, the next block to join the pivot chain is iteratively selected from the current block’s children; the rule is to choose the child with the largest subtree. The parent edge of the new block should point to the last block of the current pivot chain.
  2. Blocks that are not pointed to by any parent or reference edge are called “leaf blocks”. The reference edges of the new block should point to all remaining leaf blocks.
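The pivot-chain rule in step 1 can be sketched in a few lines. This is a toy illustration, not Conflux’s actual implementation: blocks are plain string ids, `parent` maps each block to its parent, and the block id stands in for the hash when breaking ties.

```python
# GHOST-style pivot chain selection over the parent-edge tree (toy sketch).
# At each step we descend into the child whose subtree is heaviest.
from collections import defaultdict

def pivot_chain(parent: dict[str, str], genesis: str) -> list[str]:
    children = defaultdict(list)
    for blk, par in parent.items():
        children[par].append(blk)

    # Subtree weight of a block = itself plus all its descendants.
    # (Recomputed recursively here for clarity; a real node would cache it.)
    def subtree_size(blk: str) -> int:
        return 1 + sum(subtree_size(c) for c in children[blk])

    chain, tip = [genesis], genesis
    while children[tip]:
        # GHOST rule: heaviest subtree wins; ties broken by block id.
        tip = max(children[tip], key=lambda c: (subtree_size(c), c))
        chain.append(tip)
    return chain

# Genesis A forks into B and C; C has two children, so C's subtree (3 blocks)
# outweighs B's (1 block) and the pivot chain goes through C.
print(pivot_chain({"B": "A", "C": "A", "D": "C", "E": "C"}, "A"))  # ['A', 'C', 'E']
```

Note that this is exactly the longest-chain idea generalized to weigh whole subtrees, which is why forked honest blocks still contribute to the pivot chain’s security instead of being wasted.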

With these definitions of the edges, the ledger structure is determined. After blocks have been broadcast, all nodes eventually obtain a DAG with a consistent structure, i.e., a consistent ledger.

After gaining a consistent ledger, how should all nodes determine a safe and consistent order for the blocks? Our core idea is that the nodes first determine a consistent pivot chain, and then derive a consistent block order from that pivot chain.

Below is an example of how to determine a consistent block order.

[Figure: example DAG used to determine a consistent block order]

The selection of the pivot chain also follows the GHOST protocol, which I mentioned above when selecting the parent edge. Based on this protocol, we select Blocks C, E, and H to join the pivot chain.

Based on the abovementioned rules for edge selection, the parent edge of the new block should point to H, while its reference edge should point to K, which is recognized as a leaf block. (Note: G is pointed to by the reference edge of H and is therefore not a leaf block.)

Now that we have a mechanism for all nodes to reach a consensus on the pivot chain, how can they reach a consensus on the block total order? To achieve this, we introduce a concept named ‘Epoch’. Each block on the pivot chain generates one Epoch. A block on a fork belongs to the Epoch of the first pivot-chain block generated after it. For example, Block D belongs to the Epoch of E because it is first referenced by Block E, having been generated after C but before E. Based on this rule, we can acquire a consistent order as below:

[Figure: the resulting Epochs and block total order]

Therefore, in Conflux, we first generate an order for the Epochs and then determine the order of blocks within each Epoch using topological sorting. A tie between two blocks is broken by comparing their hash values. This is how the final order in our example is obtained. To generate the order for all transactions, Conflux first derives a coarse order from the order of the blocks; then, within each block, the order of transactions follows directly from their relative positions.
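The Epoch partition and within-Epoch topological sort described above can be sketched as follows. Again a toy illustration, not the production algorithm: `preds` maps each block to the blocks its parent/reference edges point to, and block ids stand in for hashes when breaking ties.

```python
# Conflux-style block total order (toy sketch): walk the pivot chain, assign
# each pivot block an Epoch containing every not-yet-assigned block reachable
# backwards from it, then topologically sort within each Epoch.

def total_order(preds: dict[str, set[str]], pivot: list[str]) -> list[str]:
    assigned, order = set(), []
    for p in pivot:
        # Collect the Epoch of pivot block p via backward reachability.
        epoch, stack = set(), [p]
        while stack:
            b = stack.pop()
            if b in assigned or b in epoch:
                continue
            epoch.add(b)
            stack.extend(preds.get(b, set()))
        assigned |= epoch
        # Topological sort inside the Epoch; ties broken by block id.
        remaining = set(epoch)
        while remaining:
            ready = sorted(b for b in remaining
                           if not (preds.get(b, set()) & remaining))
            order.extend(ready)
            remaining -= set(ready)
    return order

# Pivot chain A -> C -> E; fork blocks B and D are first referenced by E,
# so they land in E's Epoch, mirroring the article's Block-D example.
preds = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"A"}, "E": {"C", "D", "B"}}
print(total_order(preds, ["A", "C", "E"]))  # ['A', 'C', 'B', 'D', 'E']
```

Because every honest node computes this deterministic function over the same DAG, all nodes obtain the same total order, which is the consistency property the sorting algorithm must guarantee.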

Finally, let’s look at how to resolve conflicts and duplicates among transactions. I will again use an example to demonstrate:

[Figure: example of conflicting and duplicate transactions]

In this example, Transactions 2 and 3 are in conflict because, after the execution of Transaction 2, Account X is left with insufficient balance to fulfill Transaction 3. Since Transaction 3 occurs after Transaction 2 in the total order, we judge Transaction 3 invalid. In another situation, the same transaction may be packed into multiple concurrent blocks by different nodes, like Transaction 4. When this occurs, Conflux only accepts the first occurrence in the total order and judges all later repeats invalid.
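The rule above amounts to replaying transactions in the total order and skipping any that conflict or repeat. A minimal sketch, with illustrative field names that are my own, not Conflux’s transaction format:

```python
# Replay transactions in total order: a transaction is invalid if its id was
# already seen (a duplicate packed into concurrent blocks) or if the sender's
# balance is insufficient at that point (a conflict). Later transactions
# still execute; invalid ones are simply skipped.

def apply_in_order(balances: dict[str, int], txs: list[dict]) -> list[bool]:
    seen, valid = set(), []
    for tx in txs:
        ok = (tx["id"] not in seen
              and balances.get(tx["src"], 0) >= tx["amount"])
        if ok:
            seen.add(tx["id"])
            balances[tx["src"]] -= tx["amount"]
            balances[tx["dst"]] = balances.get(tx["dst"], 0) + tx["amount"]
        valid.append(ok)
    return valid

# X starts with 10. Tx 2 spends 8, so Tx 3's request for 5 conflicts and is
# invalid; Tx 4 appears in two concurrent blocks, so only its first copy counts.
txs = [{"id": 2, "src": "X", "dst": "Y", "amount": 8},
       {"id": 3, "src": "X", "dst": "Z", "amount": 5},
       {"id": 4, "src": "Y", "dst": "Z", "amount": 3},
       {"id": 4, "src": "Y", "dst": "Z", "amount": 3}]
print(apply_in_order({"X": 10}, txs))  # [True, False, True, False]
```

The key design point is that validity is a pure function of the total order, so every node that agrees on the order also agrees on which transactions took effect.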

Q&A:

Q1: How do you compare Conflux with PoS public blockchain projects?

A1: As we all understand, safety is the most essential requirement for a public blockchain, because the original purpose of a public blockchain is to construct a trustworthy system. Although PoS reduces the resource consumption of PoW mining, it also introduces many flaws that can be easily attacked, and therefore many threats to safety. To date, there is no effective solution to this problem. Among attacks aimed specifically at PoS, the Nothing-at-Stake Attack and the Long-Range Attack are two well-known ones. For this reason, Conflux sticks to a PoW-based consensus mechanism and overcomes the performance bottleneck by taking advantage of DAG. This strategy is supported by the simulation results of our prototype testing system.

Q2: How long does Conflux require to confirm one transaction?

A2: With no malicious attack, and with the experimental parameters of 4 MB blocks generated every 5 s on average, the confirmation time is about 8 min. Under attack by 30% of the computing power, the confirmation time is about 16 min.

Q3: What is the TPS that Conflux can reach?

A3: When we discuss and compare TPS, we need to note the experimental settings carefully. For example, in Bitcoin, each newly generated block is broadcast to all miners, and each transaction is verified by everyone, which we call ‘verified by all miners’. In certain experiments testing sharding schemes, however, the pool of all transactions is divided into 30 bins, and each node may verify only one of the bins, which we call ‘verified by a portion of miners’. Obviously, the total TPS under the ‘verified by a portion of miners’ setting can be several times higher than under the ‘verified by all miners’ setting.

At the consensus level, Conflux can reach 4000–6000 TPS with full-node verification, which may be slightly lowered in actual deployment by practical factors like contract execution and state reads/writes. Sharding is a very nice supplement to the consensus technology. In the future, Conflux plans to incorporate sharding to achieve higher TPS under the ‘verified by a portion of miners’ setting, which will provide users with more options.

Q4: What are the roadmap and corresponding timeline of Conflux?

A4: We plan to launch the test net after Chinese New Year, and the main net in the 3rd quarter of next year.

Q5: Will Conflux issue tokens? Or will it do mining?

A5: Conflux does not have an ERC-20 token. After the main net is launched, mining will begin.

Q6: Then will PoW or computing power of mining be involved here? Besides, what is the biggest difficulty here? What is the main technical problem you are facing?

A6: Conflux is an improvement of Nakamoto Consensus, a PoW consensus mechanism, so of course it will involve mining.

The main difficulty is designing a high-efficiency, consistent sorting algorithm for the DAG ledger. This algorithm needs to ensure consistency among honest nodes while defending against various complicated attacks.

Q7: How does Conflux prevent double spend attack?

A7: This question is relatively complicated. Let’s take a look at a picture first.

[Figure: a double-spend attempt against the block total order]

In this picture, let’s first look at how an attacker could reverse a transaction in the ledger, like Transaction 4. To do this, the attacker needs to generate a double-spend of Transaction 4, pack it into a block, and insert this block ahead of Block B in the block total order.

But this is very difficult, for two reasons. First, unless the attacker can modify the pivot chain, he simply cannot reverse the transaction, because the order of transactions is determined by the pivot chain. For example, if an attacker intends to insert a block near the front of the blockchain, all he can do is generate new blocks following a block of a very early Epoch. But as long as the new blocks are not on the pivot chain, they will end up belonging to a very late Epoch, because every time a new honest block is generated, its reference edges drag the attacker’s blocks into a new Epoch.

Second, if the attacker’s computing power does not exceed 50%, he has no way to modify the pivot chain.

Q8: Do you have testing results for the Conflux CTPS?

A8: CTPS refers to confirmed transactions per second. The TPS derived from our experiments was indeed CTPS, around 4000–6000.

Q9: Is there any specific hardware requirement for mining?

A9: We will use GPU-friendly PoW algorithms, so GPU devices will work well.

Source: https://medium.com/@Confluxchain/ming-wu-from-conflux-use-dag-structure-to-enhance-the-throughput-rate-of-nakamoto-consensus-8d52ab72599c