Technical comments on the block size debate in the BCH fork: is increasing the block size the right approach to improving throughput?

07 March, 2019


During the BCH hard fork in November 2018, Bitcoin SV wished to increase the block size further, to 128 MB, while Bitcoin ABC believed the existing limit of 32 MB was already large enough. Compared with Bitcoin's original 1 MB limit, the block size of the fork chains has grown aggressively in less than two years. Could that cause a problem? We look to the results of academic research for an answer.

The history of block size increase:

As we all remember, Bitcoin's block size was limited to 1 MB. In its first few years, Bitcoin did not receive much attention and the 1 MB limit was seldom filled, so there was little demand for increasing it.


However, as Bitcoin received more and more attention over the years, its throughput limit became a serious problem. A 1 MB block holds about 4,000 transactions at an average size of 250 bytes each, and with one block roughly every 10 minutes, Bitcoin can process fewer than 7 transactions per second.
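The arithmetic behind that figure is simple enough to check directly (a back-of-the-envelope sketch using the averages quoted above):

```python
# Back-of-the-envelope Bitcoin throughput estimate, using the
# averages quoted in the text: a 1 MB block size limit, ~250 bytes
# per transaction, and one block every ~600 seconds (10 minutes).
BLOCK_SIZE_BYTES = 1_000_000
AVG_TX_SIZE_BYTES = 250
BLOCK_INTERVAL_SECONDS = 600

txs_per_block = BLOCK_SIZE_BYTES // AVG_TX_SIZE_BYTES   # 4000
tps = txs_per_block / BLOCK_INTERVAL_SECONDS            # ~6.7

print(f"{txs_per_block} transactions per block, {tps:.1f} tx/s")
```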

To improve throughput, strategies such as Segregated Witness (SegWit) and raising the block size limit to 2 MB were proposed. In August 2017, BCH forked from Bitcoin with a block size limit of 8 MB, which was further increased to 32 MB in May 2018. During the fork this time, Bitcoin SV even proposed increasing the limit to as much as 128 MB.

The impact of the block size limit on safety:

Academic researchers have been investigating the impact of block size and block generation interval on safety since 2015 [1]. Briefly, their conclusion is that under the Longest Chain Rule, the effect of the block size limit and the block generation interval on safety can be summarized by a single ratio:

(time for a block to propagate throughout the network) / (block generation interval)

This ratio affects the frequency of orphan blocks, which in turn influences the computing power needed for a double-spend attack: the larger the ratio, the more orphan blocks, the less computing power an attack requires, and the less safe the system becomes.
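The effect of the ratio can be illustrated with a deliberately simple model (an assumption of this sketch, not a result from the text): if honest block discovery is Poisson with one block per interval T on average, and every new block takes D seconds to reach the rest of the network, the chance that a competing block is found mid-propagation, creating an orphan, is roughly 1 − e^(−D/T):

```python
import math

def orphan_rate(propagation_seconds: float, interval_seconds: float) -> float:
    """Approximate fraction of blocks orphaned: 1 - exp(-D/T),
    assuming Poisson block discovery at rate 1/T and a uniform
    network-wide propagation delay D (a deliberate simplification)."""
    return 1 - math.exp(-propagation_seconds / interval_seconds)

# The larger the ratio D/T, the more orphans:
for d in (1, 10, 60):
    print(f"D={d:>2}s, T=600s -> orphan rate {orphan_rate(d, 600):.2%}")
```

Larger blocks increase D while T stays fixed, so the orphan rate, and with it the attacker's advantage, grows.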

(Note: concentrated computing power among honest miners actually increases safety, because when calculating the time for a block to propagate throughout the network, we can cut off the long tail. For example, once the propagation has covered nodes holding 95% of the computing power, we can regard it as having reached the whole network.)

Bitcoin's persistent effort to decrease block propagation time:

Compared with the network environment 10 years ago, network bandwidth today has increased dramatically, so transmitting the same amount of data takes much less time. Moreover, in 2016, Bitcoin further reduced block propagation time by implementing so-called Compact Blocks (BIP 152).

In contrast to a full block, which carries all of its transaction data, a compact block records only a short ID for each transaction (as small as 6 bytes). When a node mines a new block, it propagates only the compact block through the network. A node that receives the compact block first tries to restore the full block from its local transaction pool; only when the restoration fails does it request the missing transactions from neighboring nodes.
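The reconstruction step can be sketched as follows. This is an illustrative simplification, not the real wire protocol: BIP 152 derives its 6-byte short IDs with a per-block keyed SipHash, whereas here we simply truncate a SHA-256 hash, and the helper names are invented for the example.

```python
# Simplified sketch of compact-block reconstruction. Illustrative
# only: the real protocol (BIP 152) uses per-block keyed SipHash
# for its 6-byte short IDs; here we just truncate SHA-256.
import hashlib

def short_id(tx_bytes: bytes) -> bytes:
    return hashlib.sha256(tx_bytes).digest()[:6]

def reconstruct(compact_ids, mempool):
    """Try to rebuild the full block from the local mempool.
    Returns (restored_txs, missing_ids); missing short IDs must
    be requested from the peer that sent the compact block."""
    by_id = {short_id(tx): tx for tx in mempool}
    restored, missing = [], []
    for sid in compact_ids:
        if sid in by_id:
            restored.append(by_id[sid])
        else:
            missing.append(sid)
    return restored, missing

# Example: the node already holds tx_a and tx_b, but not tx_c.
tx_a, tx_b, tx_c = b"tx_a", b"tx_b", b"tx_c"
ids = [short_id(t) for t in (tx_a, tx_b, tx_c)]
restored, missing = reconstruct(ids, mempool=[tx_a, tx_b])
print(len(restored), len(missing))  # -> 2 1
```

The success of this lookup is exactly what degrades when the mempool and the mined block diverge, which is the failure mode discussed below for very large blocks.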

Compared to a 1 MB full block, a compact block is only about 15 KB. It was reported that the success rate of a node directly restoring the intact block from a compact block is as high as 86%. This greatly decreased the time for a Bitcoin block to propagate throughout the network; according to statistical data, block propagation time fell roughly sixfold between January and December 2016.

So, is it safe to increase the block size?

Compared to Bitcoin's early days, the time for a block to propagate throughout the network has been greatly reduced. With regard to safety, therefore, a 1 MB block size is now too conservative; even at 8 MB, the system can maintain safety comparable to that of Bitcoin 3–5 years ago.

However, a block size of 32 MB warrants caution. Although Compact Blocks can still reduce the propagated data to approximately 500 KB per block, when the network is flooded with transactions, a large number of them may still be "jammed on the road", substantially reducing the success rate of full-block restoration and ultimately resulting in overly long propagation times.
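The ~500 KB figure follows from scaling the article's own numbers (roughly 15 KB of compact-block data per 1 MB of full block), assuming the compression ratio stays constant as blocks grow:

```python
# Rough scaling of the article's own figures: a 1 MB full block
# compresses to ~15 KB as a compact block. Assuming the ratio
# holds as blocks grow (an assumption of this sketch):
COMPACT_KB_PER_MB = 15
for block_mb in (1, 8, 32, 128):
    print(f"{block_mb:>3} MB block -> ~{block_mb * COMPACT_KB_PER_MB} KB compact")
```

At 32 MB this gives ~480 KB, consistent with the approximately 500 KB quoted above; at 128 MB the compact block alone approaches 2 MB, before counting any failed reconstructions.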

That said, a block size of 128 MB borders on insanity. What is more alarming is that the supporters of this proposal seem never to have considered the problems above. By the author's rough estimate, a 128 MB block size is likely to cause serious safety problems unless computing power is centralized. But if the computing power is indeed centralized, how exactly does the system differ from a centralized one?

In summary, moderately increasing the block size can indeed improve throughput, but increasing it without restraint will inevitably cause serious safety issues.

The Heaviest Chain Rule: addressing the dilemma between safety and efficiency:

The problem discussed above arises under the prerequisite of following the Longest Chain Rule, so we may solve it from another angle. The Heaviest Chain Rule, introduced by the GHOST consensus protocol [1], ensures that regardless of the block size and block generation rate, a double-spend attack always needs at least 50% of the computing power to succeed.
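The core of the rule can be sketched in a few lines: instead of following the longest chain, the fork choice repeatedly descends into the child whose subtree contains the most blocks, so that even orphaned siblings still count toward their ancestors' weight. (A minimal sketch over an in-memory block tree; the dict-of-child-lists representation is invented for the example.)

```python
# Sketch of the GHOST fork-choice rule ("heaviest subtree"):
# starting from genesis, repeatedly descend into the child whose
# subtree contains the most blocks, instead of simply following
# the longest chain. The block tree is a dict of child lists.
def subtree_size(tree, block):
    """Number of blocks in the subtree rooted at `block`."""
    return 1 + sum(subtree_size(tree, child) for child in tree.get(block, []))

def ghost_tip(tree, genesis):
    """Walk down the heaviest subtree and return the chosen tip."""
    block = genesis
    while tree.get(block):
        block = max(tree[block], key=lambda child: subtree_size(tree, child))
    return block

# G has two children: B's subtree holds 4 blocks, while C heads a
# longer but lighter chain of 3. GHOST descends into B's subtree,
# even though the longest chain runs G -> C -> F -> H.
tree = {"G": ["B", "C"], "B": ["D", "E", "X"], "C": ["F"], "F": ["H"]}
print(ghost_tip(tree, "G"))  # -> D (a tip under B, not H)
```

Because blocks off the winning chain still add weight to their ancestors, an attacker cannot exploit honest miners' orphans the way they can under the Longest Chain Rule.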

The design of Conflux is based on an upgraded version of the GHOST consensus protocol [2]. By using a DAG structure, it can reach a throughput of 1.6 MB of transaction data (equivalent to 6,400 transactions) per second while maintaining high safety. This performance forms a solid consensus foundation for building a high-efficiency PoW public chain.
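The two figures quoted are consistent with the same 250-byte average transaction size used for Bitcoin earlier:

```python
# Checking the quoted Conflux figures: 1.6 MB of transaction data
# per second, at ~250 bytes per transaction (the same average
# used for Bitcoin above), gives the quoted 6,400 tx/s.
THROUGHPUT_BYTES_PER_SEC = 1_600_000
AVG_TX_SIZE_BYTES = 250
print(THROUGHPUT_BYTES_PER_SEC // AVG_TX_SIZE_BYTES)  # -> 6400
```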


[1] Sompolinsky, Yonatan, and Aviv Zohar. “Secure high-rate transaction processing in bitcoin.” International Conference on Financial Cryptography and Data Security. Springer, Berlin, Heidelberg, 2015.

[2] Li, Chenxing, et al. “Scaling Nakamoto Consensus to Thousands of Transactions per Second.” arXiv preprint arXiv:1805.03870 (2018).