Bitcoin SV and Big Blocks – A Safe Path to Scaling
by Bitcoin SV
August 26, 2018
TLDR; We’re not forcing anyone to 128mb blocks, we’re simply encouraging miners to configure block size limits themselves and giving them test results so they make informed choices.


The announcement of the Bitcoin SV project has caused quite a stir, and the spectre of 128mb blocks seems to have invoked a good deal of fear and uncertainty in the community. This post will address some of those fears and attempt to clarify some points that have been frequently misstated in recent discourse.

There are a number of other issues to address in subsequent posts but this post will focus on block size as that appears to be the one that has the most attention.

Soft caps vs Hard caps

In most bitcoin implementations, there are two key settings that govern block size on the network:

  1. Soft cap: The maximum size block that a miner will MINE.
  2. Hard cap: The maximum size block that miner will ACCEPT from another miner.

Currently, most implementations default to a soft cap of 2mb and a hard cap of 32mb. Some miners have adjusted their soft cap higher than this, with the largest block mined so far being 8mb on mainnet and 32mb on testnet. The Gigablock testnet has gone an order of magnitude higher, but that was not with production code.

Arguably, the soft cap is what the miner should set according to what is most economical for it to mine. That is the balance point between revenue, the computational cost of building blocks and validation time (a function of the miner’s hardware and software capacity). The hard cap is more of a fail-safe, for two reasons: 1) It allows miners to accept blocks from other miners who might be economically able to mine larger blocks. These blocks may be more costly to validate, but the miner is required to do so if it wants to stay on the majority chain. 2) If no limit at all is in place (as the software is currently architected), then the effective limit becomes the point where the node runs out of memory and crashes.

For the sake of maintaining consensus, these two values are necessarily quite a long way apart.
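The two limits described above can be sketched as a pair of simple checks. This is an illustration only, not real node code; the cap values are the defaults mentioned in this post.

```python
# Illustrative sketch of how a miner's two block-size limits interact.
# Values are the defaults described in the post, in bytes.

SOFT_CAP = 2_000_000    # largest block this miner will MINE
HARD_CAP = 32_000_000   # largest block this miner will ACCEPT from a peer

def will_mine(candidate_block_size: int) -> bool:
    """A miner never builds a block larger than its own soft cap."""
    return candidate_block_size <= SOFT_CAP

def will_accept(peer_block_size: int) -> bool:
    """A miner accepts any otherwise-valid block up to its hard cap."""
    return peer_block_size <= HARD_CAP

# An 8mb block: too big for this miner to mine itself, but it will
# still accept one from a peer and stay on the majority chain.
assert not will_mine(8_000_000)
assert will_accept(8_000_000)
```

The gap between the two values is what lets the network stay in consensus while individual miners mine blocks of different sizes.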

Capacity planning for fast confirmations

In order to ensure blocks never get full and cause the delays and fee trauma that are typical for BTC users, it is necessary to set block caps well in excess of the network average. Let’s look at recent data:

From this recent data, we can see that with a 7 day average block size of ~ 100kb, there are numerous spikes that are multiples of that: two spikes that are 10 times the average, and one spike that is 40 times the average. These are short term spikes but sustained demand peaks are to be expected in several situations: Black Friday sales, market crashes etc. These are times when throughput is more important than ever. Many BTC users experienced severe frustration during volatile market events when they were unable to move coins to or from exchanges.

We must also factor in that block intervals are not a constant 10 minutes. If we want to cover at least 98% of block intervals, we are in the vicinity of 45-55 minute intervals, so to be safe we need another factor of 6x.

Based on the above, we would suggest a minimum soft cap of 240 times the current long term average block size. Currently, that equates to about 24mb.
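The arithmetic behind that 240x figure is straightforward; here it is spelled out, using the approximate figures quoted above:

```python
# Capacity-planning arithmetic from the post's reasoning.
avg_block = 100_000     # ~7-day average block size, in bytes (~100kb)
peak_factor = 40        # largest observed short-term demand spike: 40x average
interval_factor = 6     # headroom for long (45-55 minute) block intervals

suggested_soft_cap = avg_block * peak_factor * interval_factor
assert suggested_soft_cap == 24_000_000   # ~24mb, i.e. 240x the average
```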

The orphan game

Let’s look at what can happen when miners start playing with these settings.

So long as your soft cap is lower than everyone else’s hard cap, there is no risk of your blocks being orphaned due to excessive block size.

If you, as a miner, choose to raise your hard cap in excess of everyone else’s, nothing unusual will happen unless some miner (maybe you) a) sets their hard cap as high as yours, b) sets their soft cap higher than everyone else’s hard cap, and c) there are enough transactions on the network between blocks to exceed other miners’ hard caps. If this happens then you’ll start building on the large block while everyone else ignores it. Unless you have more than 50% of the hash power, sooner or later your chain will be overtaken and you’ll switch back to the small-block chain.

If a group of miners in excess of 50% hash power raise their hard caps again, nothing unusual will happen unless some miner raises its soft cap above some other miner’s hard cap. In that case, the miner with the lower hard cap (assuming it is a minority) will get orphaned once a large block is mined and be left behind. That miner then has a choice (and strong motive) to raise its limit and, if necessary, upgrade its hardware/software.

As we can see, it is not in any miner’s interests to start mining bigger blocks unless it is reasonably sure the majority of other miners are willing and able to go along with it.
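The safety rule implicit in the scenarios above can be stated as one condition: your blocks cannot be orphaned for size as long as your soft cap does not exceed any peer’s hard cap. A minimal sketch, using made-up peer caps:

```python
# Sketch of the orphan-risk rule described above.
# The peer hard caps below are hypothetical example values.

def safe_soft_cap(my_soft_cap: int, peer_hard_caps: list) -> bool:
    """Your blocks can't be orphaned for size if every peer will accept them."""
    return my_soft_cap <= min(peer_hard_caps)

peers = [32_000_000, 32_000_000, 128_000_000]  # hypothetical network

assert safe_soft_cap(24_000_000, peers)        # every peer accepts 24mb blocks
assert not safe_soft_cap(64_000_000, peers)    # the 32mb nodes would reject these
```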

Poison blocks and antidotes

A commonly cited fear related to big blocks is the poison block attack. What if 51% of the network accept larger blocks and some malicious actor creates a big one just to fork the chain? We don’t consider the scenario where a minority accepts larger blocks as this will sort itself out automatically.

It is important to consider the difference between peak and sustained load on the network here. If we ever reach a point where a significant number of nodes cannot handle a sustained load generated by an attacker, then we have much bigger problems to worry about.

For a poison block attacker to create this load, the attacker would need to be continuously mining blocks larger than a significant portion of the network can handle. This requires not only significant sustained hashrate but the creation of many transactions. It can be argued that the attacker can mine its own free transactions to fill up the blocks. But if the attack is sustained, this becomes a red flag marking the attack blocks and other miners can take decisive action. The tools to do this (marking a block invalid) already exist but could be improved. The economics of bitcoin kick in here and the attacker has a choice between: (1) spending insane amounts of money to continue; or (2) giving up.

In the case of a peak load attack (either one or a short series of large blocks), the miners with the lower limit and the smaller hash rate have an obvious choice: raise the limit and accept the longer chain. Assuming their nodes have enough memory, they will process the large blocks; it may take a while, but they will catch up eventually and keep on mining. Perhaps with a bruised ego but certainly with a powerful incentive to up their game.

We should briefly acknowledge the possibility of a node running out of memory and crashing here. That is simply not a scenario the Bitcoin Cash network can afford to worry about. If miners allow the network to be held back by the lowest common denominator, it will never be able to grow fast enough.


The power of default settings

To date, the Bitcoin Cash network’s block size has largely been governed by default settings. The reason for that has recently become clear to us here in the Bitcoin SV team. Since news of the Bitcoin SV project was announced, we have been contacted by a number of miners asking us to make the maximum block size configurable rather than just raising it. This was curious, because Bitcoin Unlimited and Bitcoin ABC (which we estimate runs on 90%+ of mining nodes) both have configurable limits. However, whilst the configurable setting is prominent and loudly advertised in BU, in Bitcoin ABC it is hidden in debug settings and poorly documented. It appears many miners did not even know it existed.

The end result is that the default setting behaves in practice more like a hard-coded setting than what it is meant to be: a default which can be changed by miners. Defaults do have power, and the culture among miners has in the past been to defer to developers on decisions (expressed as default settings) like this.

The power of miners’ choice

This is precisely what CoinGeek and other miners seek to change. Both the soft and hard caps are configuration items that enable miners to exercise the power of governance endowed upon them by the bitcoin system in proportion to their investment. Bitcoin SV supports this and whilst we have no choice but to set default values (you can’t have a configurable setting without them) we do not endorse those values as the best choice and we encourage miners to adjust them as they see fit.

Bitcoin SV changes explained

Contrary to what appears to be widely held but incorrect belief, Bitcoin SV has no intention to force its users to accept a particular setting. We are simply moving the configuration settings to a much more prominent place. We’ll also be making these values easier to see via the RPC interface and in a later release will make them both dynamically configurable via RPC (one already is).

With respect to defaults: with each release, we intend to set the default soft cap to 240x the average block size, plus ~50% to account for growth between releases. In the first beta release, this will be set to 32mb (more precisely, 32,000,000 bytes).

As for the hard cap, the default value for the hard cap in the upcoming beta release will be 128,000,000 bytes (128mb).
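For illustration only: in Bitcoin ABC and Bitcoin Unlimited, the soft and hard caps correspond to the `blockmaxsize` and `excessiveblocksize` settings respectively. Assuming Bitcoin SV keeps the same option names, a miner adopting the beta defaults might have a configuration along these lines:

```
# bitcoin.conf -- illustrative values only; option names assume Bitcoin SV
# keeps the setting names used by Bitcoin ABC / Bitcoin Unlimited
blockmaxsize=32000000         # soft cap: largest block this node will mine
excessiveblocksize=128000000  # hard cap: largest block this node will accept
```

Either value can be set lower or higher; the point is that the choice sits with the miner, not the developer.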

Miners, however, will be free to change these numbers higher or lower as they see fit. If they choose to raise them before everyone else, they accept the risk that their blocks may be orphaned; that is their choice to make.

But I hear people ask: Isn’t this dangerous? How do we know the software can handle it?

The “other” Gigablock Testnet initiative to the rescue

It is important to remember that the upcoming release is a beta release intended as a release candidate for an intensive testing cycle with a final release on October 15th (1 month before the November fork activation date).

The Gigablock Testnet initiative was the result of a previous collaboration between nChain and Bitcoin Unlimited. After that collaboration ended, both organizations retained access to the outputs of the project. Prior to the inception of Bitcoin SV, nChain was already in the process of firing up its own version of the GBTN with a view to capacity testing Bitcoin ABC. That process is nearing completion, and the intent is now to use the testnet to run repeated cycles of stress tests on Bitcoin SV nodes (and later on Teranodes) to determine the real limits of the software on well-provisioned machines.

That is a key difference in approach from BU’s (whose tests served a different purpose and were equally valid): these tests are focussed on production software, specifically to gain insight into what each iteration of the software can safely handle. The Bitcoin SV project will of course be making the results of these ongoing tests available to all miners so they can use them to inform their decisions about block size caps and other configuration settings.

Before the final release (due on October 15th) sustained testing of both 32mb and 128mb blocks will have been completed to verify that the software is up to the task. Of course all other changes will be tested thoroughly during this period as well.

But what about 0-conf?

When orphan blocks are mentioned, there are often claims that they break 0-conf. This is a puzzling association. 0-conf works on the Bitcoin Cash network now, though not as well as it could. There are a few improvements that could be made to defeat commonly discussed attacks, but that is a subject for a different discussion.

If blocks get orphaned, transactions don’t simply disappear. The orphaned block has a sibling and likely those blocks share almost all the same transactions. Any that were in the orphaned block but not in the sibling are likely still floating around in mempools waiting to be mined into a subsequent block. If anything is actually affected, it is 1-conf and even then only minimally.

Consider that today we have roughly 1 orphan block per week. If 0-conf was really broken by orphans, then it’s already broken.
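A toy illustration of the point above, treating blocks as sets of transaction IDs (the names are made up):

```python
# Why an orphaned block doesn't lose transactions: the orphan and its
# sibling typically share almost all of their transactions.
orphaned_block = {"tx_a", "tx_b", "tx_c"}
sibling_block = {"tx_a", "tx_b", "tx_d"}   # the block that won the race

# Transactions unique to the orphan simply return to mempools,
# waiting to be mined into a subsequent block.
back_to_mempool = orphaned_block - sibling_block
assert back_to_mempool == {"tx_c"}
```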

However, we acknowledge that 0-conf isn’t as close to perfect as it could be and where improvements can be made they will.

Technical challenges, quick wins and long term plans

The work done last year on the Gigablock Testnet initiative has gone a long way to improving our understanding of the bottlenecks in the current bitcoin software. We encourage BU to continue its work and the Bitcoin SV team will concurrently be doing its own Gigablock testnet work.

A few areas we will be focussing on immediately:

  • Parallel validation
  • Parallel network IO
  • Faster UTXO lookups
  • Hardware accelerated signature validation (GPUs, FPGAs etc)
  • More efficient miner API
  • Tools to improve the small world network backbone between miners
  • Evaluating the ‘excessiveAcceptDepth’ mechanism from emergent consensus as an additional failsafe for miners running smaller hard caps.

Running in parallel to the Bitcoin SV project is nChain’s previously-announced Teranode project. Planned for longer-term deployment, Teranode will be an enterprise-level BCH full node implementation, employing a micro-services architecture approach to target terabyte+ block capacity. Teranode is still in early stages but its existence creates some valuable synergies between the two projects. Teranode’s focus is longer term. As a result, technologies like dedicated signature servers and Terab UTXO appliances are well within its scope. Obviously, much research from each project will have relevance and in some cases, practical application to the other.

There is significant work to do, and the Bitcoin SV team is dedicated to making its central focus the two key pillars requested of us by miners: scaling and 0-conf security. It can and will be done intelligently, safely and with informed decision-making. We believe in a global future for Bitcoin, and are committed to playing our part to achieve this future.