A (belated) Christmas present from the Bitcoin SV team.
by Steve Shadders
December 24, 2020 (7min read)

It’s taken us a bit longer than we hoped, but the beta version of Bitcoin SV 1.0.7 (Dynastic) will be released in early January (hence the “belated” part of this article’s title).

The Dynastic release is the result of almost a year of work to untangle a particularly nasty mess we inherited from Bitcoin Core. As such, it is a huge code change which requires more than the usual level of care in testing. But I think most BSV app devs will agree it was worth it. Here’s the first hint (hey it’s Christmas, we need to unwrap this slowly!).

This is an animated chart over time of submitting 2 million transactions to a Bitcoin SV node. The y-axis is the number of transactions accepted and the x-axis is seconds elapsed (sped up a bit). The orange line is the new beta version of SV, the blue is the previous 1.0.6 version. On the first chart you can see that 1.0.6 and 1.0.7 are almost identical. On the 2nd you can see a difference start to emerge and by the 3rd you can see that difference has become dramatic.

Why are there 3 charts side by side? The answer is at the top. The three charts show sets of transactions made up of descendant chains of length 1, 50 and 1000. What you’ll notice on the blue lines is the reason why we’ve been plagued with an ancestor chain limit of 25: the performance hit grows quadratically as the chain length grows. We’ll explain why later in this article. The nice bit is the orange line, which shows zero degradation as chain length increases; in fact it even speeds up. Which brings us to the part where we tear open the Christmas wrapping:

The 25 chained ancestor limit is history!

In Bitcoin SV 1.0.7 beta we have raised the default limit from 25 to 1000. We have tested much longer chains and the same linear performance is observed, in fact we see no obvious reason not to remove it completely other than an abundance of caution in an adversarial environment. However, rest assured, after a few months of this 1000 limit being in the wild our intent is to get rid of it altogether. In fact we expect removing it to even result in a slight performance increase as we don’t need to do the accounting of chain lengths anymore.

So why was there a limit in the first place and why did it take so long to remove?

The history of block building

In the beginning, bitcoind v0.1.0 did it fairly simply. Once a second or so, it would take an in-memory map of all the transactions it had received, check any new ones to ensure they met the minimum fee requirement and, if so, add them to the block template, which was basically an ordered list of valid transactions. It would then calculate a merkle tree from that set of transactions and build a block header to start doing proof of work upon. It could have been optimised further, but it was fairly efficient and didn’t do much unnecessary work.
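As a rough sketch of that original flow (a simplified illustration in Python, not the actual bitcoind C++ code; the field names and `build_template` helper are hypothetical), accepting every transaction that meets the minimum fee and hashing the set into a merkle root might look like:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    # Bitcoin hashes everything twice through SHA-256.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids):
    """Bitcoin-style merkle root: pair up hashes level by level,
    duplicating the last hash whenever a level has an odd count."""
    if not txids:
        return double_sha256(b"")
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [double_sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def build_template(mempool, min_fee_rate):
    """v0.1.0-style assembly: keep every transaction meeting the
    minimum fee rate, in arrival order, then hash the set."""
    selected = [tx for tx in mempool
                if tx["fee"] / tx["size"] >= min_fee_rate]
    root = merkle_root([tx["txid"] for tx in selected])
    return selected, root
```

Note the absence of any sorting or graph traversal: each transaction is examined once, in the order it arrived.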

Then along came the 1MB block size limit and ideas about a fee market. The idea was that limiting the block size would create demand for block space and drive up the transaction fees users would effectively bid to get included in a block. But this created a new requirement for the Bitcoin software to manage: if a miner can’t select all the available transactions due to the 1MB limit, they need to ensure they select the ones that pay the highest fee rates to maximise their revenue, leaving the less lucrative transactions for another miner to pick up later.
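A minimal sketch of that selection requirement, assuming a greedy highest-fee-rate-first policy and deliberately ignoring the unconfirmed-ancestor complications discussed below (the function and field names are illustrative, not real node code):

```python
def select_by_fee_rate(mempool, max_block_size=1_000_000):
    """Greedy fee-rate selection under a block size cap: sort the whole
    mempool by fee rate (highest first) and take transactions until
    the next one would overflow the cap, skipping it for later blocks."""
    chosen, used = [], 0
    for tx in sorted(mempool,
                     key=lambda t: t["fee"] / t["size"],
                     reverse=True):
        if used + tx["size"] <= max_block_size:
            chosen.append(tx)
            used += tx["size"]
    return chosen
```

Even this toy version needs a full sort of the mempool; the real problem is worse, because CPFP means a transaction’s effective fee rate depends on its ancestors, not just on itself.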

The consequences of limiting block size

And thus was the block building code turned into a nightmare of spaghetti accounting and other policy-based limitations. Selecting the highest-paying transactions is not as simple as it sounds. When you have unconfirmed ancestors and child-pays-for-parent (CPFP) to account for, there is a lot of graph traversal and other horrible complexity involved. Much of this involves traversing these graphs of related transactions every time you add a new one, which brings with it quadratic computation cost. That is, the bigger a set gets, the more expensive every operation on that set becomes, and the result is a superlinear degradation in speed. This is clearly visible in the above chart, where we see the rate at which we can get transactions into the mempool fall dramatically as it grows.
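To see where the quadratic cost comes from, consider a toy ancestor walk (the data structures here are hypothetical; the real mempool tracking is far more involved). Adding the i-th transaction in a chain forces a traversal over its i in-mempool ancestors, so processing a chain of n transactions costs roughly 1 + 2 + … + (n−1) ≈ n²/2 steps in total:

```python
def ancestor_set(tx, parents):
    """Walk the unconfirmed-parent graph to collect every
    in-mempool ancestor of tx."""
    seen, stack = set(), list(parents.get(tx, ()))
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(parents.get(p, ()))
    return seen

def total_traversal_work(n):
    """A chain of n transactions where tx i spends tx i-1.
    Returns the total number of ancestors visited across all
    n insertions: 0 + 1 + ... + (n-1) = n(n-1)/2."""
    parents = {i: [i - 1] for i in range(1, n)}
    return sum(len(ancestor_set(i, parents)) for i in range(n))
```

Doubling the chain length quadruples the total traversal work, which is exactly the degradation pattern the blue lines show.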

You can also observe the effects of this in the layout of blocks mined with Bitcoin Core, Bitcoin ABC and older versions of Bitcoin SV. The first transactions in blocks pay the highest fee rates and these get lower as you move toward the end of the block. This unfortunately removes the time ordering property of transactions in blocks which is a very useful feature of Bitcoin.

The solution

As is so often the case with Bitcoin, the solution to the problem was largely to just put it back the way it was. But that is easier said than done after 12 years of developer jiggery-pokery on the code base.

Several releases of Bitcoin SV this year contained preparatory work that needed to be proven stable. This culminated in SV 1.0.6, where we replaced the default Legacy block assembler (LBA) module with the new Journaling block assembler (JBA), which returns to the simple model of:

  1. Is the transaction valid? – yes
  2. Does it pay our minimum fee? – yes
  3. Append it to the transaction list

You see, when block size is assumed to be unbounded, there’s no need to worry about how high a transaction’s fee is relative to other transactions; you simply select them all if they are above your minimum. This makes block building embarrassingly simple, but it also has profound effects on how Bitcoin fee markets work. It changes the dynamic from users bidding the price up to miners competing to offer the lowest price, thus encouraging volume.
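The three-step model above can be sketched as a single constant-time acceptance function (a simplified illustration; the function and field names are hypothetical, not the actual JBA interface):

```python
def jba_accept(journal, tx, min_fee_rate, is_valid):
    """Journaling-style acceptance: validity check, minimum-fee
    check, append. No fee-rate sorting and no ancestor accounting,
    so the cost per transaction stays constant regardless of how
    large the journal grows or how long descendant chains get."""
    if not is_valid(tx):                            # step 1
        return False
    if tx["fee"] / tx["size"] < min_fee_rate:       # step 2
        return False
    journal.append(tx)                              # step 3
    return True
```

Because nothing here depends on the size of the existing journal, throughput stays flat as the mempool grows, which is the linear orange line in the chart.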

The fee selection logic we mentioned above was heavily intertwined with the LBA, which meant we couldn’t remove it without breaking the LBA. So the sensible course was to remove the LBA entirely, and in order to do that, we needed the replacement JBA to be fully stable. But the fee logic doesn’t just touch the block assembler; it touches almost everything in Bitcoin SV, with tendrils that reach deep into very sensitive code. And some of that code affects the performance of the JBA.

And so the final step which we have achieved in the Dynastic release is to remove the LBA, clearing the path to removing the fee selection code and vastly simplifying the transaction selection logic.

Needless to say, this needed a lot of testing given the extent of the surgery it required. Our QA team is being forced to take a Christmas break this year after working through the holidays last year on Genesis (they’d probably work through the break if we let them because they’re rather a dedicated bunch). So we will have it out for beta testing in early January.

I really like this chart so I’m going to be indulgent and post it again. Not only because the work behind it solves this problem, but because it demonstrates the structural principle that must be applied to solve all scaling challenges in Bitcoin:

  1. Don’t do work you don’t need to do
  2. If you think you need to do it, question yourself ruthlessly
  3. Keep the throughput/set size chart linear

This particular problem is much easier to solve in Teranode than in Bitcoin SV since we don’t have to worry about surgically removing rubbish code whilst ensuring we don’t break anything. It’s simply a matter of applying such Satoshi-esque principles to the initial design. Satoshi was able to do it in alpha code.

Anyway, it’s a few weeks away, so that will give app developers a bit more time to dream up ways to make use of longer chains. Like the explosion of script experimentation we saw post-Genesis, I’m expecting to see a new wave of innovation with long transaction chains.

Merry Christmas from Team SV and stay safe.