[Bitcoin-ml] Solving the difficulty adjustment problem by asking the opposite question

Jan Nightingale turifex at protonmail.ch
Sat Aug 26 15:48:24 UTC 2017

I think the current approach - changing the difficulty based on past block times - is impossible to do right in the presence of more than one chain with the same POW. The 2-chain case is not even close to the worst one - what if there are 3? That's a realistic short-term possibility, and in the future one can imagine dozens of sha256 chains.

If one chain is more profitable, ALL profit-maximizing miners should switch.
After difficulty drops enough so that the chain stops being profitable, again, ALL profit-maximizing miners switch.
What this means: there's no solution. No matter what the difficulty algorithm is, enormous oscillations of hash power are unavoidable. It's a rabbit hole of introducing ever more complex strategies, each fixing the particular exploits of the previous one.

Non-profit-maximizing miners are impossible to model, as there are infinitely many possible incentives. There's also additional complexity arising from miners' switching lag - but again, imo that's impossible to model, as it depends on the particular mining operation.

Even if the 2-chain case could be solved this way (I'm skeptical), I don't believe it's going to work in the >2 case, making the system fragile and requiring yet another change.

For this reason, I propose going back to the bare fundamentals and turning the difficulty system on its head. To quote Tom Harding: "The combined BTC+BCC system is very complex. I think more ideas need to be considered as well." The idea is radical, but imo it's the only approach that could actually work.

What are the fundamentals in my opinion?
- we want X new coins per Y minutes according to the inflation schedule - $desiredInflationPer10Minutes;
- we want the transaction capacity to be limited, equivalent to $currentBlockLimit per Y minutes.

Currently, Bitcoin, and to my knowledge all other POW coins, work by (1) fixing the relation between those variables to a constant, reducing the variables in the system to one - block time - and (2) observing the block time on the live network and correcting the difficulty in the desired direction.

The system works because of an informal network consensus - nodes refusing to accept a block whose timestamp is too far in the future.

The proposal:
- Instead of per block, tie the rewards directly to time. A block with a timestamp 600 seconds after the previous one gets $desiredInflationPer10Minutes; a block 300 seconds after gets $desiredInflationPer10Minutes/2 [0].
- Same for the block size limit - a block 300 seconds after the last one gets $currentBlockLimit/2, one 150 seconds after gets $currentBlockLimit/4, etc. [0]
- No minimum difficulty - miners can mine at whatever difficulty they want; however, the best-chain rule stays the same - the highest cumulative difficulty wins.
- To prevent excessive flooding with empty blocks, consecutive blocks have to be at least 10 seconds apart. That's completely arbitrary; 10 seconds just seems sensible to me.
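To make the rules above concrete, here's a minimal sketch. All names and constants are my own illustrations, not part of any real client - the point is just that reward and size limit scale linearly with the seconds elapsed since the parent block, and anything closer than 10 seconds is invalid:

```python
MIN_SPACING = 10                   # seconds; the arbitrary anti-flood floor
REWARD_PER_600S = 1_250_000_000    # satoshis per 10 minutes (illustrative)
LIMIT_PER_600S = 8_000_000         # bytes per 10 minutes (illustrative)

def block_reward(dt):
    """Reward for a block whose timestamp is dt seconds after its parent's."""
    if dt < MIN_SPACING:
        raise ValueError("block too close to its parent")
    return REWARD_PER_600S * dt // 600

def block_size_limit(dt):
    """Size limit for a block dt seconds after its parent."""
    if dt < MIN_SPACING:
        raise ValueError("block too close to its parent")
    return LIMIT_PER_600S * dt // 600
```

So a block 600 seconds after its parent gets the full reward and the full size limit, one 300 seconds after gets half of each, and so on down to the 10-second floor.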

The desired outcome:
A stable equilibrium is reached and all oscillations are dampened. Miners have an incentive to estimate the amount of competing hash power at any given moment and set their difficulty high enough that there's a sufficiently low chance of getting orphaned, but low enough that they are still producing blocks.

A miner setting his difficulty too low will find that he's wasting hash power, as all or most of his blocks get orphaned.
A miner setting his difficulty too high will find that he's wasting hash power, as with a lower difficulty he could earn more rewards without getting orphaned.
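The two failure modes above pull in opposite directions, which is what produces an interior optimum. A cartoon of that tradeoff - with functional forms I invented purely for illustration, NOT derived from any real orphan-race model - shows the qualitative shape:

```python
import math

def expected_reward_rate(d, a=1.0, c=5.0):
    """Toy reward rate for difficulty d; a and c are made-up scale constants."""
    survive = 1 - math.exp(-d / a)   # too low -> blocks likely orphaned
    efficiency = math.exp(-d / c)    # too high -> wasted opportunity
    return survive * efficiency

# Scan a grid of difficulty choices for the sweet spot between the two.
grid = [i / 100 for i in range(1, 501)]
best_d = max(grid, key=expected_reward_rate)
```

With these particular shapes the optimum sits at d = ln(6) (about 1.79): setting the survive-term derivative against the efficiency loss gives e^-d = (1 - e^-d)/5. The exact number is meaningless; what matters is that any miner can find it by observation, exactly as described above.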

As a result, the system self-corrects very fast, incentivizing miners to take into account variables that are impossible to include in a pure past-block-time difficulty adjustment: the current ratio of block rewards compared to other chains, the amount of competing hash power, and possibly more.

What about block time? Is that a problem? In my opinion, no - on the contrary!
- a block header is only 80 bytes, so even one block every 10 seconds means 60 headers, i.e. 60*80B = 4800B per 10 minutes; compared to a single block that's 4800B-80B = 4720B of 'waste' per 10 minutes - that's nothing. [1]
- block confirmations become almost continuous. There's some initial period, let's say 30 seconds, in which there's a high risk of orphans - but after that, confirmation strength grows very fast, potentially making on-chain payments safe enough even for large in-person shop payments! More useful as cash.
- there are probably going to be empty blocks even in the presence of unconfirmed transactions, but note that the block size limit is defined the same way - so no capacity is lost! Imo there's no reason to prefer one 8MB block per 10 minutes over, for example, 8x1MB blocks in the same 10 minutes.
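For the skeptical, the header-overhead arithmetic from the first point above is easy to check:

```python
HEADER_BYTES = 80
blocks_per_10min = 600 // 10               # one block every 10 seconds -> 60
fast_overhead = blocks_per_10min * HEADER_BYTES   # 4800 bytes of headers
slow_overhead = 1 * HEADER_BYTES                  # one block per 10 minutes
extra = fast_overhead - slow_overhead
print(extra)  # 4720 bytes of additional 'waste' per 10 minutes
```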

Minor considerations:
- node DOS protection - a node accepting blocks under this scheme could find itself DOSed with lots of spurious low-difficulty chains. This problem already exists but would be exacerbated. However, that's not a consensus problem, so it can be dealt with much more freely - e.g. by estimating a 'reasonable' difficulty from external sources or other nodes.
[0] - there's a very minor problem arising from integer division losing the lowest bits. It could be ignored, leading to a minuscule loss of inflation and/or size. Instead, I think accumulating the errors is the right approach - i.e. the lost block reward is paid out as soon as it fits into the uint64 amount; similarly for size.
[1] - the $blockLimit per 10 minutes could be computed independently of headers
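The error-accumulation idea from [0] could be sketched like this (my own formulation, with an illustrative reward constant): carry the remainder of each integer division forward, so truncated satoshis reappear as soon as a whole unit has accumulated.

```python
REWARD_PER_600S = 1_250_000_000  # satoshis per 10 minutes (illustrative)

def reward_with_carry(dt, carry):
    """Return (reward, new_carry) for a block dt seconds after its parent."""
    total = REWARD_PER_600S * dt + carry
    return total // 600, total % 600

# Three blocks spanning exactly 600 seconds pay out exactly one full
# 10-minute reward; naive per-block truncation loses satoshis instead.
carry, paid = 0, 0
for dt in (13, 10, 577):
    reward, carry = reward_with_carry(dt, carry)
    paid += reward
```

With carries, the 600 seconds of history sum to exactly one full reward and the carry returns to zero, whereas summing REWARD_PER_600S*dt//600 naively over the same blocks comes up a satoshi short.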

Jan Nightingale