[Bitcoin-ml] Difficulty adjustment

Amaury Séchet deadalnix at gmail.com
Fri Aug 25 19:25:05 UTC 2017


Hi all,

I was indeed expecting wild swings of hashrate between BCC and BTC, and
this isn't something we can completely prevent unless both BTC and BCC
adjust difficulty on timeframes comparable to those of price swings.

We can make the situation better, obviously, but first I would like to do
some framing.

First, I don't think this is an especially urgent problem to solve. While
inflation ends up being higher in terms of number of coins issued, the
faster issuance also ensures that the chain is fairly secure, and it is not
significantly higher in terms of USD, EUR, or whatever other currency we
want to look at. In terms of value, BCC's inflation is not that much higher
than BTC's.

Second, there is a lot of uncertainty in the space, at least until
November, so I'm not sure that changing the adjustment is a wise move
before then. Really, the whole thing is young and still settling down. For
all we know, BCC will become the majority chain and the problem will be
solved that way.

Third, changing this is a hard fork. The code checks that the target is
exactly what we expect and nothing else. It may seem counter-intuitive that
the code needs to check that the difficulty is not higher than expected;
after all, why would a miner give itself more work than it needs? The
answer is simple: a selfish miner could make sure to win a block race every
time by doing a fraction of a percent of additional work. This is a very
good deal for the selfish miner, and we don't want that. A potential
solution to the hard fork problem is to make the proof of work an EC
parameter. That would allow future changes to the difficulty function
without a hard fork, while simultaneously preventing a selfish miner from
playing stupid games.
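
To make the exact-match rule concrete, here is a minimal sketch
(hypothetical names, not the actual validation code):

    # Hypothetical sketch of the target check. If the rule were merely
    # "at least as hard as expected", a selfish miner could mine at a
    # marginally harder target; under most-cumulative-work chain
    # selection, its block would displace a competitor found at the same
    # height, winning every block race for a sliver of extra work.
    def check_target(block_target, expected_target):
        # Exact equality: reject blocks that are easier OR harder.
        return block_target == expected_target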

That being said, I would like to be proactive rather than reactive, so I
worked on the problem to make sure we can deploy something if/when we think
it is needed.

Most implementations base their mechanism on block times; however, this
data is extremely chaotic and can also be faked to some extent by miners.
Ideally, we want to estimate the hashrate over several blocks and then use
this estimate to compute a new target. Because the input data is very
noisy, we either do this over a long timespan, in which case we adapt
poorly to violent hashrate swings, or over a short one, in which case the
difficulty output ends up chaotic and may not be stable. Ideally, we'd come
up with a solution that relies on individual block times as little as
possible, as this is the least stable data we have.

We compute the hashrate as H = W / T, where H is the hashrate, W the work
done, and T the time required to do that work. By estimating the hashrate
for each of several blocks and feeding the estimates into a low-pass
filter, we get a smoothed estimate of the hashrate. This tends to become
unstable as T becomes small, because H spikes in a very nonlinear manner. I
tried IIR filters and moving averages, but neither provides the stability
we are looking for.

For reference, this is what a moving average would look like: H = sum(Wi /
Ti) / N, summing over the last N blocks.
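
As a minimal sketch, assuming blocks are given as (work, solve_time) pairs
(a hypothetical layout), that estimator looks like this; note how a single
tiny Ti blows up its term:

    def naive_hashrate(blocks, n):
        # Moving average of per-block hashrate estimates Wi / Ti. Each
        # term depends on an individual block time, so one block found a
        # few seconds after its parent makes the whole estimate spike.
        recent = blocks[-n:]
        return sum(w / t for w, t in recent) / len(recent)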

But we can get rid of most of the Ts in the formula by taking an average of
the per-block hashrates weighted by block time. Using a weighted average
makes sense: it just shifts the correction from being done over a number of
blocks toward being done over a given amount of time, which is arguably
better. Weighting each term Wi / Ti by Ti gives sum(Ti * (Wi / Ti)) /
sum(Ti), so our formula becomes:

H = sum(Wi) / sum(Ti)

Note that only the timestamps of the first and last blocks now matter,
since sum(Ti) telescopes to the difference between them, which makes the
whole system significantly more stable. This comes at the cost of using a
moving average, which isn't as good as an IIR filter, but the stability we
get out of this technique is very much worth it, as we have to handle
chaotic and even adversarial input. By canceling out most block timestamps,
we greatly diminish the attack surface and the ability of chaotic input to
create chaotic output.
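
In sketch form (assuming headers carry a timestamp and cumulative chain
work, which is how I'll represent them here), the telescoping is explicit:

    def weighted_hashrate(headers):
        # Time-weighted estimate: sum(Ti * (Wi / Ti)) / sum(Ti)
        # = sum(Wi) / sum(Ti). Intermediate timestamps cancel out; only
        # the first and last block times enter the result.
        # headers: list of (timestamp, cumulative_work), oldest first.
        work = headers[-1][1] - headers[0][1]   # sum(Wi)
        span = headers[-1][0] - headers[0][0]   # sum(Ti), telescoped
        return work / span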

We can use this to define a base target by computing the work done over the
past 2016 blocks and the time it took. Because individual block time
variations do not matter much over 2016 blocks, this gives a very stable
output. However, it is unable to adjust quickly when required. So we also
want to compute a fast target over a much smaller timeframe. I found 130
minutes to work well, with a minimum of 5 blocks to make sure we aren't
undersampling. Because the average is weighted by block time, it also makes
sense to select the window as a timeframe rather than as a block count.
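
In sketch form, with the window constants from the text (the helper names
and structure are mine, not the real implementation):

    BLOCK_INTERVAL = 600     # seconds, the target block spacing
    BASE_WINDOW = 2016       # blocks for the stable base estimate
    FAST_WINDOW = 130 * 60   # seconds for the fast estimate
    MIN_FAST_BLOCKS = 5      # never sample fewer blocks than this

    def base_hashrate(headers):
        # 2016 intervals require 2017 headers.
        return weighted_hashrate(headers[-(BASE_WINDOW + 1):])

    def fast_hashrate(headers):
        # Walk back until the window spans FAST_WINDOW seconds, keeping
        # at least MIN_FAST_BLOCKS blocks to avoid undersampling.
        last_time = headers[-1][0]
        i = max(len(headers) - 1 - MIN_FAST_BLOCKS, 0)
        while i > 0 and last_time - headers[i][0] < FAST_WINDOW:
            i -= 1
        return weighted_hashrate(headers[i:])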

When the fast target is within 25% of the base target, we can simply assume
the gap is variance and use the base target. When it isn't, we are either
facing a large hashrate change or especially high variance. In both cases,
it is worth using the fast target to ensure more stability in block
issuance. For instance, if variance causes several blocks to come in very
slowly, it is worth dropping the difficulty temporarily to make sure a
backlog doesn't form.
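
Putting the two estimates together, a sketch of the selection rule (the 25%
threshold is from the text; the conversion constant k stands in for the
real compact-target arithmetic):

    def next_target(headers, k=1.0):
        # Target is inversely proportional to estimated hashrate so the
        # expected block interval stays at BLOCK_INTERVAL.
        base_target = k / (base_hashrate(headers) * BLOCK_INTERVAL)
        fast_target = k / (fast_hashrate(headers) * BLOCK_INTERVAL)
        if abs(fast_target - base_target) <= 0.25 * base_target:
            # Within 25%: treat the gap as variance and keep the stable
            # base target.
            return base_target
        # Otherwise the hashrate moved a lot, or variance is extreme;
        # follow the fast target to keep block issuance on schedule.
        return fast_target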

This gives us a difficulty function that is very stable when the hashrate
is stable, but that is able to nudge left and right to compensate for
variance, and is also able to adjust very quickly to large hashrate
changes. I ran various simulations and it looks pretty good.

I attached simulation results to this mail. I'm not sure that'll work on
the ML; if it doesn't, I'll have to figure out another way to share them.

I simulated the following scenarios:
 - constant hashrate, using only the base target (baseavg)
 - constant hashrate (fastavg)
 - 90% of the hashrate leaves at block 5000 (unstep)
 - hashrate goes 10x after block 5000 (step)
 - 90% of the hashrate goes back and forth every 100 blocks (swing)
 - miners generate fake timestamps, including a fair amount of timewarp
(fake)

The first 2016 blocks of each simulation just return a constant difficulty
in order to build up historical data the algorithm can work on.
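
For reference, a toy reconstruction of the "swing" run (my own sketch, not
the attached simulation code) could look like this:

    import random

    def simulate_swing(n_blocks=20000):
        # Seed 2016 blocks of constant-difficulty history, as above:
        # one unit of work per second, one block per BLOCK_INTERVAL.
        headers = [(i * BLOCK_INTERVAL, float(i * BLOCK_INTERVAL))
                   for i in range(2016)]
        for height in range(2016, n_blocks):
            # 90% of the hashrate goes back and forth every 100 blocks.
            h = 1.0 if (height // 100) % 2 == 0 else 0.1
            # Expected work per block implied by the chosen target.
            difficulty = 1.0 / next_target(headers)
            # Solve time is exponential with mean difficulty / h.
            solve_time = random.expovariate(h / difficulty)
            t, w = headers[-1]
            headers.append((t + solve_time, w + difficulty))
        return headers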

We can see that the algorithm behaves well in all scenarios. Compared to
the alternatives I tried, this is especially true when timewarp is
involved, as the formula cancels out most of the intermediary timestamps.
Working in a time-based manner means we adapt a bit more slowly when the
hashrate goes up, in terms of number of blocks, and a bit faster when it
goes down, but similarly in terms of time, unless the drop is dramatic
enough that we need more blocks just to gather data.

In any case, I don't think we should rush anything on that front,
considering what I explained at the beginning of this email. That's
probably a lot to digest already, so I'll stop here.



2017-08-25 15:36 GMT+02:00 Tom Zander via bitcoin-ml <
bitcoin-ml at lists.linuxfoundation.org>:

> On Friday, 25 August 2017 15:23:30 CEST Tom Harding wrote:
> > Block validity cannot be conditioned on local node's opinion of network
> > time at the moment it observes some event.  Difficulty must evolve
> > completely deterministically based on data in the header.
>
> Looks like my initial email had an unfinished paragraph that should have
> been
> removed which you responded to. Sorry about the confusion.
>
> The paragraph with the actual algo is deterministic as you suggested. I
> will
> repeat it here;
>
> «In case the MTP of the tip of the chain is less than an hour after the MTP
> of block Tip-15, the proof of work target is decreased by a third, or 33%.»
>
> --
> Tom Zander
> Blog: https://zander.github.io
> Vlog: https://vimeo.com/channels/tomscryptochannel
> _______________________________________________
> bitcoin-ml mailing list
> bitcoin-ml at lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-ml
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: powdata.ods
Type: application/vnd.oasis.opendocument.spreadsheet
Size: 2919246 bytes
Desc: not available
URL: <http://lists.linuxfoundation.org/pipermail/bitcoin-ml/attachments/20170825/00497371/attachment-0001.ods>

