[Bitcoin-ml] Time to get agreement on new DAA for Bitcoin Cash (Eli)

Scott Roberts wordsgalore at gmail.com
Wed Nov 1 14:16:21 UTC 2017


My previous email will be more understandable with this factoid:
difficulty = hashrate * k
where k is just a constant based on the desired solvetime and a
scaling factor, so you can always think in terms of:
difficulty = hashrate
for a given coin.
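
A minimal sketch of that relation (the constant names and the 2^32
hashes-per-difficulty-1 scaling are my assumptions, not something from
the earlier email):

    TARGET_SOLVETIME = 600      # desired average solvetime in seconds
    HASHES_PER_DIFF1 = 2**32    # assumed scaling: expected hashes at difficulty 1

    def ideal_difficulty(hashrate):
        # difficulty = hashrate * k, where k = TARGET_SOLVETIME / HASHES_PER_DIFF1
        return hashrate * TARGET_SOLVETIME / HASHES_PER_DIFF1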



On Wed, Nov 1, 2017 at 10:02 AM, Scott Roberts <wordsgalore at gmail.com> wrote:
> I previously said difficulty is not open to debate or opinion. By this
> I mean a scientific measurement should be followed by a mathematical
> calculation.  We should measure current hashrate and then
> mathematically set difficulty based on that to get the desired average
> solvetime. The only discussion (debate and opinion) needed is to
> determine how to make the measurement and how to do the math. This
> coincidentally provides the perfect protection against hash attacks and
> delays (unless you change the Poisson by shifting consensus from POW
> to better time restrictions set by the nodes, i.e. Andrew Stone's
> idea.)
>
> The difficulty math should be:  difficulty = 600 * hashrate.  Hashrate
> = current difficulty / current solvetime.  This part is not open for
> debate. The problem is determining current hashrate because the only
> way to measure it is to see the network response to current
> difficulty, and it's current difficulty that we're looking for.  (I
> should mention schancel has an idea on how to get a true "current
> hashrate", but as with long tail and block reward adjustment  I
> consider it "for the future").  So the best we can do is to base the
> math on the hashrate on the previous difficulty.  The problem is
> random variation.  What observations and math should we use to
> estimate current hash rate?  This is open for debate, but it's not
> debatable in the sense that there is some provable optimum.
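>
> A minimal sketch of that measurement step (function and variable names
> are mine):
>
>     TARGET = 600  # desired average solvetime in seconds
>
>     def naive_next_difficulty(prev_difficulty, prev_solvetime):
>         # hashrate implied by the last block, in units where
>         # difficulty = hashrate * solvetime
>         observed_hashrate = prev_difficulty / prev_solvetime
>         # difficulty = 600 * hashrate
>         return TARGET * observed_hashrate
>
> With a single block this estimate is dominated by the random (Poisson)
> variation in solvetimes, which is why the real question is how many
> blocks to average over and how to weight them.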
>
> With this background I want to "prove" Degnr8's WT with low N is the
> optimal formula. Bitcoin measures avg hashrate of past 2 weeks of
> blocks by using the perfectly correct D = sum(D) / sum(solvetimes)
> times a proportionality constant. So it is measuring hashrate as it
> was 1 week in the past, and only adjusts once every two weeks.
> Obviously this is not the best measure of current hashrate. So,
> everyone started using the same equation, but applying it every block
> and using a smaller N. There have been many attempts to improve this
> math, but every attempt I am aware of made it worse. The amount of
> time wasted trying to improve it is incredible.  They go by a lot
> of fancy names. They often apply a "filter" to try to reduce the
> "noise", but they don't understand that the random variation is not
> real noise that has a forcing function that needs to be filtered out
> with something analogous to a capacitor and/or inductor on an
> electrical signal.  It can be noisy from miners jumping on and off or
> from network effects, but we have no way to estimate the nature of
> that noise in order to justify a specific filter design. The random
> variation is, as far as we can measure, precise data that needs to be
> included. Devs will also hurt the incontrovertible math by making
> asymmetrical changes such as preventing negative timestamps. They have
> their reasoning processes, but their reasoning is not as good as the
> required math.
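>
> A sketch of that per-block simple average (again, names are mine; real
> implementations work on targets and compact bits rather than a float
> difficulty):
>
>     def sma_difficulty(difficulties, solvetimes, N, target=600):
>         # Same per-window math as Bitcoin's retarget,
>         # target * sum(D) / sum(solvetimes), but applied every block
>         # over only the last N blocks.
>         return target * sum(difficulties[-N:]) / sum(solvetimes[-N:])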
>
> The simple average with low N is the state of the art. But it
> has a problem: it is measuring the hashrate as it was N/2 blocks into
> the past.  Lower N helps, but there starts to be accidental variation
> that causes longer delays. Fear of this is greatly exaggerated.
> Filters do not help because the end result is not as good as just
> going to a longer N.  Degnr8's algo does not address the tradeoff
> between faster response and accidental changes in difficulty that
> occurs with low N.  But by letting the weighting factor for the blocks
> linearly reduce as they get further in the past, he's made perhaps
> the best possible measurement of the hashrate as it stood in the
> PREVIOUS block rather than N/2 blocks in the past.  There might be a
> slightly
> more complex equation to make it a little more accurate, but if it
> overestimates what the previous, current, or future block hashrates
> are, it will send the difficulty into oscillations by overshooting,
> leaving it open to an exploit that amplifies the oscillations.
> Industrial process
> controllers (PID controllers) do something better, but they depend on
> the process having stable characteristics, not something like
> miners seeking profit who can change how they react to the
> controller.  In other words, diff algos can't try to predict the future
> hashrate.  The best they can do is estimate what the hashrate was in
> the previous block.  In watching Degnr8's WT respond to step
> functions, it is a very linear increase. This is the hallmark of a
> controller that is NOT trying to predict the future. It responds
> faster than the simple equation, but does not overshoot or undershoot
> in any sense.
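>
> One natural way to write that linear weighting, in the same
> sum-of-difficulties over sum-of-solvetimes form used above (a sketch of
> the idea only; the actual WT implementation works on targets and
> differs in detail):
>
>     def wt_difficulty(difficulties, solvetimes, N, target=600):
>         # The newest block gets weight N, the oldest in the window weight 1,
>         # so the estimate centers on the previous block instead of N/2 back.
>         num = 0.0
>         den = 0.0
>         for i in range(1, N + 1):            # i = 1 oldest ... i = N newest
>             num += i * difficulties[-N - 1 + i]
>             den += i * solvetimes[-N - 1 + i]
>         return target * num / den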
>
> This is the basis for me claiming there is no alternative to using
> Degnr8's WT.  The only thing to quibble over is the setting of N.  I'm
> down for 30.  I'm working on getting an exposition of what it looks
> like when small alts use N=16, 30, and 63.
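>
> A toy step-response harness along those lines (deterministic
> solvetimes, no Poisson noise, names are mine), to compare N = 16, 30,
> and 63:
>
>     def step_response(daa, N, blocks=200, step_at=100, target=600):
>         hashrate = 1.0
>         D = [float(hashrate * target)] * N   # equilibrium: difficulty = hashrate * 600
>         t = [float(target)] * N
>         for h in range(blocks):
>             if h == step_at:
>                 hashrate *= 2.0              # hashrate doubles mid-run
>             next_D = daa(D, t, N, target)    # difficulty assigned to the new block
>             D.append(next_D)
>             t.append(next_D / hashrate)      # expected (noise-free) solvetime
>         return D
>
>     # e.g. step_response(wt_difficulty, 30) vs. step_response(sma_difficulty, 30)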

