[bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
daniele.pinna at gmail.com
Sun Dec 11 03:17:45 UTC 2016
How is the adverse scenario you describe different from a plain old 51%
attack? Each proposed protocol change where 51% or more of the network
can potentially game the rules and break the system should be considered
just as acceptable/unacceptable as another.
There comes a point where some form of basic honesty must be assumed on
behalf of participants benefiting from the system working properly.
After all, what magic line of code prohibits all miners from simultaneously
turning all their equipment off... just because?
Maybe this 'one':
"As long as a majority of CPU power is controlled by nodes that are not
cooperating to attack the network, they'll generate the longest chain and
outpace attackers. The network itself requires minimal structure."
Is there such a thing as an unrecognizable 51% attack? One where the
remaining 49% get dragged in against their will?
On Dec 10, 2016 6:39 PM, "Pieter Wuille" <pieter.wuille at gmail.com> wrote:
> On Sat, Dec 10, 2016 at 4:23 AM, Daniele Pinna via bitcoin-dev <
> bitcoin-dev at lists.linuxfoundation.org> wrote:
>> We have models for estimating the probability that a block is orphaned
>> given average network bandwidth and block size.
>> The question is, do we have objective measures of these two quantities?
>> Couldn't we target an orphan_rate < max_rate?
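
A minimal sketch of the kind of model being discussed (all numbers and function names are illustrative, not from any proposal; it assumes Poisson block arrivals and that propagation delay scales linearly with block size over available bandwidth):

```python
import math

BLOCK_INTERVAL = 600.0  # target average seconds between blocks

def orphan_probability(block_size_mb, bandwidth_mbps):
    """Chance a block is orphaned, approximated as the probability
    that a competing block is found while ours is still propagating.
    With Poisson block arrivals: P(orphan) ~= 1 - exp(-delay / interval)."""
    propagation_delay = block_size_mb * 8 / bandwidth_mbps  # seconds
    return 1 - math.exp(-propagation_delay / BLOCK_INTERVAL)

def max_block_size(max_orphan_rate, bandwidth_mbps):
    """Invert the model: the largest block size that keeps the
    predicted orphan rate below max_orphan_rate."""
    delay = -BLOCK_INTERVAL * math.log(1 - max_orphan_rate)
    return delay * bandwidth_mbps / 8  # megabytes
```

For example, with 10 Mbps of effective bandwidth, a 1 MB block takes about 0.8 s to propagate, giving an orphan probability near 0.13%; targeting a 1% orphan rate would allow blocks of roughly 7.5 MB under these toy assumptions.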
> Models can predict orphan rate given block size and network/hashrate
> topology, but you can't control the topology (and things like FIBRE hide
> the effect of block size on this as well). The result is that if you're
> purely optimizing for minimal orphan rate, you can end up with a single
> (conglomerate of) pools producing all the blocks. Such a setup has no
> propagation delay at all, and as a result can always achieve 0 orphans.
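
The point can be illustrated with the same toy orphan model (the exponential form is an assumption, not a measured topology): a miner who eliminates propagation delay entirely, by being the only block producer, drives its orphan rate to exactly zero.

```python
import math

BLOCK_INTERVAL = 600.0  # seconds

def orphan_rate(propagation_delay_s):
    # Poisson-arrival approximation: probability a rival block
    # appears while ours is still propagating.
    return 1 - math.exp(-propagation_delay_s / BLOCK_INTERVAL)

# A well-connected but distributed miner: a few seconds of delay,
# hence a small but non-zero orphan rate.
distributed = orphan_rate(5.0)

# A single pool always mining on top of its own blocks: zero delay,
# hence zero orphans -- "optimal" by this metric, but fully centralized.
centralized = orphan_rate(0.0)
```

So an objective that purely minimizes orphan rate is minimized by full centralization, which is why orphan rate alone is a poor control target.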