<div dir="auto">How is the adverse scenario you describe different from a plain old 51% attack? Each proposed protocol change where 51% or more of the network can potentially game the rules and break the system should be considered just as acceptable/unacceptable as another. <div dir="auto"><br></div><div dir="auto">There comes a point where some form of basic honesty must be assumed on behalf of participants benefiting from the system working properly and reliably. </div><div dir="auto"><br></div><div dir="auto">Afterall, what magic line of code prohibits all miners from simultaneously turning all their equipment off... just because? </div><div dir="auto"><br></div><div dir="auto">Maybe this 'one':</div><div dir="auto"><br></div><div dir="auto">"As long as a majority of CPU power is controlled by nodes that are not cooperating to attack the network, they'll generate the longest chain and outpace attackers. The network itself requires minimal structure."</div><div dir="auto"><br></div><div dir="auto">Is there such a thing as an unrecognizable 51% attack? One where the remaining 49% get dragged in against their will? </div><div dir="auto"><br></div><div dir="auto">Daniele </div></div><div class="gmail_extra"><br><div class="gmail_quote">On Dec 10, 2016 6:39 PM, "Pieter Wuille" <<a href="mailto:pieter.wuille@gmail.com">pieter.wuille@gmail.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">On Sat, Dec 10, 2016 at 4:23 AM, Daniele Pinna via bitcoin-dev <span dir="ltr"><<a href="mailto:bitcoin-dev@lists.linuxfoundation.org" target="_blank">bitcoin-dev@lists.linuxfounda<wbr>tion.org</a>></span> wrote:<br><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto"><div dir="auto"><span style="font-size:13.696px;font-family:sans-serif">We have models for estimating the probability that a block is orphaned given average network bandwidth and block size. </span><br></div><div dir="auto"><span style="font-size:13.696px;font-family:sans-serif"><br></span></div><div dir="auto"><span style="font-family:sans-serif;font-size:13.696px">The question is, do we have objective measures of these two quantities? Couldn't we target an orphan_rate < max_rate? </span><span style="font-size:13.696px;font-family:sans-serif"><br></span></div></div></blockquote><div><br></div><div>Models can predict orphan rate given block size and network/hashrate topology, but you can't control the topology (and things like FIBRE hide the effect of block size on this as well). The result is that if you're purely optimizing for minimal orphan rate, you can end up with a single (conglomerate of) pools producing all the blocks. Such a setup has no propagation delay at all, and as a result can always achieve 0 orphans.<br><br></div><div>Cheers,<br><br>-- <br></div><div>Pieter<br><br></div></div></div></div>
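For concreteness, here is a rough sketch of the standard exponential approximation for orphan probability that the quoted model discussion refers to. The 600 s block interval, the 1 Mbit/s effective bandwidth, and the example block sizes are assumed illustrative numbers, and the helper names are my own:

    # Minimal sketch of the exponential orphan-rate approximation.
    # Block interval, bandwidth, and block sizes are assumed example
    # values, not measurements.
    import math

    BLOCK_INTERVAL = 600.0  # seconds, Bitcoin's target average block time


    def propagation_delay(block_size_bytes, effective_bandwidth_bps):
        """Crude model: delay grows linearly with block size."""
        return block_size_bytes * 8 / effective_bandwidth_bps


    def orphan_probability(delay_seconds, block_interval=BLOCK_INTERVAL):
        """P(a competing block is found while ours is still propagating),
        treating block discovery as a Poisson process."""
        return 1.0 - math.exp(-delay_seconds / block_interval)


    if __name__ == "__main__":
        bandwidth = 1_000_000  # 1 Mbit/s effective relay bandwidth (assumed)
        for size_mb in (0.25, 1.0, 4.0):
            delay = propagation_delay(size_mb * 1_000_000, bandwidth)
            print(f"{size_mb:4.2f} MB -> delay {delay:6.1f} s, "
                  f"orphan prob ~{orphan_probability(delay):.3%}")
        # A single pool mining on its own blocks has zero propagation
        # delay, hence ~0% orphans -- Pieter's point about why minimizing
        # orphan rate alone rewards centralization.
        print(f"single pool -> orphan prob ~{orphan_probability(0.0):.3%}")

Under these assumed numbers, larger blocks raise the orphan probability only modestly for well-connected miners, while a fully centralized pool drives it to zero, which is exactly why orphan rate alone is a poor target.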