<div dir="ltr"><div><span class="im">On Thu, Aug 6, 2015 at 5:06 PM, Gavin Andresen <span dir="ltr"><<a href="mailto:gavinandresen@gmail.com" target="_blank">gavinandresen@gmail.com</a>></span> wrote:<br></span><span class="im"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span>On Thu, Aug 6, 2015 at 10:53 AM, Pieter Wuille <span dir="ltr"><<a href="mailto:pieter.wuille@gmail.com" target="_blank">pieter.wuille@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">So
if we would have 8 MB blocks, and there is a sudden influx of users (or
settlement systems, which serve many more users) who want to pay high
fees (let's say 20 transactions per second) making the block chain
inaccessible for low fee transactions, and unreliable for medium fee
transactions (for any value of low, medium, and high), would you be ok
with that?</blockquote><div><br></div></span><div>Yes, that's fine. If
the network cannot handle the transaction volume that people want to pay
for, then the marginal transactions are priced out. That is true today
(otherwise ChangeTip would be operating on-blockchain), and will be true
forever.</div></div></div></div></blockquote><div><br></div></span><div>The
network can "handle" any size. I believe that if a majority of miners
forms SPV mining agreements, then they are no longer affected by the
block size, and benefit from making their blocks slow to validate for
others (as long as the fee is negligible compared to the subsidy). I'll
try to find the time to implement that in my simulator. Some hardware
for full nodes will always be able to validate and index the chain, so
nobody needs to run a pesky full node anymore and they can just use a
web API to validate payments.<br><br></div><div>Being able to "handle" a
particular rate is not a boolean question. It's a question of how much
security, centralization, and risk for systemic error we're willing to
tolerate. These are not things you can just observe, so let's keep
talking about the risks, and find a solution that we agree on.<br><br></div><span class="im"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
If so, why is 8 MB good but 1 MB not? To me, they're a small constant
factor that does not fundamentally improve the scale of the system.</blockquote><div><br></div></span><div>"better
is better" -- I applaud efforts to fundamentally improve the
scalability of the system, but I am an old, cranky, pragmatic engineer
who has seen that successful companies tackle problems that arise and
are willing to deploy not-so-perfect solutions if they help whatever
short-term problem they're facing.</div></div></div></div></blockquote><div><br></div></span><div>I
don't believe there is a short-term problem. If there is one now, there
will still be one at 8 MB blocks (or whatever size blocks are actually
produced).<br> <br></div><span class="im"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I dislike the outlook of "being forever locked at the same scale" while
technology evolves, so my proposal tries to address that part. It
intentionally does not try to improve a small factor, because I don't
think it is valuable.</blockquote></span></div><br>I think consensus is against you on that point.</div></div></blockquote><div><br></div></span><div>Maybe. But I believe that it is essential to not take unnecessary risks, and find a non-controversial solution.<br><br></div>-- <br></div>Pieter<br><br></div>