[Bitcoin-development] Proposed alternatives to the 20MB step function
steven.pine at gmail.com
Thu May 28 16:30:34 UTC 2015
I would support a dynamic block size increase as outlined. I have a few
questions and comments, though.
Is scaling by average block size the best and easiest method? Why not scale
by the number of transactions confirmed instead? Anyone can write and relay a
transaction, and transactions are what we want to scale for, so why not
measure them directly?
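As a rough sketch of what scaling by transactions confirmed could look like
(purely illustrative Python; the function name, window, and multiplier are my
assumptions, not part of any proposal in this thread):

    def max_txs_next_block(recent_tx_counts, multiplier=2.0, floor=1000):
        """Cap the next block's transaction count at a multiple of the
        trailing average, with a floor so the limit cannot fall to zero."""
        avg = sum(recent_tx_counts) / len(recent_tx_counts)
        return max(floor, int(avg * multiplier))

    # e.g. if the last 144 blocks averaged about 700 transactions each:
    print(max_txs_next_block([700] * 144))   # -> 1400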
I would prefer changes every 2016 blocks; it is a well-known interval and a
reasonable time period for planning around changes. Two weeks is plenty fast,
especially at a 50% rate of increase: in a few months the block size could be
many times what it is today.
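For concreteness (my arithmetic, not from the thread), a 50% increase every
2016 blocks compounds quickly if blocks keep filling:

    1.5^6  is roughly 11x  the starting cap after about three months
    1.5^13 is roughly 195x the starting cap after about six months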
A daily change to the size seems confusing, especially considering that the
max block size will be dipping up and down. Also, if something breaks, trying
to fix it within a day seems problematic; the hard fork caused by the database
size difference error comes to mind. Finally, daily 50% increases could
quickly crowd out smaller nodes if changes happen too quickly to adapt to.
> Date: Thu, 28 May 2015 11:53:41 -0400
> From: Gavin Andresen <gavinandresen at gmail.com>
> To: Matt Whitlock <bip at mattwhitlock.name>
> Cc: Bitcoin Dev <bitcoin-development at lists.sourceforge.net>
> Message-ID: <CABsx9T3-zxCAagAS0megd06xvG5n-3tUL9NUK9TT3vt7XNL9Tg at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
> On Fri, May 8, 2015 at 3:20 AM, Matt Whitlock <bip at mattwhitlock.name> wrote:
> > Between all the flames on this list, several ideas were raised that did
> > not get much attention. I hereby resubmit these ideas for consideration and
> > discussion.
> > - Perhaps the hard block size limit should be a function of the actual
> > block sizes over some trailing sampling period. For example, take the
> > median block size among the most recent 2016 blocks and multiply it by 1.5.
> > This allows Bitcoin to scale up gradually and organically, rather than
> > having human beings guessing at what is an appropriate limit.
> A lot of people like this idea, or something like it. It is nice and
> simple, which is really important for consensus-critical code.
> With this rule in place, I believe there would be more "fee pressure"
> (miners would be creating smaller blocks) today. I created a couple of
> histograms of block sizes to infer what policy miners are ACTUALLY
> following today with respect to block size:
> Last 1,000 blocks:
> Notice a big spike at 750K -- the default size for Bitcoin Core.
> This graph might be misleading, because transaction volume or fees might
> not be high enough over the last few days to fill blocks to whatever limit
> miners are willing to mine.
> So I graphed a time when (according to statoshi.info) there WERE a lot of
> transactions waiting to be confirmed:
> That might also be misleading, because it is possible there were a lot of
> transactions waiting to be confirmed because miners who choose to create
> small blocks got lucky and found more blocks than normal. In fact, it
> looks like that is what happened: more smaller-than-normal blocks were
> found, and the memory pool backed up.
> So: what if we had a dynamic maximum size limit based on recent history?
> The average block size is about 400K, so a 1.5x rule would make the max
> block size 600K; miners would definitely be squeezing out transactions /
> putting pressure to increase transaction fees. Even a 2x rule (implying
> 800K max blocks) would, today, be squeezing out transactions / putting
> pressure to increase fees.
> Using a median size instead of an average means the size can increase or
> decrease more quickly. For example, imagine the rule is "median of last
> 2016 blocks" and 49% of miners are producing 0-size blocks and 51% are
> producing max-size blocks. The median is max-size, so the 51% have total
> control over making blocks bigger. Swap the roles, and the median is zero,
> so the 51% producing empty blocks have total control over keeping it small.
> Because of that, I think using an average is better-- it means the max
> will change (up or down) more slowly.
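A quick numeric illustration of the median-versus-average point (the
percentages come from the example above; the block sizes are made up):

    from statistics import median, mean

    MAX_SIZE = 1_000_000                      # a 1 MB cap, in bytes

    blocks = [0] * 988 + [MAX_SIZE] * 1028    # 49% empty, 51% full of 2016
    print(median(blocks))   # 1000000.0 -- the 51% fully control a median rule
    print(mean(blocks))     # ~509921   -- an average rule only moves part way

    blocks = [0] * 1028 + [MAX_SIZE] * 988    # swap the roles
    print(median(blocks))   # 0.0       -- a median-based limit collapses
    print(mean(blocks))     # ~490079   -- the average barely changes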
> I also think 2016 blocks is too long, because transaction volumes change
> quicker than that. An average over 144 blocks (last 24 hours) would be
> better able to handle increased transaction volume around major holidays,
> and would also be able to react more quickly if an economically irrational
> attacker attempted to flood the network with fee-paying transactions.
> So my straw-man proposal would be: max size 2x average size over last 144
> blocks, calculated at every block.
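A minimal sketch of that straw man as I read it (the names are mine, and
nothing in the proposal says whether there is a lower bound, so none is
applied here):

    def next_max_block_size(prev_block_sizes, window=144, multiplier=2.0):
        """Max size for the next block: 2x the average size of the
        last 144 blocks, recalculated at every block."""
        recent = prev_block_sizes[-window:]
        return int(multiplier * sum(recent) / len(recent))

    # e.g. the ~400K average mentioned above -> an 800K cap for the next block
    print(next_max_block_size([400_000] * 144))   # -> 800000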
> There are a couple of other changes I'd pair with that consensus change:
> + Make the default mining policy for Bitcoin Core neutral-- have its
> block size be the average size, so miners that don't care will "go along
> with the people who do care."
> + Use something like Greg's formula for size instead of bytes-on-the-wire,
> to discourage bloating the UTXO set.
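Greg's formula is not spelled out here; as a purely hypothetical illustration
of the idea (this is NOT the actual formula), a cost metric could charge extra
for transactions that create more outputs than they spend, so UTXO-bloating
transactions count as larger against the limit:

    def utxo_aware_cost(tx_bytes, n_inputs, n_outputs, output_penalty=40):
        """Hypothetical 'size' counted against the block limit: raw bytes
        plus a penalty per net new unspent output the transaction creates."""
        return tx_bytes + output_penalty * max(0, n_outputs - n_inputs)

    # e.g. a 250-byte transaction with 1 input and 3 outputs:
    print(utxo_aware_cost(250, 1, 3))   # -> 330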
> When I've proposed (privately, to the other core committers) some dynamic
> algorithm the objection has been "but that gives miners complete control
> over the max block size."
> I think that worry is unjustified right now-- certainly, until we have
> size-independent new block propagation there is an incentive for miners to
> keep their blocks small, and we see miners creating small blocks even when
> there are fee-paying transactions waiting to be confirmed.
> I don't even think it will be a problem if/when we do have size-independent
> new block propagation, because I think the combination of the random nature
> of block-finding plus a dynamic limit as described above will create a
> healthy system.
> If I'm wrong, then it seems to me the miners will have a very strong
> incentive to, collectively, impose whatever rules are necessary (maybe a
> soft-fork to put a hard cap on block size) to make the system healthy.
> Gavin Andresen