[bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)

Bram Cohen bram at bittorrent.com
Sun Dec 11 01:07:06 UTC 2016


Miners individually have an incentive to include every transaction they can
when they mine a block, but they also sometimes have an incentive to
collectively cooperate to reduce throughput in order to make more money as a
group. Under schemes where limits can be adjusted, both possibilities must be
taken into account.

On Sat, Dec 10, 2016 at 4:40 PM, James Hilliard via bitcoin-dev <
bitcoin-dev at lists.linuxfoundation.org> wrote:

> Miners in general are naturally incentivized to always mine max size
> blocks to maximize transaction fees, simply because there is very little
> marginal cost to including extra transactions (there will always be a
> transaction backlog of some sort available to mine, since demand for
> block space is effectively unbounded as fees approach 0, and they can
> even mine their own transactions without any fees). This proposal would
> almost certainly cause runaway block size growth and encourage much more
> miner centralization.
>
> On Sat, Dec 10, 2016 at 6:26 PM, t. khan via bitcoin-dev
> <bitcoin-dev at lists.linuxfoundation.org> wrote:
> > Miners 'gaming' the Block75 system -
> > There is no financial incentive for miners to attempt to game the
> > Block75 system. Even if it were attempted and assuming the goal was to
> > create bigger blocks, the maximum possible increase would be 25% over
> > the previous block size. And, that size would only last for two weeks
> > before readjusting down. It would cost them more in transaction fees
> > to stuff the network than they could ever make up. To game the system,
> > they'd have to game it forever with no possibility of profit.
> >
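
To make the bound here concrete: under the Block75 formula quoted further
down, the new limit is always computed from the 1 MB base plus the amount by
which the average block exceeds 750 KB, so even blocks stuffed to 100% of the
current limit only move the next limit 250 KB (25% of the base) above the
previous one. A minimal sketch, assuming that formula; the names and loop are
illustrative only, not code from the proposal:

    # Worst-case per-period growth under the Block75 rule as quoted below,
    # assuming miners stuff every block to the current limit.
    MAX_BLOCK_BASE_SIZE = 1_000_000   # bytes
    TARGET_CAPACITY = 750_000         # bytes (75% of the base)

    limit = MAX_BLOCK_BASE_SIZE
    for period in range(1, 5):
        avg = limit  # every block mined 100% full
        limit = MAX_BLOCK_BASE_SIZE + (avg - TARGET_CAPACITY)
        print(period, limit)
    # -> 1 1250000, 2 1500000, 3 1750000, 4 2000000
    #    (a fixed +250 KB each period, i.e. at most 25% of the previous limit)
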
> > Blocks would get too big -
> > Eventually, blocks would get too big, but only if bandwidth stopped
> > increasing and the cost of disk space stopped decreasing. Otherwise,
> > the incremental adjustments made by Block75 (especially in combination
> > with SegWit) wouldn't break anyone's connection or result in
> > significantly more orphaned blocks.
> >
> > The frequent, small adjustments made by Block75 have the added benefit
> > of being easier for miners and node operators to adapt to, both
> > psychologically and technologically.
> >
> > -t.k
> >
> > On Sat, Dec 10, 2016 at 5:44 AM, s7r via bitcoin-dev
> > <bitcoin-dev at lists.linuxfoundation.org> wrote:
> >>
> >> t. khan via bitcoin-dev wrote:
> >> > BIP Proposal - Managing Bitcoin’s block size the same way we do
> >> > difficulty (aka Block75)
> >> >
> >> > The every two-week adjustment of difficulty has proven to be a
> >> > reasonably effective and predictable way of managing how quickly
> >> > blocks are mined. Bitcoin needs a reasonably effective and
> >> > predictable way of managing the maximum block size.
> >> >
> >> > It’s clear at this point that human beings should not be involved
> >> > in the determination of max block size, just as they’re not
> >> > involved in deciding the difficulty.
> >> >
> >> > Instead of setting an arbitrary max block size (1 MB, 2 MB, 8 MB,
> >> > etc.) or passing the decision to miners/pool operators, the max
> >> > block size should be adjusted every two weeks (2016 blocks) using a
> >> > system similar to how difficulty is calculated.
> >> >
> >> > Put another way: let’s stop thinking about what the max block size
> >> > should be and start thinking about how full we want the average
> >> > block to be regardless of size. Over the last year, we’ve had
> >> > averages of 75% or higher, so aiming for 75% full seems reasonable,
> >> > hence naming this concept ‘Block75’.
> >> >
> >> > The target capacity over 2016 blocks would be 75%. If the last 2016
> >> > blocks are more than 75% full, add the difference to the max block
> >> > size. Like this:
> >> >
> >> > MAX_BLOCK_BASE_SIZE = 1000000
> >> > TARGET_CAPACITY = 750000
> >> > AVERAGE_OVER_CAP = average block size of last 2016 blocks minus
> >> > TARGET_CAPACITY
> >> >
> >> > To check if a block is valid: block size ≤ (MAX_BLOCK_BASE_SIZE +
> >> > AVERAGE_OVER_CAP)
> >> >
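
A minimal sketch of the rule described above; the function names and the
explicit floor at the 1 MB base are my reading of the proposal (which states
below that the current 1,000,000-byte limit remains as the effective minimum),
not code from it:

    # Illustrative sketch of the Block75 adjustment; sizes are in bytes.
    MAX_BLOCK_BASE_SIZE = 1_000_000
    TARGET_CAPACITY = 750_000

    def block75_max_size(last_2016_block_sizes):
        """Max block size for the next 2016-block period."""
        avg = sum(last_2016_block_sizes) / len(last_2016_block_sizes)
        average_over_cap = avg - TARGET_CAPACITY
        # The limit can shrink as well as grow, but never drops below the
        # 1 MB base, which acts as the minimum max block size.
        return max(MAX_BLOCK_BASE_SIZE,
                   int(MAX_BLOCK_BASE_SIZE + average_over_cap))

    def block_size_is_valid(block_size, last_2016_block_sizes):
        return block_size <= block75_max_size(last_2016_block_sizes)
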
> >> > For example, if the last 2016 blocks are 85% full (average block is
> >> > 850 KB), add 10% to the max block size. The new max block size
> >> > would be 1,100 KB until the next 2016 blocks are mined, then reset
> >> > and recalculate. The 1,000,000 byte limit that exists currently
> >> > would remain, but would effectively be the minimum max block size.
> >> >
> >> > Another two weeks goes by, and the last 2016 blocks are again 85%
> >> > full, but now that means they average 935 KB out of the 1,100 KB
> >> > max block size. This is 93.5% of the 1,000,000 byte limit, so 18.5%
> >> > would be added to it, making the new max block size 1,185 KB.
> >> >
> >> > Another two weeks passes. This time, the average block is 1,050 KB.
> >> > The new max block size comes out to 1,300 KB (blocks averaged 105%
> >> > of the 1,000 KB base; subtracting the 75% capacity target adds 30%
> >> > to the max block size).
> >> >
> >> > Repeat every 2016 blocks, forever.
> >> >
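
The walkthrough above can be reproduced directly from the same rule; the
helper below is mine and simply applies it to the average sizes quoted for
each period:

    # Reproduces the arithmetic of the three example periods above.
    MAX_BLOCK_BASE_SIZE = 1_000_000
    TARGET_CAPACITY = 750_000

    def next_max(avg_block_size):
        # The new limit is always computed from the 1 MB base, not from
        # the previous period's limit.
        return max(MAX_BLOCK_BASE_SIZE,
                   MAX_BLOCK_BASE_SIZE + (avg_block_size - TARGET_CAPACITY))

    for avg in (850_000, 935_000, 1_050_000):
        print(avg, "->", next_max(avg))
    # 850000  -> 1100000  (85% full: +10%)
    # 935000  -> 1185000  (93.5% of the base: +18.5%)
    # 1050000 -> 1300000  (105% of the base: +30%)
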
> >> > If Block75 had been applied at the difficulty adjustment on
> >> > November 18th, the max block size would have been 1,080 KB, as the
> >> > average block during that period was 83% full, so 8% would have
> >> > been added to the 1,000 KB limit. The current size, after the
> >> > December 2nd adjustment, would be 1,150 KB.
> >> >
> >> > Block75 would allow the max block size to grow (or shrink) in
> >> > response to transaction volume, and would do so predictably,
> >> > reasonably quickly, and in a way that prevents wild swings in block
> >> > size or transaction fees. It attempts to keep blocks at 75% of
> >> > total capacity over each two-week period, the same way difficulty
> >> > tries to keep blocks mined every ten minutes. It also keeps blocks
> >> > as small as possible.
> >> >
> >> > Thoughts?
> >> >
> >> > -t.k.
> >> >
> >>
> >> I like the idea. It is good with regard to growing the max block size
> >> automatically without human action, but the main problem (or
> >> question) is not how to grow this number; it is what number the
> >> network can handle, considering both miners and users. While disk
> >> space requirements might not be a big problem, block propagation time
> >> is. The time required for a block to propagate through the network
> >> (or at least to all the miners) is directly dependent on its size. If
> >> blocks take too much time to propagate, the orphan rate will increase
> >> in unpredictable ways. For example, if internet speed in China is
> >> worse than in Europe, and miners in China have more than 50% of the
> >> hashing power, blocks mined by European miners might get orphaned.
> >>
> >> The system as described can also be gamed by filling the network with
> >> transactions. Miners have a monetary interest in including as many
> >> transactions as possible in a block in order to collect the fees.
> >> Regardless of how you think about it, there has to be a maximum block
> >> size that the network will allow as a consensus rule. Increasing it
> >> dynamically based on transaction volume will eventually reach a point
> >> where the number gets big enough to break things. Bitcoin, because of
> >> its fundamental design, can scale by using off-chain solutions.