[Bitcoin-development] Block Size Increase

Wladimir J. van der Laan laanwj at gmail.com
Thu May 7 11:20:43 UTC 2015


On Wed, May 06, 2015 at 10:12:14PM +0000, Matt Corallo wrote:

> Personally, I'm rather strongly against any commitment to a block size
> increase in the near future. Long-term incentive compatibility requires
> that there be some fee pressure, and that blocks be relatively
> consistently full or very nearly full. What we see today are
> transactions enjoying next-block confirmations with nearly zero pressure
> to include any fee at all (though many do because it makes wallet code
> simpler).

I'm weakly against a block size increase in the near future. Some arguments follow. For the sake of brevity, this ignores the inherent practical and political issues in scheduling a hardfork.

Against:

1. The everyone-verifies-everything approach inherently doesn't scale well. Yes, it is possible to increase the capacity, within limits, without completely destroying the system, but if scaling turns out to be a success, even a 20-fold increase is just a drop in the bucket (this will not make a decentralized Changetip possible). The gain will always be linear, at a total cost that scales as the number of (full node) users times the block size. The whole idea of everyone verifying the whole world's bus tickets and coffee purchases seems ludicrous to me. For true scaling, as well as decentralized microtransactions, the community needs to work on non-centralized 'level 2' protocols for transactions, for example the Lightning network.
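
  To make the linear-cost point concrete, here's a back-of-the-envelope
  sketch in Python (the node count is a purely illustrative assumption,
  not a measurement):

      # System-wide verification cost: every full node processes every
      # byte of every block, so total work is nodes x size x rate,
      # while capacity only grows with the block size.
      NODES = 6000             # illustrative full node count
      BLOCKS_PER_DAY = 144     # one block per ~10 minutes

      def network_cost_tb_per_day(block_mb):
          return NODES * block_mb * BLOCKS_PER_DAY / 1e6

      for block_mb in (1, 20):
          print("%2d MB blocks: %5.2f TB verified network-wide per day"
                % (block_mb, network_cost_tb_per_day(block_mb)))
      # Throughput goes up 20x, but so does every node's burden; the
      # network-wide cost per transaction never improves.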

  I prefer not to rely on faith that 'Moore's law' - which is a trend, not a physical law - will save us. It also applies less to communication bandwidth than to computation, as the underlying technologies are more diverse. E.g. for Bitsat, 20MB blocks will already push the limit.

2a. Pushing up bandwidth and CPU usage will, inevitably, push people at the lower end away from running their own full nodes. Mind you, the sheer number of full nodes is not the issue, but Bitcoin's security model is based on being able to verify the shared consensus on one's own. A lot of recent development effort went into making the node software less heavy. Yes, one could switch to SPV, but that is a serious privacy compromise. In the worst case people will feel forced to move to webwallets.

  That was about sustained bandwidth - syncing a new node from scratch over the internet will become unwieldy sooner. This can be worked around with UTXO snapshots, but doing so in a way that doesn't completely pull the rug out from under the security model needs research (I'm aware that this could be construed the other way: such a solution will be needed eventually, and fast block chain growth just accelerates that).
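
  As a rough illustration of the sync burden: with consistently full
  20MB blocks and an assumed (hypothetical) 10 Mbit/s link, just
  downloading one year's worth of blocks takes over a week. A sketch:

      # One year of full 20MB blocks adds roughly a terabyte.
      blocks_per_year = 144 * 365          # ~52,560 blocks
      year_mb = blocks_per_year * 20       # ~1,051,200 MB, i.e. ~1 TB
      link_mb_per_s = 1.25                 # assumed 10 Mbit/s link
      days = year_mb / link_mb_per_s / 86400.0
      print("%.1f days to download one year of blocks" % days)
      # ~9.7 days of continuous downloading, before any verification.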

2b. The bandwidth bound for just downloading blocks is ~4GB per month now; with 20MB blocks it will be ~86GB per month. Behind Tor and other anonymity networks, nodes will reveal themselves by the sheer volume of data transferred, even if they only download the block chain. This may already be the case, but it will become worse. It may even become harmful to Tor itself.
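
  The monthly figures above follow from simple arithmetic; here's a
  sketch in Python, assuming every block is consistently full:

      # Steady-state block download volume per month.
      blocks_per_month = 6 * 24 * 30       # one block per ~10 minutes
      for block_mb in (1, 20):
          gb = blocks_per_month * block_mb / 1000.0
          print("%2d MB blocks: ~%.0f GB/month" % (block_mb, gb))
      # 1 MB  -> ~4 GB/month (the current figure above)
      # 20 MB -> ~86 GB/month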

3a. The costs are effectively externalized to users. Hence, I don't like calling the costs "trivial". I don't like making this decision for others, and I'm not convinced the costs are trivial for everyone. Besides, this will increase the supply of block chain space and thus push down transaction fees, but at a cost to *all users*, even users that hardly do any transactions. Hence it favors those that generate lots of transactions (is that a perverse incentive? not sure, but it needs to be weighed in any decision).

3b. Mounting fee pressure, resulting in a true fee market where transactions compete to get into blocks, creates urgency to develop decentralized off-chain solutions. I'm afraid increasing the block size will kick this can down the road and let people (and the large Bitcoin companies) relax, until it's again time for a block size increase, and then they'll rally Gavin again, never resulting in a smart, sustainable solution but in eternal awkward discussions like this one.

4. We haven't solved the problem of arbitrary data storage, and increasing the block size would compound that problem. More block space means more storage available for the same price - and up to 20x faster growth of the UTXO set, which is permanent (more externalization). More opportunity for pedonazis to insert double-plus-ungood data, exposing users to possible legal ramifications.

For:

1. First, the obvious: it gives some breathing room a year from now (or whenever the hard fork is planned). If necessary, it will allow more transactions to stay on-chain for a while longer while other solutions are being implemented.

2. *Allowing* 20MB blocks does not mean miners will immediately start making them. Many of them aren't even filling blocks up to the 1MB limit right now, probably due to latency/stale-block issues. This softens objection 2, which is about *worst case* bandwidth, as well as objection 4, as the end result may not be 20MB blocks filled with arbitrary "junk".

3. Investment in off-chain solutions is driven not only by fee pressure but also by other factors, such as speed of confirmation, which is unreliable on-chain. This eases objection 3b.

Whew, this grew longer than I expected. To conclude: I understand the advantages of scaling, and I do not doubt a block size increase will *work*. Although there may be unforeseen issues, I'm confident they'll be resolved. However, it may well make Bitcoin less useful for what sets it apart from other systems in the first place: the possibility for people to run their own "bank" without special investment in connectivity and computing hardware.
I also don't like the political aspect of this in the least (at some point it becomes a question of who decides for whom, who is excluded - all those human decisions). It is possibly unavoidable at some point, but that's something *I*'d like to kick down the road.

Wladimir
