[Bitcoin-development] Reducing the block rate instead of increasing the maximum block size

Peter Todd pete at petertodd.org
Mon May 11 10:34:02 UTC 2015

On Mon, May 11, 2015 at 04:03:29AM -0300, Sergio Lerner wrote:
> Arguments against reducing the block interval
> 1. It will encourage centralization, because mining pool participants
> will lose more money to excessive initial block template latency, which
> leads to higher stale shares
> When a new block is solved, that information needs to propagate
> throughout the Bitcoin network up to the mining pool operator nodes,
> then a new block header candidate is created, and this header must be
> propagated to all the mining pool users, either by a push or a pull
> model. Generally the mining server pushes new work units to the
> individual miners. If done the other way around, the server would need
> to handle a high load of continuous work requests that would be
> difficult to distinguish from a DDoS attack. So if the server pushes
> new block header candidates to clients, then the problem boils down to
> increasing the bandwidth of the servers to achieve a tenfold increase
> in work distribution, or distributing the servers geographically to
> achieve lower latency. Propagating blocks does not require additional
> CPU resources, so mining pool administrators would need to moderately
> increase their investment in server infrastructure to achieve lower
> latency and higher bandwidth, but I guess the investment would be low.

It's *way* easier to buy more bandwidth than it is to get lower latency.

After all, getting to the other side of the planet via fiber takes at
*minimum* 100ms simply due to the speed of light; routing overheads
approximately double or triple that for all but highly specialized and
very, very expensive networking services. Bandwidth simply can't fix
the speed of light.
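
A back-of-the-envelope check (the ~20,000 km antipodal distance and the
~2/3 c propagation speed of light in fiber are the only assumptions):

    # Physical latency floor over fiber; ignores routing entirely.
    C_VACUUM_KM_S = 299_792        # speed of light in vacuum, km/s
    FIBER_FACTOR = 2 / 3           # light in fiber travels at ~2/3 c

    def one_way_latency_ms(distance_km):
        return distance_km / (C_VACUUM_KM_S * FIBER_FACTOR) * 1000

    # Antipodal points are ~20,000 km apart along the surface:
    print(one_way_latency_ms(20_000))  # ~100 ms, before any routing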

It's also not at all realistic or desirable to assume connectivity in a
single hop, so you can again multiply that base latency by 2-5x.

And on top of *that* you have to take into account latency from hasher
to mining pool - time that the hashing power isn't working on the new
block because their work unit hasn't been updated matters just as much
as the time to get that block to the pool in the first place. Being
forced to reduce that latency is very damaging to the ecosystem, as
you're making it more profitable to keep hashing power centralized.

In any case, even with 10 minute blocks pools already pay a lot of
attention to latency... Why make that problem 10x worse?
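
To put rough numbers on it, here's a toy Poisson model of stale rates -
the 5-second effective propagation delay is an assumption picked purely
for illustration:

    import math

    def stale_rate(delay_s, interval_s):
        # Probability that someone else finds a block while yours is
        # still propagating, assuming Poisson block arrivals.
        return 1 - math.exp(-delay_s / interval_s)

    print(stale_rate(5, 600))  # ~0.8% with 10-minute blocks
    print(stale_rate(5, 60))   # ~8.0% with 1-minute blocks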

> 2. It will increase the probability of a block-chain split
> The convergence of the network relies on the diminishing probability of
> two honest miners creating simultaneous competing block chains. To
> sustain a chain competition, competing blocks must be generated almost
> simultaneously (in the same time window, approximately bounded by the
> network's average block propagation delay). The probability of a block
> competition decreases exponentially with the number of blocks. In fact,
> the probability of a sustained competition over ten 1-minute blocks is
> one million times lower than the probability of a competition over one
> 10-minute block. So even if the competition probability of six 1-minute
> blocks is higher than that of six 10-minute blocks, this does not imply
> that reducing the block rate increases this chance; on the contrary, it
> reduces it.

Can you explain your reasoning here in detail?
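
For what it's worth, the only way I can see to get numbers like that is
a toy model along these lines - both the per-block competition
probability and the independence between consecutive blocks are
unstated assumptions that need justifying:

    def sustained_competition_prob(delay_s, interval_s, n_blocks):
        # Chance a competing block appears within the propagation
        # window, raised to the number of consecutive blocks the
        # tie must survive. Assumes independence between blocks.
        p = delay_s / interval_s
        return p ** n_blocks

    print(sustained_competition_prob(5, 600, 1))   # ~8.3e-03
    print(sustained_competition_prob(5, 60, 10))   # ~1.6e-11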

> 4. Reducing the block propagation time on the average case is good, but
> what happens in the worst case?
> Most methods proposed to reduce the block propagation delay do it only
> on the average case. Any kind of block compression relies on both
> parties sharing some previous information. In the worst case it's true
> that a miner can create and try to broadcast a block that takes too much
> time to verify or bandwidth to transmit. This is currently true on the
> Bitcoin network. Nevertheless there is no such incentive for miners,
> since they would be shooting themselves in the foot. Peter Todd has
> argued that the best strategy for miners is actually to reach 51% of
> the network, but not more. In other words, to exclude the slowest 49%.
Actually the correct figure is less than ~30%:


> But this strategy of creating bloated blocks is too risky in practice,
> and surely doomed to fail, as network conditions dynamically change.

They dynamically change? Source?

Remember that the strategy still gives you a benefit if you simply
target, say, 75% rather than the minimum threshold.
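
A toy model of why partial propagation pays - hashpower that hasn't
seen your block yet is mining on a stale tip and wasting its work,
which raises your relative share. (This deliberately ignores the
attacker's own orphan risk, which is what the real threshold analysis
is about; the 30% pool share below is just an example.)

    def effective_share(alpha, f):
        # alpha: your share of total hashpower
        # f: fraction of the *other* hashpower your block reaches in
        # time; the excluded (1 - f) keep mining on the old tip.
        return alpha / (alpha + f * (1 - alpha))

    print(effective_share(0.30, 1.00))  # 0.30 - block reaches everyone
    print(effective_share(0.30, 0.75))  # ~0.36 - target 75%, still a gain
    print(effective_share(0.30, 0.50))  # ~0.46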

> Also it would be perceived as an attack on the network, and the miner
> (if it is a public mining pool) would probably be blacklisted.

How do you see that blacklisting actually being done?

Equally, it's easy to portray such mining as being "for the good of
Bitcoin" - "we're just making transactions cheap! Tough luck if your
shitty pool can't keep up." This is quite unlike selfish mining.

> 7. There has been insufficient testing and/or insufficient research into
> technical/economic implications of reducing the block rate
> This is partially true. In the GHOST paper this has been analyzed, and
> the problem was shown to be solvable for block intervals of just a few
> seconds.

GHOST works radically differently than a linear blockchain, and it's not
clear that it actually has the correct economic incentives.
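
For contrast, here's a minimal sketch of GHOST's fork choice - at each
fork you descend into the child with the heaviest *subtree*, so blocks
off the main chain still influence the outcome, unlike Bitcoin's
longest-chain rule. The data structure is illustrative only:

    def subtree_weight(block, children):
        # Total blocks in the subtree rooted at `block`; `children`
        # maps a block hash to the hashes mined on top of it.
        return 1 + sum(subtree_weight(c, children)
                       for c in children.get(block, ()))

    def ghost_tip(genesis, children):
        block = genesis
        while children.get(block):
            block = max(children[block],
                        key=lambda c: subtree_weight(c, children))
        return block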

> These networking optimizations (O(1) propagation using headers-first or
> IBLTs) can be added later.

Keep in mind that miners already use optimized propagation techniques,
like p2pool's implementation or Matt Corallo's block relaying network.
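
The shared idea behind all of these schemes is to avoid re-sending
transactions the peer already has. Very roughly - this is a cartoon,
not any specific protocol, and real schemes use salted short ids and
set reconciliation to handle collisions and missing data:

    import hashlib

    def short_id(txid):
        # Illustrative truncated hash; real protocols guard against
        # collisions and adversarial construction.
        return hashlib.sha256(txid).digest()[:6]

    def make_compact(header, txids):
        # Sender: header plus short ids instead of full transactions.
        return header, [short_id(t) for t in txids]

    def reconstruct(compact, mempool_txids):
        # Receiver: rebuild from its own mempool, then request only
        # what's missing.
        header, ids = compact
        known = {short_id(t): t for t in mempool_txids}
        return (header,
                [known[i] for i in ids if i in known],
                [i for i in ids if i not in known])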
