[Bitcoin-ml] Partial UTXO tree as commitment

Tomas tomas at tomasvdw.nl
Tue Sep 5 23:42:27 UTC 2017


Thanks for your feedback Amaury. Answering reordered inline.


On Tue, Sep 5, 2017, at 21:42, Amaury Séchet via bitcoin-ml wrote:
> A few quick remarks:
>  - If the sharding is based on a constant, the size of the proof or
>    the number of shard grows with the size of the UTXO set. A nice
>    trick is to grow the number of shard based on the square root of
>    the size of the UTXO set. Now number of shards and proof size both
>    grow in a sub linear manner with the size of the UTXO set.
This constant is indeed what I proposed initially on Slack, and your
suggestion is exactly what I propose in my mail above with

bucketsize =~ sqrt(txcount) * 64

such that both the bucket size and the bucket count grow as
O(sqrt(utxo-size)).
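To make the sizing rule concrete, here is a minimal sketch (the function name, and the 64-byte average entry size, are assumptions for illustration, not the actual bitcrust code) showing that with bucketsize ~ sqrt(txcount) * 64, both bucket size and bucket count grow as the square root of the UTXO set size:

```python
import math

def bucket_params(utxo_count, avg_entry_bytes=64):
    """Hypothetical sizing rule: bucket size scales with sqrt(utxo_count),
    so the bucket count does too -- both grow sublinearly."""
    bucket_size = int(math.sqrt(utxo_count)) * avg_entry_bytes   # bytes per bucket
    bucket_count = max(1, math.ceil(utxo_count * avg_entry_bytes / bucket_size))
    return bucket_count, bucket_size
```

For a set of one million outputs this gives 1000 buckets of 64 kB each; growing the set 100x grows both numbers only 10x.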
>  - If we have a way to upgrade the way we compute the set (see first
>    point) I'd rather go for the simplest option at first.
Clearly, using a set of hashes is more complicated than a single hash,
but only slightly, as the most complicated part, keeping the set (or the
single hash) in sync with the UTXO set, is the same either way.
The added complexity is mostly *reversed* when attempting to implement
full-node UTXO set syncing. Surely we want some form of chunking to sync
~2 GB, and although we could use a snapshot for a single download, we
can't easily have full nodes deliver a chunk of the UTXO state as of two
blocks back. A bucketed approach solves these problems.
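A sketch of what the bucketed approach buys a syncing node (the function and the XOR-of-SHA-256 rolling hash are assumptions for illustration; the proposal does not fix a particular hash): each downloaded chunk can be verified independently against its bucket's committed hash, so chunks can come from different peers at different times.

```python
import hashlib

def verify_bucket_chunk(entries, committed_hash):
    """Hypothetical check: recompute a bucket's rolling hash from a
    downloaded chunk of serialized UTXO entries and compare it to the
    bucket hash in the block commitment. Assumes an XOR-combined
    SHA-256 per entry; order of entries does not matter."""
    acc = bytes(32)
    for e in entries:
        digest = hashlib.sha256(e).digest()
        acc = bytes(a ^ b for a, b in zip(acc, digest))
    return acc == committed_hash
```

A single rolling hash over the whole set offers no such per-chunk check; a bad chunk would only be detected after downloading everything.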
> - This is maybe good enough, but far from perfect. I think we still
>   want to have a mechanism to update the UTXO set format no matter
>   what. There is a ton of work done on accumulator these days, so it's
>   not completely crazy to think that a better solution may be around
>   the corner.
It's quite possible, which is why I added a version. 

Yet, I think we should not put too much faith in CS innovation here.
I've learned that I should learn algorithms in order to learn how to
build algorithms.
As for the UTXO set, the *primary* use case requires fast updates and
lookups. The secondary use case is these proofs. I offer an algorithm
that cheaply amends the first use case with the second. LevelDB works
quite well for the primary use case, though RocksDB may improve on it,
possible reversal through a spent tree may improve its concurrency, and
tweaked custom algorithms may as well. Adding the proofs and full-node
syncing cheaply, without affecting these efficient data structures,
seems like a big win.
I don't think we should expect some new accumulator algorithm to be the
perfect match for our UTXO set requirements. If you have a hunch for an
algorithm that could either amend the primary UTXO database more
effectively than I propose, or replace it with something efficient
enough for our primary use case while also fulfilling the secondary
one, I would certainly like to hear it.
As it stands, I believe an array of bucketed rolling hashes offers too
many benefits over a single rolling hash to pick the latter in the name
of simplicity.
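To show how little the bucketed variant adds on top of a single rolling hash, here is a minimal sketch (class name, bucketing by outpoint hash, and the XOR-of-SHA-256 combiner are all assumptions for illustration, not the bitcrust implementation): each UTXO insert or spend touches exactly one bucket hash in O(1), exactly as a single rolling hash would.

```python
import hashlib

class BucketedUtxoHash:
    """Sketch: one XOR-rolling hash per bucket. XOR is its own inverse,
    so inserting and removing an entry are the same update."""

    def __init__(self, bucket_count):
        self.bucket_count = bucket_count
        self.hashes = [bytes(32)] * bucket_count  # 32-byte hash per bucket

    def _bucket(self, outpoint):
        # Assumed bucketing: leading bytes of the outpoint hash pick a bucket.
        h = hashlib.sha256(outpoint).digest()
        return int.from_bytes(h[:4], "big") % self.bucket_count

    def toggle(self, outpoint, entry):
        """Insert the entry if absent, remove it if present: O(1) either way."""
        i = self._bucket(outpoint)
        digest = hashlib.sha256(outpoint + entry).digest()
        self.hashes[i] = bytes(a ^ b for a, b in zip(self.hashes[i], digest))
```

The commitment is then simply the array `hashes`, and a single-hash scheme is the degenerate case bucket_count = 1, which is why I consider the extra complexity slight.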

Regards,
Tomas 
bitcrust



