<div dir="ltr"><div>> Doesn't the current BIP157 protocol have each filter commit to the filter</div><div>> for the previous block?</div><div><br></div><div>Yep!</div><div><br></div><div>> If that's the case, shouldn't validating the commitment at the tip of the</div><div>> chain (or buried back whatever number of blocks that the SPV client trusts)</div><div>> obviate the need to validate the commitments for any preceding blocks in</div><div>> the SPV trust model?</div><div><br></div><div>Yeah, just that there'll be a gap between the p2p version and when it's</div><div>ultimately committed.</div><div><br></div><div>> It seems like you're claiming better security here without providing any</div><div>> evidence for it.</div><div><br></div><div>What I mean is that one allows you to fully verify the filter, while the</div><div>other allows you to only validate a portion of the filter and requires other</div><div>added heuristics.</div><div><br></div><div>> In the case of prevout+output filters, when a client receives advertisements</div><div>> for different filters from different peers, it:</div><div><br></div><div>Alternatively, they can decompress the filter and at least verify that</div><div>proper _output scripts_ have been included. Maybe this is "good enough"</div><div>until it's committed. If a command is added to fetch all the prev outs along</div><div>w/ a block (which would let you do other things like verify fees), then</div><div>they'd be able to fully validate the filter as well.</div><div><br></div><div>-- Laolu</div><div><br></div><br><div class="gmail_quote"><div dir="ltr">On Sat, Jun 9, 2018 at 3:35 AM David A. Harding <<a href="mailto:dave@dtrt.org">dave@dtrt.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Fri, Jun 08, 2018 at 04:35:29PM -0700, Olaoluwa Osuntokun via bitcoin-dev wrote:<br>
> 2. Since the coinbase transaction is the first in a block, it has the<br>
> longest merkle proof path. As a result, it may be several hundred bytes<br>
> (and grows with future capacity increases) to present a proof to the<br>
> client.<br>
<br>
I'm not sure why commitment proof size is a significant issue. Doesn't<br>
the current BIP157 protocol have each filter commit to the filter for<br>
the previous block? If that's the case, shouldn't validating the<br>
commitment at the tip of the chain (or buried back whatever number of<br>
blocks that the SPV client trusts) obviate the need to validate the<br>
commitments for any preceding blocks in the SPV trust model?<br>
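(For concreteness, the chaining referred to here is BIP157's filter-header chain, where each header commits to the current filter and the previous header via double-SHA256. A minimal sketch, with illustrative function names:)<br>

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    # Bitcoin's double SHA-256
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def next_filter_header(filter_bytes: bytes, prev_header: bytes) -> bytes:
    # BIP157: filter_header = sha256d(sha256d(filter) || prev_filter_header)
    return sha256d(sha256d(filter_bytes) + prev_header)

def tip_header(filters, genesis_header=bytes(32)):
    # Fold the whole filter history into one 32-byte header. If this
    # matches a trusted (or chain-committed) value at the tip, every
    # earlier filter is implicitly authenticated.
    header = genesis_header
    for f in filters:
        header = next_filter_header(f, header)
    return header
```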
<br>
> Depending on the composition of blocks, this may outweigh the gains<br>
> had from taking advantage of the additional compression the prev outs<br>
> allow.<br>
<br>
I think those are unrelated points. The gain from using a more<br>
efficient filter is saved bytes. The gain from using block commitments<br>
is SPV-level security---that attacks have a definite cost in terms of<br>
generating proof of work instead of the variable cost of network<br>
compromise (which is effectively free in many situations).<br>
<br>
Comparing the extra bytes used by block commitments to the reduced bytes<br>
saved by prevout+output filters is like comparing the extra bytes used<br>
to download all blocks for full validation to the reduced bytes saved by<br>
only checking headers and merkle inclusion proofs in simplified<br>
validation. Yes, one uses more bytes than the other, but they're<br>
completely different security models and so there's no normative way for<br>
one to "outweigh the gains" from the other.<br>
<br>
> So should we optimize for the ability to validate in a particular<br>
> model (better security), or lower bandwidth in this case?<br>
<br>
It seems like you're claiming better security here without providing any<br>
evidence for it. The security model is "at least one of my peers is<br>
honest." In the case of outpoint+output filters, when a client receives<br>
advertisements for different filters from different peers, it:<br>
<br>
1. Downloads the corresponding block<br>
2. Locally generates the filter for that block<br>
3. Kicks any peers that advertised a different filter than what it<br>
generated locally<br>
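<br>(The three steps above can be sketched as follows; `fetch_block` and `compute_filter` are placeholders for real block download and BIP158 filter construction:)<br>

```python
def reconcile_filters(advertised, block_hash, fetch_block, compute_filter):
    """advertised: dict mapping peer_id -> filter bytes that the peer
    claimed for block_hash. Returns the set of peers to disconnect."""
    block = fetch_block(block_hash)    # 1. download the corresponding block
    local = compute_filter(block)      # 2. locally generate the filter
    # 3. kick any peer whose advertised filter differs from ours
    return {peer for peer, f in advertised.items() if f != local}
```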
<br>
This ensures that as long as the client has at least one honest peer, it<br>
will see every transaction affecting its wallet. In the case of<br>
prevout+output filters, when a client receives advertisements for<br>
different filters from different peers, it:<br>
<br>
1. Downloads the corresponding block and checks it for wallet<br>
transactions as if there had been a filter match<br>
<br>
This also ensures that as long as the client has at least one honest<br>
peer, it will see every transaction affecting its wallet. This is<br>
equivalent security.<br>
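<br>(The fallback described above amounts to scanning the disputed block directly; roughly, with transactions simplified to (txid, output-script-list) pairs for illustration:)<br>

```python
def scan_block_for_wallet(block_txs, wallet_scripts):
    # block_txs: iterable of (txid, [output_script, ...]) pairs -- a
    # simplified stand-in for full transaction parsing. Treat the
    # disputed filter as a match: inspect every output in the block
    # for scripts the wallet is watching.
    return [txid for txid, outputs in block_txs
            if any(s in wallet_scripts for s in outputs)]
```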
<br>
In the second case, it's possible for the client to eventually<br>
probabilistically determine which peer(s) are dishonest and kick them.<br>
The most space efficient of these protocols may disclose some bits of<br>
evidence for what output scripts the client is looking for, but a<br>
slightly less space-efficient protocol simply uses randomly-selected<br>
outputs saved from previous blocks to make the probabilistic<br>
determination (rather than the client's own outputs) and so I think<br>
should be quite private. Neither protocol seems significantly more<br>
complicated than keeping an associative array recording the number of<br>
false positive matches for each peer's filters.<br>
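<br>(The associative array mentioned above might look like the following; the kick threshold is an illustrative parameter, not a recommended value:)<br>

```python
from collections import Counter

class FalsePositiveTracker:
    # Count, per peer, filter matches that turned out not to contain
    # any wallet transaction once the block was downloaded. A peer
    # whose filters produce far more false positives than its honest
    # peers is probabilistically dishonest.
    def __init__(self, kick_threshold=10):
        self.counts = Counter()
        self.kick_threshold = kick_threshold

    def record(self, peer_id, filter_matched, block_had_wallet_tx):
        if filter_matched and not block_had_wallet_tx:
            self.counts[peer_id] += 1

    def peers_to_kick(self):
        return {p for p, n in self.counts.items()
                if n >= self.kick_threshold}
```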
<br>
-Dave<br>
</blockquote></div></div>