<div dir="ltr"><div>> An example of that cost is you arguing against specifying and supporting the</div><div>> design that is closer to one that would be softforked, which increases the</div><div>> time until we can make these filters secure because it</div><div>> slows convergence on the design of what would get committed</div><div><br></div><div>Agreed. Since the commitment is just flat out better, and also requires less</div><div>code to validate than the cross-p2p validation, the filter should be as close</div><div>as possible to the committed version. This way, wallets and other apps won't</div><div>need to modify their logic in X months when the commitment is rolled out.</div><div><br></div><div>> Great point, but it should probably exclude coinbase OP_RETURN output.</div><div>> This would exclude the current BIP141 style commitment and likely any</div><div>> other.</div><div><br></div><div>Definitely. I chatted offline with sipa recently, and he suggested this as</div><div>well. The upside is that the filters will get even smaller, and the first</div><div>filter type becomes even more of a "barebones" wallet filter. If folks</div><div>really want to also search OP_RETURN outputs in the filter (no widely deployed</div><div>application I know of really uses them), then an additional filter type can be</div><div>added in the future. It would need to be special-cased to filter out the</div><div>commitment itself.</div><div><br></div><div>Alright, color me convinced! 
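For concreteness, here's a rough sketch of what the revised filter contents would look like after these changes (illustrative Python only, with hypothetical field names -- not the actual PR code):

```python
OP_RETURN = 0x6a  # opcode marking a provably unspendable "data carrier" output

def basic_filter_items(block):
    """Collect the scripts to include in a BIP 158 basic filter.

    `block` is assumed to be a list of tx dicts with `output_scripts`
    (serialized scriptPubKeys) and `prev_scripts` (the scriptPubKeys
    being spent by the tx's inputs) -- hypothetical field names.
    """
    items = set()
    for tx in block:
        # Outputs: skip empty scripts and all OP_RETURN outputs,
        # which also drops the coinbase witness commitment and so
        # breaks the circular dependency once the filter itself
        # is committed.
        for script in tx["output_scripts"]:
            if not script or script[0] == OP_RETURN:
                continue
            items.add(script)
        # Inputs: index the previous output *scripts* rather than
        # outpoints, so wallets can match on scripts alone (the
        # coinbase has no prev scripts).
        for script in tx.get("prev_scripts", []):
            items.add(script)
    return items
```

The resulting set would then be fed into the Golomb-coded set construction as before; only the item selection changes.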
I'll further edit my open BIP 158 PR to:</div><div><br></div><div> * exclude all OP_RETURN outputs</div><div> * switch to prev scripts instead of outpoints</div><div> * update the test vectors to include the prev scripts from blocks in</div><div> addition to the block itself</div><div><br></div><div>-- Laolu</div><div><br></div><br><div class="gmail_quote"><div dir="ltr">On Sat, Jun 9, 2018 at 8:45 AM Gregory Maxwell <<a href="mailto:greg@xiph.org">greg@xiph.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">> So what's the cost in using<br>
> the current filter (as it lets the client verify the filter if they want to,<br>
<br>
An example of that cost is you arguing against specifying and<br>
supporting the design that is closer to one that would be softforked,<br>
which increases the time until we can make these filters secure<br>
because it slows convergence on the design of what would get<br>
committed.<br>
<br>
>> I don't agree at all, and I can't see why you say so.<br>
><br>
> Sure it doesn't _have_ to, but from my PoV as "adding more commitments" is<br>
> on the top of every developers wish list for additions to Bitcoin, it would<br>
> make sense to coordinate on an "ultimate" extensible commitment once, rather<br>
> than special case a bunch of distinct commitments. I can see arguments for<br>
> either really.<br>
<br>
We have an extensible commitment style via BIP141 already. I don't see<br>
why this in particular demands a new one.<br>
<br>
> 1. The current filter format (even moving to prevouts) cannot be committed<br>
> in this fashion as it indexes each of the coinbase output scripts. This<br>
> creates a circular dependency: the commitment is modified by the<br>
> filter,<br>
<br>
Great point, but it should probably exclude coinbase OP_RETURN output.<br>
This would exclude the current BIP141 style commitment and likely any<br>
other.<br>
<br>
Should I start a new thread on excluding all OP_RETURN outputs from<br>
BIP-158 filters for all transactions? -- they can't be spent, so<br>
including them just pollutes the filters.<br>
<br>
> 2. Since the coinbase transaction is the first in a block, it has the<br>
> longest merkle proof path. As a result, it may be several hundred bytes<br>
> (and grows with future capacity increases) to present a proof to the<br>
<br>
If 384 bytes is a concern, isn't 3840 bytes (the filter size<br>
difference is in this ballpark) _much_ more of a concern? Path to the<br>
coinbase transaction increases only logarithmically so further<br>
capacity increases are unlikely to matter much, but the filter size<br>
increases linearly and so it should be much more of a concern.<br>
<br>
> In regards to the second item above, what do you think of the old Tier Nolan<br>
> proposal [1] to create a "constant" sized proof for future commitments by<br>
> constraining the size of the block and placing the commitments within the<br>
> last few transactions in the block?<br>
<br>
I think it's a fairly ugly hack, esp. since it requires that mining<br>
template code be able to stuff the block if it just doesn't know<br>
enough actual transactions-- which means having a pool of spendable<br>
outputs in order to mine, managing private keys, etc. It also<br>
requires that downstream software not tinker with the transaction<br>
count (which I wish it didn't, but as of today it does). A factor-of-two<br>
difference in capacity-- if you constrain to get the smallest possible<br>
proof-- is pretty stark, and optimal txn selection with this<br>
cardinality constraint would be pretty weird, etc.<br>
<br>
If the community considers tree depth for proofs like that to be such<br>
a concern to take on technical debt for that structure, we should<br>
probably be thinking about more drastic (incompatible) changes... but<br>
I don't think it's actually that interesting.<br>
<br>
> I don't think its fair to compare those that wish to implement this proposal<br>
> (and actually do the validation) to the legacy SPV software that to my<br>
> knowledge is all but abandoned. The project I work on that seeks to deploy<br>
<br>
Yes, maybe it isn't. But then that just means we don't have good information.<br>
<br>
When a lot of people were choosing electrum over SPV wallets when<br>
those SPV wallets weren't abandoned, sync time was frequently cited as<br>
an actual reason. BIP158 makes that worse, not better. So while I'm<br>
hopeful, I'm also somewhat sceptical. Certainly things that reduce<br>
the size of the 158 filters make them seem more likely to be a success<br>
to me.<br>
<br>
> too difficult to implement "full" validation, as they're bitcoin developers<br>
> with quite a bit of experience.<br>
<br>
::shrugs:: Above you're also arguing against fetching down to the<br>
coinbase transaction to save a couple hundred bytes a block, which<br>
makes it impossible to validate a half dozen other things (including<br>
as mentioned in the other threads depth fidelity of returned proofs).<br>
There are a lot of reasons why things don't get implemented other than<br>
experience! :)<br>
</blockquote></div></div>