From cezary.dziemian at gmail.com Fri Feb 2 13:36:10 2018
From: cezary.dziemian at gmail.com (Cezary Dziemian)
Date: Fri, 2 Feb 2018 14:36:10 +0100
Subject: [Lightning-dev] channel_reserve_satoshis?
Message-ID:

Hello,

When we send open_channel, how can we tell the other party that we would like them to put some of their own funds into the channel? Is this what the "channel_reserve_satoshis" field is for?

Best regards,
Cezary

From ZmnSCPxj at protonmail.com Fri Feb 2 14:07:33 2018
From: ZmnSCPxj at protonmail.com (ZmnSCPxj)
Date: Fri, 02 Feb 2018 09:07:33 -0500
Subject: [Lightning-dev] channel_reserve_satoshis?
In-Reply-To: References: Message-ID:

Good morning Cezary,

> When we send open_channel, how can we tell the other party that we would like them to put some of their own funds into the channel?

There is no way to do that as of BOLT v1.0.

There are too many issues with letting whoever opens a channel ask your node to commit money into it:

1. If I ask you to put 10.0 BTC into a channel I make, and you accept, I learn that you have at least 10.0 BTC lying around.
2. I might open a channel to you and ask you to put in money, then, once you have committed the money into the channel, disconnect my node and reformat its hard disk. You are then forced into a unilateral close on your side, locking up your funds for the unilateral-close delay. Even if there is a rule that I must commit at least as much as you, a richer attacker can still lock up the funds of a poorer victim.

In general, such dual-funded channels require some measure of trust between you and your counterparty because of the above issues, at least that the one initiating the opening will not suddenly disappear.
Such trust issues can be mitigated by disallowing dual-funding by default on your node, and requiring you to explicitly allow it, once, for a particular amount, coming from a particular peer. But in any case, for now it is not defined in BOLT v1.0.

> Is this what the "channel_reserve_satoshis" field is for?

No. `channel_reserve_satoshis` is different. It is the amount that each of you must keep on your own side of the channel, once the channel state has moved away from "all of the funds are assigned to the opener of the channel."

The reason for this field is as follows:

1. Suppose I open a 1 BTC channel to you, and we agree on a `channel_reserve_satoshis` amounting to 0.1 BTC. The initial channel state is (me=1.0 BTC, you=0 BTC).
2. I can then make 9 payments of 0.1 BTC each, so that the channel state is (me=0.1 BTC, you=0.9 BTC).
3. The `channel_reserve_satoshis` means I cannot pay you any further, i.e. I cannot move the channel state to (me=0 BTC, you=1 BTC).
4. Suppose we allowed this (me=0 BTC, you=1 BTC) state. Then it is costless for me to attempt to steal: I have no money on the channel and there is nothing to punish me with. Even if I steal and you detect it, I lose nothing, because I own nothing on the channel.
5. But if the channel is constrained so that I must keep 0.1 BTC on it, then stealing attempts have a cost. If you detect me, I stand to lose 0.1 BTC. If you have a better than 90% chance of detecting me, say 91%, a mere 9% chance of a 0.9 BTC payoff is not enough to counterbalance the 91% chance of losing the 0.1 BTC I currently have on the channel.
6. In short, `channel_reserve_satoshis` ensures that theft attempts are never costless.

Regards,
ZmnSCPxj

From cezary.dziemian at gmail.com Fri Feb 2 16:48:47 2018
From: cezary.dziemian at gmail.com (Cezary Dziemian)
Date: Fri, 2 Feb 2018 17:48:47 +0100
Subject: [Lightning-dev] channel_reserve_satoshis?
In-Reply-To: References: Message-ID:

Thank you very much for the answer, and for the explanation of `channel_reserve_satoshis`. I had confused `channel_reserve_satoshis` with `dust_limit`.

Let's say I would like to receive LN payments. How can I do this without locking funds on the other side of the channel?

Best regards,
Cezary

From rusty at rustcorp.com.au Fri Feb 2 05:48:46 2018
From: rusty at rustcorp.com.au (Rusty Russell)
Date: Fri, 02 Feb 2018 16:18:46 +1030
Subject: [Lightning-dev] Suggestion: Add optional IP address field to invoice format
In-Reply-To: <7c86de62-bb03-830a-51ef-f834d9a41a38@gmail.com>
References: <7c86de62-bb03-830a-51ef-f834d9a41a38@gmail.com>
Message-ID: <87372k3ps1.fsf@rustcorp.com.au>

Ignatius Rivaldi writes:
> Hi,
>
> I think that there is a potential problem for sellers accepting the lightning network.
> They need someone to open a channel with them that is filled with bitcoins, so that they can start receiving bitcoins from other LN users. But what if a buyer could simultaneously open a channel and pay the seller? To do that, they need to know the seller's IP address and how many bitcoins to pay, so that they can push the appropriate amount to the seller's side, satisfying the seller's transaction. But currently we open a channel using the pubkey@ip format, which doesn't carry amount information, and then we pay using the lntb... format, which doesn't carry IP address information.

The DNS seeds have this information if you are bootstrapping, or you can connect to any other node and get information on the network as a whole. Connecting to the first recipient is one strategy, but not clearly the best if they're not reasonably connected: if you know the topology, you can make a more informed decision.

It's not a bad idea to add an 'a' field, but I'd rather hold off, as it *is* something of a layering violation.

Cheers,
Rusty.

From conner at lightning.engineering Sat Feb 3 02:20:01 2018
From: conner at lightning.engineering (Conner Fromknecht)
Date: Sat, 03 Feb 2018 02:20:01 +0000
Subject: [Lightning-dev] QuickMaths for Onions: Linear Construction of Sphinx Shared-Secrets
Message-ID:

Hello everyone,

While working on some upgrades to our lightning-onion repo [1], roasbeef pointed out that all of our implementations use a quadratic algorithm to iteratively apply the intermediate blinding factors.

I spent some time working on a linear algorithm that reduces the total number of scalar multiplications. Overall, our packet construction benchmarks showed an 8x speedup, from 37ms to 4.5ms, and it now uses ~70% less memory. The diff is only ~15 LOC, and I thought this would be a useful optimization for all our implementations to have. I can make a PR that updates the example source in lightning-rfc if there is interest.
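[Editor's note: the email above does not include the algorithm itself; the algebra behind the optimization can be sketched as follows. This is a toy model only — it uses modular exponentiation over an illustrative prime in place of secp256k1 point multiplication, and none of the names below come from the lightning-onion code. The point is that blinding a key repeatedly, (P^b1)^b2, equals P^(b1*b2 mod n), so the blinding scalars can be multiplied together cheaply and applied with one group operation per hop.]

```python
# Toy illustration of linear vs. quadratic application of Sphinx
# blinding factors. "Group operations" here are modular exponentiations
# over a toy prime; real Sphinx uses elliptic-curve points, but the
# associativity argument is identical.

p = 2**127 - 1  # toy Mersenne prime (illustrative assumption)
n = p - 1       # order of the multiplicative group mod p
g = 3

def blind(pub, factor):
    # One expensive "scalar multiplication" in the toy group.
    return pow(pub, factor, p)

blinding_factors = [17, 23, 31, 41]  # arbitrary example scalars
pub = pow(g, 12345, p)               # session public key

# Quadratic approach: hop i re-applies every factor up to i,
# costing O(k^2) group operations for k hops.
quad = []
for i in range(len(blinding_factors)):
    e = pub
    for b in blinding_factors[:i + 1]:
        e = blind(e, b)
    quad.append(e)

# Linear approach: accumulate the product of the scalars mod the group
# order, then do a single group operation per hop -- O(k) total.
lin, acc = [], 1
for b in blinding_factors:
    acc = (acc * b) % n
    lin.append(blind(pub, acc))

assert quad == lin  # both yield the same per-hop ephemeral keys
```

Cheap scalar multiplications mod n replace all but one expensive group operation per hop, which is where the reported speedup comes from.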
A description, along with the modified source, can be found in my PR to lightning-onion [2]. The correctness of the output has been verified against the (updated) BOLT 4 test vectors [3].

[1] https://github.com/lightningnetwork/lightning-onion
[2] https://github.com/lightningnetwork/lightning-onion/pull/18
[3] https://github.com/lightningnetwork/lightning-rfc/pull/372

Cheers,
Conner

From jim.posen at gmail.com Sun Feb 4 09:02:47 2018
From: jim.posen at gmail.com (Jim Posen)
Date: Sun, 4 Feb 2018 01:02:47 -0800
Subject: [Lightning-dev] QuickMaths for Onions: Linear Construction of Sphinx Shared-Secrets
In-Reply-To: References: Message-ID:

Nice work! I reread the relevant section in BOLT 4, and it is indeed written in a way that suggests the quadratic-time algorithm. I have opened a PR to update the recommendation and reference code: https://github.com/lightningnetwork/lightning-rfc/pull/374.

From ZmnSCPxj at protonmail.com Sun Feb 4 09:08:24 2018
From: ZmnSCPxj at protonmail.com (ZmnSCPxj)
Date: Sun, 04 Feb 2018 04:08:24 -0500
Subject: [Lightning-dev] channel_reserve_satoshis?
In-Reply-To: References: Message-ID:

Good morning Cezary,

> Let's say I would like to receive LN payments. How can I do this without locking funds on the other side of the channel?

1. Do the Blockstream Store route: do it early enough, and people will make channels to you because they want to try out the Lightning Network quickly.
2. Publish your node and contact details (IP or Tor onion service) and hope people are excited enough about your product to open a channel to you.
3. In all likelihood, some service later will offer deals like "up to 300mBTC receive for only 1mBTC! At least 3 months channel alive!" for new upcoming businesses.
4. Ask a friend to open a channel to you.

Regards,
ZmnSCPxj

From cezary.dziemian at gmail.com Sun Feb 4 11:06:43 2018
From: cezary.dziemian at gmail.com (Cezary Dziemian)
Date: Sun, 4 Feb 2018 12:06:43 +0100
Subject: [Lightning-dev] channel_reserve_satoshis?
In-Reply-To: References: Message-ID:

Thanks for the answer, ZmnSCPxj.

I think the first two options are for those who want to earn some money from payment fees. The option that could be interesting for a business such as a coffee shop is the third. Do you agree?

>> 3.
In all likelihood, some service later will offer deals like "up to 300mBTC receive for only 1mBTC! At least 3 months channel alive!" for new upcoming businesses.

It seems the best option for new businesses, but what if such a new business would like a channel with 300mBTC on both sides? Let's say it is an ATM: the owner of the ATM needs to be able to both receive and send funds. Without the possibility of both sides funding a channel, it is quite hard to establish such a balanced channel. The ATM owner would need to send 300mBTC as an on-chain transaction to a hub, and then the hub could open a channel with 600mBTC capacity and send 300mBTC back to the ATM owner through this new channel. This requires trust in the hub.

I know LN is at an early stage, but I'm very surprised that both sides cannot fund a channel, especially because at the beginning LN was presented with such an option. Are the trust issues that you described before the only reason, or are there also technical issues in implementing such functionality? Do you predict this will be added to the BOLTs and implemented in the future?

Best regards,
Cezary

-------------- next part -------------- An HTML attachment was scrubbed...
From ZmnSCPxj at protonmail.com Sun Feb 4 12:36:14 2018
From: ZmnSCPxj at protonmail.com (ZmnSCPxj)
Date: Sun, 04 Feb 2018 07:36:14 -0500
Subject: [Lightning-dev] channel_reserve_satoshis?
In-Reply-To: References: Message-ID:

Good morning Cezary,

> I think the first two options are for those who want to earn some money from payment fees. The option that could be interesting for a business such as a coffee shop is the third. Do you agree?
>
>> 3. In all likelihood, some service later will offer deals like "up to 300mBTC receive for only 1mBTC! At least 3 months channel alive!" for new upcoming businesses.
>
> It seems the best option for new businesses, but what if such a new business would like a channel with 300mBTC on both sides? Let's say it is an ATM: the owner of the ATM needs to be able to both receive and send funds. Without the possibility of both sides funding a channel, it is quite hard to establish such a balanced channel. The ATM owner would need to send 300mBTC as an on-chain transaction to a hub, and then the hub could open a channel with 600mBTC capacity and send 300mBTC back to the ATM owner through this new channel. This requires trust in the hub.

Then do not trust a single hub. Instead, have an incoming 300mBTC channel from one hub, and make an outgoing 300mBTC channel to another hub. This encourages more hubs, and it also makes your node a potential routing node, improving network connectivity.

> I know LN is at an early stage, but I'm very surprised that both sides cannot fund a channel. Are the trust issues that you described before the only reason, or are there also some technical issues? Do you predict this will be added to the BOLTs and implemented in the future?

There is already an issue regarding this. For now, the priority is the actual implementation of payments.

Dual-funded channels can be emulated by having some hub service open a channel to you, while you open a channel to some other hub (i.e. make two channels). Such an emulation is superior to dual-funding, as it allows you to become an alternate route if other routes become congested, letting you earn some small amount; compare this to a single dual-funded channel that, by itself, cannot be used for routing.

Another thing is that we can make "circular superhubs" if small groups of us cooperate. The smallest 3-circle superhub has 3 members, A, B, and C: A opens a channel to B, B opens a channel to C, and C opens a channel to A, each channel with the same capacity. If each of you has one out-channel other than on the circular superhub, any of A, B, or C can spend to any node that any of them has an out-channel to. Similarly, each of you can receive via any in-channel any of you happens to have. Join a few such small communities and you can be well-connected enough to send and receive reasonably seamlessly to anyone on the network.

Regards,
ZmnSCPxj

-------------- next part -------------- An HTML attachment was scrubbed...
From abhisharm at gmail.com Sun Feb 4 18:21:48 2018
From: abhisharm at gmail.com (Abhishek Sharma)
Date: Sun, 4 Feb 2018 13:21:48 -0500
Subject: [Lightning-dev] An Idea to Improve Connectivity of the Graph
Message-ID:

Hello all,

I am not sure if this is the right place for this, but I have been thinking about how the lightning network could be modified so that fewer total channels would need to be open. I had the idea of a specific kind of transaction, in which three parties commit their funds all at once and are able to move their funds between the three open channels between them. I will give a rough overview of the idea and an example that I think illustrates how it could improve users' ability to route their transactions.

Say that three parties, A, B, and C, create a special commitment transaction on the network that creates three open channels between each pair of them, with a pre-specified balance in each channel. These would be ordinary lightning channels, so the three of them can transact with each other and modify the balances in their individual channels at will. However, this special agreement between the three of them also has the property that they can move their funds *between* channels, provided a party has the permission of the counterparty on the channel the funds are moved from, and then presents this permission to the other counterparty to show that funds have been moved.

1.) A, B, and C each create a commitment transaction, committing 0.5 BTC on their end of each of their channels (3 BTC in total).
2.) A, B, and C transact normally using the lightning protocol. After some amount of time, the channel balances are as follows:
channel AB: A - 0.75, B - 0.25
channel BC: B - 0.4, C - 0.6
channel AC: A - 0, C - 1.0
3.) A would like to send 0.5 BTC to C, but she does not have enough funds in that channel to do so. It's also not possible for her to route the payment through B, as B only has 0.4 in his channel with C.
However, she does have those funds in her channel with B, and so she asks for B's permission (in the form of a signed balance state that includes the hash of the previous balance) to move those funds over to her side of the channel with C. She gets this signed slip from B and presents it to C.
4.) A, B, and C continue trading on their updated balances.
5.) When they wish to close out their channels, they all post the last signed balance statements each of them has.

Say, for example, A and B were to collude and trade on their old balances (of 0.75 and 0.25) after B signed the statement that A was 'moving' funds to C. If A and C were trading on their new balances, C has proof of both A's and B's collusion: she can present the signed slip which said that A was moving funds to AC, so the total balance on A and B's channel should have summed to 0.5. In this event, all funds in all three channels are forfeited to C.

I believe this works because, by virtue of being able to make inferences from her own channel balances, C always knows (if she is following the protocol) exactly how much should be in channel AB, and can prove this. If there were 4 parties, C couldn't prove on her own that some set of parties colluded to trade on an old balance.

Now I'll show why such a mechanism can be useful. Assume that there are parties A, B, C, D, and E, and the following channels and balances exist (with the ones marked by a * part of the special three-way commitment):
AB*: A - 1.0, B - 0
BC*: B - 0, C - 1.0
AC*: A - 0, C - 1.0
AD: D - 1.0, A - 0
CE: C - 1.0, E - 0
Now suppose D wishes to send E 1.0 BTC. With the current channel structure, this isn't possible in lightning without opening a new channel and waiting for the network to confirm it. However, A can ask B to move her 1.0 in channel AB to channel AC (perhaps for a very nominal fee to incentivise this), thereby enabling D to route 1.0 BTC from A to C and finally to E.
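[Editor's note: the five-party example above can be checked with plain balance bookkeeping. This is a toy model only — the `move` and `pay` helpers are hypothetical names, and none of the signed-slip or penalty machinery from the proposal is represented, just the arithmetic.]

```python
# Toy bookkeeping for the A/B/C/D/E example; no cryptography, just
# balance accounting. Channel names and balances follow the email.
channels = {
    "AB": {"A": 1.0, "B": 0.0},   # * part of the three-way commitment
    "BC": {"B": 0.0, "C": 1.0},   # *
    "AC": {"A": 0.0, "C": 1.0},   # *
    "AD": {"D": 1.0, "A": 0.0},
    "CE": {"C": 1.0, "E": 0.0},
}

def move(party, src, dst, amount):
    # Re-allocate `party`'s funds between two channels of the three-way
    # agreement (in the proposal this needs the counterparty's signed slip).
    assert channels[src][party] >= amount
    channels[src][party] -= amount
    channels[dst][party] += amount

def pay(sender, receiver, channel, amount):
    # An ordinary in-channel LN balance update.
    assert channels[channel][sender] >= amount
    channels[channel][sender] -= amount
    channels[channel][receiver] += amount

# A re-allocates her 1.0 from AB to AC, after which D can route
# D -> A -> C -> E even though no D-E channel exists.
move("A", "AB", "AC", 1.0)
pay("D", "A", "AD", 1.0)
pay("A", "C", "AC", 1.0)
pay("C", "E", "CE", 1.0)

assert channels["CE"]["E"] == 1.0  # E has received the full payment
```

Note how `move` changes the total held in an individual channel (AC briefly totals 2.0), which is exactly the property the three-way commitment adds and what the signed balance statements must police at close-out.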
I would appreciate your feedback on this idea and any questions you may have.

Best Regards,
Abhishek Sharma
Brown University
Computer Science '18

From ZmnSCPxj at protonmail.com Sun Feb 4 21:41:10 2018
From: ZmnSCPxj at protonmail.com (ZmnSCPxj)
Date: Sun, 04 Feb 2018 16:41:10 -0500
Subject: [Lightning-dev] An Idea to Improve Connectivity of the Graph
In-Reply-To: References: Message-ID: <_jS-uMec5G1MY-bHNxlT80vcgJcEqvBvkGaoVGpJqZleIAKHBiMKognosenyigwMhZW8e-85Sfd_0GHJW2eckULUJjYol5nZtT61cTuCUAg=@protonmail.com>

Good morning Abhishek Sharma,

While the goal of the idea is good, can you provide more details on the Bitcoin transactions? Presumably the on-chain anchor is a 3-of-3 multisig UTXO; what is the transaction that spends it? What do the Lightning commitment transactions spend? Can you draw a graph of the transaction chains that ensure correct operation of this idea?

Have you seen Burchert-Decker-Wattenhofer Channel Factories? https://www.tik.ee.ethz.ch/file/a20a865ce40d40c8f942cf206a7cba96/Scalable_Funding_Of_Blockchain_Micropayment_Networks%20(1).pdf

What is the difference between your idea and the Burchert-Decker-Wattenhofer Channel Factories?

Regards,
ZmnSCPxj

From ap at coinomat.com Mon Feb 5 11:50:12 2018
From: ap at coinomat.com (Alex P)
Date: Mon, 5 Feb 2018 14:50:12 +0300
Subject: [Lightning-dev] Manual channel funding
Message-ID: <79a99cf9-19c1-3292-a832-854811639296@coinomat.com>

Hello!

At the moment there is no option to manually choose the outputs used to fund a channel. Moreover, there is no way to fund a channel with "all available funds".
That's weird: I set up a channel, tried to use "all I have", and got a transaction on the blockchain with an output of 980 satoshi: https://chain.so/tx/BTC/bc144507a85900d0fc0318cc54a4bcb29542bfcd543e7acf9f00061f03c997e5

In my opinion there should at least be an option "take fee from funding amount", and maybe an option to choose the exact outputs to spend.

Any ideas?

From decker.christian at gmail.com Mon Feb 5 13:02:38 2018
From: decker.christian at gmail.com (Christian Decker)
Date: Mon, 05 Feb 2018 14:02:38 +0100
Subject: [Lightning-dev] Improving the initial gossip sync
Message-ID: <874lmvy4gh.fsf@gmail.com>

Hi everyone

When we started drafting the specification we decided to postpone the topology synchronization mechanism until we had a better picture of the kinds of loads to be expected in the network, e.g., churn and update rate, and instead implemented a trivial gossip protocol to distribute topology updates. This includes the dreaded initial synchronization dump that has lately caused some issues for all implementations, given that we dump several thousand updates, which may require block metadata lookup (short channel ID to txid conversion) and a UTXO lookup (is this channel still active?).

During the last call we decided to go for an incremental improvement rather than a full synchronization mechanism (IBLT, rsync, ...). So let's discuss what that improvement could look like. In the following I'll describe a very simple extension based on a highwater mark for updates; I think Pierre has a good proposal of his own, which I'll let him explain.

We already have the `initial_routing_sync` feature bit, which (if implemented) allows disabling the initial gossip synchronization, only forwarding newly received gossip messages. I propose adding a new feature bit (6, i.e., bitmask 0x40) indicating that the `init` message is extended with a u32 `gossip_timestamp`, interpreted as a UNIX timestamp.
The `gossip_timestamp` is the lowest `channel_update` and `node_announcement` timestamp the recipient is supposed to send; any older update or announcement is to be skipped. This allows the `init` sender to specify how far back the initial synchronization should go.

The logic to forward announcements thus follows this outline:

 - Set `gossip_timestamp` for this peer.
 - Iterate through all `channel_update`s that have a timestamp newer than the `gossip_timestamp` (skipping replaced ones as per BOLT 07).
 - For each `channel_update`, fetch the corresponding `channel_announcement` and the endpoints' `node_announcement`s.
 - Forward the messages in the correct order, i.e., `channel_announcement`, then `channel_update`, and then `node_announcement`.

The feature bit is even, meaning that it is required from the peer, since we extend the `init` message itself, and a peer that does not support this feature would be unable to parse any future extensions to the `init` message. Alternatively, we could create a new `set_gossip_timestamp` message that is only sent if both endpoints support this proposal, but that could result in duplicate messages being delivered between the `init` and the `set_gossip_timestamp` messages, and it'd require additional messages.

`gossip_timestamp` is rather flexible: the sender can specify its most recent update timestamp if it believes it is completely caught up, a slightly older timestamp to have some overlap with currently broadcasting updates, or the timestamp at which the node was last connected to the network, in the case of prolonged downtime.

The reason I'm using a timestamp and not the blockheight in the short channel ID is that we already use the timestamp for pruning. With a blockheight-based cutoff we might ignore channels that were created, then not announced or forgotten, and then later came back and are now stable.
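[Editor's note: the forwarding outline above can be sketched as follows. This is a toy model only — the dict-based gossip store and string message placeholders are illustrative assumptions, not any implementation's actual data structures, and it keeps a single `channel_update` per channel rather than one per direction.]

```python
# Sketch of the gossip_timestamp highwater-mark filter. Messages are
# placeholder strings; a real node would hold parsed gossip messages.
channel_updates = {  # short_channel_id -> (timestamp, channel_update)
    "100x1x0": (1000, "upd-100"),
    "200x1x0": (2000, "upd-200"),
}
channel_announcements = {"100x1x0": "ann-100", "200x1x0": "ann-200"}
node_announcements = {"A": "node-A", "B": "node-B"}
endpoints = {"100x1x0": ("A", "B"), "200x1x0": ("A", "B")}  # scid -> node ids

def initial_sync(gossip_timestamp):
    """Yield gossip for every channel whose latest update is at least
    gossip_timestamp, in the required order: channel_announcement,
    then channel_update, then the endpoints' node_announcements."""
    sent_nodes = set()
    for scid, (ts, update) in sorted(channel_updates.items()):
        if ts < gossip_timestamp:
            continue  # older than the peer's highwater mark: skip
        yield channel_announcements[scid]  # announcement first
        yield update                       # then the update
        for node_id in endpoints[scid]:    # then the endpoints, once each
            if node_id not in sent_nodes:
                sent_nodes.add(node_id)
                yield node_announcements[node_id]

# A peer that sent gossip_timestamp=1500 only receives the newer channel:
msgs = list(initial_sync(1500))
# msgs == ["ann-200", "upd-200", "node-A", "node-B"]
```

A peer that is fully caught up would send its newest known timestamp and receive almost nothing, while a freshly bootstrapping peer would send 0 and receive the full dump, matching the flexibility described above.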
I hope this rather simple proposal is sufficient to fix the short-term issues we are facing with the initial sync, while we wait for a real sync protocol. It is definitely not meant to allow perfect synchronization of the topology between peers, but then again I don't believe that is strictly necessary to make the routing successful.

Please let me know what you think, and I'd love to discuss Pierre's proposal as well.

Cheers,
Christian

From decker.christian at gmail.com  Mon Feb  5 13:15:28 2018
From: decker.christian at gmail.com (Christian Decker)
Date: Mon, 05 Feb 2018 14:15:28 +0100
Subject: [Lightning-dev] Manual channel funding
In-Reply-To: <79a99cf9-19c1-3292-a832-854811639296@coinomat.com>
References: <79a99cf9-19c1-3292-a832-854811639296@coinomat.com>
Message-ID: <871shzy3v3.fsf@gmail.com>

Hi Alex,

not sure what the context of your question is. It doesn't appear to be protocol related, but rather an issue with the interface that the implementations expose. If that is the case, I'd suggest filing an issue with the respective implementation.

Cheers,
Christian

Alex P writes:
> Hello!
>
> At the moment there is no option to choose outputs to fund a channel
> manually. Moreover, there is no way to fund a channel with "all available
> funds". That's weird, I set up a channel and tried to use "all I have",
> and what I got is a transaction on the blockchain with an output for 980 SAT:
>
> https://chain.so/tx/BTC/bc144507a85900d0fc0318cc54a4bcb29542bfcd543e7acf9f00061f03c997e5
>
> In my opinion there should at least be an option "take fee from funding
> amount", and maybe an option to choose the exact outputs to spend.
>
> Any ideas?
> > > _______________________________________________
> Lightning-dev mailing list
> Lightning-dev at lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev

From decker.christian at gmail.com  Mon Feb  5 13:21:54 2018
From: decker.christian at gmail.com (Christian Decker)
Date: Mon, 05 Feb 2018 14:21:54 +0100
Subject: [Lightning-dev] An Idea to Improve Connectivity of the Graph
In-Reply-To: <_jS-uMec5G1MY-bHNxlT80vcgJcEqvBvkGaoVGpJqZleIAKHBiMKognosenyigwMhZW8e-85Sfd_0GHJW2eckULUJjYol5nZtT61cTuCUAg=@protonmail.com>
References: <_jS-uMec5G1MY-bHNxlT80vcgJcEqvBvkGaoVGpJqZleIAKHBiMKognosenyigwMhZW8e-85Sfd_0GHJW2eckULUJjYol5nZtT61cTuCUAg=@protonmail.com>
Message-ID: <87y3k7wozx.fsf@gmail.com>

I'd also like to point out that the way we do state invalidations in Lightning is not really suited for multi-party negotiations beyond 2 parties. The number of potential reactions to a party cheating grows exponentially in the number of parties in the contract, which is the reason the Channel Factories paper relies on the Duplex Micropayment Channel construction instead of the retaliation construction in LN.

Furthermore I'm also not exactly clear on how we could retaliate misbehavior on one channel in the other channel if they are logically independent. Without this you could potentially re-allocate your funds to another channel and then attempt to cheat, without it costing you any funds.

Cheers,
Christian

ZmnSCPxj via Lightning-dev writes:
> Good morning Abhishek Sharma,
>
> While the goal of the idea is good, can you provide more details on the Bitcoin transactions? Presumably the on-chain anchor is a 3-of-3 multisig UTXO; what is the transaction that spends that? What do Lightning commitment transactions spend? Can you draw a graph of transaction chains that ensure correct operation of this idea?
>
> Have you seen the Burchert-Decker-Wattenhofer Channel Factories?
https://www.tik.ee.ethz.ch/file/a20a865ce40d40c8f942cf206a7cba96/Scalable_Funding_Of_Blockchain_Micropayment_Networks%20(1).pdf What is the difference between your idea and the Burchert-Decker-Wattenhofer Channel Factories?
>
> Regards,
> ZmnSCPxj
>
> Sent with [ProtonMail](https://protonmail.com) Secure Email.
>
> -------- Original Message --------
> On February 4, 2018 6:21 PM, Abhishek Sharma wrote:
>
>> Hello all,
>> I am not sure if this is the right place for this, but I have been thinking about the lightning network and how it could be modified so that fewer total channels would need to be open. I had the idea for a specific kind of transaction, in which three parties commit their funds all at once, and are able to move their funds between the three open channels between them. I will give a rough overview of my idea and give an example that I think illustrates how it could improve users' ability to route their transactions.
>>
>> Say that three parties, A, B, and C, create a special commitment transaction on the network that creates three open channels between each of them with a pre-specified balance in each channel. Now, these channels would be lightning network channels, and so the three of them could transact with each other and modify balances in their individual channels at will. However, this special agreement between the three of them also has the property that they can move their funds between channels, provided they have the permission of the counterparty to the channel they move their funds from, and then present this to the other counterparty to show that funds have been moved.
>>
>> 1.) A, B, and C each create a commitment transaction, committing .5 BTC (3 BTC in total) on their end of each of their channels.
>> 2.) A, B, and C transact normally using the lightning protocol. After some amount of time, the channel balances are as follows:
>> channel AB: A - 0.75, B - 0.25
>> channel BC: B - 0.4, C - 0.6
>> channel AC: A - 0, C - 1.0
>> 3.)
A would like to send .5 BTC to C, however she does not have enough funds in that channel to do so. It's also not possible for her to route her transaction through B, as B only has .4 in his channel with C. However, she does have those funds in her channel with B, and so asks for B's permission (in the form of a signed balance state that includes the hash of the previous balance) to move those funds over to her account with C. She gets this signed slip from B, and then presents it to C.
>> 4.) A, B, and C continue trading on their updated balances.
>> 5.) When they wish to close out their channels, they all post the last signed balance statements each of them has.
>> Say, for example, A and B were to collude and trade on their old balance (of .75 and .25) after B signing the statement that A was 'moving' funds to C. If A and C were trading on their new balances, C has proof of both A and B's collusion, and she can present the signed slip which said that A was moving funds to AC, and so the total balance on A and B's channel should've summed to 0.5. In this event, all funds in all three channels are forfeited to C.
>>
>> I believe this works because, in virtue of being able to make inferences based on her own channel balances, C always knows (if she is following the protocol) exactly how much should be in channel AB, and can prove this. If there were 4 parties, C couldn't prove on her own that some set of parties colluded to trade on an old balance.
>>
>> Now, I'll show why such a mechanism can be useful.
>> Now, assume that there are parties A, B, C, D, and E, and the following channels and balances exist (with the ones marked by a * part of the special three-way commitment):
>> AB*: A - 1.0, B - 0
>> BC*: B - 0, C - 1.0
>> AC*: A - 0, C - 1.0
>> AD: D - 1.0, A - 0
>> CE: C - 1.0, E - 0
>> Now suppose D wishes to send E 1.0 BTC. With the current channel structure, this isn't possible in lightning without opening a new channel and waiting for the network to verify it.
However, A can ask B to move her 1.0 in channel AB to channel AC (with maybe a very nominal fee to incentivise this), thereby enabling D to route 1.0 BTC from A to C and finally to E.
>>
>> I would appreciate your feedback on this idea and any questions you may have for further explanation.
>>
>> Best Regards,
>> Abhishek Sharma
>> Brown University
>> Computer Science '18

From fabrice.drouin at acinq.fr  Mon Feb  5 15:08:22 2018
From: fabrice.drouin at acinq.fr (Fabrice Drouin)
Date: Mon, 5 Feb 2018 16:08:22 +0100
Subject: [Lightning-dev] Improving the initial gossip sync
In-Reply-To: <874lmvy4gh.fsf@gmail.com>
References: <874lmvy4gh.fsf@gmail.com>
Message-ID:

Hi,

On 5 February 2018 at 14:02, Christian Decker wrote:
> Hi everyone
>
> The feature bit is even, meaning that it is required from the peer,
> since we extend the `init` message itself, and a peer that does not
> support this feature would be unable to parse any future extensions to
> the `init` message. Alternatively we could create a new
> `set_gossip_timestamp` message that is only sent if both endpoints
> support this proposal, but that could result in duplicate messages being
> delivered between the `init` and the `set_gossip_timestamp` message and
> it'd require additional messages.

We chose the other approach and propose to use an optional feature.

> The reason I'm using timestamp and not the blockheight in the short
> channel ID is that we already use the timestamp for pruning. In the
> blockheight based timestamp we might ignore channels that were created,
> then not announced or forgotten, and then later came back and are now
> stable.

Just to be clear, you propose to use the timestamp of the most recent channel updates to filter the associated channel announcements?
> I hope this rather simple proposal is sufficient to fix the short-term
> issues we are facing with the initial sync, while we wait for a real
> sync protocol. It is definitely not meant to allow perfect
> synchronization of the topology between peers, but then again I don't
> believe that is strictly necessary to make the routing successful.
>
> Please let me know what you think, and I'd love to discuss Pierre's
> proposal as well.
>
> Cheers,
> Christian

Our idea is to group channel announcements by "buckets", create a filter for each bucket, and exchange and use them to filter out channel announcements.

We would add a new `use_channel_announcement_filters` optional feature bit (7 for example), and a new `channel_announcement_filters` message.

When a node that supports channel announcement filters receives an `init` message with the `use_channel_announcement_filters` bit set, it sends back its channel filters. When a node that supports channel announcement filters receives a `channel_announcement_filters` message, it uses it to filter channel announcements (and, implicitly, channel updates) before sending them.

The filters we have in mind are simple:

- Sort announcements by short channel id
- Compute a marker height, which is `144 * ((now - 7 * 144) / 144)` (we round to multiples of 144 to make sync easier)
- Group channel announcements that were created before this marker by groups of 144 blocks
- Group channel announcements that were created after this marker by groups of 1 block
- For each group, sort and concatenate all channel announcements' short channel ids and hash the result (we could use sha256, or the first 16 bytes of the sha256 hash)

The new `channel_announcement_filters` would then be a list of (height, hash) pairs ordered by increasing heights. This implies that implementations can easily sort announcements by short channel id, which should not be very difficult.
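The bucket-and-hash scheme above can be sketched as follows. A hedged illustration, assuming `now_height` is the current block height and that a short channel id's top 3 bytes encode its funding block height (as in BOLT 7); the function name and 8-byte big-endian serialization are choices made for this sketch:

```python
import hashlib

BLOCKS_PER_DAY = 144

def bucket_filters(short_channel_ids, now_height):
    """Sketch of the (height, hash) filter list described above."""
    def block_height(scid):
        return scid >> 40  # BOLT 7: block height lives in the top 3 bytes

    # Marker one week back, rounded down to a multiple of 144.
    marker = BLOCKS_PER_DAY * ((now_height - 7 * BLOCKS_PER_DAY) // BLOCKS_PER_DAY)

    buckets = {}
    for scid in sorted(short_channel_ids):
        h = block_height(scid)
        # Old channels: one bucket per 144 blocks; recent ones: one per block.
        key = (h // BLOCKS_PER_DAY) * BLOCKS_PER_DAY if h < marker else h
        buckets.setdefault(key, []).append(scid)

    filters = []
    for height in sorted(buckets):
        # Concatenate the sorted short channel ids and keep 16 bytes of sha256.
        data = b"".join(scid.to_bytes(8, "big") for scid in buckets[height])
        filters.append((height, hashlib.sha256(data).digest()[:16]))
    return filters
```

Since both sides sort before hashing, two nodes with the same channel set produce identical filters regardless of the order in which they learned the announcements.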
An additional step could be to send all short channel ids for all groups for which the group hash did not match. Alternatively we could use smarter filters.

The use case we have in mind is mobile nodes, or more generally nodes which are often offline and need to resync very often.

Cheers,
Fabrice

From laolu32 at gmail.com  Tue Feb  6 05:26:30 2018
From: laolu32 at gmail.com (Olaoluwa Osuntokun)
Date: Tue, 06 Feb 2018 05:26:30 +0000
Subject: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning
Message-ID:

Hi Y'all,

A common question I've seen concerning Lightning is: "I have five $2 channels, is it possible for me to *atomically* send $6 to fulfill a payment?". The answer to this question is "yes", provided that the receiver waits to pull all HTLC's until the sum matches their invoice. Typically, one assumes that the receiver will supply a payment hash, and the sender will re-use the payment hash for all streams. This has the downside of payment hash re-use across *multiple* payments (which can already easily be correlated), and also has a failure mode where if the sender fails to actually satisfy all the payment flows, then the receiver can still just pull the monies (and possibly not dispense a service, or w/e).

Conner Fromknecht and I have come up with a way to achieve this over Lightning while (1) not re-using any payment hashes across all payment flows, and (2) adding a *strong* guarantee that the receiver won't be paid until *all* partial payment flows are extended. We call this scheme AMP (Atomic Multi-path Payments). It can be experimented with on Lightning *today* with the addition of a new feature bit to gate this new feature. The beauty of the scheme is that it requires no fundamental changes to the protocol as is now, as the negotiation is strictly *end-to-end* between sender and receiver.
TL;DR: we repurpose some unused space in the onion per-hop payload of the onion blob to signal our protocol (and deliver some protocol-specific data), then use additive secret sharing to ensure that the receiver can't pull the payment until they have enough shares to reconstruct the original pre-image.

Protocol Goals
==============
1. Atomicity: The logical transaction should either succeed or fail in entirety. Naturally, this implies that the receiver should be unable to settle *any* of the partial payments until all of them have arrived.

2. Avoid Payment Hash Reuse: The payment preimages validated by the consensus layer should be distinct for each partial payment. Primarily, this helps avoid correlation of the partial payments, and ensures that malicious intermediaries straddling partial payments cannot steal funds.

3. Order Invariance: The protocol should be forgiving to the order in which partial payments arrive at the destination, adding robustness in the face of delays or routing failures.

4. Non-interactive Setup: It should be possible for the sender to perform an AMP without directly coordinating with the receiving node. Predominantly, this means that the *sender* is able to determine the number of partial payments to use for a particular AMP, which makes sense since they will be the one fronting the fees for the cost of this parameter. Plus, we can always turn a non-interactive protocol into an interactive one for the purposes of invoicing.

Protocol Benefits
=================

Sending payments predominantly over an AMP-like protocol has several clear benefits:

  - Eliminates the constraint that a single path from sender to receiver must exist with sufficient directional capacity. This reduces the pressure to have larger channels in order to support larger payment flows.
As a result, the payment graph can be very diffuse, without sacrificing payment utility

  - Reduces strain from larger payments on individual paths, and allows the liquidity imbalances to be more diffuse. We expect this to have a non-negligible impact on channel longevity. This is due to the fact that with usage of AMP, payment flows are typically *smaller*, meaning that each payment will unbalance a channel to a lesser degree than with one giant flow.

  - Potential fee savings for larger payments, contingent on there being a super-linear component to routed fees. It's possible that with modifications to the fee schedule, it's actually *cheaper* to send payments over multiple flows rather than one giant flow.

  - Allows for logical payments larger than the current maximum value of an individual payment. Atm we have a (temporary) limit on the max payment size. With AMP, this can be side-stepped as each flow can be up to the max size, with the sum of all flows exceeding the max.

  - Given sufficient path diversity, AMPs may improve the privacy of LN. Intermediaries are now unaware of how much of the total payment they are forwarding, or even if they are forwarding a partial payment at all.

  - Using smaller payments increases the set of possible paths a partial payment could have taken, which reduces the effectiveness of static analysis techniques involving channel capacities and the plaintext values being forwarded.

Protocol Overview
==================
This design can be seen as a generalization of the single, non-interactive payment scheme, that uses decoding of extra onion blobs (EOBs?) to encode extra data for the receiver. In that design, the extra data includes a payment preimage that the receiver can use to settle back the payment. EOBs and some method of parsing them are really the only requirement for this protocol to work. Thus, only the sender and receiver need to implement this feature in order for it to function, which can be announced using a feature bit.
First, let's review the current format of the per-hop payload for each node described in BOLT-0004.

+----------------+---------------------+------------------+-------------------------+-------------------+-----------------+
| Realm (1 byte) | Next Addr (8 bytes) | Amount (8 bytes) | Outgoing CLTV (4 bytes) | Unused (12 bytes) | HMAC (32 bytes) |
+----------------+---------------------+------------------+-------------------------+-------------------+-----------------+
|                                               65 Bytes Per Hop                                                         |
+------------------------------------------------------------------------------------------------------------------------+

Currently, *each* node gets a 65-byte payload. We use this payload to give each node instructions on *how* to forward a payment. We tell each node: the realm (or chain to forward on), the next node to forward to, the amount to forward (this is where fees are extracted by forwarding out less than in), the outgoing CLTV (allows verification that the prior node didn't modify any values), and finally an HMAC over the entire thing.

Two important points:
  1. We have 12 bytes for each hop that are currently unpurposed and can be used by application protocols to signal new interpretation of bytes and also deliver additional encrypted+authenticated data to *each* hop.

  2. The protocol currently has a hard limit of 20-hops. With this feature we ensure that the packet stays fixed-size during processing in order to avoid leaking positional information. Typically most payments won't use all 20 hops; as a result, we can use the remaining hops to stuff in *even more* data.

Protocol Description
====================
The solution we propose is Atomic Multi-path Payments (AMPs). At a high level, this leverages EOBs to deliver additive shares of a base preimage, from which the payment preimages of partial payments can be derived. The receiver can only construct this value after having received all of the partial payments, satisfying the atomicity constraint.
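The field layout in the table is easy to see in code. A minimal illustrative parser (not a substitute for a real onion implementation; the function name is invented here, and the field widths are simply those from the table above):

```python
import struct

HOP_PAYLOAD_LEN = 65  # 1 + 8 + 8 + 4 + 12 + 32

def parse_hop_payload(payload: bytes):
    """Split a 65-byte BOLT 4 per-hop payload into its fields."""
    assert len(payload) == HOP_PAYLOAD_LEN
    realm = payload[0]
    next_addr = payload[1:9]                           # short channel id, 8 bytes
    amt_to_forward, = struct.unpack(">Q", payload[9:17])
    outgoing_cltv, = struct.unpack(">I", payload[17:21])
    unused = payload[21:33]                            # the 12 bytes AMP repurposes
    hmac = payload[33:65]
    return realm, next_addr, amt_to_forward, outgoing_cltv, unused, hmac
```

The 12-byte `unused` slice is exactly the space the AMP signalling described below lives in.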
The basic protocol:

Primitives
==========
Let H be a CRH function. Let || denote concatenation. Let ^ denote xor.

Sender Requirements
===================
The parameters to the sending procedure are a random identifier ID, the number of partial payments n, and the total payment value V. Assume the sender has some way of dividing V such that V = v_1 + ... + v_n.

To begin, the sender builds the base preimage BP, from which n partial preimages will be derived: the sender samples n additive shares s_1, ..., s_n, and takes their xor-sum to compute BP = s_1 ^ ... ^ s_n.

With the base preimage created, the sender now moves on to constructing the n partial payments. For each i in [1,n], the sender deterministically computes the partial preimage r_i = H(BP || i), by concatenating the sequence number i to the base preimage and hashing the result. Afterwards, it applies H to determine the payment hash to use in the i-th partial payment as h_i = H(r_i). Note that with this preimage derivation scheme, once the payments are pulled each pre-image is distinct and indistinguishable from any other.

With all of the pieces in place, the sender initiates the i-th payment by constructing a route to the destination with value v_i and payment hash h_i. The tuple (ID, n, s_i) is included in the EOB to be opened by the receiver.

In order to include the 3-tuple within the per-hop payload for the final destination, we repurpose the _first_ byte of the un-used padding bytes in the payload to signal version 0x01 of the AMP protocol (note this is a PoC outline, we would need to standardize signalling of these 12 bytes to support other protocols). Typically this byte isn't set, so the existence of this means that we're (1) using AMP, and (2) the receiver should consume the _next_ hop as well. So if the payment length is actually 5, the sender tacks on an additional dummy 6th hop, encrypted with the _same_ shared secret for that hop to deliver the e2e encrypted data.
Note, the sender can retry partial payments just as they would normal payments, since they are order invariant, and would be indistinguishable from regular payments to intermediaries in the network.

Receiver Requirements
=====================
Upon the arrival of each partial payment, the receiver will iteratively reconstruct BP, and do some bookkeeping to figure out when to settle the partial payments. During this reconstruction process, the receiver does not need to be aware of the order in which the payments were sent, and in fact nothing about the incoming partial payments reveals this information to the receiver, though this can be learned after reconstructing BP.

Each EOB is decoded to retrieve (ID, n, s_i), where i is the unique but unknown index of the incoming partial payment. The receiver has access to a persistent key-value store DB that maps ID to (n, c*, BP*), where c* represents the number of partial payments received, BP* is the sum of the received additive shares, and the superscript * denotes that the value is being updated iteratively. c* and BP* both have initial values of 0.

In the basic protocol, the receiver caches the first n it sees, and verifies that all incoming partial payments have the same n. The receiver should reject all partial payments if any EOB deviates. Next, we update our persistent store with DB[ID] = (n, c* + 1, BP* ^ s_i), advancing the reconstruction by one step.

If c* + 1 < n, there are still more packets in flight, so we sit tight. Otherwise, the receiver assumes all partial payments have arrived, and can begin settling them back. Using the base preimage BP = BP* ^ s_i from our final iteration, the receiver can re-derive all n partial preimages and payment hashes, using r_i = H(BP || i) and h_i = H(r_i), simply through knowledge of n and BP. Finally, the receiver settles back any outstanding payments that include payment hash h_i using the partial preimage r_i.
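The sender and receiver arithmetic above is easy to sanity-check in code. A minimal sketch, assuming SHA-256 for H and a 4-byte big-endian encoding of the index i (both assumptions — the mail leaves H and the index encoding abstract):

```python
import hashlib
import os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# --- Sender: sample n shares whose xor-sum is the base preimage BP ---
def make_shares(n: int):
    shares = [os.urandom(32) for _ in range(n)]
    bp = shares[0]
    for s in shares[1:]:
        bp = xor(bp, s)
    return bp, shares

def partial_payment(bp: bytes, i: int):
    """Derive (r_i, h_i): r_i = H(BP || i), h_i = H(r_i)."""
    r_i = H(bp + i.to_bytes(4, "big"))
    return r_i, H(r_i)

# --- Receiver: xor shares as they arrive; after n of them, BP is rebuilt ---
def reconstruct(shares):
    bp = bytes(32)  # BP* starts at 0
    for s in shares:  # order does not matter: xor is commutative
        bp = xor(bp, s)
    return bp
```

Because xor is commutative and associative, `reconstruct` returns the same BP no matter the arrival order, which is the order-invariance goal from above.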
Each r_i will appear random due to the nature of H, as will its corresponding h_i. Thus, each partial payment should appear uncorrelated, and does not reveal that it is part of an AMP nor the number of partial payments used.

Non-interactive to Interactive AMPs
===================================
The sender simply receives an ID and amount from the receiver in an invoice before initiating the protocol. The receiver should only consider the invoice settled if the total amount received in partial payments containing ID matches or exceeds the amount specified in the invoice. With this variant, the receiver is able to map all partial payments to a pre-generated invoice statement.

Additive Shares vs Threshold-Shares
===================================
The biggest reason to use additive shares seems to be atomicity. Threshold shares open the door to some partial payments being settled, even if others are left in flight. Haven't yet come up with a good reason for using threshold schemes, but there seem to be plenty against it.

Reconstruction of additive shares can be done iteratively, and is a win for the storage and computation requirements on the receiving end. If the sender decides to use fewer than n partial payments, the remaining shares could be included in the EOB of the final partial payment to allow the receiver to reconstruct sooner. The receiver could also optimistically do partial reconstruction on this last aggregate value.

Adaptive AMPs
=============
The sender may not always be aware of how many partial payments they wish to send at the time of the first partial payment, at which point the simplified protocol would require n to be chosen. To accommodate, the above scheme can be adapted to handle a dynamically chosen n by iteratively constructing the shared secrets as follows. Starting with a base preimage BP, the key trick is that the sender remembers the difference between the base preimage and the sum of all partial preimages used so far.
The relation is described using the following equations:

    X_0 = 0
    X_i = X_{i-1} ^ s_i
    X_n = BP ^ X_{n-1}

where if n=1, X_1 = BP, implying that this is in fact a generalization of the single, non-interactive payment scheme mentioned above. For i=1, ..., n-1, the sender sends s_i in the EOB, and X_n for the n-th share. Iteratively reconstructing s_1 ^ ... ^ s_{n-1} ^ X_n = BP allows the receiver to compute all relevant r_i = H(BP || i) and h_i = H(r_i).

Lastly, the final number of partial payments n could be signaled in the final EOB, which would also serve as a sentinel value for signaling completion. In response to DOS vectors stemming from unknown values of n, implementations could consider advertising a maximum value for n, or adopting some sort of framing pattern for conveying that more partial payments are on the way.

We can further modify our usage of the per-hop payloads to send (H(BP), s_i) to consume most of the EOB sent from sender to receiver. In this scenario, we'd repurpose the 11 bytes *after* our signalling byte in the unused byte section to store the payment ID (which should be unique for each payment). In the case of a non-interactive payment, this will be unused. While for interactive payments, this will be the ID within the invoice. To deliver this slimmer 2-tuple, we'll use 32 bytes for the hash of the BP, and 32 bytes for the partial pre-image share, leaving an un-used byte in the payload.

Cross-Chain AMPs
================
AMPs can be used to pay a receiver in multiple currencies atomically...which is pretty cool :D

Open Research Questions
=======================
The above is a protocol sketch to achieve atomic multi-path payments over Lightning. The details concerning onion blob usage serve as a template that future protocols can draw upon in order to deliver additional data to *any* hop in the route. However, there are still a few open questions before something like this can be feasibly deployed.

1.
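The recurrence can be checked mechanically: the sender emits s_1 .. s_{n-1} freely and computes the final share as X_n = BP ^ X_{n-1}, so that xoring everything the receiver saw yields BP. A short sketch under the same assumptions as before (32-byte values, function names invented here):

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def adaptive_shares(bp: bytes, n: int):
    """Emit n shares per the adaptive construction:
    s_1 .. s_{n-1} random, final share X_n = BP ^ X_{n-1},
    so that s_1 ^ ... ^ s_{n-1} ^ X_n = BP."""
    shares = [os.urandom(len(bp)) for _ in range(n - 1)]
    x = bytes(len(bp))            # X_0 = 0
    for s in shares:              # X_i = X_{i-1} ^ s_i
        x = xor(x, s)
    shares.append(xor(bp, x))     # X_n = BP ^ X_{n-1}
    return shares
```

For n=1 the loop is empty and the only share is BP itself, matching the observation above that this generalizes the single-payment scheme.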
How does the sender decide how many chunked payments to send, and the size of each payment?

  - Upon a closer examination, this seems to overlap with the task of congestion control within TCP. The sender may be able to utilize TCP-inspired heuristics to gauge: (1) how large the initial payment should be and (2) how many subsequent payments may be required. Note that if the first payment succeeds, then the exchange is over in a single round.

2. How can AMP and HORNET be composed?

  - If we eventually integrate HORNET, then a distinct communications session can be established to allow the sender+receiver to exchange up-to-date partial payment information. This may allow the sender to more accurately size each partial payment.

3. Can the sender's initial strategy be governed by an instance of the Push-relabel max flow algo?

4. How does this mesh with the current max HTLC limit on a commitment?

  - ATM, we have a max limit on the number of active HTLC's on a particular commitment transaction. We do this, as otherwise it's possible that the transaction is too large, and exceeds standardness w.r.t transaction size. In a world where most payments use an AMP-like protocol, then overall at any given instant there will be several pending HTLC's on commitments network-wide. This may incentivize nodes to open more channels in order to support the increased commitment space utilization.

Conclusion
==========
We've presented a design outline of how to integrate atomic multi-path payments (AMP) into Lightning. The existence of such a construct allows a sender to atomically split a payment flow amongst several individual payment flows. As a result, larger channels aren't as important, as it's possible to utilize one's total outbound payment bandwidth across several channels. Additionally, in order to support the increased load, internal routing nodes are incentivized to have more active channels.
The existence of AMP-like payments may also increase the longevity of channels, as there'll be smaller, more numerous payment flows, making it unlikely that a single payment that comes across unbalances a channel entirely. We've also shown how one can utilize the current onion packet format to deliver additional data from a sender to receiver, that's still e2e authenticated.

-- Conner && Laolu
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ZmnSCPxj at protonmail.com  Tue Feb  6 07:12:09 2018
From: ZmnSCPxj at protonmail.com (ZmnSCPxj)
Date: Tue, 06 Feb 2018 02:12:09 -0500
Subject: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning
In-Reply-To: References:
Message-ID: <5wLocWe7fe1tQiYbyBQSDWFHEcseLbFzb_Q2eZdE-PtIsxKP684-MjVB4iruYtMHcSK4U2A8MGuN3f0PkaqmMmuP0Tef0Bl6ibA2JWUvQ8M=@protonmail.com>

Good morning Laolu,

This is excellent work! Some minor comments...

> (Atomic Multi-path Payments). It can be experimented with on Lightning
> *today* with the addition of a new feature bit to gate this new
> feature. The beauty of the scheme is that it requires no fundamental changes
> to the protocol as is now, as the negotiation is strictly *end-to-end*
> between sender and receiver.

I think, a `globalfeatures` odd bit could be used for this. As it is end-to-end, `localfeatures` is not appropriate.

> - Potential fee savings for larger payments, contingent on there being a
> super-linear component to routed fees. It's possible that with
> modifications to the fee schedule, it's actually *cheaper* to send
> payments over multiple flows rather than one giant flow.

I believe, currently, fees do not have this super-linear component. Indeed, the existence of per-hop fees (`fee_base_msat`) means that splitting the payment over multiple flows will, very likely, be more expensive compared to using a single flow.
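This point is easy to see numerically with today's BOLT 7 fee formula (per hop: `fee_base_msat` plus `amount * fee_proportional_millionths / 1,000,000`). A toy calculation; the parameter values are made up, and the compounding of fees on the forwarded amount at each hop is ignored:

```python
def route_fee(amount_msat, hops, base_msat=1000, ppm=100):
    """Total fee for one flow across `hops` hops (simplified: no compounding)."""
    return sum(base_msat + amount_msat * ppm // 1_000_000 for _ in range(hops))

# One 1,000,000-msat flow over 3 hops vs. the same total split into 4 flows:
single = route_fee(1_000_000, 3)
split = 4 * route_fee(250_000, 3)
# The proportional part is unchanged in total, but the base fee is now
# paid once per hop *per flow*, so splitting costs more.
```

With these numbers the single flow pays 3,300 msat while the four flows pay 12,300 msat in total: the proportional component is identical, and the entire difference is the multiplied `fee_base_msat`.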
Tiny roundoffs in computing the proportional fees (`fee_proportional_millionths`) may make smaller flows give a slight fee advantage, but I think the multiplication of per-hop fees will dominate.

> - Using smaller payments increases the set of possible paths a partial
> payment could have taken, which reduces the effectiveness of static
> analysis techniques involving channel capacities and the plaintext
> values being forwarded.

Strongly agree!

> In order to include the 3-tuple within the per-hop payload for the final
> destination, we repurpose the _first_ byte of the un-used padding bytes in
> the payload to signal version 0x01 of the AMP protocol (note this is a PoC
> outline, we would need to standardize signalling of these 12 bytes to
> support other protocols).

I believe the `realm` byte is intended for this. Intermediate nodes do not need to understand realm bytes that are understood by other nodes in the route, including the realm bytes understood by the final destination, as intermediate nodes cannot, indeed, read the hop data of other nodes. Thus, you can route over nodes that are unaware of AMP, and only provide an AMP realm byte to the destination node, who is able to reconstruct your AMP data as per your algorithm.

Indeed, the `realm` byte controls the interpretation of the rest of the 65-byte packet. If you define, instead, a separate `realm` that is understood by the destination node, you can redefine the entire 64 bytes of the final hop data as you wish. If we support AMP only at final payees, we can completely redefine the 64 bytes in the final hop data for the new AMP `realm`, and not consume the next hop (which would reduce route length by 1). (If we want to support multiple routes converging to an intermediate node, then continue routing to a different final node after routes have merged (i.e.
A->B->C->D, and A->E->C->D, with the payment being merged by C, who forwards the combination to D), then we need to follow the current hop data format, but I think supporting AMP at final payees is actually enough... AMP at intermediate nodes might not be used often enough by senders for it to matter, as taking advantage of it seems more complex than just asking your routing algo to provide you multiple routes to a destination, which you are probably already doing) ---- Overall, good work I think. Regards, ZmnSCPxj -------------- next part -------------- An HTML attachment was scrubbed... URL: From cezary.dziemian at gmail.com Tue Feb 6 15:28:37 2018 From: cezary.dziemian at gmail.com (Cezary Dziemian) Date: Tue, 6 Feb 2018 16:28:37 +0100 Subject: [Lightning-dev] channel_reserve_satoshis? In-Reply-To: References: Message-ID: Thank you very much for the answer! This is quite a good way to replace both-funded channels with such a "superhub". It would be even easier if I could open more than a single channel between the same two parties, but I saw this is not possible in c-lightning. Could you tell me what the reason is, and do you plan to add this possibility in the future? In LND opening multiple channels is possible, but is this compatible with c-lightning then? Best Regards, Cezary 2018-02-04 13:36 GMT+01:00 ZmnSCPxj : > Good morning Cezary, > > > I think the first two options are for those who want to earn some money from > payment fees. The option that can be interesting for some business, like a > coffee shop, is the third option. Do you agree with me? > > > 3. In all likelihood, some service later will offer deals like "up to > 300mBTC receive for only 1mBTC! At least 3 months channel alive!" for new > upcoming businesses. > > It seems the best option for new businesses, but what if such a new business > would like to have a channel with 300mBTC on both sides. Let's say this is an > ATM. The owner of the ATM needs to be able to both receive and send funds.
Without the > possibility of both-side-funded channels, it is quite hard to establish > such a balanced channel. The ATM owner needs to send 300 mBTC as an on-chain > transaction to the hub, and then the hub could open a channel with 600 mBTC capacity > and send back 300mBTC to the ATM owner through this new channel. This requires > trust in the hub. > > > Then do not trust a single hub. Instead, have an incoming 300mBTC from one > hub, then make an outgoing 300mBTC to another hub. This encourages more hubs > also. This also makes your node a potential routing node, improving > network connectivity. > > > I know LN is at an early stage, but I'm very surprised that both sides > cannot fund a channel. Maybe this is because at the beginning LN was > presented with such an option. Are the only reasons the trust issues that you > described before, or are there also some technical issues with implementing > such functionality? Do you predict this will be added to the BOLTs and > implemented in the future? > > > There is already an issue regarding this. For now, the priority is the actual > implementation of payments. Dual-funded channels can be emulated by > having some hubbing service make channels to you, while you make a channel > to some other hub (i.e. make two channels). Such an emulation is superior > to dual-funding, as it allows you to potentially become an alternate route > if other routes become congested, letting you earn some small amount; > compare this to a single dual-funded channel that, by itself, cannot be > used for routing. > > Another thing is that we can make "circular superhubs" if small groups of > us cooperate. The smallest 3-circle superhub has 3 members A B C. A opens a > channel to B, B opens a channel to C, C opens a channel to A. Each channel is > the same capacity. If each of you has one out-channel other than on the > circular superhub, any of A B C can spend to any node that any of them has > an out-channel to. Similarly, each of you can receive via any in-channel > any of you happen to have.
Join a few such small communities and you can > be well-connected enough to send and receive reasonably seamlessly to > anyone on the network. > > Regards, > ZmnSCPxj > > > Best regards, > Cezary > > 2018-02-04 10:08 GMT+01:00 ZmnSCPxj : > >> Good morning Cezary, >> >> > Let's say I would like to receive LN payments. How can I do this, >> without locking funds on the other side of the channel? >> >> 1. Do the Blockstream Store route: do it early enough, and people will >> make channels to you, because they want to try out Lightning Network >> quickly. >> >> 2. Publish the node and contact details (IP or TOR onion service) and >> hope people are excited enough about your product to open a channel to you. >> >> 3. In all likelihood, some service later will offer deals like "up to >> 300mBTC receive for only 1mBTC! At least 3 months channel alive!" for new >> upcoming businesses. >> >> 4. Ask a friend to channel to you. >> >> Regards, >> ZmnSCPxj >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robban at robtex.com Tue Feb 6 16:53:43 2018 From: robban at robtex.com (Robert Olsson) Date: Tue, 6 Feb 2018 18:53:43 +0200 Subject: [Lightning-dev] channel rebalancing support kind of exists already? Message-ID: Hello Let's say Bob opens a channel to Alice for 2BTC. Carol then opens a channel to Bob for 2BTC. Alice and Carol are already connected to Others (and/or each other even). The network and channel balances will look like this:

Alice 0--2 Bob 0--2 Carol
  |                   |
  +----- OTHERS ------+

Bob for some reason wants the channels to be balanced, so he has some better redundancy and it looks better. So hypothetically Bob solves this by paying himself an invoice of 1BTC and making sure the route goes out through Alice and comes back via Carol. Bob pays the fees, so he isn't ashamed if it disturbs the other balances in the network. Should he care?
Alice 1--1 Bob 1--1 Carol
  |                   |
  +----- OTHERS ------+

Now Bob has two nicely balanced channels, meaning he has better connectivity in both directions. Doesn't the protocol already support that kind of solution, and all we need is a function in the CLI allowing Bob to pay himself, and to specify which two channels he would like to balance? Maybe even make it balance automatically. Is this a good idea for something to support, and/or is there a risk the entire network will start doing this and it will start oscillating? Best regards Robert Olsson -------------- next part -------------- An HTML attachment was scrubbed... URL: From aleksej at spidermail.tk Tue Feb 6 17:19:14 2018 From: aleksej at spidermail.tk (Aleksej) Date: Tue, 06 Feb 2018 18:19:14 +0100 Subject: [Lightning-dev] channel rebalancing support kind of exists already? In-Reply-To: References: Message-ID: <64377caf.AEAAScyO3BEAAAAAAAAAAAHPbO0AAABGqAQAAAAAAAotPgBaeeOY@mailjet.com> Hi Yeah, you can always re-fund your channels through other channels. I don't think, however, that it would usually be necessary to balance the funds on a channel to be equal. I always assumed that a typical user would have perhaps one channel where he receives funds (employer) and others for spending (stores). In order to re-fund them, he would simply spend funds through channels that are more unbalanced in the direction where the user is owed coins. And of course, the other way around: the employer would be able to pay the employee through channels he has with stores where he owns the money. In conclusion, I don't think rebalancing would need to be a separate transaction. This could simply be done automatically when the user sends or receives his usual transactions. I am not sure about all the difficulties regarding routing in LN. Hopefully all of this can be done safely, reliably and quickly.
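[Editor's note: Robert's circular self-payment can be sketched as a toy balance model. This is illustrative only — balances in BTC, fees ignored, and the `pay` helper is hypothetical, not any implementation's API.]

```python
# Toy model of the circular rebalance: Bob pays himself 1 BTC out
# through Alice and back in through Carol. Balances are tracked as
# (owner, peer) -> amount the owner can currently send on that channel.
# Routing fees are ignored for clarity; a real route deducts them per hop.

def pay(channels, route, amount):
    """Shift `amount` along `route`, hop by hop."""
    for src, dst in zip(route, route[1:]):
        assert channels[(src, dst)] >= amount, "insufficient outbound capacity"
        channels[(src, dst)] -= amount
        channels[(dst, src)] += amount

# Initial state: Bob funded a 2 BTC channel to Alice, Carol funded a
# 2 BTC channel to Bob, and Alice/Carol are connected via "Others".
channels = {
    ("Bob", "Alice"): 2.0, ("Alice", "Bob"): 0.0,
    ("Carol", "Bob"): 2.0, ("Bob", "Carol"): 0.0,
    ("Alice", "Carol"): 2.0, ("Carol", "Alice"): 0.0,
}

# Bob routes 1 BTC to himself: Bob -> Alice -> Carol -> Bob.
pay(channels, ["Bob", "Alice", "Carol", "Bob"], 1.0)

print(channels[("Bob", "Alice")], channels[("Alice", "Bob")])  # 1.0 1.0
print(channels[("Carol", "Bob")], channels[("Bob", "Carol")])  # 1.0 1.0
```

Both of Bob's channels end up 1/1, matching the second diagram; the cost of the operation in practice is just the routing fees along the circular route.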
Best regards, Aleksej On Tue, 2018-02-06 at 18:53 +0200, Robert Olsson wrote: > Hello > > Let's say Bob opens a channel to Alice for 2BTC. > Carol then opens a channel to Bob for 2BTC. > Alice and Carol are already connected to Others (and/or each other > even). > The network and channel balances will look like this: > >
> Alice 0--2 Bob 0--2 Carol
>   |                   |
>   +----- OTHERS ------+
> > Bob for some reason wants the channels to be balanced, so he has some > better redundancy and it looks better. > > So hypothetically Bob solves this by paying himself an invoice of > 1BTC and making sure the route goes out through Alice and comes back via > Carol. Bob pays the fees, so he isn't ashamed if it disturbs the other > balances in the network. Should he care? > >
> Alice 1--1 Bob 1--1 Carol
>   |                   |
>   +----- OTHERS ------+
> > Now Bob has two nicely balanced channels, meaning he has better > connectivity in both directions. > > Doesn't the protocol already support that kind of solution, and all > we need is a function in the CLI allowing Bob to pay himself, and > specify which two channels he would like to balance? > > Maybe even make it balance automatically. > > Is this a good idea for something to support, and/or is there a risk > the entire network will start doing this and it will start > oscillating? > > Best regards > Robert Olsson > > > _______________________________________________ > Lightning-dev mailing list > Lightning-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From robban at robtex.com Tue Feb 6 19:16:02 2018 From: robban at robtex.com (Robert Olsson) Date: Tue, 6 Feb 2018 21:16:02 +0200 Subject: [Lightning-dev] channel rebalancing support kind of exists already?
In-Reply-To: <64377caf.AEAAScyO3BEAAAAAAAAAAAHPbO0AAABGqAQAAAAAAAotPgBaeeOY@mailjet.com> References: <64377caf.AEAAScyO3BEAAAAAAAAAAAHPbO0AAABGqAQAAAAAAAotPgBaeeOY@mailjet.com> Message-ID: Hi Aleksej, Yes, I was talking about rebalancing without on-chain transactions, and that there is a need for rebalancing, since things routed through you can also affect balances in a surprising fashion. A function to avoid routing too much through your channels would be nice too. Consider a scenario where your employer opens a channel to you and sends your salary. You can then go shopping and use the channel via your employer, but after a while you want some more capacity, or lower fees, or redundancy in case your employer's node is offline. So you open a new one directly to Walmart with a tx, because you plan to go there after work, and go there often. Now it turns out your employer also buys stuff from Walmart, so they pay them via your channel to Walmart and use up most of it. So when you go to Walmart to shop, you notice your brand new channel with them is already used up, so you will have to route back through your employer; however, they are of course currently doing maintenance on their node. Your redundancy is gone. And if they were up, your fee-saving idea with a direct Walmart channel would have been gone. So, I think a function to "refuse routing over this channel if it would result in less than X% of capacity" and "automatically balance this channel to have at least X% of capacity" would be very useful features, and I think they don't have to be extremely hard to implement over the current protocol. Best regards Robert Olsson On Tue, Feb 6, 2018 at 7:19 PM, Aleksej wrote: > Hi > > Yeah, you can always re-fund your channels through other channels. > I don't think, however, that it would usually be necessary to balance the funds > on a channel to be equal. > I always assumed that a typical user would have perhaps one channel where > he receives funds (employer) and others for spending (stores).
> In order to re-fund them, he would simply spend funds through channels that > are more unbalanced in the direction where the user is owed coins. > And of course, the other way around: the employer would be able to pay the > employee through channels he has with stores where he owns the money. > > In conclusion, I don't think rebalancing would need to be a separate > transaction. > This could simply be done automatically when the user sends or receives > his usual transactions. > I am not sure about all the difficulties regarding routing in LN. Hopefully > all of this can be done safely, reliably and quickly. > > Best regards, > Aleksej > > On Tue, 2018-02-06 at 18:53 +0200, Robert Olsson wrote: > > Hello > > Let's say Bob opens a channel to Alice for 2BTC. > Carol then opens a channel to Bob for 2BTC. > Alice and Carol are already connected to Others (and/or each other even). > The network and channel balances will look like this: > >
> Alice 0--2 Bob 0--2 Carol
>   |                   |
>   +----- OTHERS ------+
> > Bob for some reason wants the channels to be balanced, so he has some > better redundancy and it looks better. > > So hypothetically Bob solves this by paying himself an invoice of 1BTC and > making sure the route goes out through Alice and comes back via Carol. Bob > pays the fees, so he isn't ashamed if it disturbs the other balances in the > network. Should he care? > >
> Alice 1--1 Bob 1--1 Carol
>   |                   |
>   +----- OTHERS ------+
> > Now Bob has two nicely balanced channels, meaning he has better connectivity > in both directions. > > Doesn't the protocol already support that kind of solution, and all we > need is a function in the CLI allowing Bob to pay himself, and specify > which two channels he would like to balance? > > Maybe even make it balance automatically. > > Is this a good idea for something to support, and/or is there a risk the > entire network will start doing this and it will start oscillating?
> > Best regards > Robert Olsson > > > _______________________________________________ > Lightning-dev mailing list > Lightning-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > > > > _______________________________________________ > Lightning-dev mailing list > Lightning-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From laolu32 at gmail.com Wed Feb 7 00:03:45 2018 From: laolu32 at gmail.com (Olaoluwa Osuntokun) Date: Wed, 07 Feb 2018 00:03:45 +0000 Subject: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning In-Reply-To: <5wLocWe7fe1tQiYbyBQSDWFHEcseLbFzb_Q2eZdE-PtIsxKP684-MjVB4iruYtMHcSK4U2A8MGuN3f0PkaqmMmuP0Tef0Bl6ibA2JWUvQ8M=@protonmail.com> References: <5wLocWe7fe1tQiYbyBQSDWFHEcseLbFzb_Q2eZdE-PtIsxKP684-MjVB4iruYtMHcSK4U2A8MGuN3f0PkaqmMmuP0Tef0Bl6ibA2JWUvQ8M=@protonmail.com> Message-ID: Hi ZmnSCPxj, > This is excellent work! Thanks! > I think a `globalfeatures` odd bit could be used for this. As it is > end-to-end, `localfeatures` is not appropriate. Yep, it would need to be a global feature bit. In the case that we're sending to a destination which isn't publicly advertised, then perhaps an extension to BOLT-11 could be made to signal receiver support. > I believe, currently, fees do not have this super-linear component Yep, they don't. Arguably, we should also have a component that scales according to the proposed CLTV value of the outgoing HTLC. At Scaling Bitcoin Stanford, Aviv Zohar gave a talk titled "How to Charge Lightning" where the authors analyzed the possible evolution of fees on the network (and also suggested adding this super-linear component to extend the lifetime of channels). However, the talk itself focused on a very simple "mega super duper hub" topology.
Towards the end he alluded to a forthcoming paper that had a more comprehensive analysis of more complex topologies. I look forward to the publication of their finalized work. > Indeed, the existence of per-hop fees (`fee_base_msat`) means, splitting > the payment over multiple flows will be, very likely, more expensive, > compared to using a single flow. Well, it's still to be seen how the fee structure on mainnet emerges once the network is fully bootstrapped. AFAIK, most nodes running on mainnet atm are using the default fee schedules for their respective implementations. For example, the default fee_base_msat for lnd is 1000 msat (1 satoshi). > I believe the `realm` byte is intended for this. The realm byte is meant to signal "forward this to the dogecoin channel". ATM, we just default to 0 as "Bitcoin". However, the byte itself only really needs significance between the sender and the intermediate node. So there isn't necessarily pressure to have a globally synchronized set of realm bytes. > Thus, you can route over nodes that are unaware of AMP, and only provide > an AMP realm byte to the destination node, who is able to reconstruct > your AMP data as per your algorithm. Yes, the intermediate nodes don't need to be aware of the end-to-end protocol. For the final hop, there are actually 53 free bytes (before one needs to signal the existence of EOBs): * 1 byte realm * 8 bytes next addr (all zeroes to signal final dest) * 32 bytes hmac (also all zeroes for the final dest) * 12 bytes padding So any combo of these bytes can be used to signal more advanced protocols to the final destination. A correction from the prior email description: > We can further modify our usage of the per-hop payloads to send > (H(BP), s_i) to consume most of the EOB sent from sender to receiver. This should actually be (H(s_0 || s_1 || ...), s_i).
So we still allow them to check this fingerprint to see if they have all the final shares, but don't allow them to preemptively pull all the payments. -- Laolu On Mon, Feb 5, 2018 at 11:12 PM ZmnSCPxj wrote: > Good morning Laolu, > > This is excellent work! > > Some minor comments... > > > (Atomic Multi-path Payments). It can be experimented with on Lightning > *today* with the addition of a new feature bit to gate this new > feature. The beauty of the scheme is that it requires no fundamental > changes > to the protocol as is now, as the negotiation is strictly *end-to-end* > between sender and receiver. > > > I think a `globalfeatures` odd bit could be used for this. As it is > end-to-end, `localfeatures` is not appropriate. > > - Potential fee savings for larger payments, contingent on there being a > super-linear component to routed fees. It's possible that with > modifications to the fee schedule, it's actually *cheaper* to send > payments over multiple flows rather than one giant flow. > > > I believe, currently, fees do not have this super-linear component. Indeed, > the existence of per-hop fees (`fee_base_msat`) means, splitting the > payment over multiple flows will be, very likely, more expensive, compared > to using a single flow. Tiny roundoffs in computing the proportional fees > (`fee_proportional_millionths`) may make smaller flows give a slight fee > advantage, but I think the multiplication of per-hop fees will dominate. > > > - Using smaller payments increases the set of possible paths a partial > payment could have taken, which reduces the effectiveness of static > analysis techniques involving channel capacities and the plaintext > values being forwarded. > > > Strongly agree!
> > > In order to include the three tuple within the per-hop payload for the > final > destination, we repurpose the _first_ byte of the un-used padding bytes in > the payload to signal version 0x01 of the AMP protocol (note this is a PoC > outline, we would need to standardize signalling of these 12 bytes to > support other protocols). > > > I believe the `realm` byte is intended for this. Intermediate nodes do > not need to understand realm bytes that are understood by other nodes in > the route, including the realm bytes understood by the final destination, > as intermediate nodes cannot, indeed, read the hop data of other nodes. > Thus, you can route over nodes that are unaware of AMP, and only provide an > AMP realm byte to the destination node, who, is able to reconstruct this > your AMP data as per your algorithm. > > Indeed, the `realm` byte controls the interpretation of the rest of the > 65-byte packet. If you define, instead, a separate `realm` that is > understood by the destination node, you can redefine the entire 64 bytes of > the final hop data as you wish. > > If we support AMP only at final payees, we can completely redefine the 64 > bytes in the final hop data for the new AMP `realm`, and not consume the > next hop (which would reduce route length by 1). > > (If we want to support multiple routes converging to an intermediate node, > then continue routing to a different final node after routes have merged > (i.e. A->B->C->D, and A->E->C->D, with the payment being merged by C, who > forwards the combination to D), then we need to follow the current hop data > format, but I think supporting AMP at final payees is actually enough... > AMP at intermediate nodes might not be used often enough by senders for it > to matter, as taking advantage of that seems more complex than just asking > your routing algo to provide you multiple routes to a destination, which > you are probably already doing) > > ---- > > Overall, good work I think. 
> > Regards, > ZmnSCPxj > -------------- next part -------------- An HTML attachment was scrubbed... URL: From laolu32 at gmail.com Wed Feb 7 00:24:02 2018 From: laolu32 at gmail.com (Olaoluwa Osuntokun) Date: Wed, 07 Feb 2018 00:24:02 +0000 Subject: [Lightning-dev] Improving the initial gossip sync In-Reply-To: References: <874lmvy4gh.fsf@gmail.com> Message-ID: Hi Y'all, Definitely agree that we need a stop-gap solution to fix the naive table dump on initial connect. I've been sketching out some fancier stuff, but we would need time to properly tune the fanciness, and I'm inclined to get a stop-gap solution out asap. On testnet, the zombie churn is pretty bad atm. It results in needlessly wasted bandwidth, as the churn is now almost constant. There still exist some very old testnet nodes out there, it seems. Beyond the zombie churn, with the size of the testnet graph, we're forced to send tens of thousands of messages (even if we're already fully synced) upon initial connect, so it's very wasteful overall. So I think the primary distinction between y'all's proposals is that cdecker's proposal focuses on eventually synchronizing the full set of _updates_, while Fabrice's proposal cares *only* about newly created channels. It only cares about new channels, as the rationale is that if one tries to route over a channel with a stale channel update, then you'll get an error with the latest update encapsulated. Christian wrote: > I propose adding a new feature bit (6, i.e., bitmask 0x40) indicating that > the `init` message is extended with a u32 `gossip_timestamp`, interpreted as > a UNIX timestamp. As the `init` message solely contains two variably sized byte slices, I don't think we can actually safely extend it in this manner. Instead, a new message is required, where the semantics of the feature bit _require_ the other side to send it directly after receiving the `init` message from the other side.
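[Editor's note: whichever message ends up carrying the horizon, the receiving side of Christian's timestamp proposal reduces to a simple filter over the gossip store. A minimal sketch; the `GossipMessage` record and its field names are illustrative, not from any BOLT.]

```python
# Sketch of honoring a peer's gossip timestamp horizon. On connect,
# the peer tells us the UNIX timestamp it has synced up to; instead of
# dumping the whole table, we replay only gossip at or after it.
from collections import namedtuple

# Hypothetical record standing in for a stored channel_update/announcement.
GossipMessage = namedtuple("GossipMessage", ["short_channel_id", "timestamp"])

def messages_to_send(store, peer_horizon):
    """Return gossip at or after the peer's horizon, oldest first,
    so the peer's view converges monotonically."""
    fresh = [m for m in store if m.timestamp >= peer_horizon]
    return sorted(fresh, key=lambda m: m.timestamp)

store = [
    GossipMessage("498000x1x0", 1515000000),  # old, likely a zombie
    GossipMessage("505000x5x1", 1517700000),
    GossipMessage("505100x2x0", 1517900000),
]

# A peer that last synced shortly before the two fresh updates
# receives only those, skipping the zombie entirely.
print([m.short_channel_id for m in messages_to_send(store, 1517600000)])
# ['505000x5x1', '505100x2x0']
```

A resurrected zombie still propagates, because its endpoints broadcast a fresh (recent-timestamp) update that clears the horizon.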
Aside from that, overall I like the simplicity of the protocol: it eliminates both the zombie churn and the intensive initial-connection graph dump, without any extra messaging overhead (for reconciliation, etc). Fabrice wrote: > Just to be clear, you propose to use the timestamp of the most recent > channel updates to filter the associated channel announcements ? I think he's actually proposing just a general update horizon, in which vertices+edges with a lower timestamp just shouldn't be sent at all. In the case of an old zombie channel which was resurrected, it would eventually be re-propagated, as the node on either end of the channel should broadcast a fresh update along with the original chan ann. > When a node that supports channel announcement filters receives > a `channel_announcement_filters` message, it uses it to filter channel > announcements (and, implicitly, channel updates) before sending them This seems to assume that both nodes have a strongly synchronized view of the network. Otherwise, they'll fall back to regularly sending everything that went on during the entire epoch. It also doesn't address the zombie churn issue, as they may eventually send you very old channels you'll have to deal with (or discard). > The use case we have in mind is mobile nodes, or more generally nodes > which are often offline and need to resync very often. How far back would this go? Weeks, months, years? FWIW this approach optimizes for just learning of new channels instead of learning of the freshest state you haven't yet seen. -- Laolu On Mon, Feb 5, 2018 at 7:08 AM Fabrice Drouin wrote: > Hi, > > On 5 February 2018 at 14:02, Christian Decker > wrote: > > Hi everyone > > > > The feature bit is even, meaning that it is required from the peer, > > since we extend the `init` message itself, and a peer that does not > > support this feature would be unable to parse any future extensions to > > the `init` message.
Alternatively we could create a new > > `set_gossip_timestamp` message that is only sent if both endpoints > > support this proposal, but that could result in duplicate messages being > > delivered between the `init` and the `set_gossip_timestamp` message and > > it'd require additional messages. > > We chose the other approach and propose to use an optional feature bit > > > The reason I'm using the timestamp and not the blockheight in the short > > channel ID is that we already use the timestamp for pruning. With a > > blockheight-based timestamp we might ignore channels that were created, > > then not announced or forgotten, and then later came back and are now > > stable. > > Just to be clear, you propose to use the timestamp of the most recent > channel updates to filter > the associated channel announcements ? > > I hope this rather simple proposal is sufficient to fix the short-term > > issues we are facing with the initial sync, while we wait for a real > > sync protocol. It is definitely not meant to allow perfect > > synchronization of the topology between peers, but then again I don't > > believe that is strictly necessary to make the routing successful. > > > > Please let me know what you think, and I'd love to discuss Pierre's > > proposal as well. > > > > Cheers, > > Christian > > Our idea is to group channel announcements into "buckets", create a > filter for each bucket, and exchange and use them to filter out channel > announcements. > > We would add a new `use_channel_announcement_filters` optional feature > bit (7 for example), and a new `channel_announcement_filters` message. > > When a node that supports channel announcement filters receives an > `init` message with the `use_channel_announcement_filters` bit set, it > sends back its channel filters. > > When a node that supports channel announcement filters receives > a `channel_announcement_filters` message, it uses it to filter channel > announcements (and, implicitly, channel updates) before sending them.
> > The filters we have in mind are simple: > - Sort announcements by short channel id > - Compute a marker height, which is `144 * ((now - 7 * 144) / 144)` > (we round to multiples of 144 to make sync easier) > - Group channel announcements that were created before this marker by > groups of 144 blocks > - Group channel announcements that were created after this marker by > groups of 1 block > - For each group, sort and concatenate all channel announcement short > channel ids and hash the result (we could use sha256, or the first 16 > bytes of the sha256 hash) > > The new `channel_announcement_filters` would then be a list of > (height, hash) pairs ordered by increasing heights. > > This implies that implementations can easily sort announcements by > short channel id, which should not be very difficult. > An additional step could be to send all short channel ids for all > groups for which the group hash did not match. Alternatively, we could > use smarter filters. > > The use case we have in mind is mobile nodes, or more generally nodes > which are often offline and need to resync very often. > > Cheers, > Fabrice > _______________________________________________ > Lightning-dev mailing list > Lightning-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From conner at lightning.engineering Wed Feb 7 02:14:45 2018 From: conner at lightning.engineering (Conner Fromknecht) Date: Wed, 07 Feb 2018 02:14:45 +0000 Subject: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning In-Reply-To: References: <5wLocWe7fe1tQiYbyBQSDWFHEcseLbFzb_Q2eZdE-PtIsxKP684-MjVB4iruYtMHcSK4U2A8MGuN3f0PkaqmMmuP0Tef0Bl6ibA2JWUvQ8M=@protonmail.com> Message-ID: Hi ZmnSCPxj and Laolu, > Indeed, the existence of per-hop fees (`fee_base_msat`) means, splitting the > payment over multiple flows will be, very likely, more expensive, compared to > using a single flow. As Laolu pointed out, we have yet to see how fees evolve on mainnet or what will emerge as sane default fee schedules. I agree that if the same proportional fee is used across all partial payments, then it could certainly be more expensive. However, it could also be the case that you were paying a needlessly high proportional fee to begin with, because paths of sufficient capacity to the destination were scarce. In an AMP world, there will be an abundance of channels that can route small, partial payments, which may itself drive down the competitive fee rate for smaller payments. Just a hypothesis; we shall see where supply meets demand! At the end of the day, the user can always fall back to a regular payment if they expect to end up paying more fees using an AMP. > (If we want to support multiple routes converging to an intermediate node, > then continue routing to a different final node after routes have merged (i.e. > A->B->C->D, and A->E->C->D, with the payment being merged by C, who forwards > the combination to D), then we need to follow the current hop data format, but > I think supporting AMP at final payees is actually enough... I think this is an interesting idea, sounds maybe like a recursive/hierarchical AMP?
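[Editor's note: the fee arithmetic being debated in this thread is easy to make concrete. A sketch using the per-hop fee formula `fee_base_msat + amount_msat * fee_proportional_millionths / 1,000,000`; the base fee is lnd's default mentioned upthread, while the proportional rate and the 3-hop route are assumptions for illustration.]

```python
# Why splitting a payment costs more under today's fee schedules: the
# per-hop base fee is paid again by every extra flow, while the
# proportional component is unchanged. Values below are assumptions
# for illustration (1 sat base per hop, 1 ppm proportional, 3 hops).

FEE_BASE_MSAT = 1000           # lnd's default base fee, per hop
FEE_PROP_MILLIONTHS = 1        # assumed proportional rate, per hop

def route_fee_msat(amount_msat, hops):
    """Total fee for sending amount_msat over a route of `hops` hops."""
    per_hop = FEE_BASE_MSAT + amount_msat * FEE_PROP_MILLIONTHS // 1_000_000
    return per_hop * hops

amount = 1_000_000_000  # 0.01 BTC, in msat

single = route_fee_msat(amount, hops=3)
split = sum(route_fee_msat(amount // 10, hops=3) for _ in range(10))

print(single, split)  # 6000 33000
```

With these numbers the proportional component is identical either way (3000 msat total), but ten flows pay the per-hop base fee ten times over; Conner's counterpoint is that partial flows may find routes with cheaper proportional rates, which this toy model deliberately ignores.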
The ability to merge the payments seems like it would result in a decent privacy leak, as I believe an intermediary would have enough evidence to prove that two payments were merged/correlated. Simple traffic analysis would also reveal a discrepancy in the number of incoming and outgoing packets, and possibly other observable differences in routing (some) AMPs vs regular payments. FWIW the current proposal allows the paths of partial payments to overlap, in such a scenario C would just forward the HTLCs independently. One could send them all along the same path if they desired! I'm assuming the intent here is to try and reduce total fees? Minor correction^2: > This should actually be (H(s_0 || s_1 || ...), s_i). This assumes the receiver knows the indexes of each share. Without this knowledge they would have to brute force all orderings to check the fingerprint. To maintain order invariance on the receiving end, I would propose sending (0, s_i) for the first n-1 partial payments, and then (n, s_i) on the final one. As in the description of the basic AMP scheme, the receiver maintains a persistent count of how many partial payments have been received for ID. If the receiver does not get the last payment last, the receiver just waits until all n have been received before deciding that its reconstructed value is BP. The receiver can verify they've received the correct BP and n by rederiving the partial preimages r_i = H(BP || i) and checking that there are n outstanding payments, one for each h_i = H(r_i). This also saves the receiving node n additional hash invocations. -Conner On Tue, Feb 6, 2018 at 4:04 PM Olaoluwa Osuntokun wrote: > Hi ZmnSCPxj, > > > This is excellent work! > > Thanks! > > > I think, a `globalfeatures` odd bit could be used for this. As it is > > end-ot-end, `localfeatures` is not appropriate. > > Yep, it would need to be a global feature bit. 
In the case that we're > sending to a destination which isn't publicly advertised, then perhaps an > extension to BOLT-11 could be made to signal receiver support. > > > I believe, currently, fees have not this super-linear component > > Yep they don't. Arguably, we should also have a component that scales > according to the proposed CLTV value of the outgoing HTLC. At Scaling > Bitcoin Stanford, Aviv Zohar gave a talked titled "How to Charge Lightning" > where the authors analyzed the possible evolution of fees on the network > (and also suggested adding this super-linear component to extend the > lifetime of channels). However, the talk itself focused on a very simple > "mega super duper hub" topology. Towards the end he alluded to a > forthcoming > paper that had more comprehensive analysis of more complex topologies. I > look forward to the publication of their finalized work. > > > Indeed, the existence of per-hop fees (`fee_base_msat`) means, splitting > > the payment over multiple flows will be, very likely, more expensive, > > compared to using a single flow. > > Well it's still to be seen how the fee structure on mainnet emerges once > the > network is still fully bootstrapped. AFAIK, most running on mainnet atm are > using the default fee schedules for their respective implementations. For > example, the default fee_base_msat for lnd is 1000 msat (1 satoshi). > > > I believe the `realm` byte is intended for this. > > The realm byte is meant to signal "forward this to the dogecoin channel". > ATM, we just default to 0 as "Bitcoin". However, the byte itself only > really > need significance between the sender and the intermediate node. So there > isn't necessarily pressure to have a globally synchronized set of realm > bytes. > > > Thus, you can route over nodes that are unaware of AMP, and only provide > > an AMP realm byte to the destination node, who, is able to reconstruct > this > > your AMP data as per your algorithm. 
> > Yes, the intermediate nodes don't need to be aware of the end-to-end > protocol. For the final hop, there are actually 53 free bytes (before one > needs to signal the existence of EOBs): > > * 1 byte realm > * 8 bytes next addr (all zeroes to signal final dest) > * 32 bytes hmac (also all zeroes for the final dest) > * 12 bytes padding > > So any combo of these bytes can be used to signal more advanced protocols > to > the final destination. > > > A correction from the prior email description: > > > We can further modify our usage of the per-hop payloads to send > > (H(BP), s_i) to consume most of the EOB sent from sender to receiver. > > This should actually be (H(s_0 || s_1 || ...), s_i). So we still allow them > to check this fingerprint to see if they have all the final shares, but > don't allow them to preemptively pull all the payments. > > > -- Laolu > > > On Mon, Feb 5, 2018 at 11:12 PM ZmnSCPxj wrote: > >> Good morning Laolu, >> >> This is excellent work! >> >> Some minor comments... >> >> >> (Atomic Multi-path Payments). It can be experimented with on Lightning >> *today* with the addition of a new feature bit to gate this new >> feature. The beauty of the scheme is that it requires no fundamental >> changes >> to the protocol as is now, as the negotiation is strictly *end-to-end* >> between sender and receiver. >> >> >> I think, a `globalfeatures` odd bit could be used for this. As it is >> end-to-end, `localfeatures` is not appropriate. >> >> - Potential fee savings for larger payments, contingent on there being a >> super-linear component to routed fees. It's possible that with >> modifications to the fee schedule, it's actually *cheaper* to send >> payments over multiple flows rather than one giant flow. >> >> >> I believe, currently, fees do not have this super-linear component.
Indeed, >> the existence of per-hop fees (`fee_base_msat`) means, splitting the >> payment over multiple flows will be, very likely, more expensive, compared >> to using a single flow. Tiny roundoffs in computing the proportional fees >> (`fee_proportional_millionths`) may make smaller flows give a slight fee >> advantage, but I think the multiplication of per-hop fees will dominate. >> >> >> - Using smaller payments increases the set of possible paths a partial >> payment could have taken, which reduces the effectiveness of static >> analysis techniques involving channel capacities and the plaintext >> values being forwarded. >> >> >> Strongly agree! >> >> >> In order to include the three tuple within the per-hop payload for the >> final >> destination, we repurpose the _first_ byte of the unused padding bytes in >> the payload to signal version 0x01 of the AMP protocol (note this is a PoC >> outline, we would need to standardize signalling of these 12 bytes to >> support other protocols). >> >> >> I believe the `realm` byte is intended for this. Intermediate nodes do >> not need to understand realm bytes that are understood by other nodes in >> the route, including the realm bytes understood by the final destination, >> as intermediate nodes cannot, indeed, read the hop data of other nodes. >> Thus, you can route over nodes that are unaware of AMP, and only provide an >> AMP realm byte to the destination node, who is able to reconstruct >> your AMP data as per your algorithm. >> >> Indeed, the `realm` byte controls the interpretation of the rest of the >> 65-byte packet. If you define, instead, a separate `realm` that is >> understood by the destination node, you can redefine the entire 64 bytes of >> the final hop data as you wish. >> >> If we support AMP only at final payees, we can completely redefine the 64 >> bytes in the final hop data for the new AMP `realm`, and not consume the >> next hop (which would reduce route length by 1).
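The final-hop payload layout discussed in this thread (a realm byte versioning the protocol, zeroed next-addr and hmac marking the final destination, 65 bytes total) might be packed like this. The field order and the amt/cltv fields follow my reading of the BOLT 1.0 hop_data layout and are illustrative, as is the 0x01 AMP realm value.

```python
import struct

AMP_REALM = 0x01  # hypothetical version byte for an AMP-aware final hop

def pack_final_hop(share12, amt_msat, cltv):
    """Pack a 65-byte hop payload for the destination: realm byte first,
    zeroed next-addr and trailing hmac signal the final hop, amt/cltv
    keep their usual spots, and the 12 padding bytes carry protocol
    data (here, 12 bytes of an AMP share)."""
    payload = struct.pack("!B8sQI12s32s",
                          AMP_REALM,   # 1 byte realm, repurposed as version
                          bytes(8),    # next addr: all zeroes => final dest
                          amt_msat,    # 8 bytes amount to forward
                          cltv,        # 4 bytes outgoing CLTV
                          share12,     # 12 bytes padding, repurposed
                          bytes(32))   # hmac: all zeroes for the final dest
    assert len(payload) == 65
    return payload
```

The realm, next-addr, hmac, and padding regions add up to the 53 repurposable bytes enumerated above; only the amount and CLTV fields keep their original meaning.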
>> >> (If we want to support multiple routes converging to an intermediate >> node, then continue routing to a different final node after routes have >> merged (i.e. A->B->C->D, and A->E->C->D, with the payment being merged by >> C, who forwards the combination to D), then we need to follow the current >> hop data format, but I think supporting AMP at final payees is actually >> enough... AMP at intermediate nodes might not be used often enough by >> senders for it to matter, as taking advantage of that seems more complex >> than just asking your routing algo to provide you multiple routes to a >> destination, which you are probably already doing) >> >> ---- >> >> Overall, good work I think. >> >> Regards, >> ZmnSCPxj >> > _______________________________________________ > Lightning-dev mailing list > Lightning-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ZmnSCPxj at protonmail.com Wed Feb 7 07:13:20 2018 From: ZmnSCPxj at protonmail.com (ZmnSCPxj) Date: Wed, 07 Feb 2018 02:13:20 -0500 Subject: [Lightning-dev] channel rebalancing support kind of exists already? In-Reply-To: References: Message-ID: <5-r0g7Pox46kFZU_pNSirAPIwcDNSbYytEvrP1GSKsc6azP7ZfjBl0-cfHPRCVaxUIuxrhe4TSk2Jzc9tJFIlUAojPzGljm6EsOvgSr0wVU=@protonmail.com> Good Morning Robert, Yes, this is already possible, but to my knowledge it is not yet implemented by any implementation. Note that "balance" is not necessarily a property you might desire for your channels. In your example, under the "unbalanced" case, Bob can pay a 1.5BTC invoice, but in the "balanced" case Bob can no longer pay that 1.5BTC invoice. Of course, once AMP is possible then this consideration is not an issue. Regards, ZmnSCPxj Sent with [ProtonMail](https://protonmail.com) Secure Email.
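ZmnSCPxj's caveat can be made concrete with a toy capacity check. Amounts are in BTC, the balances are Bob's spendable side of each channel, and channel reserves and fees are ignored:

```python
def max_single_payment(local_balances):
    # Without AMP a payment rides a single route, so it is capped by
    # the one channel with the most local balance.
    return max(local_balances)

def max_amp_payment(local_balances):
    # With AMP, partial payments can leave over every channel at once.
    return sum(local_balances)

unbalanced = [2.0, 0.0]  # Bob's side of the Alice / Carol channels
balanced = [1.0, 1.0]    # after Bob pays himself 1 BTC around the loop
```

Unbalanced, Bob can cover a 1.5 BTC invoice over the Alice channel; balanced, no single channel can, which is exactly why "balance" is not automatically desirable until AMP lets him combine both.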
-------- Original Message -------- On February 7, 2018 12:53 AM, Robert Olsson wrote: > Hello > > Let's say Bob opens a channel to Alice for 2BTC. > Carol then opens a channel to Bob for 2BTC. > Alice and Carol are already connected to Others (and/or each other even) > The network and channel balances will look like this: > > Alice 0--2 Bob 0--2 Carol > | | > +----- OTHERS ------+ > > Bob for some reason wants the channels to be balanced, so he has some better redundancy and it looks better. > > So hypothetically Bob solves this by paying himself an invoice of 1BTC and making sure the route goes out thru Alice and comes back via Carol. Bob pays fees so he isn't ashamed if it disturbs the other balances in the network. Should he care? > > Alice 1--1 Bob 1--1 Carol > | | > +----- OTHERS ------+ > > Now Bob has two nice balanced channels, meaning he has better connectivity in both directions. > > Doesn't the protocol already support that kind of solution, and all we need is a function in the CLI allowing Bob to pay to himself, and specify which two channels he would like to balance? > > Maybe even make it automatically balance. > > Is this a good idea of something to support, and/or is there a risk the entire network will start doing this and it will start oscillating? > > Best regards > Robert Olsson -------------- next part -------------- An HTML attachment was scrubbed... URL: From ZmnSCPxj at protonmail.com Wed Feb 7 08:07:15 2018 From: ZmnSCPxj at protonmail.com (ZmnSCPxj) Date: Wed, 07 Feb 2018 03:07:15 -0500 Subject: [Lightning-dev] channel_reserve_satoshis? In-Reply-To: References: Message-ID: Good Morning Cezary, > This is quite a good way to replace both-funded channels by such a "superhub". It would be even easier if I could open more than a single channel between both parties, but I saw this is not possible in c-lightning.
From a risk perspective, you have increased risk in general if you open channels to few nodes, compared to the case where you open channels to more nodes. Thus if you can afford to open many channels, you would prefer to participate in multiple different circular superhubs, each with a different set of participants, rather than repeatedly opening many channels to the same participant in a single superhub. > Could you tell me what is the reason and do you plan to add this possibility in the future? The current code tends to couple "channel" to "peer" a little too much, and it is going to take quite some work to uncouple them. I believe cdecker has plans to add this in the future, as there are a few comments in the code from him pointing to bits where this decoupling needs to be implemented. > In LND opening multiple channels is possible, but is this compatible with c-lightning then? No, currently the c-lightning daemon will reject the multiple channel attempt from LND. LND can form multiple channels with peer LND. Regards, ZmnSCPxj > Best Regards, > Cezary > > 2018-02-04 13:36 GMT+01:00 ZmnSCPxj : > >> Good morning Cezary, >> >>> I think the first two options are for those who want to earn some money from payment fees. The option that can be interesting for some business like a coffee shop is the third option. Do you agree with me? >>> >>>> 3. In all likelihood, some service later will offer deals like "up to 300mBTC receive for only 1mBTC! At least 3 months channel alive!" for new upcoming businesses. >>> >>> It seems the best option for new businesses, but what if such a new business would like to have a channel with 300mBTC on both sides. Let's say this is an ATM. The owner of the ATM needs to be able to receive and send funds. Without the possibility of both-side funded channels it is quite hard to establish such a balanced channel.
The ATM owner needs to send 300 mBTC as an on-chain transaction to the hub, and then the hub could open a channel with 600 mBTC capacity and send back 300mBTC to the ATM owner through this new channel. This requires trusting the hub. >> >> Then do not trust a single hub. Instead, have incoming 300mBTC from one hub, then make an outgoing 300mBTC to another hub. Encourages more hubs also. This also makes your node a potential routing node, improving network connectivity. >> >>> I know LN is in an early stage, but I'm very surprised that both sides cannot fund a channel. Maybe this is because at the beginning LN was presented with such an option. Is the only reason the trust issues that you described before, or are there also some technical issues to implement such functionality? Do you predict this will be added to BOLT and implemented in the future? >> >> There is already an issue regarding this. For now, priority is actual implementation of payments. Dual-funded channels can be emulated by having some hubbing service make channels to you, while you make a channel to some other hub (i.e. make two channels). Such an emulation is superior to dual-funding as it allows you to potentially become some alternate route if other routes become congested, letting you earn some small amount; compare this to a single dual-funded channel that, by itself, cannot be used for routing. >> >> Another thing is that we can make "circular superhubs" if small groups of us cooperate. The smallest 3-circle superhub has 3 members A B C. A opens a channel to B, B opens a channel to C, C opens a channel to A. Each channel is the same capacity. If each of you has one out-channel other than on the circular superhub, any of A B C can spend to any node that any of them have an out-channel to. Similarly, each of you can receive via any in-channel any of you happen to have. Join a few such small communities and you can be well-connected enough to send and receive reasonably seamlessly to anyone on the network.
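The circular superhub above can be sketched as a toy ledger of directed channel balances. This is a simplified model: amounts are in BTC, node "X" stands in for C's out-channel to the wider network, and fees and channel reserves are ignored.

```python
# channels[(a, b)] = balance that a can currently push towards b.

def open_channel(channels, frm, to, amount):
    channels[(frm, to)] = amount  # the opener funds their side
    channels[(to, frm)] = 0.0

def pay(channels, route, amount):
    """Push `amount` along `route`, shifting balance at every hop."""
    hops = list(zip(route, route[1:]))
    if any(channels.get((a, b), 0.0) < amount for a, b in hops):
        raise ValueError("insufficient capacity on route")
    for a, b in hops:
        channels[(a, b)] -= amount
        channels[(b, a)] += amount

channels = {}
for frm, to in [("A", "B"), ("B", "C"), ("C", "A")]:
    open_channel(channels, frm, to, 1.0)   # the 3-member cycle
open_channel(channels, "C", "X", 1.0)      # C's out-channel

# Any member can spend towards X, e.g. A via A->B->C->X:
pay(channels, ["A", "B", "C", "X"], 0.5)
```

After the payment, every hop on the route has shifted balance towards the payer's counterparties, which is also why routed payments through the cycle double as rebalancing.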
>> >> Regards, >> ZmnSCPxj >> >>> Best regards, >>> Cezary >>> >>> 2018-02-04 10:08 GMT+01:00 ZmnSCPxj : >>> >>>> Good morning Cezary, >>>>> Let's say I would like to receive ln payments. How can I do this, without locking funds on the other side of the channel? >>>> >>>> 1. Do the Blockstream Store route: do it early enough, and people will make channels to you, because they want to try out Lightning Network quickly. >>>> >>>> 2. Publish the node and contact details (IP or TOR onion service) and hope people are excited enough about your product to open a channel to you. >>>> >>>> 3. In all likelihood, some service later will offer deals like "up to 300mBTC receive for only 1mBTC! At least 3 months channel alive!" for new upcoming businesses. >>>> >>>> 4. Ask a friend to channel to you. >>>> >>>> Regards, >>>> ZmnSCPxj -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim.posen at gmail.com Wed Feb 7 08:36:35 2018 From: jim.posen at gmail.com (Jim Posen) Date: Wed, 7 Feb 2018 00:36:35 -0800 Subject: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning In-Reply-To: References: <5wLocWe7fe1tQiYbyBQSDWFHEcseLbFzb_Q2eZdE-PtIsxKP684-MjVB4iruYtMHcSK4U2A8MGuN3f0PkaqmMmuP0Tef0Bl6ibA2JWUvQ8M=@protonmail.com> Message-ID: This is a really neat idea. This is a question about non-interactive payments in general, but is there any way to get a proof of payment? With regular invoices, knowledge of the preimage serves as cryptographic proof that the payment was delivered. On Feb 6, 2018 6:26 PM, "Conner Fromknecht" wrote: > > Hi ZmnSCPxj and Laolu, > > > Indeed, the existence of per-hop fees (`fee_base_msat`) means, > splitting the > > payment over multiple flows will be, very likely, more expensive, > compared to > > using a single flow. > > As Laolu pointed out, we have yet to see how fees evolve on mainnet or > what will > emerge as sane default fee schedules.
I agree that if the same > proportional > fee is used across all partial payments, then it could certainly be more > expensive. > > However, it could also be the case that you were paying a needlessly high > proportional fee to begin with, because paths of sufficient capacity to the > destination were scarce. In an AMP world, there will be an abundance of > channels > that can route small, partial payments, which may itself drive down the > competitive fee rate for smaller payments. Just a hypothesis, we shall see > where > supply meets demand! > > At the end of the day, the user can always fall back to regular payment if > they > expect to end up paying more fees using an AMP. > > > (If we want to support multiple routes converging to an intermediate > node, > > then continue routing to a different final node after routes have merged > (i.e. > > A->B->C->D, and A->E->C->D, with the payment being merged by C, who > forwards > > the combination to D), then we need to follow the current hop data > format, but > > I think supporting AMP at final payees is actually enough... > > I think this is an interesting idea, sounds maybe like a > recursive/hierarchical > AMP? The ability to merge the payments seems like it would result in a > decent privacy > leak, as I believe an intermediary would have enough evidence to prove > that two > payments were merged/correlated. Simple traffic analysis would also reveal > a > discrepancy in the number of incoming and outgoing packets, and possibly > other > observable differences in routing (some) AMPs vs regular payments. > > FWIW the current proposal allows the paths of partial payments to overlap, > in such a scenario C would just forward the HTLCs independently. One could > send > them all along the same path if they desired! I'm assuming the intent here > is to > try and reduce total fees? > > Minor correction^2: > > > This should actually be (H(s_0 || s_1 || ...), s_i). 
> > This assumes the receiver knows the indexes of each share. Without this > knowledge they would have to brute force all orderings to check the > fingerprint. > > To maintain order invariance on the receiving end, I would propose sending > (0, s_i) for the first n-1 partial payments, and then (n, s_i) on the > final one. > As in the description of the basic AMP scheme, the receiver maintains a > persistent count of how many partial payments have been received for ID. > If the > receiver does not get the last payment last, the receiver just waits until > all n > have been received before deciding that its reconstructed value is BP. > > The receiver can verify they've received the correct BP and n by > rederiving the > partial preimages r_i = H(BP || i) and checking that there are n > outstanding > payments, one for each h_i = H(r_i). This also saves the receiving node n > additional hash invocations. > > -Conner > > On Tue, Feb 6, 2018 at 4:04 PM Olaoluwa Osuntokun > wrote: > >> Hi ZmnSCPxj, >> >> > This is excellent work! >> >> Thanks! >> >> > I think, a `globalfeatures` odd bit could be used for this. As it is >> > end-ot-end, `localfeatures` is not appropriate. >> >> Yep, it would need to be a global feature bit. In the case that we're >> sending to a destination which isn't publicly advertised, then perhaps an >> extension to BOLT-11 could be made to signal receiver support. >> >> > I believe, currently, fees have not this super-linear component >> >> Yep they don't. Arguably, we should also have a component that scales >> according to the proposed CLTV value of the outgoing HTLC. At Scaling >> Bitcoin Stanford, Aviv Zohar gave a talked titled "How to Charge >> Lightning" >> where the authors analyzed the possible evolution of fees on the network >> (and also suggested adding this super-linear component to extend the >> lifetime of channels). However, the talk itself focused on a very simple >> "mega super duper hub" topology. 
Towards the end he alluded to a >> forthcoming >> paper that had more comprehensive analysis of more complex topologies. I >> look forward to the publication of their finalized work. >> >> > Indeed, the existence of per-hop fees (`fee_base_msat`) means, splitting >> > the payment over multiple flows will be, very likely, more expensive, >> > compared to using a single flow. >> >> Well it's still to be seen how the fee structure on mainnet emerges once >> the >> network is still fully bootstrapped. AFAIK, most running on mainnet atm >> are >> using the default fee schedules for their respective implementations. For >> example, the default fee_base_msat for lnd is 1000 msat (1 satoshi). >> >> > I believe the `realm` byte is intended for this. >> >> The realm byte is meant to signal "forward this to the dogecoin channel". >> ATM, we just default to 0 as "Bitcoin". However, the byte itself only >> really >> need significance between the sender and the intermediate node. So there >> isn't necessarily pressure to have a globally synchronized set of realm >> bytes. >> >> > Thus, you can route over nodes that are unaware of AMP, and only provide >> > an AMP realm byte to the destination node, who, is able to reconstruct >> this >> > your AMP data as per your algorithm. >> >> Yes, the intermediate nodes don't need to be aware of the end-to-end >> protocol. For the final hop, there are actually 53 free bytes (before one >> needs to signal the existence of EOBs): >> >> * 1 byte realm >> * 8 bytes next addr (all zeroes to signal final dest) >> * 32 bytes hmac (also all zeroes for the final dest) >> * 12 bytes padding >> >> So any combo of these bytes can be used to signal more advanced protocols >> to >> the final destination. >> >> >> A correction from the prior email description: >> >> > We can further modify our usage of the per-hop payloads to send >> > (H(BP), s_i) to consume most of the EOB sent from sender to receiver. 
>> >> This should actually be (H(s_0 || s_1 || ...), s_i). So we still allow >> them >> to check this finger print to see if they have all the final shares, but >> don't allow them to preemptively pull all the payments. >> >> >> -- Laolu >> >> >> On Mon, Feb 5, 2018 at 11:12 PM ZmnSCPxj wrote: >> >>> Good morning Laolu, >>> >>> This is excellent work! >>> >>> Some minor comments... >>> >>> >>> (Atomic Multi-path Payments). It can be experimented with on Lightning >>> *today* with the addition of a new feature bit to gate this new >>> feature. The beauty of the scheme is that it requires no fundamental >>> changes >>> to the protocol as is now, as the negotiation is strictly *end-to-end* >>> between sender and receiver. >>> >>> >>> I think, a `globalfeatures` odd bit could be used for this. As it is >>> end-ot-end, `localfeatures` is not appropriate. >>> >>> - Potential fee savings for larger payments, contingent on there being >>> a >>> super-linear component to routed fees. It's possible that with >>> modifications to the fee schedule, it's actually *cheaper* to send >>> payments over multiple flows rather than one giant flow. >>> >>> >>> I believe, currently, fees have not this super-linear component. >>> Indeed, the existence of per-hop fees (`fee_base_msat`) means, splitting >>> the payment over multiple flows will be, very likely, more expensive, >>> compared to using a single flow. Tiny roundoffs in computing the >>> proportional fees (`fee_proportional_millionths`) may make smaller >>> flows give a slight fee advantage, but I think the multiplication of >>> per-hop fees will dominate. >>> >>> >>> - Using smaller payments increases the set of possible paths a partial >>> payment could have taken, which reduces the effectiveness of static >>> analysis techniques involving channel capacities and the plaintext >>> values being forwarded. >>> >>> >>> Strongly agree! 
>>> >>> >>> In order to include the three tuple within the per-hop payload for the >>> final >>> destination, we repurpose the _first_ byte of the un-used padding bytes >>> in >>> the payload to signal version 0x01 of the AMP protocol (note this is a >>> PoC >>> outline, we would need to standardize signalling of these 12 bytes to >>> support other protocols). >>> >>> >>> I believe the `realm` byte is intended for this. Intermediate nodes do >>> not need to understand realm bytes that are understood by other nodes in >>> the route, including the realm bytes understood by the final destination, >>> as intermediate nodes cannot, indeed, read the hop data of other nodes. >>> Thus, you can route over nodes that are unaware of AMP, and only provide an >>> AMP realm byte to the destination node, who, is able to reconstruct this >>> your AMP data as per your algorithm. >>> >>> Indeed, the `realm` byte controls the interpretation of the rest of the >>> 65-byte packet. If you define, instead, a separate `realm` that is >>> understood by the destination node, you can redefine the entire 64 bytes of >>> the final hop data as you wish. >>> >>> If we support AMP only at final payees, we can completely redefine the >>> 64 bytes in the final hop data for the new AMP `realm`, and not consume the >>> next hop (which would reduce route length by 1). >>> >>> (If we want to support multiple routes converging to an intermediate >>> node, then continue routing to a different final node after routes have >>> merged (i.e. A->B->C->D, and A->E->C->D, with the payment being merged by >>> C, who forwards the combination to D), then we need to follow the current >>> hop data format, but I think supporting AMP at final payees is actually >>> enough... 
AMP at intermediate nodes might not be used often enough by >>> senders for it to matter, as taking advantage of that seems more complex >>> than just asking your routing algo to provide you multiple routes to a >>> destination, which you are probably already doing) >>> >>> ---- >>> >>> Overall, good work I think. >>> >>> Regards, >>> ZmnSCPxj >>> >> _______________________________________________ >> Lightning-dev mailing list >> Lightning-dev at lists.linuxfoundation.org >> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev >> > > _______________________________________________ > Lightning-dev mailing list > Lightning-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From corne at bitonic.nl Wed Feb 7 10:00:09 2018 From: corne at bitonic.nl (Corné Plooy) Date: Wed, 7 Feb 2018 11:00:09 +0100 Subject: [Lightning-dev] channel rebalancing support kind of exists already? In-Reply-To: References: Message-ID: Hi, Amiko Pay had this: on an invoice, you could (optionally) specify through which peer you wanted to be paid; on a payment, you could (optionally) specify through which peer you wanted to pay. In fact, if you didn't do this, a payment-to-self would not result in any channel actions, since the most efficient route to yourself makes zero hops. There was some weird edge case in this if you had a channel to yourself(*) and specified it in both the invoice and the payment: the route would actually be forced to go multiple times through the same channel. Routing in Lightning is a bit different than in Amiko Pay, and I never attempted to adapt Amiko Pay to the Lightning protocol standard. I do think that Lightning offers *better* possibilities for channel re-balancing, since it offers source routing: the source can explicitly specify the entire route.
If any channels offer negative fee rates to have them re-balanced, you might even make money by rebalancing other people's channels. I'm not sure when channel re-balancing would be useful: if you are able to pay through the B-A-others-C-B route and through the B-C-anyone route, then certainly B-A-others-C-anyone would work as well? Maybe to reduce risk that some channels on the 'others' path might be saturated at inconvenient moments? If Bob receives monthly salary from Alice and regularly wants to buy things from Carol, he'd probably want to transfer his funds from the A-B channel as soon as possible to the B-C channel. Alternatively, he could speculate on when fees on the OTHERS route would be optimal to make the transfer. Another use case could be privacy protection: if Alice is an employer, she probably knows Bob's identity; Bob probably doesn't want her to know details about his spending behavior as well. Bob-Carol could be a pseudonymous contact on the TOR network. On receiving salary from Alice, Bob would immediately transfer it to the B-C link, and perform individual payments from there. CJP (*) not very useful in practice, but certainly useful for testing. Besides, *some* user is going to try that sooner or later, so you have to be robust against it. On 06-02-18 at 17:53, Robert Olsson wrote: > Hello > > Let's say Bob opens a channel to Alice for 2BTC > Carol then opens a channel to Bob for 2BTC. > Alice and Carol are already connected to Others (and/or each other even) > The network and channel balances will look like this: > > Alice 0--2 Bob 0--2 Carol > |                   | > +----- OTHERS ------+ > > Bob for some reason wants the channels to be balanced, so he has some > better redundancy and it looks better. > > So hypothetically Bob solves this by paying himself an invoice of 1BTC > and making sure the route goes out thru Alice and comes back via > Carol. Bob pays fees so he isn't ashamed if it disturbs the other > balances in the network.
Should he care? > > Alice 1--1 Bob 1--1 Carol > |                   | > +----- OTHERS ------+ > > Now Bob has two nice balanced channels, meaning he has better > connectivity in both directions. > > Doesn't the protocol already support that kind of solution, and all > we need is a function in the CLI allowing Bob to pay to himself, and > specify which two channels he would like to balance? > > Maybe even make it automatically balance. > > Is this a good idea of something to support, and/or is there a risk > the entire network will start doing this and it will start oscillating? > > Best regards > Robert Olsson > > > > > _______________________________________________ > Lightning-dev mailing list > Lightning-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev From fabrice.drouin at acinq.fr Wed Feb 7 17:50:19 2018 From: fabrice.drouin at acinq.fr (Fabrice Drouin) Date: Wed, 7 Feb 2018 18:50:19 +0100 Subject: [Lightning-dev] Improving the initial gossip sync In-Reply-To: References: <874lmvy4gh.fsf@gmail.com> Message-ID: Hi, Suppose you partition nodes into 3 generic roles: - payers: they mostly send payments, are typically small and operated by end users, and are offline quite a lot - relayers: they mostly relay payments, and would be online most of the time (if they're too unreliable other nodes will eventually close their channels with them) - payees: they mostly receive payments; how often they can be online is directly linked to their particular mode of operation (since you need to be online to receive payments) Of course most nodes would play more or less all roles. However, mobile nodes would probably be mostly "payers", and they have specific properties: - if they don't relay payments they don't have to be announced.
There could be millions of mobile nodes that would have no impact on the size of the routing table - it does not impact the network when they're offline - but they need an accurate routing table. This is very different from nodes who mostly relay or accept payments - they would be connected to a very small number of nodes - they would typically be online for just a few hours every day, but could be stopped/paused/restarted many times a day. Laolu wrote: > So I think the primary distinction between y'alls proposals is that > cdecker's proposal focuses on eventually synchronizing all the set of > _updates_, while Fabrice's proposal cares *only* about the newly created > channels. It only cares about new channels as the rationale is that if one > tries to route over a channel with a stale channel update for it, then > you'll get an error with the latest update encapsulated. If you have one filter per day and they don't match (because your peer has channels that you missed, or channels that have been closed that you were not aware of) then you will receive all channel announcements for this particular day, and the associated updates. Laolu wrote: > I think he's actually proposing just a general update horizon in which > vertexes+edges with a lower time stamp just shouldn't be set at all. In the > case of an old zombie channel which was resurrected, it would eventually be > re-propagated as the node on either end of the channel should broadcast a > fresh update along with the original chan ann. Yes but it could take a long time. It may be worse on testnet since it seems that nodes don't change their fees very often. "Payer nodes" need a good routing table (as opposed to "relayers" which could work without one if they never initiate payments). Laolu wrote: > This seems to assume that both nodes have a strongly synchronized view of > the network. Otherwise, they'll fall back to sending everything that went on > during the entire epoch regularly.
It also doesn't address the zombie churn > issue as they may eventually send you very old channels you'll have to deal > with (or discard). Yes I agree that for nodes which have connections to a lot of peers, strongly synchronized routing tables are harder to achieve since a small change may invalidate an entire bucket. Real queryable filters would be much better, but the worst case scenario is we've sent an additional 30 KB or so of sync messages. (A very naive filter would be sort + pack all short ids for example) But we focus on nodes which are connected to a very small number of peers, and in this particular case it is not an unrealistic expectation. We have built a prototype and on testnet it works fairly well. I also found nodes which have no direct channel between them but produce the same filters for 75% of the buckets ("produce" here means that I opened a simple gossip connection to them, got their routing table and used it to generate filters). Laolu wrote: > How far back would this go? Weeks, months, years? Since forever :) One filter per day for all announcements that are older than now - 1 week (modulo 144), and one filter per block for recent announcements. > > FWIW this approach optimizes for just learning of new channels instead of > learning of the freshest state you haven't yet seen. I'd say it optimizes the case where you are connected to very few peers, and are online a few times every day (?) > > -- Laolu > > > On Mon, Feb 5, 2018 at 7:08 AM Fabrice Drouin > wrote: >> >> Hi, >> >> On 5 February 2018 at 14:02, Christian Decker >> wrote: >> > Hi everyone >> > >> > The feature bit is even, meaning that it is required from the peer, >> > since we extend the `init` message itself, and a peer that does not >> > support this feature would be unable to parse any future extensions to >> > the `init` message.
Alternatively we could create a new >> > `set_gossip_timestamp` message that is only sent if both endpoints >> > support this proposal, but that could result in duplicate messages being >> > delivered between the `init` and the `set_gossip_timestamp` message and >> > it'd require additional messages. >> >> We chose the other approach and propose to use an optional feature >> >> > The reason I'm using timestamp and not the blockheight in the short >> > channel ID is that we already use the timestamp for pruning. In the >> > blockheight based timestamp we might ignore channels that were created, >> > then not announced or forgotten, and then later came back and are now >> > stable. >> >> Just to be clear, you propose to use the timestamp of the most recent >> channel updates to filter >> the associated channel announcements? >> >> > I hope this rather simple proposal is sufficient to fix the short-term >> > issues we are facing with the initial sync, while we wait for a real >> > sync protocol. It is definitely not meant to allow perfect >> > synchronization of the topology between peers, but then again I don't >> > believe that is strictly necessary to make the routing successful. >> > >> > Please let me know what you think, and I'd love to discuss Pierre's >> > proposal as well. >> > >> > Cheers, >> > Christian >> >> Our idea is to group channel announcements by "buckets", create a >> filter for each bucket, exchange and use them to filter out channel >> announcements. >> >> We would add a new `use_channel_announcement_filters` optional feature >> bit (7 for example), and a new `channel_announcement_filters` message. >> >> When a node that supports channel announcement filters receives an >> `init` message with the `use_channel_announcement_filters` bit set, it >> sends back its channel filters.
>> >> When a node that supports channel announcement filters receives >> a `channel_announcement_filters` message, it uses it to filter channel >> announcements (and, implicitly, channel updates) before sending them. >> >> The filters we have in mind are simple: >> - Sort announcements by short channel id >> - Compute a marker height, which is `144 * ((now - 7 * 144) / 144)` >> (we round to multiples of 144 to make sync easier) >> - Group channel announcements that were created before this marker by >> groups of 144 blocks >> - Group channel announcements that were created after this marker by >> groups of 1 block >> - For each group, sort and concatenate all channel announcements' short >> channel ids and hash the result (we could use sha256, or the first 16 >> bytes of the sha256 hash) >> >> The new `channel_announcement_filters` would then be a list of >> (height, hash) pairs ordered by increasing heights. >> >> This implies that implementations can easily sort announcements by >> short channel id, which should not be very difficult. >> An additional step could be to send all short channel ids for all >> groups for which the group hash did not match. Alternatively we could >> use smarter filters >> >> The use case we have in mind is mobile nodes, or more generally nodes >> which are often offline and need to resync very often. >> >> Cheers, >> Fabrice >> _______________________________________________ >> Lightning-dev mailing list >> Lightning-dev at lists.linuxfoundation.org >> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev From jim.posen at gmail.com Wed Feb 7 21:27:05 2018 From: jim.posen at gmail.com (Jim Posen) Date: Wed, 7 Feb 2018 13:27:05 -0800 Subject: [Lightning-dev] Improving the initial gossip sync In-Reply-To: References: <874lmvy4gh.fsf@gmail.com> Message-ID: I like Christian's proposal of adding a simple announcement cutoff timestamp with the intention of designing something more sophisticated given more time.
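[The bucket-filter construction Fabrice describes above can be sketched roughly as follows. This is only an illustration, not code from any implementation: the short-channel-id packing (block height in the top 3 bytes) and the 16-byte sha256 truncation follow the text above; everything else, including the function name, is an assumption.]

```python
import hashlib

BLOCKS_PER_DAY = 144

def bucket_filters(short_channel_ids, now_height):
    """Return a list of (height, hash) pairs, one per non-empty bucket.

    Buckets older than the marker height cover 144 blocks each; buckets
    at or after the marker cover a single block, as in the proposal.
    """
    marker = BLOCKS_PER_DAY * ((now_height - 7 * BLOCKS_PER_DAY) // BLOCKS_PER_DAY)
    buckets = {}
    for scid in sorted(short_channel_ids):
        height = scid >> 40  # block height lives in the top 3 bytes of a short channel id
        key = height - height % BLOCKS_PER_DAY if height < marker else height
        buckets.setdefault(key, []).append(scid)
    filters = []
    for key in sorted(buckets):
        # sort + concatenate the ids of a bucket, then hash; keep the
        # first 16 bytes of sha256 (one of the options mentioned above)
        data = b"".join(s.to_bytes(8, "big") for s in buckets[key])
        filters.append((key, hashlib.sha256(data).digest()[:16]))
    return filters
```

[Two peers exchanging such lists would then retransmit only the announcements in buckets whose hashes differ.]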
I prefer the approach of having an optional feature bit signalling that a `set_gossip_timestamp` message must be sent immediately after `init`, as Laolu suggested. This way it doesn't conflict with any other possible handshake extensions. On Feb 7, 2018 9:50 AM, "Fabrice Drouin" wrote: Hi, Suppose you partition nodes into 3 generic roles: - payers: they mostly send payments, are typically small and operated by end users, and are offline quite a lot - relayers: they mostly relay payments, and would be online most of the time (if they're too unreliable other nodes will eventually close their channels with them) - payees: they mostly receive payments, how often they can be online is directly linked to their particular mode of operation (since you need to be online to receive payments) Of course most nodes would play more or less all roles. However, mobile nodes would probably be mostly "payers", and they have specific properties: - if they don't relay payments they don't have to be announced. [snip]
_______________________________________________ Lightning-dev mailing list Lightning-dev at lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From rusty at rustcorp.com.au Wed Feb 7 23:21:43 2018 From: rusty at rustcorp.com.au (Rusty Russell) Date: Thu, 08 Feb 2018 09:51:43 +1030 Subject: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning In-Reply-To: References: Message-ID: <87y3k41j3s.fsf@rustcorp.com.au> Olaoluwa Osuntokun writes: > Hi Y'all, > > A common question I've seen concerning Lightning is: "I have five $2 > channels, is it possible for me to *atomically* send $6 to fulfill a > payment?". The answer to this question is "yes", provided that the receiver This is awesome! I'm kicking myself for not proposing it :) Unfortunately, your proposal defines a way to make multipath donations, not multipath payments :( In other words, you've lost proof of payment, which IMHO is critical. Fortunately, this can be fairly trivially fixed when we go to scriptless scripts or other equivalent decorrelation mechanism, when I think this mechanism becomes extremely powerful. > - Potential fee savings for larger payments, contingent on there being a
It's possible that with > modifications to the fee schedule, it's actually *cheaper* to send > payments over multiple flows rather than one giant flow. This is a stretch. I'd stick with the increased reliability/privacy arguments which are overwhelmingly compelling IMHO. If I have any important feedback on deeper reading (and after a second coffee), I'll send a separate email. Thanks! Rusty. From rusty at rustcorp.com.au Thu Feb 8 00:22:47 2018 From: rusty at rustcorp.com.au (Rusty Russell) Date: Thu, 08 Feb 2018 10:52:47 +1030 Subject: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning In-Reply-To: References: Message-ID: <87o9l01ga0.fsf@rustcorp.com.au> Olaoluwa Osuntokun writes: > Protocol Overview > ================== > This design can be seen as a generalization of the single, non-interactive > payment scheme, that uses decoding of extra onion blobs (EOBs?) to encode > extra data for the receiver. In that design, the extra data includes a > payment preimage that the receiver can use to settle back the payment. EOBs > and some method of parsing them are really the only requirement for this > protocol to work. Thus, only the sender and receiver need to implement this > feature in order for it to function, which can be announced using a feature > bit. OK, so this proposal conflates two things: 1. split payments. 2. expansion of onion space. We've got a wiki page for #2 which could probably use some love: https://github.com/lightningnetwork/lightning-rfc/wiki/Brainstorming#using-multiple-hops_data-cells-in-the-onion For the final hop this may not be necessary, as we have 8 unused bytes in `next addr`, giving us 20 free bytes. But why not simplify the proposal: the payment preimage is the XOR of those 20 bytes (with 12 zero bytes prepended)? And the receiver gives up to 30 seconds(?) to receive all the parts after the first one.
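[Rusty's XOR construction above can be sketched as follows. Illustrative only: it assumes 20 free bytes per hop and a preimage formed by prepending 12 zero bytes to the XOR of all shares, as proposed above; the helper names are not from any spec.]

```python
import os

SHARE_LEN = 20  # free bytes available in the final-hop payload

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_shares(secret: bytes, n: int) -> list:
    """Split a 20-byte secret into n shares whose XOR equals the secret."""
    shares = [os.urandom(SHARE_LEN) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = xor_bytes(last, s)  # last share makes the XOR come out right
    return shares + [last]

def combine(shares) -> bytes:
    """Fold shares together (in any order) into the 32-byte preimage."""
    acc = bytes(SHARE_LEN)
    for s in shares:
        acc = xor_bytes(acc, s)
    return b"\x00" * 12 + acc
```

[Splitting a partial payment further is then just replacing one share with two fresh values that XOR to it, which gives the sender the dynamic-resizing property.]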
That means the sender gets dynamic resizing (if they want to split a payment further, set one to randomness, and XOR that into the other), the receiver has only to remember the combination-so-far. Cheers, Rusty. From johanth at gmail.com Thu Feb 8 16:41:41 2018 From: johanth at gmail.com (=?UTF-8?Q?Johan_Tor=C3=A5s_Halseth?=) Date: Thu, 08 Feb 2018 11:41:41 -0500 Subject: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning In-Reply-To: <87y3k41j3s.fsf@rustcorp.com.au> References: <87y3k41j3s.fsf@rustcorp.com.au> Message-ID: An obvious way to make this compatible with proof-of-payment would be to require two hashes to claim the HTLC: the preimage from the invoice payment hash (as today) + the new hash introduced here. This would give the sender a receipt after only one of the HTLCs was claimed. Would require changes to the scripts of course. With Schnorr/EC operations this could probably be made more elegant, as mentioned. - Johan On Wed, Feb 7, 2018 at 18:21, Rusty Russell wrote: [snip] _______________________________________________ Lightning-dev mailing list Lightning-dev at lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim.posen at gmail.com Thu Feb 8 17:44:21 2018 From: jim.posen at gmail.com (Jim Posen) Date: Thu, 8 Feb 2018 09:44:21 -0800 Subject: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning In-Reply-To: References: <87y3k41j3s.fsf@rustcorp.com.au> Message-ID: If using two hashes to deliver the payment while still getting a proof, I'm not sure what that provides above just sending regular lightning payments over multiple routes with one hash. Firstly, if there is a second hash, it would presumably be the same for all routes, making them linkable again, which AMP tries to solve. And secondly, the receiver has no incentive to claim any of the HTLCs before all of them are locked in, because in that case they are releasing the transaction receipt before fully being paid. On Thu, Feb 8, 2018 at 8:41 AM, Johan Torås Halseth wrote: > An obvious way to make this compatible with proof-of-payment would be to > require two hashes to claim the HTLC: the preimage from the invoice payment > hash (as today) + the new hash introduced here. This would give the sender > a receipt after only one of the HTLCs was claimed. Would require changes to > the scripts of course. > > With Schnorr/EC operations this could probably be made more elegant, as > mentioned.
> > - Johan > On Wed, Feb 7, 2018 at 18:21, Rusty Russell wrote: [snip] > _______________________________________________ > Lightning-dev mailing list > Lightning-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From johanth at gmail.com Thu Feb 8 18:05:40 2018 From: johanth at gmail.com (=?UTF-8?Q?Johan_Tor=C3=A5s_Halseth?=) Date: Thu, 08 Feb 2018 13:05:40 -0500 Subject: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning In-Reply-To: References: <87y3k41j3s.fsf@rustcorp.com.au> Message-ID: <400e46b6-726e-467d-8bb4-67e3c49dd8b2@gmail.com> Yeah, that is true, it would only give you the atomicity, not the decorrelation. I don't see how you could get all the same properties using only one hash though. I guess the sender has no incentive to claim any of the payments before all of them have arrived, but you get no guarantee that partial payments cannot be made. Seems hard to do without introducing new primitives. - Johan On Thu, Feb 8, 2018 at 12:44, Jim Posen wrote: [snip] -------------- next part -------------- An HTML attachment was scrubbed... URL: From decker.christian at gmail.com Thu Feb 8 23:11:24 2018 From: decker.christian at gmail.com (Christian Decker) Date: Fri, 09 Feb 2018 00:11:24 +0100 Subject: [Lightning-dev] channel rebalancing support kind of exists already? In-Reply-To: <5-r0g7Pox46kFZU_pNSirAPIwcDNSbYytEvrP1GSKsc6azP7ZfjBl0-cfHPRCVaxUIuxrhe4TSk2Jzc9tJFIlUAojPzGljm6EsOvgSr0wVU=@protonmail.com> References: <5-r0g7Pox46kFZU_pNSirAPIwcDNSbYytEvrP1GSKsc6azP7ZfjBl0-cfHPRCVaxUIuxrhe4TSk2Jzc9tJFIlUAojPzGljm6EsOvgSr0wVU=@protonmail.com> Message-ID: <87vaf7vzz7.fsf@gmail.com> Technically you can do it with c-lightning today, if you create a circular route manually and then use the `sendpay` JSON-RPC command to send funds along that route it'll do just that. It's as simple as that. We don't have built-in support yet, I don't know if we ever will, since it is trivially implemented outside of the daemon itself. I also don't think we need to consider this use-case at all from a protocol point of view. Cheers, Christian ZmnSCPxj via Lightning-dev writes: > Good Morning Robert, > > Yes, this already is possible, but is not implemented by any implementation to my knowledge at this point. > > Note that "balance" is not necessarily a property you might desire for your channels. In your example, under the "unbalanced" case, Bob can pay a 1.5BTC invoice, but in the "balanced" case Bob can no longer pay that 1.5BTC invoice. Of course, once AMP is possible then this consideration is not an issue. > > Regards, > ZmnSCPxj > > Sent with [ProtonMail](https://protonmail.com) Secure Email. > > -------- Original Message -------- > On February 7, 2018 12:53 AM, Robert Olsson wrote: > >> Hello >> >> Let's say Bob opens a channel to Alice for 2BTC >> Carol then opens a channel to Bob for 2BTC.
>> Alice and Carol are already connected to Others (and/or eachother even) >> The network and channel balances will look like this: >> >> Alice 0--2 Bob 0--2 Carol >> | | >> +----- OTHERS ------+ >> >> Bob for some reason wants the channels to be balanced, so he has some better redundancy and it looks better. >> >> So hypothetically Bob solves this by paying himself an invoice of 1BTC and making sure the route goes out thru Alice and comes back via Carol. Bob pays fees so he isn't ashamed if it disturbs the other balances in the network. Should he care? >> >> Alice 1--1 Bob 1--1 Carol >> | | >> +----- OTHERS ------+ >> >> Now Bob has two nice balanced channels, meaning he has better connectivity in both directions. >> >> Doesn't the protocol already support that kind of solutions, and all we need is a function in the CLI allowing Bob to pay to himself, and specify which two channels he would like to balance? >> >> Maybe even make it automatically balance. >> >> Is this a good idea of something to support, and/or Is there a risk the entire network will start doing this and it will start oscillating? >> >> Best regards >> Robert Olsson > _______________________________________________ > Lightning-dev mailing list > Lightning-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev From rusty at rustcorp.com.au Fri Feb 9 01:44:24 2018 From: rusty at rustcorp.com.au (Rusty Russell) Date: Fri, 09 Feb 2018 12:14:24 +1030 Subject: [Lightning-dev] Improving the initial gossip sync In-Reply-To: <874lmvy4gh.fsf@gmail.com> References: <874lmvy4gh.fsf@gmail.com> Message-ID: <87tvurym13.fsf@rustcorp.com.au> Hi all! Finally catching up. I prefer the simplicity of the timestamp mechanism, with a more ambitious mechanism TBA. Deployment suggestions: 1. This should be a feature bit pair. As usual, even == 'support this or disconnect', and odd == 'ok even if you don't understand'. 2. This `timestamp_routing_sync`? 
feature overrides `initial_routing_sync`. That lets you decide what old nodes do, using the older `initial_routing_sync` option. Similarly, a future `fancy_sync` would override `timestamp_routing_sync`. 3. We can append an optional 4 byte `routing_sync_timestamp` field to `init` without issues, since all lengths in there are explicit. If you don't offer the `timestamp_sync` feature, this Must Be Zero (for appending more stuff in future). Now, as to the proposal specifics. I dislike the re-transmission of all old channel_announcement and node_announcement messages, just because there's been a recent channel_update. Simpler to just say `send anything >= routing_sync_timestamp`. Background: c-lightning internally keeps a tree of gossip in the order we received them, keeping a 'current' pointer for each peer. This is very efficient (though we don't remember if a peer sent us a gossip msg already, so uses twice the bandwidth it could). But this isn't *quite* the same as timestamp order, so we can't just set the 'current' pointer based on the first entry >= `routing_sync_timestamp`; we need to actively filter. This is still a simple traverse, however, skipping over any entry less than routing_sync_timestamp. OTOH, if we need to retransmit announcements, when do we stop retransmitting them? If a new channel_update comes in during this time, are we still to dump the announcements? Do we have to remember which ones we've sent to each peer? If missing announcements become a problem, we could add a simple query message: I think this is going to be needed for any fancy scheme anyway. Cheers, Rusty. From cjp at ultimatestunts.nl Fri Feb 9 10:15:20 2018 From: cjp at ultimatestunts.nl (CJP) Date: Fri, 09 Feb 2018 11:15:20 +0100 Subject: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning In-Reply-To: References: Message-ID: <1518171320.5145.0.camel@ultimatestunts.nl> Can you give a use case for this?
Usually, especially in the common case that a payment is done in exchange for some non-cryptographic asset (e.g. physical goods), there already is some kind of trust between payer and payee. So, if a payment is split non-atomically into smaller transactions, and only a part succeeds, presumably they can cooperatively figure out some way to settle the situation. I spoke to people of the "interledger" project, and what they are planning to do is to non-atomically split *every* transaction into lots of micro-payments. In fact, they consider it unnecessary to enforce HTLCs with scripts, because their amounts are so small(*). If one micro-payment fails, that just makes them learn that a certain channel is unreliable, and they'll send further payments (and even the remaining part of the same payment) through a different route. CJP (*) not worth the extra on-blockchain fee due to the increased tx size. Olaoluwa Osuntokun wrote on Tue 2018-02-06 at 05:26 [+0000]: > Hi Y'all, > > > A common question I've seen concerning Lightning is: "I have five $2 > channels, is it possible for me to *atomically* send $6 to fulfill a > payment?". The answer to this question is "yes", provided that the > receiver > waits to pull all HTLC's until the sum matches their invoice. > Typically, one > assumes that the receiver will supply a payment hash, and the sender > will > re-use the payment hash for all streams. This has the downside of > payment > hash re-use across *multiple* payments (which can already easily be > correlated), and also has a failure mode where if the sender fails to > actually satisfy all the payment flows, then the receiver can still > just > pull the monies (and possibly not disperse a service, or w/e).
> > > Conner Fromknecht and I have come up with a way to achieve this over > Lightning while (1) not re-using any payment hashes across all payment > flows, and (2) adding a *strong* guarantee that the receiver won't be > paid > until *all* partial payment flows are extended. We call this scheme > AMP > (Atomic Multi-path Payments). It can be experimented with on Lightning > *today* with the addition of a new feature bit to gate this new > feature. The beauty of the scheme is that it requires no fundamental > changes > to the protocol as is now, as the negotiation is strictly *end-to-end* > between sender and receiver. > > > TL;DR: we repurpose some unused space in the onion per-hop payload of > the > onion blob to signal our protocol (and deliver some protocol-specific > data), > then use additive secret sharing to ensure that the receiver can't > pull the > payment until they have enough shares to reconstruct the original > pre-image. > > > > > Protocol Goals > ============== > 1. Atomicity: The logical transaction should either succeed or fail in > entirety. Naturally, this implies that the receiver should be > unable to > settle *any* of the partial payments until all of them have arrived. > > > 2. Avoid Payment Hash Reuse: The payment preimages validated by the > consensus layer should be distinct for each partial payment. > Primarily, > this helps avoid correlation of the partial payments, and ensures that > malicious intermediaries straddling partial payments cannot steal > funds. > > > 3. Order Invariance: The protocol should be forgiving to the order in > which > partial payments arrive at the destination, adding robustness in the > face of > delays or routing failures. > > > 4. Non-interactive Setup: It should be possible for the sender to > perform an > AMP without directly coordinating with the receiving node.
> Predominantly, this means that the *sender* is able to determine the number of partial payments to use for a particular AMP, which makes sense since they will be the one fronting the fees for the cost of this parameter. Plus, we can always turn a non-interactive protocol into an interactive one for the purposes of invoicing.
>
> Protocol Benefits
> =================
> Sending payments predominantly over an AMP-like protocol has several clear benefits:
>
>   - Eliminates the constraint that a single path from sender to receiver with sufficient directional capacity must exist. This reduces the pressure to have larger channels in order to support larger payment flows. As a result, the payment graph can be very diffuse, without sacrificing payment utility.
>
>   - Reduces strain from larger payments on individual paths, and allows the liquidity imbalances to be more diffuse. We expect this to have a non-negligible impact on channel longevity. This is due to the fact that with usage of AMP, payment flows are typically *smaller*, meaning that each payment will unbalance a channel to a lesser degree than one giant flow.
>
>   - Potential fee savings for larger payments, contingent on there being a super-linear component to routed fees. It's possible that with modifications to the fee schedule, it's actually *cheaper* to send payments over multiple flows rather than one giant flow.
>
>   - Allows for logical payments larger than the current maximum value of an individual payment. At the moment we have a (temporary) limit on the max payment size. With AMP, this can be side-stepped as each flow can be up to the max size, with the sum of all flows exceeding the max.
>
>   - Given sufficient path diversity, AMPs may improve the privacy of LN. Intermediaries are now unaware of how much of the total payment they are forwarding, or even if they are forwarding a partial payment at all.
>   - Using smaller payments increases the set of possible paths a partial payment could have taken, which reduces the effectiveness of static analysis techniques involving channel capacities and the plaintext values being forwarded.
>
> Protocol Overview
> =================
> This design can be seen as a generalization of the single, non-interactive payment scheme, that uses decoding of extra onion blobs (EOBs?) to encode extra data for the receiver. In that design, the extra data includes a payment preimage that the receiver can use to settle back the payment. EOBs and some method of parsing them are really the only requirement for this protocol to work. Thus, only the sender and receiver need to implement this feature in order for it to function, which can be announced using a feature bit.
>
> First, let's review the current format of the per-hop payload for each node described in BOLT-0004.
>
> +----------------+---------------------+------------------+-------------------------+-------------------+-----------------+
> | Realm (1 byte) | Next Addr (8 bytes) | Amount (8 bytes) | Outgoing CLTV (4 bytes) | Unused (12 bytes) | HMAC (32 bytes) |
> +----------------+---------------------+------------------+-------------------------+-------------------+-----------------+
>
> +------------------+
> | 65 Bytes Per Hop |
> +------------------+
>
> Currently, *each* node gets a 65-byte payload. We use this payload to give each node instructions on *how* to forward a payment. We tell each node: the realm (or chain to forward on), the next node to forward to, the amount to forward (this is where fees are extracted by forwarding out less than in), the outgoing CLTV (allows verification that the prior node didn't modify any values), and finally an HMAC over the entire thing.
>
> Two important points:
> 1.
> We have 12 bytes for each hop that are currently unpurposed and can be used by application protocols to signal new interpretation of bytes and also deliver additional encrypted+authenticated data to *each* hop.
>
> 2. The protocol currently has a hard limit of 20 hops. With this feature we ensure that the packet stays fixed-sized during processing in order to avoid leaking positional information. Typically most payments won't use all 20 hops; as a result, we can use the remaining hops to stuff in *even more* data.
>
> Protocol Description
> ====================
> The solution we propose is Atomic Multi-path Payments (AMPs). At a high level, this leverages EOBs to deliver additive shares of a base preimage, from which the payment preimages of partial payments can be derived. The receiver can only construct this value after having received all of the partial payments, satisfying the atomicity constraint.
>
> The basic protocol:
>
> Primitives
> ==========
> Let H be a CRH function.
> Let || denote concatenation.
> Let ^ denote xor.
>
> Sender Requirements
> ===================
> The parameters to the sending procedure are a random identifier ID, the number of partial payments n, and the total payment value V. Assume the sender has some way of dividing V such that V = v_1 + ... + v_n.
>
> To begin, the sender builds the base preimage BP, from which n partial preimages will be derived. Next, the sender samples n additive shares s_1, ..., s_n, and takes the sum to compute BP = s_1 ^ ... ^ s_n.
>
> With the base preimage created, the sender now moves on to constructing the n partial payments. For each i in [1,n], the sender deterministically computes the partial preimage r_i = H(BP || i), by concatenating the sequence number i to the base preimage and hashing the result. Afterwards, it applies H to determine the payment hash to use in the i'th partial payment as h_i = H(r_i).
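The sender-side derivation can be sketched in a few lines. This is illustrative only: SHA-256 as the CRH function H and a 4-byte big-endian encoding of the index i are assumptions, since the post leaves both unspecified.

```python
import hashlib
import os

def H(data: bytes) -> bytes:
    # The CRH function; SHA-256 is an assumption, not pinned down by the post.
    return hashlib.sha256(data).digest()

def xor_all(chunks):
    # XOR-sum of a list of 32-byte strings.
    acc = bytes(32)
    for c in chunks:
        acc = bytes(a ^ b for a, b in zip(acc, c))
    return acc

n = 3
shares = [os.urandom(32) for _ in range(n)]   # additive shares s_1 .. s_n
BP = xor_all(shares)                          # BP = s_1 ^ ... ^ s_n

# r_i = H(BP || i) and h_i = H(r_i); i encoded as 4-byte big-endian here.
preimages = [H(BP + i.to_bytes(4, "big")) for i in range(1, n + 1)]
hashes = [H(r) for r in preimages]
```

Each (ID, n, s_i) tuple would then ride in the EOB of the i'th partial payment, while h_i is used as that payment's hash.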
> Note that with this preimage derivation scheme, once the payments are pulled each pre-image is distinct and indistinguishable from any other.
>
> With all of the pieces in place, the sender initiates the i'th payment by constructing a route to the destination with value v_i and payment hash h_i. The tuple (ID, n, s_i) is included in the EOB to be opened by the receiver.
>
> In order to include the 3-tuple within the per-hop payload for the final destination, we repurpose the _first_ byte of the un-used padding bytes in the payload to signal version 0x01 of the AMP protocol (note this is a PoC outline; we would need to standardize signalling of these 12 bytes to support other protocols). Typically this byte isn't set, so its existence means that we're (1) using AMP, and (2) the receiver should consume the _next_ hop as well. So if the payment length is actually 5, the sender tacks on an additional dummy 6th hop, encrypted with the _same_ shared secret for that hop, to deliver the e2e encrypted data.
>
> Note, the sender can retry partial payments just as they would normal payments, since they are order-invariant, and would be indistinguishable from regular payments to intermediaries in the network.
>
> Receiver Requirements
> =====================
> Upon the arrival of each partial payment, the receiver will iteratively reconstruct BP, and do some bookkeeping to figure out when to settle the partial payments. During this reconstruction process, the receiver does not need to be aware of the order in which the payments were sent, and in fact nothing about the incoming partial payments reveals this information to the receiver, though this can be learned after reconstructing BP.
>
> Each EOB is decoded to retrieve (ID, n, s_i), where i is the unique but unknown index of the incoming partial payment.
> The receiver has access to a persistent key-value store DB that maps ID to (n, c*, BP*), where c* represents the number of partial payments received, BP* is the sum of the received additive shares, and the superscript * denotes that the value is being updated iteratively. c* and BP* both have initial values of 0.
>
> In the basic protocol, the receiver caches the first n it sees, and verifies that all incoming partial payments have the same n. The receiver should reject all partial payments if any EOB deviates. Next, we update our persistent store with DB[ID] = (n, c* + 1, BP* ^ s_i), advancing the reconstruction by one step.
>
> If c* + 1 < n, there are still more packets in flight, so we sit tight. Otherwise, the receiver assumes all partial payments have arrived, and can begin settling them back. Using the base preimage BP = BP* ^ s_i from our final iteration, the receiver can re-derive all n partial preimages and payment hashes, using r_i = H(BP || i) and h_i = H(r_i), simply through knowledge of n and BP.
>
> Finally, the receiver settles back any outstanding payments that include payment hash h_i using the partial preimage r_i. Each r_i will appear random due to the nature of H, as will its corresponding h_i. Thus, each partial payment should appear uncorrelated, and does not reveal that it is part of an AMP nor the number of partial payments used.
>
> Non-interactive to Interactive AMPs
> ===================================
> The sender simply receives an ID and amount from the receiver in an invoice before initiating the protocol. The receiver should only consider the invoice settled if the total amount received in partial payments containing ID matches or exceeds the amount specified in the invoice. With this variant, the receiver is able to map all partial payments to a pre-generated invoice statement.
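The receiver-side bookkeeping above can be sketched as follows. Illustrative only: an in-memory dict stands in for the persistent DB, SHA-256 and 32-byte XOR shares are assumptions carried over from the sender side, and the order-invariance is visible in that shares may be fed in any order.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

DB = {}  # ID -> (n, c*, BP*); would be persisted in a real implementation

def on_partial_payment(ID, n, s_i):
    """Process one decoded EOB tuple; return BP once all n shares arrived."""
    stored_n, c, bp = DB.get(ID, (n, 0, bytes(32)))
    if stored_n != n:
        raise ValueError("EOB deviates: reject all partial payments")
    c, bp = c + 1, xor(bp, s_i)        # DB[ID] = (n, c* + 1, BP* ^ s_i)
    DB[ID] = (n, c, bp)
    if c < n:
        return None                    # more packets in flight, sit tight
    return bp                          # BP recovered: re-derive all r_i, h_i

# Usage: three shares arriving in arbitrary (here reversed) order.
shares = [os.urandom(32) for _ in range(3)]
BP = shares[0]
for s in shares[1:]:
    BP = xor(BP, s)
results = [on_partial_payment(b"id-1", 3, s) for s in reversed(shares)]
assert results == [None, None, BP]
```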
> Additive Shares vs Threshold-Shares
> ===================================
> The biggest reason to use additive shares seems to be atomicity. Threshold shares open the door to some partial payments being settled, even if others are left in flight. We haven't yet come up with a good reason for using threshold schemes, but there seem to be plenty against it.
>
> Reconstruction of additive shares can be done iteratively, and is a win for the storage and computation requirements on the receiving end. If the sender decides to use fewer than n partial payments, the remaining shares could be included in the EOB of the final partial payment to allow the receiver to reconstruct sooner. The receiver could also optimistically do partial reconstruction on this last aggregate value.
>
> Adaptive AMPs
> =============
> The sender may not always be aware of how many partial payments they wish to send at the time of the first partial payment, at which point the simplified protocol would require n to be chosen. To accommodate, the above scheme can be adapted to handle a dynamically chosen n by iteratively constructing the shared secrets as follows.
>
> Starting with a base preimage BP, the key trick is that the sender remembers the difference between the base preimage and the sum of all partial preimages used so far. The relation is described using the following equations:
>
> X_0 = 0
> X_i = X_{i-1} ^ s_i
> X_n = BP ^ X_{n-1}
>
> where if n=1, X_1 = BP, implying that this is in fact a generalization of the single, non-interactive payment scheme mentioned above. For i=1, ..., n-1, the sender sends s_i in the EOB, and X_n for the n-th share.
>
> Iteratively reconstructing s_1 ^ ... ^ s_{n-1} ^ X_n = BP allows the receiver to compute all relevant r_i = H(BP || i) and h_i = H(r_i).
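A minimal sketch of the adaptive construction, assuming 32-byte XOR shares as in the basic scheme. The sender never commits to n up front; the running accumulator X lets it close the sequence whenever it decides to stop.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

BP = os.urandom(32)   # base preimage, fixed up front
X = bytes(32)         # running accumulator, X_0 = 0

# Sender emits fresh random shares s_1 .. s_{n-1} without knowing n ...
shares = []
for _ in range(3):            # ... here it happens to stop after n-1 = 3 shares
    s = os.urandom(32)
    shares.append(s)
    X = xor(X, s)             # X_i = X_{i-1} ^ s_i

# ... then closes the sequence with the n-th share X_n = BP ^ X_{n-1}.
shares.append(xor(BP, X))

# Receiver: XOR of all received shares (any order) recovers BP,
# since s_1 ^ ... ^ s_{n-1} ^ X_n = X_{n-1} ^ (BP ^ X_{n-1}) = BP.
acc = bytes(32)
for s in reversed(shares):
    acc = xor(acc, s)
assert acc == BP
```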
> Lastly, the final number of partial payments n could be signaled in the final EOB, which would also serve as a sentinel value for signaling completion. In response to DoS vectors stemming from unknown values of n, implementations could consider advertising a maximum value for n, or adopting some sort of framing pattern for conveying that more partial payments are on the way.
>
> We can further modify our usage of the per-hop payloads to send (H(BP), s_i) to consume most of the EOB sent from sender to receiver. In this scenario, we'd repurpose the 11 bytes *after* our signalling byte in the unused byte section to store the payment ID (which should be unique for each payment). In the case of a non-interactive payment, this will be unused. While for interactive payments, this will be the ID within the invoice. To deliver this slimmer 2-tuple, we'll use 32 bytes for the hash of the BP, and 32 bytes for the partial pre-image share, leaving an unused byte in the payload.
>
> Cross-Chain AMPs
> ================
> AMPs can be used to pay a receiver in multiple currencies atomically...which is pretty cool :D
>
> Open Research Questions
> =======================
> The above is a protocol sketch to achieve atomic multi-path payments over Lightning. The details concerning onion blob usage serve as a template that future protocols can draw upon in order to deliver additional data to *any* hop in the route. However, there are still a few open questions before something like this can be feasibly deployed.
>
> 1. How does the sender decide how many chunked payments to send, and the size of each payment?
>
>    - Upon closer examination, this seems to overlap with the task of congestion control within TCP. The sender may be able to utilize similarly inspired heuristics to gauge: (1) how large the initial payment should be and (2) how many subsequent payments may be required.
>      Note that if the first payment succeeds, then the exchange is over in a single round.
>
> 2. How can AMP and HORNET be composed?
>
>    - If we eventually integrate HORNET, then a distinct communications session can be established to allow the sender+receiver to exchange up-to-date partial payment information. This may allow the sender to more accurately size each partial payment.
>
> 3. Can the sender's initial strategy be governed by an instance of the Push-relabel max flow algo?
>
> 4. How does this mesh with the current max HTLC limit on a commitment?
>
>    - ATM, we have a max limit on the number of active HTLCs on a particular commitment transaction. We do this, as otherwise it's possible that the transaction is too large, and exceeds standardness w.r.t. transaction size. In a world where most payments use an AMP-like protocol, overall at any given instant there will be several pending HTLCs on commitments network-wide.
>
>      This may incentivize nodes to open more channels in order to support the increased commitment space utilization.
>
> Conclusion
> ==========
> We've presented a design outline of how to integrate atomic multi-path payments (AMP) into Lightning. The existence of such a construct allows a sender to atomically split a payment flow amongst several individual payment flows. As a result, larger channels aren't as important, as it's possible to utilize one's total outbound payment bandwidth across several channels. Additionally, in order to support the increased load, internal routing nodes are incentivized to have more active channels. The existence of AMP-like payments may also increase the longevity of channels, as there'll be smaller, more numerous payment flows, making it unlikely that a single payment that comes across unbalances a channel entirely.
> We've also shown how one can utilize the current onion packet format to deliver additional data from a sender to a receiver that's still e2e authenticated.
>
> -- Conner && Laolu
>
> _______________________________________________
> Lightning-dev mailing list
> Lightning-dev at lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev

From decker.christian at gmail.com  Fri Feb  9 11:41:42 2018
From: decker.christian at gmail.com (Christian Decker)
Date: Fri, 09 Feb 2018 12:41:42 +0100
Subject: [Lightning-dev] Improving the initial gossip sync
In-Reply-To: <87tvurym13.fsf@rustcorp.com.au>
References: <874lmvy4gh.fsf@gmail.com> <87tvurym13.fsf@rustcorp.com.au>
Message-ID: <87shaawft5.fsf@gmail.com>

Rusty Russell writes:
> Finally catching up. I prefer the simplicity of the timestamp mechanism, with a more ambitious mechanism TBA.

Fabrice and I had a short chat a few days ago and decided that we'll simulate both approaches and see what consumes less bandwidth. With zombie channels and the chances for missing channels during a weak form of synchronization, it's not that clear to us which one has the better tradeoff. With some numbers behind it, it may become easier to decide :-)

> Deployment suggestions:
>
> 1. This should be a feature bit pair. As usual, even == 'support this or disconnect', and odd == 'ok even if you don't understand'.

If we add the timestamp to the end of the `init` message, instead of introducing a new message altogether, we are forced to use the required bit, otherwise we just made any future field appended to the `init` message unparseable to non-supporting nodes. Let's say we later add another field that the peer supports, but it follows the timestamp field, which the peer does not. The peer doesn't know how many bytes to skip (if any) for the timestamp field he doesn't understand, to get to the bytes he does know how to parse.
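The parsing problem can be made concrete with a small sketch. Illustrative only: the layout below is a simplification of the `init` message (two length-prefixed feature vectors), and `future_field` is a hypothetical later extension, not anything from the spec.

```python
import struct

def build_init(globalfeatures, localfeatures, timestamp=None, future_field=None):
    # Simplified `init`: two length-prefixed feature vectors, then any
    # appended extensions (timestamp, hypothetical future field).
    msg = struct.pack(">H", len(globalfeatures)) + globalfeatures
    msg += struct.pack(">H", len(localfeatures)) + localfeatures
    if timestamp is not None:
        msg += struct.pack(">I", timestamp)   # 4-byte routing_sync_timestamp
    if future_field is not None:
        msg += future_field                   # some later extension the peer knows
    return msg

def parse_init_old_node(msg):
    """A node that predates the timestamp extension."""
    glen = struct.unpack_from(">H", msg, 0)[0]
    off = 2 + glen
    llen = struct.unpack_from(">H", msg, off)[0]
    off += 2 + llen
    # "The rest": the old node cannot tell where the unknown timestamp
    # ends and a field it *does* understand begins.
    return msg[off:]

msg = build_init(b"", b"", timestamp=1518000000, future_field=b"\x07\x07")
tail = parse_init_old_node(msg)
# tail mixes the 4 timestamp bytes with the future field -- ambiguous to skip
```

This is exactly why the field must either be mandatory (so a non-supporting peer disconnects instead of mis-parsing) or live in its own message.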
I'm slowly getting to like the extra message more, since it also allows a number of cute tricks later.

> 2. This `timestamp_routing_sync`? feature overrides `initial_routing_sync`. That lets you decide what old nodes do, using the older `initial_routing_sync` option. Similarly, a future `fancy_sync` would override `timestamp_routing_sync`.

So you'd set both bits, and if the peer knows `timestamp_routing_sync`, that then force-sets the `initial_routing_sync`? Sounds ok, if we allow optional implementations, even though I'd like to avoid feature interactions as much as possible.

> 3. We can append an optional 4 byte `routing_sync_timestamp` field to `init` without issues, since all lengths in there are explicit. If you don't offer the `timestamp_sync` feature, this Must Be Zero (for appending more stuff in future).

That'd still require the peer to know that it has to skip 4 bytes to get any future fields, which is why I am convinced that either forcing it to be mandatory, or adding a new message, is the better way to go, even if now everybody implements it correctly.

> Now, as to the proposal specifics.
>
> I dislike the re-transmission of all old channel_announcement and node_announcement messages, just because there's been a recent channel_update. Simpler to just say 'send anything >= `routing_sync_timestamp`'.

I'm afraid we can't really omit the `channel_announcement`, since a `channel_update` that isn't preceded by a `channel_announcement` is invalid and will be dropped by peers (especially because the `channel_update` doesn't contain the necessary information for validation).

> Background: c-lightning internally keeps a tree of gossip in the order we received them, keeping a 'current' pointer for each peer. This is very efficient (though we don't remember if a peer sent us a gossip msg already, so uses twice the bandwidth it could).
We can solve that by keeping a filter of the messages we received from the peer. It's more of an optimization than anything; other than the bandwidth cost, it doesn't hurt.

> But this isn't *quite* the same as timestamp order, so we can't just set the 'current' pointer based on the first entry >= `routing_sync_timestamp`; we need to actively filter. This is still a simple traverse, however, skipping over any entry less than routing_sync_timestamp.
>
> OTOH, if we need to retransmit announcements, when do we stop retransmitting them? If a new channel_update comes in during this time, are we still to dump the announcements? Do we have to remember which ones we've sent to each peer?

That's more of an implementation detail. In c-lightning we can just remember the index at which the initial sync started, and send announcements along until the index is larger than the initial sync index.

A more general approach would be to have 2 timestamps, one high-water and one low-water mark. Anything in between these marks will be forwarded together with all associated announcements (node / channel); anything newer than that will only forward the update. The two-timestamps approach, combined with a new message, would also allow us to send multiple `timestamp_routing_sync` messages, e.g., first sync the last hour, then the last day, then the last week, etc. It gives the syncing node control over what time window to send, inverting the current initial sync.

Cheers,
Christian

From cezary.dziemian at gmail.com  Sun Feb 11 13:58:49 2018
From: cezary.dziemian at gmail.com (Cezary Dziemian)
Date: Sun, 11 Feb 2018 14:58:49 +0100
Subject: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning
In-Reply-To: <1518171320.5145.0.camel@ultimatestunts.nl>
References: <1518171320.5145.0.camel@ultimatestunts.nl>
Message-ID:

That would be a great improvement, if AMP could work this way:

1.
I would like to send 0.1 BTC, so I split this to 5 payment 0.02 BTC each + one extra 0.02 BTC payment. 2. When recipient received 6 htlcs, he is able to spend only 5 of them. If recipient receives, only 5 of them, it is still fine, and payment is success. In such scenario, single route/payment would fail, and payment as whole would still be success. Do you think that would be possible? It could greatly increase reliability of LN payments. 2018-02-09 11:15 GMT+01:00 CJP : > Can you give a use case for this? > > Usually, especially in the common case that a payment is done in > exchange for some non-cryptographic asset (e.g. physical goods), there > already is some kind of trust between payer and payee. So, if a payment > is split non-atomically into smaller transactions, and only a part > succeeds, presumably they can cooperatively figure out some way to > settle the situation. > > I spoke to people of the "interledger" project, and what they are > planning to do is to non-atomically split *every* transaction into lots > of micro-payments. In fact, they consider it unnecessary to enforce > HTLCs with scripts, because their amounts are so small(*). If one > micro-payment fails, that just makes them learn that a certain channel > is unreliable, and they'll send further payments (and even the remaining > part of the same payment) through a different route. > > CJP > > (*) not worth the extra on-blockchain fee due to the increased tx size. > > Olaoluwa Osuntokun schreef op di 06-02-2018 om 05:26 [+0000]: > > Hi Y'all, > > > > > > A common question I've seen concerning Lightning is: "I have five $2 > > channels, is it possible for me to *atomically* send $6 to fulfill a > > payment?". The answer to this question is "yes", provided that the > > receiver > > waits to pull all HTLC's until the sum matches their invoice. > > Typically, one > > assumes that the receiver will supply a payment hash, and the sender > > will > > re-use the payment hash for all streams. 
This has the downside of > > payment > > hash re-use across *multiple* payments (which can already easily be > > correlated), and also has a failure mode where if the sender fails to > > actually satisfy all the payment flows, then the receiver can still > > just > > pull the monies (and possibly not disperse a service, or w/e). > > > > > > Conner Fromknecht and I have come up with a way to achieve this over > > Lightning while (1) not re-using any payment hashes across all payment > > flows, and (2) adding a *strong* guarantee that the receiver won't be > > paid > > until *all* partial payment flows are extended. We call this scheme > > AMP > > (Atomic Multi-path Payments). It can be experimented with on Lightning > > *today* with the addition of a new feature bit to gate this new > > feature. The beauty of the scheme is that it requires no fundamental > > changes > > to the protocol as is now, as the negotiation is strictly *end-to-end* > > between sender and receiver. > > > > > > TL;DR: we repurpose some unused space in the onion per-hop payload of > > the > > onion blob to signal our protocol (and deliver some protocol-specific > > data), > > then use additive secret sharing to ensure that the receiver can't > > pull the > > payment until they have enough shares to reconstruct the original > > pre-image. > > > > > > > > > > Protocol Goals > > ============== > > 1. Atomicity: The logical transaction should either succeed or fail in > > entirety. Naturally, this implies that the receiver should not be > > unable to > > settle *any* of the partial payments, until all of them have arrived. > > > > > > 2. Avoid Payment Hash Reuse: The payment preimages validated by the > > consensus layer should be distinct for each partial payment. > > Primarily, > > this helps avoid correlation of the partial payments, and ensures that > > malicious intermediaries straddling partial payments cannot steal > > funds. > > > > > > 3. 
Order Invariance: The protocol should be forgiving to the order in > > which > > partial payments arrive at the destination, adding robustness in the > > face of > > delays or routing failures. > > > > > > 4. Non-interactive Setup: It should be possible for the sender to > > perform an > > AMP without directly coordinating with the receiving node. > > Predominantly, > > this means that the *sender* is able to determine the number of > > partial > > payments to use for a particular AMP, which makes sense since they > > will be > > the one fronting the fees for the cost of this parameter. Plus, we can > > always turn a non-interactive protocol into an interactive one for the > > purposes of invoicing. > > > > > > > > > > Protocol Benefits > > ================= > > > > > > Sending pay payments predominantly over an AMP-like protocol has > > several > > clear benefits: > > > > > > - Eliminates the constraint that a single path from sender to > > receiver > > with sufficient directional capacity. This reduces the pressure to > > have > > larger channels in order to support larger payment flows. As a > > result, > > the payment graph be very diffused, without sacrificing payment > > utility > > > > > > - Reduces strain from larger payments on individual paths, and > > allows the > > liquidity imbalances to be more diffuse. We expect this to have a > > non-negligible impact on channel longevity. This is due to the > > fact that > > with usage of AMP, payment flows are typically *smaller* meaning > > that > > each payment will unbalance a channel to a lesser degree that > > with one giant flow. > > > > > > - Potential fee savings for larger payments, contingent on there > > being a > > super-linear component to routed fees. It's possible that with > > modifications to the fee schedule, it's actually *cheaper* to send > > payments over multiple flows rather than one giant flow. 
> > > > > > - Allows for logical payments larger than the current maximum value > > of an > > individual payment. Atm we have a (temporarily) limit on the max > > payment > > size. With AMP, this can be side stepped as each flow can be up > > the max > > size, with the sum of all flows exceeding the max. > > > > > > - Given sufficient path diversity, AMPs may improve the privacy of > > LN > > Intermediaries are now unaware to how much of the total payment > > they are > > forwarding, or even if they are forwarding a partial payment at > > all. > > > > > > - Using smaller payments increases the set of possible paths a > > partial > > payment could have taken, which reduces the effectiveness of > > static > > analysis techniques involving channel capacities and the plaintext > > values being forwarded. > > > > > > > > > > Protocol Overview > > ================== > > This design can be seen as a generalization of the single, > > non-interactive > > payment scheme, that uses decoding of extra onion blobs (EOBs?) to > > encode > > extra data for the receiver. In that design, the extra data includes a > > payment preimage that the receiver can use to settle back the payment. > > EOBs > > and some method of parsing them are really the only requirement for > > this > > protocol to work. Thus, only the sender and receiver need to implement > > this > > feature in order for it to function, which can be announced using a > > feature > > bit. > > > > > > First, let's review the current format of the per-hop payload for each > > node > > described in BOLT-0004. > > > > > > ???????????????????????????????????????????????????????????? > ?????????????????????????????????????????????????????? > > ?Realm (1 byte) ?Next Addr (8 bytes)?Amount (8 bytes)?Outgoing CLTV (4 > > bytes)?Unused (12 bytes)? HMAC (32 bytes) ? > > ???????????????????????????????????????????????????????????? > ?????????????????????????????????????????????????????? 
> > ???????????????????????????????????????????????????????????? > ?????????????????????????????????????????????????????? > > ??????????????????? > > ?65 Bytes Per Hop ? > > ??????????????????? > > > > > > Currently, *each* node gets a 65-byte payload. We use this payload to > > give > > each node instructions on *how* to forward a payment. We tell each > > node: the > > realm (or chain to forward on), then next node to forward to, the > > amount to > > forward (this is where fees are extracted by forwarding out less than > > in), > > the outgoing CLTV (allows verification that the prior node didn't > > modify any > > values), and finally an HMAC over the entire thing. > > > > > > Two important points: > > 1. We have 12 bytes for each hop that are currently unpurposed and > > can be > > used by application protocols to signal new interpretation of bytes > > and > > also deliver additional encrypted+authenticated data to *each* hop. > > > > > > 2. The protocol currently has a hard limit of 20-hops. With this > > feature > > we ensure that the packet stays fixed sized during processing in > > order to > > avoid leaking positional information. Typically most payments won't > > use > > all 20 hops, as a result, we can use the remaining hops to stuff in > > *even > > more* data. > > > > > > > > > > Protocol Description > > ==================== > > The solution we propose is Atomic Multi-path Payments (AMPs). At a > > high > > level, this leverages EOBs to deliver additive shares of a base > > preimage, > > from which the payment preimages of partial payments can be derived. > > The > > receiver can only construct this value after having received all of > > the > > partial payments, satisfying the atomicity constraint. > > > > > > The basic protocol: > > > > > > Primitives > > ========== > > Let H be a CRH function. > > Let || denote concatenation. > > Let ^ denote xor. 
> > > > > > > > > > Sender Requirements > > =================== > > The parameters to the sending procedure are a random identifier ID, > > the > > number of partial payments n, and the total payment value V. Assume > > the > > sender has some way of dividing V such that V = v_1 + ? + v_n. > > > > > > To begin, the sender builds the base preimage BP, from which n partial > > preimages will be derived. Next, the sender samples n additive shares > > s_1, > > ?, s_n, and takes the sum to compute BP = s_1 ^ ? ^ s_n. > > > > > > With the base preimage created, the sender now moves on to > > constructing the > > n partial payments. For each i in [1,n], the sender deterministically > > computes the partial preimage r_i = H(BP || i), by concatenating the > > sequence number i to the base preimage and hashing the result. > > Afterwards, > > it applies H to determine the payment hash to use in the i?th partial > > payment as h_i = H(r_i). Note that that with this preimage derivation > > scheme, once the payments are pulled each pre-image is distinct and > > indistinguishable from any other. > > > > > > With all of the pieces in place, the sender initiates the i?th payment > > by > > constructing a route to the destination with value v_i and payment > > hash h_i. > > The tuple (ID, n, s_i) is included in the EOB to be opened by the > > receiver. > > > > > > In order to include the three tuple within the per-hop payload for the > > final > > destination, we repurpose the _first_ byte of the un-used padding > > bytes in > > the payload to signal version 0x01 of the AMP protocol (note this is a > > PoC > > outline, we would need to standardize signalling of these 12 bytes to > > support other protocols). Typically this byte isn't set, so the > > existence of > > this means that we're (1) using AMP, and (2) the receiver should > > consume the > > _next_ hop as well. 
> > So if the payment length is actually 5, the sender tacks on an additional
> > dummy 6th hop, encrypted with the _same_ shared secret for that hop, to
> > deliver the e2e encrypted data.
> >
> > Note, the sender can retry partial payments just as they would normal
> > payments, since they are order-invariant, and would be indistinguishable
> > from regular payments to intermediaries in the network.
> >
> >
> > Receiver Requirements
> > =====================
> >
> > Upon the arrival of each partial payment, the receiver will iteratively
> > reconstruct BP, and do some bookkeeping to figure out when to settle the
> > partial payments. During this reconstruction process, the receiver does not
> > need to be aware of the order in which the payments were sent, and in fact
> > nothing about the incoming partial payments reveals this information to the
> > receiver, though this can be learned after reconstructing BP.
> >
> > Each EOB is decoded to retrieve (ID, n, s_i), where i is the unique but
> > unknown index of the incoming partial payment. The receiver has access to a
> > persistent key-value store DB that maps ID to (n, c*, BP*), where c*
> > represents the number of partial payments received, BP* is the sum of the
> > received additive shares, and the superscript * denotes that the value is
> > being updated iteratively. c* and BP* both have initial values of 0.
> >
> > In the basic protocol, the receiver caches the first n it sees, and
> > verifies that all incoming partial payments have the same n. The receiver
> > should reject all partial payments if any EOB deviates. Next, we update
> > our persistent store with DB[ID] = (n, c* + 1, BP* ^ s_i), advancing the
> > reconstruction by one step.
> >
> > If c* + 1 < n, there are still more packets in flight, so we sit tight.
> > Otherwise, the receiver assumes all partial payments have arrived, and can
> > begin settling them back. Using the base preimage BP = BP* ^ s_i from our
> > final iteration, the receiver can re-derive all n partial preimages and
> > payment hashes, using r_i = H(BP || i) and h_i = H(r_i), simply through
> > knowledge of n and BP.
> >
> > Finally, the receiver settles back any outstanding payments that include
> > payment hash h_i using the partial preimage r_i. Each r_i will appear
> > random due to the nature of H, as will its corresponding h_i. Thus, each
> > partial payment should appear uncorrelated, and does not reveal that it is
> > part of an AMP nor the number of partial payments used.
> >
> >
> > Non-interactive to Interactive AMPs
> > ===================================
> >
> > The sender simply receives an ID and amount from the receiver in an invoice
> > before initiating the protocol. The receiver should only consider the
> > invoice settled if the total amount received in partial payments containing
> > ID matches or exceeds the amount specified in the invoice. With this
> > variant, the receiver is able to map all partial payments to a
> > pre-generated invoice statement.
> >
> >
> > Additive Shares vs Threshold-Shares
> > ===================================
> >
> > The biggest reason to use additive shares seems to be atomicity. Threshold
> > shares open the door to some partial payments being settled, even if others
> > are left in flight. We haven't yet come up with a good reason for using
> > threshold schemes, but there seem to be plenty against it.
> >
> > Reconstruction of additive shares can be done iteratively, and is a win for
> > the storage and computation requirements on the receiving end.
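[Editor's sketch] The receiver's iterative reconstruction described above, as a minimal sketch: SHA-256 stands in for H, shares are assumed 32 bytes, i is encoded as a single byte, and an in-memory dict stands in for the persistent DB — all of which are illustrative assumptions, not part of the proposal.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

DB = {}  # ID -> (n, c*, BP*): expected count, received count, running xor

def on_partial_payment(ID: bytes, n: int, s_i: bytes):
    """Fold one partial payment's (ID, n, s_i) into the running state.

    Returns None while payments are still in flight; once all n shares
    have arrived, returns the list of (r_i, h_i) pairs used to settle."""
    n0, c, bp = DB.get(ID, (n, 0, bytes(32)))
    if n0 != n:
        raise ValueError("EOB deviates: inconsistent n -- reject all")
    c, bp = c + 1, xor(bp, s_i)
    DB[ID] = (n, c, bp)
    if c < n:
        return None                    # sit tight
    settle = []
    for i in range(1, n + 1):
        r_i = H(bp + bytes([i]))       # r_i = H(BP || i), i as one byte here
        settle.append((r_i, H(r_i)))   # h_i = H(r_i)
    return settle
```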
> > If the sender decides to use fewer than n partial payments, the remaining
> > shares could be included in the EOB of the final partial payment to allow
> > the receiver to reconstruct sooner. The receiver could also optimistically
> > do partial reconstruction on this last aggregate value.
> >
> >
> > Adaptive AMPs
> > =============
> >
> > The sender may not always be aware of how many partial payments they wish
> > to send at the time of the first partial payment, at which point the
> > simplified protocol would require n to be chosen. To accommodate, the above
> > scheme can be adapted to handle a dynamically chosen n by iteratively
> > constructing the shared secrets as follows.
> >
> > Starting with a base preimage BP, the key trick is that the sender
> > remembers the difference between the base preimage and the sum of all
> > partial preimages used so far. The relation is described using the
> > following equations:
> >
> >     X_0 = 0
> >     X_i = X_{i-1} ^ s_i
> >     X_n = BP ^ X_{n-1}
> >
> > where if n=1, X_1 = BP, implying that this is in fact a generalization of
> > the single, non-interactive payment scheme mentioned above. For i=1, ...,
> > n-1, the sender sends s_i in the EOB, and X_n for the n-th share.
> >
> > Iteratively reconstructing s_1 ^ ... ^ s_{n-1} ^ X_n = BP allows the
> > receiver to compute all relevant r_i = H(BP || i) and h_i = H(r_i). Lastly,
> > the final number of partial payments n could be signaled in the final EOB,
> > which would also serve as a sentinel value for signaling completion. In
> > response to DoS vectors stemming from unknown values of n, implementations
> > could consider advertising a maximum value for n, or adopting some sort of
> > framing pattern for conveying that more partial payments are on the way.
> >
> > We can further modify our usage of the per-hop payloads to send
> > (H(BP), s_i) to consume most of the EOB sent from sender to receiver. In
> > this scenario, we'd repurpose the 11 bytes *after* our signalling byte in
> > the unused byte section to store the payment ID (which should be unique for
> > each payment). In the case of a non-interactive payment, this will be
> > unused. For interactive payments, this will be the ID within the invoice.
> > To deliver this slimmer 2-tuple, we'll use 32 bytes for the hash of the BP,
> > and 32 bytes for the partial preimage share, leaving an unused byte in the
> > payload.
> >
> >
> > Cross-Chain AMPs
> > ================
> >
> > AMPs can be used to pay a receiver in multiple currencies
> > atomically...which is pretty cool :D
> >
> >
> > Open Research Questions
> > =======================
> >
> > The above is a protocol sketch to achieve atomic multi-path payments over
> > Lightning. The details concerning onion blob usage serve as a template that
> > future protocols can draw upon in order to deliver additional data to *any*
> > hop in the route. However, there are still a few open questions before
> > something like this can be feasibly deployed.
> >
> > 1. How does the sender decide how many chunked payments to send, and the
> >    size of each payment?
> >
> >    - Upon closer examination, this seems to overlap with the task of
> >      congestion control within TCP. The sender may be able to utilize
> >      TCP-inspired heuristics to gauge: (1) how large the initial payment
> >      should be and (2) how many subsequent payments may be required. Note
> >      that if the first payment succeeds, then the exchange is over in a
> >      single round.
> >
> > 2. How can AMP and HORNET be composed?
> >
> >    - If we eventually integrate HORNET, then a distinct communications
> >      session can be established to allow the sender+receiver to exchange
> >      up-to-date partial payment information. This may allow the sender to
> >      more accurately size each partial payment.
> >
> > 3. Can the sender's initial strategy be governed by an instance of the
> >    push-relabel max flow algo?
> >
> > 4. How does this mesh with the current max HTLC limit on a commitment?
> >
> >    - ATM, we have a max limit on the number of active HTLCs on a
> >      particular commitment transaction. We do this as otherwise it's
> >      possible that the transaction becomes too large, and exceeds
> >      standardness w.r.t. transaction size. In a world where most payments
> >      use an AMP-like protocol, then overall, at any given instant, there
> >      will be several pending HTLCs on commitments network-wide.
> >
> >      This may incentivize nodes to open more channels in order to support
> >      the increased commitment space utilization.
> >
> >
> > Conclusion
> > ==========
> >
> > We've presented a design outline of how to integrate atomic multi-path
> > payments (AMP) into Lightning. The existence of such a construct allows a
> > sender to atomically split a payment flow amongst several individual
> > payment flows. As a result, larger channels aren't as important, as it's
> > possible to utilize one's total outbound payment bandwidth across several
> > channels. Additionally, in order to support the increased load, internal
> > routing nodes are incentivized to have more active channels. The existence
> > of AMP-like payments may also increase the longevity of channels, as
> > there'll be smaller, more numerous payment flows, making it unlikely that
> > a single payment coming across unbalances a channel entirely.
> > We've also shown how one can utilize the current onion packet format to
> > deliver additional data from a sender to receiver that's still e2e
> > authenticated.
> >
> > -- Conner && Laolu
> >
> > _______________________________________________
> > Lightning-dev mailing list
> > Lightning-dev at lists.linuxfoundation.org
> > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
> >
> _______________________________________________
> Lightning-dev mailing list
> Lightning-dev at lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>

From rusty at rustcorp.com.au Mon Feb 12 01:45:55 2018
From: rusty at rustcorp.com.au (Rusty Russell)
Date: Mon, 12 Feb 2018 12:15:55 +1030
Subject: [Lightning-dev] Improving the initial gossip sync
In-Reply-To: <87shaawft5.fsf@gmail.com>
References: <874lmvy4gh.fsf@gmail.com> <87tvurym13.fsf@rustcorp.com.au>
 <87shaawft5.fsf@gmail.com>
Message-ID: <878tbzugj0.fsf@rustcorp.com.au>

Christian Decker writes:
> Rusty Russell writes:
>> Finally catching up.  I prefer the simplicity of the timestamp
>> mechanism, with a more ambitious mechanism TBA.
>
> Fabrice and I had a short chat a few days ago and decided that we'll
> simulate both approaches and see what consumes less bandwidth. With
> zombie channels and the chances for missing channels during a weak form
> of synchronization, it's not that clear to us which one has the better
> tradeoff. With some numbers behind it it may become easier to decide :-)

Maybe; I think we'd be best off with an IBLT approach similar to
Fabrice's proposal.  An IBLT is better than a simple hash, since if your
results are similar you can just extract the differences, and they're
easier to maintain.  Even easier if we make the boundaries static rather
than now-relative.
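[Editor's sketch] For illustration, a toy invertible Bloom lookup table along the lines Rusty sketches — cell count, hash count, and key encoding are arbitrary here, not a wire-format proposal:

```python
import hashlib

M, K = 128, 3  # toy parameters: number of cells and hash functions

def _idx(key: int, j: int) -> int:
    """Cell index of `key` under the j-th hash function."""
    return int.from_bytes(hashlib.sha256(b"%d:%d" % (j, key)).digest()[:4], "big") % M

def _chk(key: int) -> int:
    """Per-key checksum used to recognise 'pure' cells."""
    return int.from_bytes(hashlib.sha256(b"c:%d" % key).digest()[:4], "big")

class IBLT:
    def __init__(self):
        self.cells = [[0, 0, 0] for _ in range(M)]  # [count, keySum, checkSum]

    def insert(self, key: int) -> None:
        for j in range(K):
            c = self.cells[_idx(key, j)]
            c[0] += 1
            c[1] ^= key
            c[2] ^= _chk(key)

    def subtract(self, other: "IBLT") -> "IBLT":
        """Cell-wise difference; keys present on both sides cancel out."""
        d = IBLT()
        for i in range(M):
            a, b = self.cells[i], other.cells[i]
            d.cells[i] = [a[0] - b[0], a[1] ^ b[1], a[2] ^ b[2]]
        return d

    def decode(self):
        """Peel pure cells; returns (keys only in self, keys only in other)."""
        mine, theirs = set(), set()
        progress = True
        while progress:
            progress = False
            for c in self.cells:
                if c[0] in (1, -1) and _chk(c[1]) == c[2]:
                    key, sign = c[1], c[0]
                    (mine if sign == 1 else theirs).add(key)
                    for j in range(K):  # peel the key out of all its cells
                        cc = self.cells[_idx(key, j)]
                        cc[0] -= sign
                        cc[1] ^= key
                        cc[2] ^= _chk(key)
                    progress = True
        return mine, theirs
```

Each peer would insert (say) its short channel IDs, exchange the compact tables, subtract, and decode the symmetric difference — which is why similar gossip views cost little bandwidth.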
For node_announce and channel_update you'd probably want separate IBLTs
(perhaps, though not necessarily, as a separate RTT).

Note that this approach fits really well as a complement to the
timestamp approach: you'd use this for older pre-timestamp, where you're
likely to have a similar idea of channels.

>> Deployment suggestions:
>>
>> 1. This should be a feature bit pair.  As usual, even == 'support this or
>>    disconnect', and odd == 'ok even if you don't understand'.
>
> If we add the timestamp to the end of the `init` message, instead of
> introducing a new message altogether, we are forced to use the required
> bit, otherwise we just made any future field appended to the `init`
> message unparseable to non-supporting nodes. Let's say we add another
> field to it later, that the peer supports, but it follows the timestamp
> which the peer does not. The peer doesn't know how many bytes to skip
> (if any) for the timestamp bit he doesn't understand, to get to the
> bytes he does know how to parse. I'm slowly getting to like the extra
> message more, since it also allows a number of cute tricks later.

This, of course, is the nature of all appendings.  You can't understand
feature N+1 without understanding feature N, if they both append to the
same message.  You don't have to *support* feature N, of course.

>> 2. This `timestamp_routing_sync`? feature overrides `initial_routing_sync`.
>>    That lets you decide what old nodes do, using the older
>>    `initial_routing_sync` option.  Similarly, a future `fancy_sync` would
>>    override `timestamp_routing_sync`.
>
> So you'd set both bits, and if the peer knows `timestamp_routing_sync`
> that then force-sets the `initial_routing_sync`? Sounds ok, if we allow
> optional implementations, even though I'd like to avoid feature
> interactions as much as possible.

If we don't allow optional implementations we're breaking the spec.
And we're not going to do that*

[* Yeah, OK, we'll eventually do that.
But we'll do it when we're pretty sure that ~0 users would break,
because they'd be ancient ]

>> 3. We can append an optional 4 byte `routing_sync_timestamp` field to
>>    `init` without issues, since all lengths in there are explicit.  If you
>>    don't offer the `timestamp_sync` feature, this Must Be Zero (for
>>    appending more stuff in future).
>
> That'd still require the peer to know that it has to skip 4 bytes to get
> any future fields, which is why I am convinced that either forcing it to
> be mandatory, or adding a new message is the better way to go, even if
> now everybody implements it correctly.

This is simply how we upgrade.  See `open_channel` for how this is
already done, for example; in fact, we originally had two different
upgrades (but we broke spec instead) and they used exactly this
technique.

A separate message here is supremely awkward, too.

>> Now, as to the proposal specifics.
>>
>> I dislike the re-transmission of all old channel_announcement and
>> node_announcement messages, just because there's been a recent
>> channel_update.  Simpler to just say 'send anything >=
>> `routing_sync_timestamp`'.
>
> I'm afraid we can't really omit the `channel_announcement` since a
> `channel_update` that isn't preceded by a `channel_announcement` is
> invalid and will be dropped by peers (especially because the
> `channel_update` doesn't contain the necessary information for
> validation).

OTOH this is a rare corner case which will eventually be fixed by weekly
channel_announce retransmission.  In particular, the receiver should
have already seen the channel_announce, since it preceded the timestamp
they asked for.  Presumably IRL you'd ask for a timestamp sometime
before you were last disconnected, say 30 minutes.

"The perfect is the enemy of the good".

>> Background: c-lightning internally keeps a tree of gossip in the order
>> we received them, keeping a 'current' pointer for each peer.
>> This is very efficient (though we don't remember if a peer sent us a
>> gossip msg already, so uses twice the bandwidth it could).
>
> We can solve that by keeping a filter of the messages we received from
> the peer; it's more of an optimization than anything, other than the
> bandwidth cost, it doesn't hurt.

Yes, it's on the TODO somewhere... we should do this!

>> But this isn't *quite* the same as timestamp order, so we can't just set
>> the 'current' pointer based on the first entry >=
>> `routing_sync_timestamp`; we need to actively filter.  This is still a
>> simple traverse, however, skipping over any entry less than
>> routing_sync_timestamp.
>>
>> OTOH, if we need to retransmit announcements, when do we stop
>> retransmitting them?  If a new channel_update comes in during this time,
>> are we still to dump the announcements?  Do we have to remember which
>> ones we've sent to each peer?
>
> That's more of an implementation detail. In c-lightning we can just
> remember the index at which the initial sync started, and send
> announcements along until the index is larger than the initial sync
> index.

True.  It is an implementation detail which is critical to saving
bandwidth, though.

> A more general approach would be to have 2 timestamps, one highwater and
> one lowwater mark. Anything in between these marks will be forwarded
> together with all associated announcements (node / channel); anything
> newer than that will only forward the update. The two timestamps
> approach, combined with a new message, would also allow us to send
> multiple `timestamp_routing_sync` messages, e.g., first sync the last
> hour, then the last day, then the last week, etc. It gives the syncing
> node control over what time window to send, inverting the current
> initial sync.

That would fit neatly with the more complicated bucketing approaches:
you'd use this technique to ask for the entire bucket if the SHA
mismatched/IBLT failed.

Cheers,
Rusty.
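[Editor's sketch] A rough sketch of the filtering behaviour debated above — send everything at or after `routing_sync_timestamp`, retransmitting the owning `channel_announcement` ahead of an otherwise-orphaned `channel_update`. The record type is hypothetical, not c-lightning's actual data structure:

```python
from typing import List, NamedTuple

class Gossip(NamedTuple):
    kind: str        # "channel_announcement", "channel_update" or "node_announcement"
    scid: str        # short channel id ("" for node_announcement)
    timestamp: int

def sync_messages(store: List[Gossip], sync_ts: int) -> List[Gossip]:
    """Replay gossip with timestamp >= sync_ts, retransmitting the (possibly
    older) channel_announcement before the first channel_update for a
    channel, since an update without its announcement would be dropped."""
    ann = {g.scid: g for g in store if g.kind == "channel_announcement"}
    out, sent = [], set()
    for g in store:              # store is kept in order of receipt
        if g.timestamp < sync_ts:
            continue
        if g.kind == "channel_update" and g.scid not in sent:
            a = ann.get(g.scid)
            if a is not None and a.timestamp < sync_ts:
                out.append(a)    # old announcement needed for validation
            sent.add(g.scid)
        out.append(g)
        if g.kind == "channel_announcement":
            sent.add(g.scid)
    return out
```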
From ZmnSCPxj at protonmail.com Mon Feb 12 03:03:37 2018
From: ZmnSCPxj at protonmail.com (ZmnSCPxj)
Date: Sun, 11 Feb 2018 22:03:37 -0500
Subject: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning
In-Reply-To:
References: <1518171320.5145.0.camel@ultimatestunts.nl>
Message-ID:

Good morning Cezary,

> That would be great improvement, if AMP could work this way:
>
> 1. I would like to send 0.1 BTC, so I split this to 5 payments 0.02 BTC
>    each + one extra 0.02 BTC payment.
> 2. When recipient received 6 htlcs, he is able to spend only 5 of them.
>    If recipient receives only 5 of them, it is still fine, and payment
>    is success.
>
> In such scenario, single route/payment would fail, and payment as whole
> would still be success. Do you think that would be possible? It could
> greatly increase reliability of LN payments.

I will leave it to better mathematicians to answer your direct question,
but my intuition suggests it is not possible as stated.

However, let me propose an alternative AMP method instead.

------

Roughly, we want to proceed this way:

1. When paying:
   1.1. Try to pay.
   1.2. If it fails, split it into two smaller payments and recurse into 1.

Now we should ensure that the receiver can only receive if it has
received all payments, and once it has received all payments, it can
claim all payments.

So let me first introduce the below dual:

* A pseudorandom number generator can be represented by its algorithm
  and the seed. Alternatively, it can be represented by a stream of
  numbers.

Now, a stream of numbers has no end, but it does have a start (i.e. the
first random number generated by the PRNG from the seed). It is possible
to "split" a stream into two streams, by taking the 0th, 2nd, 4th...
numbers in one stream, and taking the 1st, 3rd, 5th... numbers in the
other stream. Each such "split" stream can itself be further split into
two streams. Split streams can be re-"merged" by interleaving their
members to yield the original pre-split stream.
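[Editor's sketch] The even/odd interleaving just described, sketched with Python generators (illustrative only):

```python
import itertools

def split_stream(s):
    """Split one stream into its even- and odd-indexed substreams."""
    a, b = itertools.tee(s)
    return itertools.islice(a, 0, None, 2), itertools.islice(b, 1, None, 2)

def merge_streams(a, b):
    """Re-interleave two split streams back into the original stream."""
    for x, y in zip(a, b):
        yield x
        yield y
```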
Now, we also want to be able to split a random seed into two seeds. This
splitting need not correspond to the stream-split (i.e. the split seeds
will NOT generate the split streams); we only need to split seeds to
prevent the receiver from claiming partial amounts. This can be done by
using another random source to generate a new "seed", and XORing it with
the real seed. The split "halves" are then the random number, and the
seed XOR the random number; the result is two apparently random numbers
which, when XORed together, generate the original seed.

Let us now sketch our algorithm:

1.   def pay(seed, stream, dest, amount):
1.1.     try {
             r = route(dest, amount, randomstuff);
             offer_htlc(H(stream[0]), r, seed, stream);
1.2.     } catch(PaymentFailure) {
             sd1, sd2 = split_seed(seed);
             sr1, sr2 = split_stream(stream);
             pay(sd1, sr1, dest, amount / 2);
             pay(sd2, sr2, dest, amount / 2);
         }

Now notice that the hash we use is H(stream[0]). That is, the first item
in the stream of random numbers. Thus our streams do not actually need
to give anything more than the first number in a stream. We can
represent a "split" stream simply by the index into the original stream.
For example, if we have:

    s = original stream
    sl, sr = split_stream(s)
    sll, slr = split_stream(sl)

Then s[0] and sl[0] and sll[0] are simply index 0 into the original
stream, sr[0] is index 1, and slr[0] is index 2.

We can thus represent streams and their splits by the tuple
(seed, index, depth), where depth indicates how many splits the stream
has been through. So, for the below:

    s = (seed, 0, 0)
    sl, sr = split_stream(s) = (seed, 0, 1), (seed, 1, 1)
    sll, slr = split_stream(sl) = (seed, 0, 2), (seed, 2, 2)

    split_stream( (seed, index, depth) )
        = (seed, index, depth + 1), (seed, index + 2**depth, depth + 1)

Then, for any stream s whose RNG algorithm is PRNG:

    s[0] = (seed, index, _)[0] = PRNG(seed)[index]

Let us now consider how our payment might proceed.

1.
First, we generate a random seed, and call
   pay(seed, (seed, 0, 0), dest, amount).

2. Let us suppose that payment fails for the entire amount. Split the
   amount into two:

   2.1. In one branch we have pay(X, (seed, 0, 1), dest, amount / 2).
        X is a new random number.

   2.2. In the other branch we have pay(seed ^ X, (seed, 1, 1), dest,
        amount / 2). X is the same number as branch 2.1.

        2.2.1. Suppose this payment fails. Split it again into two
               payments:

               2.2.1.1. In one sub-branch we have
                        pay(Y, (seed, 1, 2), dest, amount / 4).

               2.2.1.2. In the other sub-branch we have
                        pay(seed ^ X ^ Y, (seed, 3, 2), dest, amount / 4).

The receiver receives the branches 2.1, 2.2.1.1, and 2.2.1.2, which
provide the seeds:

    2.1.     => X
    2.2.1.1. => Y
    2.2.1.2. => seed ^ X ^ Y

XORing all of the above provides X ^ Y ^ seed ^ X ^ Y = seed. The
receiver can claim branch 2.1. by using PRNG(seed)[0], can claim branch
2.2.1.1 using PRNG(seed)[1], and branch 2.2.1.2 using PRNG(seed)[3].

Thus the sender needs only to send the split seed (say 32 bytes) and the
index (say 1 byte for up to 8-level splitting into up to 256 payments).
The receiver gathers each split seed, XORs them all together to get the
original PRNG seed, and runs the PRNG the appropriate number of times to
get the preimages of each payment.

(pragmatically we also need some kind of payment ID to differentiate
different logical payments from the same sender, and to differentiate it
from non-AMP)

The receiver cannot claim partial payments as it cannot determine the
original seed until all branches of the payment reach it. Once it has
received all branches of the payment, however, it can determine the seed
and the preimage of each payment; once it does so it has incentive to
get all branches, yielding atomicity.

Regards,
ZmnSCPxj

> 2018-02-09 11:15 GMT+01:00 CJP :
>
>> Can you give a use case for this?
>>
>> Usually, especially in the common case that a payment is done in
>> exchange for some non-cryptographic asset (e.g.
>> physical goods), there already is some kind of trust between payer and
>> payee. So, if a payment is split non-atomically into smaller
>> transactions, and only a part succeeds, presumably they can
>> cooperatively figure out some way to settle the situation.
>>
>> I spoke to people of the "interledger" project, and what they are
>> planning to do is to non-atomically split *every* transaction into lots
>> of micro-payments. In fact, they consider it unnecessary to enforce
>> HTLCs with scripts, because their amounts are so small(*). If one
>> micro-payment fails, that just makes them learn that a certain channel
>> is unreliable, and they'll send further payments (and even the
>> remaining part of the same payment) through a different route.
>>
>> CJP
>>
>> (*) not worth the extra on-blockchain fee due to the increased tx size.
>>
>> Olaoluwa Osuntokun schreef op di 06-02-2018 om 05:26 [+0000]:
>>
>>> Hi Y'all,
>>>
>>> A common question I've seen concerning Lightning is: "I have five $2
>>> channels, is it possible for me to *atomically* send $6 to fulfill a
>>> payment?". The answer to this question is "yes", provided that the
>>> receiver waits to pull all HTLCs until the sum matches their invoice.
>>> Typically, one assumes that the receiver will supply a payment hash,
>>> and the sender will re-use the payment hash for all streams. This has
>>> the downside of payment hash re-use across *multiple* payments (which
>>> can already easily be correlated), and also has a failure mode where
>>> if the sender fails to actually satisfy all the payment flows, then
>>> the receiver can still just pull the monies (and possibly not disperse
>>> a service, or w/e).
>>>
>>> Conner Fromknecht and I have come up with a way to achieve this over
>>> Lightning while (1) not re-using any payment hashes across all payment
>>> flows, and (2) adding a *strong* guarantee that the receiver won't be
>>> paid until *all* partial payment flows are extended. We call this
>>> scheme AMP (Atomic Multi-path Payments). It can be experimented with
>>> on Lightning *today* with the addition of a new feature bit to gate
>>> this new feature. The beauty of the scheme is that it requires no
>>> fundamental changes to the protocol as is now, as the negotiation is
>>> strictly *end-to-end* between sender and receiver.
>>>
>>> TL;DR: we repurpose some unused space in the per-hop payload of the
>>> onion blob to signal our protocol (and deliver some protocol-specific
>>> data), then use additive secret sharing to ensure that the receiver
>>> can't pull the payment until they have enough shares to reconstruct
>>> the original pre-image.
>>>
>>>
>>> Protocol Goals
>>> ==============
>>> 1. Atomicity: The logical transaction should either succeed or fail in
>>>    entirety. Naturally, this implies that the receiver should not be
>>>    able to settle *any* of the partial payments, until all of them
>>>    have arrived.
>>>
>>> 2. Avoid Payment Hash Reuse: The payment preimages validated by the
>>>    consensus layer should be distinct for each partial payment.
>>>    Primarily, this helps avoid correlation of the partial payments,
>>>    and ensures that malicious intermediaries straddling partial
>>>    payments cannot steal funds.
>>>
>>> 3. Order Invariance: The protocol should be forgiving to the order in
>>>    which partial payments arrive at the destination, adding robustness
>>>    in the face of delays or routing failures.
>>>
>>> 4. Non-interactive Setup: It should be possible for the sender to
>>>    perform an AMP without directly coordinating with the receiving
>>>    node.
>>> Predominantly, this means that the *sender* is able to determine the
>>> number of partial payments to use for a particular AMP, which makes
>>> sense since they will be the one fronting the fees for the cost of
>>> this parameter. Plus, we can always turn a non-interactive protocol
>>> into an interactive one for the purposes of invoicing.
>>>
>>>
>>> Protocol Benefits
>>> =================
>>>
>>> Sending payments predominantly over an AMP-like protocol has several
>>> clear benefits:
>>>
>>>   - Eliminates the constraint that there be a single path from sender
>>>     to receiver with sufficient directional capacity. This reduces the
>>>     pressure to have larger channels in order to support larger
>>>     payment flows. As a result, the payment graph can be very diffuse,
>>>     without sacrificing payment utility.
>>>
>>>   - Reduces strain from larger payments on individual paths, and
>>>     allows the liquidity imbalances to be more diffuse. We expect this
>>>     to have a non-negligible impact on channel longevity. This is due
>>>     to the fact that with usage of AMP, payment flows are typically
>>>     *smaller*, meaning that each payment will unbalance a channel to a
>>>     lesser degree than with one giant flow.
>>>
>>>   - Potential fee savings for larger payments, contingent on there
>>>     being a super-linear component to routed fees. It's possible that
>>>     with modifications to the fee schedule, it's actually *cheaper* to
>>>     send payments over multiple flows rather than one giant flow.
>>>
>>>   - Allows for logical payments larger than the current maximum value
>>>     of an individual payment. Atm we have a (temporary) limit on the
>>>     max payment size. With AMP, this can be side-stepped as each flow
>>>     can be up to the max size, with the sum of all flows exceeding the
>>>     max.
>>>
>>>   - Given sufficient path diversity, AMPs may improve the privacy of
>>>     LN. Intermediaries are now unaware of how much of the total
>>>     payment they are forwarding, or even if they are forwarding a
>>>     partial payment at all.
>>>
>>>   - Using smaller payments increases the set of possible paths a
>>>     partial payment could have taken, which reduces the effectiveness
>>>     of static analysis techniques involving channel capacities and the
>>>     plaintext values being forwarded.
>>>
>>>
>>> Protocol Overview
>>> ==================
>>> This design can be seen as a generalization of the single,
>>> non-interactive payment scheme, that uses decoding of extra onion
>>> blobs (EOBs?) to encode extra data for the receiver. In that design,
>>> the extra data includes a payment preimage that the receiver can use
>>> to settle back the payment. EOBs and some method of parsing them are
>>> really the only requirement for this protocol to work. Thus, only the
>>> sender and receiver need to implement this feature in order for it to
>>> function, which can be announced using a feature bit.
>>>
>>> First, let's review the current format of the per-hop payload for each
>>> node described in BOLT-0004.
>>>
>>> +---------------+-------------------+----------------+------------------------+-----------------+-----------------+
>>> |Realm (1 byte) |Next Addr (8 bytes)|Amount (8 bytes)|Outgoing CLTV (4 bytes) |Unused (12 bytes)| HMAC (32 bytes) |
>>> +---------------+-------------------+----------------+------------------------+-----------------+-----------------+
>>>
>>> +-----------------+
>>> |65 Bytes Per Hop |
>>> +-----------------+
>>>
>>> Currently, *each* node gets a 65-byte payload. We use this payload to
>>> give each node instructions on *how* to forward a payment.
>>> We tell each node: the realm (or chain to forward on), the next node
>>> to forward to, the amount to forward (this is where fees are extracted
>>> by forwarding out less than in), the outgoing CLTV (allows
>>> verification that the prior node didn't modify any values), and
>>> finally an HMAC over the entire thing.
>>>
>>> Two important points:
>>> 1. We have 12 bytes for each hop that are currently unpurposed and can
>>>    be used by application protocols to signal new interpretation of
>>>    bytes and also deliver additional encrypted+authenticated data to
>>>    *each* hop.
>>>
>>> 2. The protocol currently has a hard limit of 20 hops. With this
>>>    feature we ensure that the packet stays fixed sized during
>>>    processing in order to avoid leaking positional information.
>>>    Typically most payments won't use all 20 hops; as a result, we can
>>>    use the remaining hops to stuff in *even more* data.
>>>
>>>
>>> Protocol Description
>>> ====================
>>> The solution we propose is Atomic Multi-path Payments (AMPs). At a
>>> high level, this leverages EOBs to deliver additive shares of a base
>>> preimage, from which the payment preimages of partial payments can be
>>> derived. The receiver can only construct this value after having
>>> received all of the partial payments, satisfying the atomicity
>>> constraint.
>>>
>>> The basic protocol:
>>>
>>> Primitives
>>> ==========
>>> Let H be a CRH function.
>>> Let || denote concatenation.
>>> Let ^ denote xor.
>>>
>>>
>>> Sender Requirements
>>> ===================
>>> The parameters to the sending procedure are a random identifier ID,
>>> the number of partial payments n, and the total payment value V.
>>> Assume the sender has some way of dividing V such that
>>> V = v_1 + ... + v_n.
>>>
>>> To begin, the sender builds the base preimage BP, from which n partial
>>> preimages will be derived.
Next, the sender samples n additive shares
>>> s_1, ..., s_n, and takes the xor-sum to compute BP = s_1 ^ ... ^ s_n.
>>>
>>> With the base preimage created, the sender now moves on to
>>> constructing the n partial payments. For each i in [1,n], the sender
>>> deterministically computes the partial preimage r_i = H(BP || i), by
>>> concatenating the sequence number i to the base preimage and hashing
>>> the result. Afterwards, it applies H to determine the payment hash to
>>> use in the i-th partial payment as h_i = H(r_i). Note that with this
>>> preimage derivation scheme, once the payments are pulled each
>>> pre-image is distinct and indistinguishable from any other.
>>>
>>> With all of the pieces in place, the sender initiates the i-th payment
>>> by constructing a route to the destination with value v_i and payment
>>> hash h_i. The tuple (ID, n, s_i) is included in the EOB to be opened
>>> by the receiver.
>>>
>>> In order to include the three-tuple within the per-hop payload for the
>>> final destination, we repurpose the _first_ byte of the un-used
>>> padding bytes in the payload to signal version 0x01 of the AMP
>>> protocol (note this is a PoC outline, we would need to standardize
>>> signalling of these 12 bytes to support other protocols). Typically
>>> this byte isn't set, so the existence of this means that we're (1)
>>> using AMP, and (2) the receiver should consume the _next_ hop as well.
>>> So if the payment path length is actually 5, the sender tacks on an
>>> additional dummy 6th hop, encrypted with the _same_ shared secret for
>>> that hop, to deliver the e2e encrypted data.
>>>
>>> Note, the sender can retry partial payments just as they would normal
>>> payments, since they are order invariant, and they would be
>>> indistinguishable from regular payments to intermediaries in the
>>> network.
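The sender-side derivation described in the quoted proposal can be sketched in a few lines of Python. This is only an illustration of the scheme, not the proposed wire format: H is assumed here to be SHA-256 and the index i is assumed to be encoded as a 4-byte big-endian integer, neither of which the proposal pins down.

```python
import hashlib
import os

def H(data: bytes) -> bytes:
    """The CRH function; SHA-256 is an assumption, the text only says 'a CRH'."""
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_partial_payments(n: int):
    """Sample n additive shares and derive each partial payment's hash."""
    shares = [os.urandom(32) for _ in range(n)]
    bp = shares[0]
    for s in shares[1:]:
        bp = xor(bp, s)                      # BP = s_1 ^ ... ^ s_n
    partials = []
    for i in range(1, n + 1):
        r_i = H(bp + i.to_bytes(4, 'big'))   # partial preimage r_i = H(BP || i)
        h_i = H(r_i)                         # payment hash h_i = H(r_i)
        partials.append((shares[i - 1], h_i))
    return bp, partials
```

The sender would then carry (ID, n, shares[i-1]) in the i-th EOB and use the corresponding h_i as the payment hash on the i-th route.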
>>> Receiver Requirements
>>> =====================
>>> Upon the arrival of each partial payment, the receiver will
>>> iteratively reconstruct BP, and do some bookkeeping to figure out when
>>> to settle the partial payments. During this reconstruction process,
>>> the receiver does not need to be aware of the order in which the
>>> payments were sent, and in fact nothing about the incoming partial
>>> payments reveals this information to the receiver, though this can be
>>> learned after reconstructing BP.
>>>
>>> Each EOB is decoded to retrieve (ID, n, s_i), where i is the unique
>>> but unknown index of the incoming partial payment. The receiver has
>>> access to a persistent key-value store DB that maps ID to
>>> (n, c*, BP*), where c* represents the number of partial payments
>>> received, BP* is the sum of the received additive shares, and the
>>> superscript * denotes that the value is being updated iteratively.
>>> c* and BP* both have initial values of 0.
>>>
>>> In the basic protocol, the receiver caches the first n it sees, and
>>> verifies that all incoming partial payments have the same n. The
>>> receiver should reject all partial payments if any EOB deviates. Next,
>>> we update our persistent store with DB[ID] = (n, c* + 1, BP* ^ s_i),
>>> advancing the reconstruction by one step.
>>>
>>> If c* + 1 < n, there are still more packets in flight, so we sit
>>> tight. Otherwise, the receiver assumes all partial payments have
>>> arrived, and can begin settling them back. Using the base preimage
>>> BP = BP* ^ s_i from our final iteration, the receiver can re-derive
>>> all n partial preimages and payment hashes, using r_i = H(BP || i) and
>>> h_i = H(r_i), simply through knowledge of n and BP.
>>>
>>> Finally, the receiver settles back any outstanding payments that
>>> include payment hash h_i using the partial preimage r_i.
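The receiver's bookkeeping loop can be sketched similarly. Again this is illustrative only: H is assumed to be SHA-256, the index i is assumed to be 4 bytes big-endian, and a plain dict stands in for the persistent key-value store DB.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

db = {}  # ID -> (n, c*, BP*): cached n, count, and running xor of shares

def on_partial_payment(payment_id: bytes, n: int, s_i: bytes):
    """Fold one decoded EOB tuple (ID, n, s_i) into the reconstruction.

    Returns None while c* < n (more packets in flight), or the list of
    (h_i, r_i) pairs used to settle the outstanding HTLCs once all n
    shares have arrived."""
    stored_n, c, bp_acc = db.get(payment_id, (n, 0, bytes(32)))
    if stored_n != n:
        raise ValueError("n deviates from cached value; reject all partials")
    c, bp_acc = c + 1, xor(bp_acc, s_i)      # DB[ID] = (n, c*+1, BP* ^ s_i)
    db[payment_id] = (n, c, bp_acc)
    if c < n:
        return None
    bp = bp_acc                              # BP = BP* ^ s_i, final iteration
    pairs = []
    for i in range(1, n + 1):
        r_i = H(bp + i.to_bytes(4, 'big'))
        pairs.append((H(r_i), r_i))          # (payment hash, settling preimage)
    return pairs
```

Note that the shares can arrive in any order; the running xor is commutative, which is what makes the receiver order-agnostic.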
Each r_i will appear random
>>> due to the nature of H, as will its corresponding h_i. Thus, each
>>> partial payment should appear uncorrelated, and does not reveal that
>>> it is part of an AMP nor the number of partial payments used.
>>>
>>> Non-interactive to Interactive AMPs
>>> ===================================
>>> The sender simply receives an ID and amount from the receiver in an
>>> invoice before initiating the protocol. The receiver should only
>>> consider the invoice settled if the total amount received in partial
>>> payments containing ID matches or exceeds the amount specified in the
>>> invoice. With this variant, the receiver is able to map all partial
>>> payments to a pre-generated invoice statement.
>>>
>>> Additive Shares vs Threshold-Shares
>>> ===================================
>>> The biggest reason to use additive shares seems to be atomicity.
>>> Threshold shares open the door to some partial payments being settled,
>>> even if others are left in flight. We haven't yet come up with a good
>>> reason for using threshold schemes, but there seem to be plenty
>>> against it.
>>>
>>> Reconstruction of additive shares can be done iteratively, and is a
>>> win for the storage and computation requirements on the receiving end.
>>> If the sender decides to use fewer than n partial payments, the
>>> remaining shares could be included in the EOB of the final partial
>>> payment to allow the receiver to reconstruct sooner. The receiver
>>> could also optimistically do partial reconstruction on this last
>>> aggregate value.
>>>
>>> Adaptive AMPs
>>> =============
>>> The sender may not always be aware of how many partial payments they
>>> wish to send at the time of the first partial payment, at which point
>>> the simplified protocol would require n to be chosen.
To accommodate, the above scheme can
>>> be adapted to handle a dynamically chosen n by iteratively
>>> constructing the shared secrets as follows.
>>>
>>> Starting with a base preimage BP, the key trick is that the sender
>>> remembers the difference between the base preimage and the sum of all
>>> partial preimages used so far. The relation is described using the
>>> following equations:
>>>
>>>     X_0 = 0
>>>     X_i = X_{i-1} ^ s_i
>>>     X_n = BP ^ X_{n-1}
>>>
>>> where if n=1, X_1 = BP, implying that this is in fact a generalization
>>> of the single, non-interactive payment scheme mentioned above. For
>>> i=1, ..., n-1, the sender sends s_i in the EOB, and X_n for the n-th
>>> share.
>>>
>>> Iteratively reconstructing s_1 ^ ... ^ s_{n-1} ^ X_n = BP allows the
>>> receiver to compute all relevant r_i = H(BP || i) and h_i = H(r_i).
>>> Lastly, the final number of partial payments n could be signaled in
>>> the final EOB, which would also serve as a sentinel value for
>>> signaling completion. In response to DOS vectors stemming from unknown
>>> values of n, implementations could consider advertising a maximum
>>> value for n, or adopting some sort of framing pattern for conveying
>>> that more partial payments are on the way.
>>>
>>> We can further modify our usage of the per-hop payloads to send
>>> (H(BP), s_i) to consume most of the EOB sent from sender to receiver.
>>> In this scenario, we'd repurpose the 11 bytes *after* our signalling
>>> byte in the unused byte section to store the payment ID (which should
>>> be unique for each payment). In the case of a non-interactive payment,
>>> this will be unused, while for interactive payments it will be the ID
>>> within the invoice. To deliver this slimmer 2-tuple, we'll use
>>> 32 bytes for the hash of the BP, and 32 bytes for the partial
>>> pre-image share, leaving an un-used byte in the payload.
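The adaptive share chain can be sketched under the same illustrative assumptions (32-byte shares, ^ as xor): the sender keeps the running xor X and closes the chain with X_n = BP ^ X_{n-1}, so whatever n the sender ends up choosing, the xor of everything sent equals BP.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def adaptive_shares(bp: bytes, n: int):
    """Build the n shares for an adaptively sized AMP with base preimage bp."""
    x = bytes(32)                  # X_0 = 0
    shares = []
    for _ in range(n - 1):
        s = os.urandom(32)         # fresh share s_i, sent in the i-th EOB
        shares.append(s)
        x = xor(x, s)              # X_i = X_{i-1} ^ s_i
    shares.append(xor(bp, x))      # final EOB carries X_n = BP ^ X_{n-1}
    return shares
```

In practice the sender would not know n up front and would sample each s_i lazily, deferring only the closing share; the n=1 case degenerates to sending BP itself, matching the single-payment scheme.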
>>> Cross-Chain AMPs
>>> ================
>>> AMPs can be used to pay a receiver in multiple currencies
>>> atomically...which is pretty cool :D
>>>
>>> Open Research Questions
>>> =======================
>>> The above is a protocol sketch to achieve atomic multi-path payments
>>> over Lightning. The details concerning onion blob usage serve as a
>>> template that future protocols can draw upon in order to deliver
>>> additional data to *any* hop in the route. However, there are still a
>>> few open questions before something like this can be feasibly
>>> deployed.
>>>
>>> 1. How does the sender decide how many chunked payments to send, and
>>>    the size of each payment?
>>>
>>>    - Upon closer examination, this seems to overlap with the task of
>>>      congestion control within TCP. The sender may be able to utilize
>>>      TCP-inspired heuristics to gauge: (1) how large the initial
>>>      payment should be and (2) how many subsequent payments may be
>>>      required. Note that if the first payment succeeds, then the
>>>      exchange is over in a single round.
>>>
>>> 2. How can AMP and HORNET be composed?
>>>
>>>    - If we eventually integrate HORNET, then a distinct communication
>>>      session can be established to allow the sender+receiver to
>>>      exchange up-to-date partial payment information. This may allow
>>>      the sender to more accurately size each partial payment.
>>>
>>> 3. Can the sender's initial strategy be governed by an instance of the
>>>    push-relabel max flow algo?
>>>
>>> 4. How does this mesh with the current max HTLC limit on a commitment?
>>>
>>>    - ATM, we have a max limit on the number of active HTLCs on a
>>>      particular commitment transaction. We do this as otherwise it's
>>>      possible that the transaction is too large, and exceeds
>>>      standardness w.r.t. transaction size.
In a world where most payments use an AMP-like protocol, at
>>>      any given instant there will be several pending HTLCs on
>>>      commitments network-wide.
>>>
>>>      This may incentivize nodes to open more channels in order to
>>>      support the increased commitment space utilization.
>>>
>>> Conclusion
>>> ==========
>>> We've presented a design outline of how to integrate atomic multi-path
>>> payments (AMP) into Lightning. The existence of such a construct
>>> allows a sender to atomically split a payment flow amongst several
>>> individual payment flows. As a result, larger channels aren't as
>>> important, as it's possible to utilize one's total outbound payment
>>> bandwidth across several channels. Additionally, in order to support
>>> the increased load, internal routing nodes are incentivized to have
>>> more active channels. The existence of AMP-like payments may also
>>> increase the longevity of channels, as there'll be smaller, more
>>> numerous payment flows, making it unlikely that a single payment
>>> coming across will unbalance a channel entirely. We've also shown how
>>> one can utilize the current onion packet format to deliver additional
>>> data from a sender to receiver that's still e2e authenticated.
>>>
>>> -- Conner && Laolu
>>>
>>> _______________________________________________
>>> Lightning-dev mailing list
>>> Lightning-dev at lists.linuxfoundation.org
>>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>>
>> _______________________________________________
>> Lightning-dev mailing list
>> Lightning-dev at lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From decker.christian at gmail.com Mon Feb 12 09:18:49 2018 From: decker.christian at gmail.com (Christian Decker) Date: Mon, 12 Feb 2018 10:18:49 +0100 Subject: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning In-Reply-To: References: <87y3k41j3s.fsf@rustcorp.com.au> Message-ID: <87k1viwop2.fsf@gmail.com> Jim Posen writes: > If using two hashes to deliver the payment while still getting a proof, I'm > not sure what that provides above just sending regular lightning payments > over multiple routes with one hash. Firstly, if there is a second hash, it > would presumably be the same for all routes, making them linkable again, > which AMP tries to solve. And secondly, the receiver has no incentive to > claim any of the HTLCs before all of them are locked in, because in that > case they are releasing the transaction receipt before fully being paid. Arguably the second concern is not really an issue, if you allow partial claims you'll end up in a whole lot of trouble. It should always be the case that the payment as whole is atomic, i.e., either the entirety of the payment goes through or none of it, independently of whether it was a singlepath or a multipath payment. This is actually one of the really nice features that was enforced using the simple "just reuse the hash"-mechanism, you always had to wait for the complete payment or you'd risk losing part of it. From decker.christian at gmail.com Mon Feb 12 09:23:22 2018 From: decker.christian at gmail.com (Christian Decker) Date: Mon, 12 Feb 2018 10:23:22 +0100 Subject: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning In-Reply-To: <1518171320.5145.0.camel@ultimatestunts.nl> References: <1518171320.5145.0.camel@ultimatestunts.nl> Message-ID: <87h8qmwohh.fsf@gmail.com> CJP writes: > Can you give a use case for this? > > Usually, especially in the common case that a payment is done in > exchange for some non-cryptographic asset (e.g. 
physical goods), there > already is some kind of trust between payer and payee. So, if a payment > is split non-atomically into smaller transactions, and only a part > succeeds, presumably they can cooperatively figure out some way to > settle the situation. The scenario that is commonly used in these cases is a merchant that provides a signed invoice "if you pay me X with payment_hash Y I will deliver Z". Now the user performs the payment, learns the payment_key matching the payment_hash, but the merchant refuses to deliver, claiming it didn't get the payment. Now the user can go to a court, present the invoice signed by the merchant, and the proof-of-payment, and force the merchant to honor its commitment. From corne at bitonic.nl Mon Feb 12 13:30:07 2018 From: corne at bitonic.nl (=?UTF-8?Q?Corn=c3=a9_Plooy?=) Date: Mon, 12 Feb 2018 14:30:07 +0100 Subject: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning In-Reply-To: <87h8qmwohh.fsf@gmail.com> References: <1518171320.5145.0.camel@ultimatestunts.nl> <87h8qmwohh.fsf@gmail.com> Message-ID: I was thinking that, for that use case, a different signed invoice could be formulated, stating * several payment hashes with their corresponding amounts * the obligation of signer to deliver Z if all corresponding payment keys are shown * some terms to handle the case where only a part of the payments was successful, e.g. an obligation to refund The third item is a bit problematic: in order to distinguish this case from a complete success, the payee would have to prove *absence* of successful transactions, which is hard. Absence of successful transactions can only be declared by the payer, so in order to reliably settle *without* going to court first, the payer should sign a declaration stating that certain transactions were canceled and that the other ones should be refunded. This can be another invoice. 
So, the original invoice states: * several payment hashes with their corresponding amounts * if all corresponding payment keys are shown: the obligation of to deliver Z, UNLESS stated otherwise by an invoice signed by -- signed by But if a payment partially fails, it can be refunded cooperatively with an invoice created by payer: * declares which of the original payments were successful (with payment keys) and which were not * replaces the obligation of to deliver Z with an obligation to refund the successful transactions * several payment hashes with their corresponding amounts * if all corresponding payment keys are shown: cancel the obligation of to refund -- signed by Maybe this can be repeated iteratively if necessary; hopefully the not-yet-settled amount will converge to zero. Important advantage: this only requires changes to the invoice format, not to the network protocol. The point is: in this use case, the court is apparently the final point of settlement for invoices, just like the blockchain is for the other channels in the route. IANAL, but I think the "scripting language" accepted by courts is quite flexible, and you can use that to enforce atomicity. With the construction described above, you can either refund cooperatively (and collect evidence that refund has happened), or, if that fails, go to court to enforce settlement there. CJP Op 12-02-18 om 10:23 schreef Christian Decker: > CJP writes: >> Can you give a use case for this? >> >> Usually, especially in the common case that a payment is done in >> exchange for some non-cryptographic asset (e.g. physical goods), there >> already is some kind of trust between payer and payee. So, if a payment >> is split non-atomically into smaller transactions, and only a part >> succeeds, presumably they can cooperatively figure out some way to >> settle the situation. 
> The scenario that is commonly used in these cases is a merchant that
> provides a signed invoice "if you pay me X with payment_hash Y I will
> deliver Z". Now the user performs the payment, learns the payment_key
> matching the payment_hash, but the merchant refuses to deliver, claiming
> it didn't get the payment. Now the user can go to a court, present the
> invoice signed by the merchant, and the proof-of-payment, and force the
> merchant to honor its commitment.
> _______________________________________________
> Lightning-dev mailing list
> Lightning-dev at lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev

From decker.christian at gmail.com  Mon Feb 12 18:05:56 2018
From: decker.christian at gmail.com (Christian Decker)
Date: Mon, 12 Feb 2018 19:05:56 +0100
Subject: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning
In-Reply-To:
References: <1518171320.5145.0.camel@ultimatestunts.nl> <87h8qmwohh.fsf@gmail.com>
Message-ID: <87eflqw0aj.fsf@gmail.com>

Honestly I don't get why we are complicating this so much. We have a
system that allows atomic multipath payments using a single secret, and
future decorrelation mechanisms allow us to vary the secret in such a
way that multiple paths cannot be collated, why introduce a whole set of
problems by giving away the atomicity? The same goes for the overpaying
and trusting the recipient to only claim the owed amount, there is no
need for this. Just pay the exact amount, by deriving secrets from the
main secret and make the derivation reproducible by intermediate hops.

Having proof-of-payment be presentable in a court is a nice feature, but
it doesn't mean we need to abandon all guarantees we have worked so hard
to establish in LN.

Corné
Plooy via Lightning-dev writes: > I was thinking that, for that use case, a different signed invoice could > be formulated, stating > > * several payment hashes with their corresponding amounts > > * the obligation of signer to deliver Z if all corresponding payment > keys are shown > > * some terms to handle the case where only a part of the payments was > successful, e.g. an obligation to refund > > > The third item is a bit problematic: in order to distinguish this case > from a complete success, the payee would have to prove *absence* of > successful transactions, which is hard. Absence of successful > transactions can only be declared by the payer, so in order to reliably > settle *without* going to court first, the payer should sign a > declaration stating that certain transactions were canceled and that the > other ones should be refunded. This can be another invoice. > > > So, the original invoice states: > > * several payment hashes with their corresponding amounts > > * if all corresponding payment keys are shown: the obligation of > to deliver Z, UNLESS stated otherwise by an invoice signed by > > -- signed by > > > But if a payment partially fails, it can be refunded cooperatively with > an invoice created by payer: > > * declares which of the original payments were successful (with payment > keys) and which were not > > * replaces the obligation of to deliver Z with an obligation to > refund the successful transactions > > * several payment hashes with their corresponding amounts > > * if all corresponding payment keys are shown: cancel the obligation of > to refund > > -- signed by > > > Maybe this can be repeated iteratively if necessary; hopefully the > not-yet-settled amount will converge to zero. > > > Important advantage: this only requires changes to the invoice format, > not to the network protocol. 
> > > The point is: in this use case, the court is apparently the final point > of settlement for invoices, just like the blockchain is for the other > channels in the route. IANAL, but I think the "scripting language" > accepted by courts is quite flexible, and you can use that to enforce > atomicity. With the construction described above, you can either refund > cooperatively (and collect evidence that refund has happened), or, if > that fails, go to court to enforce settlement there. > > > CJP > > > Op 12-02-18 om 10:23 schreef Christian Decker: >> CJP writes: >>> Can you give a use case for this? >>> >>> Usually, especially in the common case that a payment is done in >>> exchange for some non-cryptographic asset (e.g. physical goods), there >>> already is some kind of trust between payer and payee. So, if a payment >>> is split non-atomically into smaller transactions, and only a part >>> succeeds, presumably they can cooperatively figure out some way to >>> settle the situation. >> The scenario that is commonly used in these cases is a merchant that >> provides a signed invoice "if you pay me X with payment_hash Y I will >> deliver Z". Now the user performs the payment, learns the payment_key >> matching the payment_hash, but the merchant refuses to deliver, claiming >> it didn't get the payment. Now the user can go to a court, present the >> invoice signed by the merchant, and the proof-of-payment, and force the >> merchant to honor its commitment. 
>> _______________________________________________
>> Lightning-dev mailing list
>> Lightning-dev at lists.linuxfoundation.org
>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
>
> _______________________________________________
> Lightning-dev mailing list
> Lightning-dev at lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev

From ZmnSCPxj at protonmail.com  Tue Feb 13 02:56:04 2018
From: ZmnSCPxj at protonmail.com (ZmnSCPxj)
Date: Mon, 12 Feb 2018 21:56:04 -0500
Subject: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning
In-Reply-To: <87eflqw0aj.fsf@gmail.com>
References: <1518171320.5145.0.camel@ultimatestunts.nl> <87h8qmwohh.fsf@gmail.com> <87eflqw0aj.fsf@gmail.com>
Message-ID:

Good morning Christian and Corne,

Another idea to consider is techniques like ZKCP and ZKCSP, which provide atomic access to information in exchange for monetary compensation. Ensuring atomicity of the exchange can be done by providing the information encrypted, a hash of the encryption key, and proofs that the encrypted data is the one desired and that the data was encrypted with the given key; the proof-of-payment is the encryption key, and possession of the encryption key is sufficient to gain access to the information, with no need to bring in legal structures.

(admittedly, ZKCP and ZKCSP are dependent on new cryptography...)

(also, AMP currently cannot provide a proof-of-payment, unlike current payment routing that has proof-of-payment, but that is an eventual design goal that would enable use of ZKC(S)P on-Lightning, assuming we eventually find out that zk-SNARKs and so on are something we can trust)

Regards,
ZmnSCPxj

Sent with ProtonMail Secure Email.

-------- Original Message --------
On February 13, 2018 2:05 AM, Christian Decker wrote:

>Honestly I don't get why we are complicating this so much.
We have a
> system that allows atomic multipath payments using a single secret, and
> future decorrelation mechanisms allow us to vary the secret in such a
> way that multiple paths cannot be collated, why introduce a whole set of
> problems by giving away the atomicity? The same goes for the overpaying
> and trusting the recipient to only claim the owed amount, there is no
> need for this. Just pay the exact amount, by deriving secrets from the
> main secret and make the derivation reproducible by intermediate hops.
>
> Having proof-of-payment be presentable in a court is a nice feature, but
> it doesn't mean we need to abandon all guarantees we have worked so hard
> to establish in LN.
>
> Corné Plooy via Lightning-dev lightning-dev at lists.linuxfoundation.org
>writes:
>
>>I was thinking that, for that use case, a different signed invoice could
>> be formulated, stating
>> - several payment hashes with their corresponding amounts
>>
>> - the obligation of signer to deliver Z if all corresponding payment
>> keys are shown
>>
>> - some terms to handle the case where only a part of the payments was
>> successful, e.g. an obligation to refund
>>The third item is a bit problematic: in order to distinguish this case
>> from a complete success, the payee would have to prove absence of
>> successful transactions, which is hard. Absence of successful
>> transactions can only be declared by the payer, so in order to reliably
>> settle without going to court first, the payer should sign a
>> declaration stating that certain transactions were canceled and that the
>> other ones should be refunded. This can be another invoice.
>>So, the original invoice states: >> - several payment hashes with their corresponding amounts >> >> - if all corresponding payment keys are shown: the obligation of >> to deliver Z, UNLESS stated otherwise by an invoice signed by >>-- signed by >>But if a payment partially fails, it can be refunded cooperatively with >> an invoice created by payer: >> - declares which of the original payments were successful (with payment >> keys) and which were not >> >> - replaces the obligation of to deliver Z with an obligation to >> refund the successful transactions >> >> - several payment hashes with their corresponding amounts >> >> - if all corresponding payment keys are shown: cancel the obligation of >> to refund >>-- signed by >>Maybe this can be repeated iteratively if necessary; hopefully the >> not-yet-settled amount will converge to zero. >>Important advantage: this only requires changes to the invoice format, >> not to the network protocol. >>The point is: in this use case, the court is apparently the final point >> of settlement for invoices, just like the blockchain is for the other >> channels in the route. IANAL, but I think the "scripting language" >> accepted by courts is quite flexible, and you can use that to enforce >> atomicity. With the construction described above, you can either refund >> cooperatively (and collect evidence that refund has happened), or, if >> that fails, go to court to enforce settlement there. >>CJP >>Op 12-02-18 om 10:23 schreef Christian Decker: >>>CJP cjp at ultimatestunts.nl writes: >>>>Can you give a use case for this? >>>>Usually, especially in the common case that a payment is done in >>>> exchange for some non-cryptographic asset (e.g. physical goods), there >>>> already is some kind of trust between payer and payee. So, if a payment >>>> is split non-atomically into smaller transactions, and only a part >>>> succeeds, presumably they can cooperatively figure out some way to >>>> settle the situation. 
>>>> The scenario that is commonly used in these cases is a merchant that >>>> provides a signed invoice "if you pay me X with payment_hash Y I will >>>> deliver Z". Now the user performs the payment, learns the payment_key >>>> matching the payment_hash, but the merchant refuses to deliver, claiming >>>> it didn't get the payment. Now the user can go to a court, present the >>>> invoice signed by the merchant, and the proof-of-payment, and force the >>>> merchant to honor its commitment. >>>> >>>Lightning-dev mailing list >>>Lightning-dev at lists.linuxfoundation.org >>>https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev >>> >>Lightning-dev mailing list >>Lightning-dev at lists.linuxfoundation.org >>https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev >> >Lightning-dev mailing list >Lightning-dev at lists.linuxfoundation.org >https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > From conner at lightning.engineering Tue Feb 13 03:29:28 2018 From: conner at lightning.engineering (Conner Fromknecht) Date: Tue, 13 Feb 2018 03:29:28 +0000 Subject: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning In-Reply-To: References: <1518171320.5145.0.camel@ultimatestunts.nl> <87h8qmwohh.fsf@gmail.com> <87eflqw0aj.fsf@gmail.com> Message-ID: Hi everyone, I've seen some discussions over losing proofs of payment in the AMP setting, and wanted to address some lingering concerns I have regarding the soundness of using the current invoicing system to prove payments. In general, I think we are ascribing too much weight to simply having a preimage and BOLT 11 invoice. The structure of non-interactive payments definitely poses some interesting challenges in adapting the existing invoicing scheme. However, I believe there exist stronger and better means of doing proofs of payment, and would prefer not to tie our hands by assuming this is the best way to approach the problem. 
IMHO, the current signed invoice + preimage is a very weak proof of payment. It's the hash equivalent to proving you own a public key by publishing the secret key. There is an assumption that the only way someone could get that preimage is by having made a payment, but this assumption is broken most directly by the proving mechanism. Similarly, any intermediary who acquires an invoice with the appropriate hash could also make this claim since they also have the preimage. Further, I think it's a mistake to conflate 1) me being able to present a valid preimage/invoice pair, with 2) me having received the correct preimage in response to an onion packet that I personally crafted for the receiving node in the invoice. The main issue is that the proof does not bind a specific sender, making statement 1 producible by multiple individuals. I think it would be potentially worthwhile to explore proofs of stronger statements, such as 2, that could utilize the ephemeral keys in the onion packets, or even the onion as a witness, which is more rigidly coupled to having actually completed a payment. Without any modification to the spec, we can always use something like ZKBoo to prove (w/o trusted setup) knowledge of a preimage without totally revealing it to the verifier. This isn't perfect, but at least gives the sender the option to prove the statement without necessarily giving up the preimage. TL;DR: I'm not convinced the signed invoice + hash is really a good yardstick by which to measure provability, and I think doing some research into proofs of payment on stronger statements would be incredibly valuable. Therefore, I'm not sure if AMPs really lose this, so much as force us to reconsider what it actually requires to soundly prove a payment to an external verifier. 
Best,
Conner

On Mon, Feb 12, 2018 at 6:56 PM ZmnSCPxj via Lightning-dev <
lightning-dev at lists.linuxfoundation.org> wrote:

> Good morning Christian and Corne,
>
> Another idea to consider, is techniques like ZKCP and ZKCSP, which provide
> atomic access to information in exchange for monetary compensation.
> Ensuring atomicity of the exchange can be done by providing the information
> encrypted, a hash of the encryption key, and proofs that the encrypted data
> is the one desired and that the data was encrypted with the given key; the
> proof-of-payment is the encryption key, and possession of the encryption
> key is sufficient to gain access to the information, with no need to bring
> in legal structures.
>
> (admittedly, ZKCP and ZKCSP are dependent on new cryptography...)
>
> (also, AMP currently cannot provide a proof-of-payment, unlike current
> payment routing that has proof-of-payment, but that is an eventual design
> goal that would enable use of ZKC(S)P on-Lightning, assuming we eventually
> find out that zk-SNARKs and so on are something we can trust)
>
> Regards,
> ZmnSCPxj
>
> Sent with ProtonMail Secure Email.
>
> -------- Original Message --------
> On February 13, 2018 2:05 AM, Christian Decker <
> decker.christian at gmail.com> wrote:
>
> >Honestly I don't get why we are complicating this so much. We have a
> > system that allows atomic multipath payments using a single secret, and
> > future decorrelation mechanisms allow us to vary the secret in such a
> > way that multiple paths cannot be collated, why introduce a whole set of
> > problems by giving away the atomicity? The same goes for the overpaying
> > and trusting the recipient to only claim the owed amount, there is no
> > need for this. Just pay the exact amount, by deriving secrets from the
> > main secret and make the derivation reproducible by intermediate hops.
> >
> > Having proof-of-payment be presentable in a court is a nice feature, but
> > it doesn't mean we need to abandon all guarantees we have worked so hard
> > to establish in LN.
> >
> > Corné Plooy via Lightning-dev lightning-dev at lists.linuxfoundation.org
> >writes:
> >
> >>I was thinking that, for that use case, a different signed invoice could
> >> be formulated, stating
> >> - several payment hashes with their corresponding amounts
> >>
> >> - the obligation of signer to deliver Z if all corresponding payment
> >> keys are shown
> >>
> >> - some terms to handle the case where only a part of the payments was
> >> successful, e.g. an obligation to refund
> >>The third item is a bit problematic: in order to distinguish this case
> >> from a complete success, the payee would have to prove absence of
> >> successful transactions, which is hard. Absence of successful
> >> transactions can only be declared by the payer, so in order to reliably
> >> settle without going to court first, the payer should sign a
> >> declaration stating that certain transactions were canceled and that the
> >> other ones should be refunded. This can be another invoice.
> >>So, the original invoice states: > >> - several payment hashes with their corresponding amounts > >> > >> - if all corresponding payment keys are shown: the obligation of > >> to deliver Z, UNLESS stated otherwise by an invoice signed by > >>-- signed by > >>But if a payment partially fails, it can be refunded cooperatively with > >> an invoice created by payer: > >> - declares which of the original payments were successful (with payment > >> keys) and which were not > >> > >> - replaces the obligation of to deliver Z with an obligation to > >> refund the successful transactions > >> > >> - several payment hashes with their corresponding amounts > >> > >> - if all corresponding payment keys are shown: cancel the obligation of > >> to refund > >>-- signed by > >>Maybe this can be repeated iteratively if necessary; hopefully the > >> not-yet-settled amount will converge to zero. > >>Important advantage: this only requires changes to the invoice format, > >> not to the network protocol. > >>The point is: in this use case, the court is apparently the final point > >> of settlement for invoices, just like the blockchain is for the other > >> channels in the route. IANAL, but I think the "scripting language" > >> accepted by courts is quite flexible, and you can use that to enforce > >> atomicity. With the construction described above, you can either refund > >> cooperatively (and collect evidence that refund has happened), or, if > >> that fails, go to court to enforce settlement there. > >>CJP > >>Op 12-02-18 om 10:23 schreef Christian Decker: > >>>CJP cjp at ultimatestunts.nl writes: > >>>>Can you give a use case for this? > >>>>Usually, especially in the common case that a payment is done in > >>>> exchange for some non-cryptographic asset (e.g. physical goods), there > >>>> already is some kind of trust between payer and payee. 
So, if a > payment > >>>> is split non-atomically into smaller transactions, and only a part > >>>> succeeds, presumably they can cooperatively figure out some way to > >>>> settle the situation. > >>>> The scenario that is commonly used in these cases is a merchant that > >>>> provides a signed invoice "if you pay me X with payment_hash Y I will > >>>> deliver Z". Now the user performs the payment, learns the payment_key > >>>> matching the payment_hash, but the merchant refuses to deliver, > claiming > >>>> it didn't get the payment. Now the user can go to a court, present the > >>>> invoice signed by the merchant, and the proof-of-payment, and force > the > >>>> merchant to honor its commitment. > >>>> > >>>Lightning-dev mailing list > >>>Lightning-dev at lists.linuxfoundation.org > >>>https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > >>> > >>Lightning-dev mailing list > >>Lightning-dev at lists.linuxfoundation.org > >>https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > >> > >Lightning-dev mailing list > >Lightning-dev at lists.linuxfoundation.org > >https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > > > > _______________________________________________ > Lightning-dev mailing list > Lightning-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fabrice.drouin at acinq.fr Tue Feb 13 09:01:38 2018 From: fabrice.drouin at acinq.fr (Fabrice Drouin) Date: Tue, 13 Feb 2018 10:01:38 +0100 Subject: [Lightning-dev] Improving the initial gossip sync In-Reply-To: <878tbzugj0.fsf@rustcorp.com.au> References: <874lmvy4gh.fsf@gmail.com> <87tvurym13.fsf@rustcorp.com.au> <87shaawft5.fsf@gmail.com> <878tbzugj0.fsf@rustcorp.com.au> Message-ID: On 12 February 2018 at 02:45, Rusty Russell wrote: > Christian Decker writes: >> Rusty Russell writes: >>> Finally catching up. 
I prefer the simplicity of the timestamp
>>> mechanism, with a more ambitious mechanism TBA.
>>
>> Fabrice and I had a short chat a few days ago and decided that we'll
>> simulate both approaches and see what consumes less bandwidth. With
>> zombie channels and the chances for missing channels during a weak form
>> of synchronization, it's not that clear to us which one has the better
>> tradeoff. With some numbers behind it it may become easier to decide :-)
>
> Maybe; I think we'd be best off with an IBLT-approach similar to
> Fabrice's proposal. An IBLT is better than a simple hash, since if your
> results are similar you can just extract the differences, and they're
> easier to maintain. Even easier if we make the boundaries static rather
> than now-relative. For node_announce and channel_update you'd probably
> want separate IBLTs (perhaps, though not necessarily, as a separate
> RTT).

Yes, real filters would be better, but the 'bucket hash' idea works (from what I've seen on testnet) for our specific target (nodes which are connected to a very small number of peers and go offline very often).

> Note that this approach fits really well as a complement to the
> timestamp approach: you'd use this for older pre-timestamp, where you're
> likely to have a similar idea of channels.

Both approaches may be needed, because they may be solutions to different problems (nodes which get disconnected from a small set of peers vs nodes connected to many peers, which remain online while some of their peers do not).

>>> Now, as to the proposal specifics.
>>>
>>> I dislike the re-transmission of all old channel_announcement and
>>> node_announcement messages, just because there's been a recent
>>> channel_update. Simpler to just say 'send anything >=
>>> routing_sync_timestamp`.
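The 'send anything >= routing_sync_timestamp' rule amounts to a one-line filter over a node's stored gossip. A minimal sketch (Python; the record layout and function names are hypothetical — this is neither c-lightning's storage nor the wire format):

```python
from dataclasses import dataclass
from typing import Iterator, List

@dataclass
class GossipMsg:
    timestamp: int   # as carried in channel_update / node_announcement
    payload: bytes   # the serialized gossip message

def initial_sync(store: List[GossipMsg],
                 routing_sync_timestamp: int) -> Iterator[GossipMsg]:
    # Retransmit only gossip at or after the peer-supplied timestamp;
    # everything older is assumed to be shared state already.
    return (m for m in store if m.timestamp >= routing_sync_timestamp)

store = [GossipMsg(100, b"upd-a"), GossipMsg(205, b"upd-b"),
         GossipMsg(310, b"upd-c")]
assert [m.payload for m in initial_sync(store, 200)] == [b"upd-b", b"upd-c"]
```

The objections that follow are about exactly what this filter drops: a `channel_update` selected by the timestamp whose `channel_announcement` is older than the cutoff.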
>>
>> I'm afraid we can't really omit the `channel_announcement` since a
>> `channel_update` that isn't preceded by a `channel_announcement` is
>> invalid and will be dropped by peers (especially because the
>> `channel_update` doesn't contain the necessary information for
>> validation).
>
> OTOH this is a rare corner case which will eventually be fixed by weekly
> channel_announce retransmission. In particular, the receiver should
> have already seen the channel_announce, since it preceded the timestamp
> they asked for.
>
> Presumably IRL you'd ask for a timestamp sometime before you were last
> disconnected, say 30 minutes.
>
> "The perfect is the enemy of the good".

This is precisely what I think would not work very well with the timestamp approach: when you're missing an 'old' channel announcement and only have a few sources for it. It can have a huge impact on terminal nodes, which won't be able to find routes, and waiting for a new channel update would take too long. Yes, using just a few peers means that you will be limited to the routing table they give you, but having some kind of filter would let nodes connect to other peers just to retrieve it and check how far off they are from the rest of the network. This would not be possible with a timestamp (you would need to download the entire routing table again, which is what we're trying to avoid).

>>> Background: c-lightning internally keeps a tree of gossip in the order
>>> we received them, keeping a 'current' pointer for each peer. This is
>>> very efficient (though we don't remember if a peer sent us a gossip msg
>>> already, so uses twice the bandwidth it could).

Ok, so a peer would receive an announcement it has sent, but would immediately dismiss it?

>> We can solve that by keeping a filter of the messages we received from
>> the peer, it's more of an optimization than anything, other than the
>> bandwidth cost, it doesn't hurt.
>
> Yes, it's on the TODO somewhere...
we should do this!

>
>>> But this isn't *quite* the same as timestamp order, so we can't just set
>>> the 'current' pointer based on the first entry >=
>>> `routing_sync_timestamp`; we need to actively filter. This is still a
>>> simple traverse, however, skipping over any entry less than
>>> routing_sync_timestamp.
>>>
>>> OTOH, if we need to retransmit announcements, when do we stop
>>> retransmitting them? If a new channel_update comes in during this time,
>>> are we still to dump the announcements? Do we have to remember which
>>> ones we've sent to each peer?
>>
>> That's more of an implementation detail. In c-lightning we can just
>> remember the index at which the initial sync started, and send
>> announcements along until the index is larger than the initial sync
>> index.
>
> True. It is an implementation detail which is critical to saving
> bandwidth though.
>
>> A more general approach would be to have 2 timestamps, one highwater and
>> one lowwater mark. Anything in between these marks will be forwarded
>> together with all associated announcements (node / channel), anything
>> newer than that will only forward the update. The two timestamps
>> approach, combined with a new message, would also allow us to send
>> multiple `timestamp_routing_sync` messages, e.g., first sync the last
>> hour, then the last day, then the last week, etc. It gives the syncing
>> node control over what timewindow to send, inverting the current initial
>> sync.
>
> That would fit neatly with the more complicated bucketing approaches:
> you'd use this technique to ask for the entire bucket if the SHA
> mismatched/IBLT failed.

There is also something that would work fairly well today: just exchange all the shortIds that you have. With the simplest possible implementation (sort and concatenate all the 8-byte short ids and compress with xz or gz or zip) it fits in about 8 KB.
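That back-of-envelope figure is easy to reproduce in spirit. A sketch with synthetic short channel ids and stdlib zlib standing in for xz/gz (so the resulting size is only indicative, not the ~8 KB measured on real testnet data):

```python
import random
import struct
import zlib

random.seed(7)

def short_channel_id(height: int, tx_index: int, output_index: int) -> int:
    # BOLT 7 packing: 3 bytes block height, 3 bytes tx index, 2 bytes output.
    return (height << 40) | (tx_index << 16) | output_index

# Synthetic routing table: ~10k channels at mostly-consecutive heights,
# small tx/output indexes -- mimicking the redundancy noted below.
ids = sorted(
    short_channel_id(500_000 + random.randrange(5_000),
                     random.randrange(2_000),
                     random.randrange(2))
    for _ in range(10_000)
)
blob = b"".join(struct.pack(">Q", i) for i in ids)   # 80,000 bytes raw
compressed = zlib.compress(blob, 9)
print(len(blob), len(compressed))   # compressed is a small fraction of raw
```

Because the ids are sorted and share high-order bytes, a generic compressor removes most of the redundancy even before the delta-encoding optimizations mentioned next.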
And there are lots of easy optimizations (heights are mostly consecutive integers, tx and output indexes are small...).

> Cheers,
> Rusty.
> _______________________________________________
> Lightning-dev mailing list
> Lightning-dev at lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From corne at bitonic.nl Tue Feb 13 10:33:50 2018
From: corne at bitonic.nl (=?UTF-8?Q?Corn=c3=a9_Plooy?=)
Date: Tue, 13 Feb 2018 11:33:50 +0100
Subject: [Lightning-dev] Proof of payment (Re: AMP: Atomic Multi-Path Payments over Lightning)
In-Reply-To:
References: <1518171320.5145.0.camel@ultimatestunts.nl> <87h8qmwohh.fsf@gmail.com> <87eflqw0aj.fsf@gmail.com>
Message-ID: <9b84f100-172d-11ba-8440-12c2c9956698@bitonic.nl>

Hi Conner,

I do believe proof of payment is an important feature to have, especially for the use case of a payer/payee pair that doesn't completely trust each other, but does have the possibility to go to court.

However, I'm not convinced by what you wrote. I do think a combination of signed invoice + preimage is a reliable proof of payment. Strictly speaking, you are right: it is not so much a proof that the payer *sent* the funds, but it *is* proof that the payee *received* the funds. This is because the only scenario where it makes sense for the payee to reveal the preimage is if it can claim a corresponding incoming HTLC with (at least) the correct amount of funds. Revealing the preimage in any other scenario would be stupid(*), and no amount of cryptography can protect against stupidity. So, when it comes to cryptographic proof, this is about as good as it gets.

Now, about the difference between the payer having sent the funds and the payee having received the funds: I'd argue that it's the second that really matters.
If the payer can prove that there is *any* kind of arrangement that ended up with the payee having received the correct amount of funds, that should count as payment. Now, if none of the intermediaries has been stupid, this does imply that the payer ends up on the sending side of the payment, but even if one of the intermediaries has been stupid, why should the payer and payee care? All that matters is that an arrangement has been made to let the payee receive (at least) the correct amount of funds, and that arrangement has been proven to be successful. I consider that proof of payment.

CJP

(*) Stupidity includes being hacked, and anything else that can cause your secrets to be used against your own interests.

Op 13-02-18 om 04:29 schreef Conner Fromknecht:
>
> Hi everyone,
>
> I've seen some discussions over losing proofs of payment in the AMP setting,
> and wanted to address some lingering concerns I have regarding the
> soundness of using the current invoicing system to prove payments.
>
> In general, I think we are ascribing too much weight to simply having a
> preimage and BOLT 11 invoice. The structure of non-interactive payments
> definitely poses some interesting challenges in adapting the existing
> invoicing scheme. However, I believe there exist stronger and better means
> of doing proofs of payment, and would prefer not to tie our hands by
> assuming this is the best way to approach the problem.
>
> IMHO, the current signed invoice + preimage is a very weak proof of payment.
> It's the hash equivalent to proving you own a public key by publishing the
> secret key. There is an assumption that the only way someone could get that
> preimage is by having made a payment, but this assumption is broken most
> directly by the proving mechanism. Similarly, any intermediary who acquires
> an invoice with the appropriate hash could also make this claim since they
> also have the preimage.
>
> Further, I think it's a mistake to conflate
>
1) me being able to present a valid preimage/invoice pair, with
> 2) me having received the correct preimage in response to an onion packet
> that I personally crafted for the receiving node in the invoice.
>
> The main issue is that the proof does not bind a specific sender,
> making statement 1 producible by multiple individuals. I think it would be
> potentially worthwhile to explore proofs of stronger statements, such as 2,
> that could utilize the ephemeral keys in the onion packets, or even the
> onion as a witness, which is more rigidly coupled to having actually
> completed a payment.
>
> Without any modification to the spec, we can always use something like
> ZKBoo to prove (w/o trusted setup) knowledge of a preimage without
> totally revealing it to the verifier. This isn't perfect, but at least
> gives the sender the option to prove the statement without necessarily
> giving up the preimage.
>
> TL;DR: I'm not convinced the signed invoice + hash is really a good
> yardstick by which to measure provability, and I think doing some research
> into proofs of payment on stronger statements would be incredibly valuable.
> Therefore, I'm not sure if AMPs really lose this, so much as force us to
> reconsider what it actually requires to soundly prove a payment to an
> external verifier.
>
> Best,
> Conner
>
> On Mon, Feb 12, 2018 at 6:56 PM ZmnSCPxj via Lightning-dev
> > wrote:
>
> Good morning Christian and Corne,
>
> Another idea to consider is techniques like ZKCP and ZKCSP, which
> provide atomic access to information in exchange for monetary
> compensation.
Ensuring atomicity of the exchange can be done by > providing the information encrypted, a hash of the encryption key, > and proofs that the encrypted data is the one desired and that the > data was encrypted with the given key; the proof-of-payment is the > encryption key, and possession of the encryption key is sufficient > to gain access to the information, with no need to bring in legal > structures. > > (admittedly, ZKCP and ZKCSP are dependent on new cryptography...) > > (also, AMP currently cannot provide a proof-of-payment, unlike > current payment routing that has proof-of-payment, but that is an > eventual design goal that would enable use of ZKC(S)P > on-Lightning, assuming we eventually find out that zk-SNARKs and > so on are something we can trust) > > Regards, > ZmnSCPxj > > ? > Sent with ProtonMail Secure Email. > ? > > -------- Original Message -------- > ?On February 13, 2018 2:05 AM, Christian Decker > > > wrote: > > >Honestly I don't get why we are complicating this so much. We have a > > system that allows atomic multipath payments using a single > secret, and > > future decorrelation mechanisms allow us to vary the secret in > such a > > way that multiple paths cannot be collated, why introduce a > whole set of > > problems by giving away the atomicity? The same goes for the > overpaying > > and trusting the recipient to only claim the owed amount, there > is no > > need for this. Just pay the exact amount, by deriving secrets > from the > > main secret and make the derivation reproducible by intermediate > hops. > > > > Having proof-of-payment be presentable in a court is a nice > feature, but > > it doesn't mean we need to abandon all guarantees we have worked > so hard > > to establish in LN. > > > > Corn? 
Plooy via Lightning-dev > lightning-dev at lists.linuxfoundation.org > > >writes: > > > >>I was thinking that, for that use case, a different signed > invoice could > >> be formulated, stating > >> - several payment hashes with their corresponding amounts > >> > >> - the obligation of signer to deliver Z if all corresponding > payment > >> keys are shown > >> > >> - some terms to handle the case where only a part of the > payments was > >> successful, e.g. an obligation to refund > >>The third item is a bit problematic: in order to distinguish > this case > >> from a complete success, the payee would have to prove absence of > >> successful transactions, which is hard. Absence of successful > >> transactions can only be declared by the payer, so in order to > reliably > >> settle without going to court first, the payer should sign a > >> declaration stating that certain transactions were canceled and > that the > >> other ones should be refunded. This can be another invoice. > >>So, the original invoice states: > >> - several payment hashes with their corresponding amounts > >> > >> - if all corresponding payment keys are shown: the obligation > of > >> to deliver Z, UNLESS stated otherwise by an invoice signed by > > >>-- signed by > >>But if a payment partially fails, it can be refunded > cooperatively with > >> an invoice created by payer: > >> - declares which of the original payments were successful (with > payment > >> keys) and which were not > >> > >> - replaces the obligation of to deliver Z with an > obligation to > >> refund the successful transactions > >> > >> - several payment hashes with their corresponding amounts > >> > >> - if all corresponding payment keys are shown: cancel the > obligation of > >> to refund > >>-- signed by > >>Maybe this can be repeated iteratively if necessary; hopefully the > >> not-yet-settled amount will converge to zero. 
> >>Important advantage: this only requires changes to the invoice > format, > >> not to the network protocol. > >>The point is: in this use case, the court is apparently the > final point > >> of settlement for invoices, just like the blockchain is for the > other > >> channels in the route. IANAL, but I think the "scripting language" > >> accepted by courts is quite flexible, and you can use that to > enforce > >> atomicity. With the construction described above, you can > either refund > >> cooperatively (and collect evidence that refund has happened), > or, if > >> that fails, go to court to enforce settlement there. > >>CJP > >>Op 12-02-18 om 10:23 schreef Christian Decker: > >>>CJP cjp at ultimatestunts.nl writes: > >>>>Can you give a use case for this? > >>>>Usually, especially in the common case that a payment is done in > >>>> exchange for some non-cryptographic asset (e.g. physical > goods), there > >>>> already is some kind of trust between payer and payee. So, if > a payment > >>>> is split non-atomically into smaller transactions, and only a > part > >>>> succeeds, presumably they can cooperatively figure out some > way to > >>>> settle the situation. > >>>> The scenario that is commonly used in these cases is a > merchant that > >>>> provides a signed invoice "if you pay me X with payment_hash > Y I will > >>>> deliver Z". Now the user performs the payment, learns the > payment_key > >>>> matching the payment_hash, but the merchant refuses to > deliver, claiming > >>>> it didn't get the payment. Now the user can go to a court, > present the > >>>> invoice signed by the merchant, and the proof-of-payment, and > force the > >>>> merchant to honor its commitment. 
> >>>> > >>>Lightning-dev mailing list > >>>Lightning-dev at lists.linuxfoundation.org > > >>>https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > >>> > >>Lightning-dev mailing list > >>Lightning-dev at lists.linuxfoundation.org > > >>https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > >> > >Lightning-dev mailing list > >Lightning-dev at lists.linuxfoundation.org > > >https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > > > > _______________________________________________ > Lightning-dev mailing list > Lightning-dev at lists.linuxfoundation.org > > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > > > > _______________________________________________ > Lightning-dev mailing list > Lightning-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev From ZmnSCPxj at protonmail.com Tue Feb 13 14:23:37 2018 From: ZmnSCPxj at protonmail.com (ZmnSCPxj) Date: Tue, 13 Feb 2018 09:23:37 -0500 Subject: [Lightning-dev] Proof of payment (Re: AMP: Atomic Multi-Path Payments over Lightning) In-Reply-To: <9b84f100-172d-11ba-8440-12c2c9956698@bitonic.nl> References: <1518171320.5145.0.camel@ultimatestunts.nl> <87h8qmwohh.fsf@gmail.com> <87eflqw0aj.fsf@gmail.com> <9b84f100-172d-11ba-8440-12c2c9956698@bitonic.nl> Message-ID: Good morning Corne and Conner, Ignoring the practical matters that Corne rightly brings up, I think, it is possible to use ZKCP to provide a "stronger" proof-of-payment in the sense that Conner is asking for. All that is needed is to create a message (possibly in some standard language) indicating the payment amount and whatever commitment the payee claims to have for this payment, have the payee securely sign the message, and encrypt them. The encryption key used is hashed and is used as the payment hash for a standard BOLT11. 
Together with the standard BOLT11, the payee provides the encrypted message and signature to the payer in some secure communication channel that is end-to-end encrypted (so that only the payer can receive the encrypted message).

(additionally, zk-SNARKs attesting to correct operation of the parsing and encryption of the message, as well as the hashing of the encryption key, will be needed)

In order to claim the payment on-Lightning, the payee provides the encryption key (as it is the preimage for the payment hash). Standard LN routing protocols then propagate this encryption key back to the payer. As only the payer received the encrypted message, only the payer can decrypt that message using the payment preimage, even if the preimage was propagated to multiple hops (and may very well have been published onchain in case a hop resolved a channel onchain).

Regards,
ZmnSCPxj

-------- Original Message --------
On February 13, 2018 10:33 AM, Corné Plooy via Lightning-dev wrote:

>Hi Conner,
>
> I do believe proof of payment is an important feature to have,
> especially for the use case of a payer/payee pair that doesn't
> completely trust each other, but does have the possibility to go to court.
>
> However, I'm not convinced by what you wrote. I do think a combination
> of signed invoice + preimage is a reliable proof of payment. Strictly
> speaking, you are right: it is not so much a proof that the payer
> *sent* the funds, but it *is* proof that the payee *received* the funds.
> This is because the only scenario where it makes sense for the payee to
> reveal the preimage is if it can claim a corresponding incoming HTLC
> with (at least) the correct amount of funds. Revealing the preimage in
> any other scenario would be stupid(*), and no amount of cryptography can
> protect against stupidity. So, when it comes to cryptographic proof,
> this is about as good as it gets.
>
> Now, about the difference between the payer having sent the funds and
> the payee having received the funds: I'd argue that it's the second that
> really matters. If the payer can prove that there is *any* kind of
> arrangement that ended up with the payee having received the correct
> amount of funds, that should count as payment. Now, if none of the
> intermediaries has been stupid, this does imply that the payer ends up
> on the sending side of the payment, but even if one of the
> intermediaries has been stupid, why should the payer and payee care? All
> that matters is that an arrangement has been made to let the payee
> receive (at least) the correct amount of funds, and that arrangement has
> been proven to be successful. I consider that proof of payment.
>
> CJP
>
> (*) Stupidity includes being hacked, and anything else that can cause
> your secrets to be used against your own interests.
>
> Op 13-02-18 om 04:29 schreef Conner Fromknecht:
>>Hi everyone,
>>I've seen some discussions over losing proofs of payment in the AMP
>> setting, and wanted to address some lingering concerns I have regarding
>> the soundness of using the current invoicing system to prove payments.
>>In general, I think we are ascribing too much weight to simply having a
>> preimage and BOLT 11 invoice. The structure of non-interactive payments
>> definitely poses some interesting challenges in adapting the existing
>> invoicing scheme. However, I believe there exist stronger and better
>> means of doing proofs of payment, and would prefer not to tie our hands
>> by assuming this is the best way to approach the problem.
>>IMHO, the current signed invoice + preimage is a very weak proof of
>> payment. It's the hash equivalent to proving you own a public key by
>> publishing the secret key. There is an assumption that the only way
>> someone could get that preimage is by having made a payment, but this
>> assumption is broken most directly by the proving mechanism.
Similarly, any intermediary who >> acquires >> an invoice with the appropriate hash could also make this claim since they >> also have the preimage. >>Further, I think it's a mistake to conflate >> ? 1) me being able to present a valid preimage/invoice pair, with >> ? 2) me having received the correct preimage in response to an onion >> packet >> ? ? that I personally crafted for the receiving node?in?the invoice.? >> >> The main issue is that the proof does not bind a specific sender, >> making statement 1 producible by multiple individuals.?I think it would be >> potentially worthwhile to explore proofs of stronger statements, such >> as 2, >> that could utilize the ephemeral keys in the onion?packets,?or even the >> onion as a witness, which is more rigidly coupled to having actually >> completed a payment. >>Without any modification to the spec, we can always use something like >> ZKBoo to prove (w/o trusted setup) knowledge of a preimage without >> totally revealing it to the verifier. This isn't perfect, but at least >> gives the >> sender the option to prove the statement without necessarily giving up >> the preimage. >>TL;DR: I'm not convinced the signed invoice?+ hash is really a good >> yardstick >> by which to measure?provability, and I think doing some research into >> proofs >> of payment on stronger statements would be incredibly valuable. Therefore, >> I'm not sure if AMPs really lose this, so much as?force?us to reconsider >> what it actually requires to soundly prove a payment to an external >> verifier. >>Best, >> Conner >>On Mon, Feb 12, 2018 at 6:56 PM ZmnSCPxj via Lightning-dev >> >mailto:lightning-dev at lists.linuxfoundation.org> wrote: >>Good morning Christian and Corne, >> >>Another idea to consider, is techniques like ZKCP and ZKCSP, which >>provide atomic access to information in exchange for monetary >>compensation.? 
Ensuring atomicity of the exchange can be done by >>providing the information encrypted, a hash of the encryption key, >>and proofs that the encrypted data is the one desired and that the >>data was encrypted with the given key; the proof-of-payment is the >>encryption key, and possession of the encryption key is sufficient >>to gain access to the information, with no need to bring in legal >>structures. >> >>(admittedly, ZKCP and ZKCSP are dependent on new cryptography...) >> >>(also, AMP currently cannot provide a proof-of-payment, unlike >>current payment routing that has proof-of-payment, but that is an >>eventual design goal that would enable use of ZKC(S)P >>on-Lightning, assuming we eventually find out that zk-SNARKs and >>so on are something we can trust) >> >>Regards, >>ZmnSCPxj >> >>? >>Sent with ProtonMail Secure Email. >>? >> >>-------- Original Message -------- >>?On February 13, 2018 2:05 AM, Christian Decker >>> >>wrote: >> >>>Honestly I don't get why we are complicating this so much. We have a >>> system that allows atomic multipath payments using a single >>secret, and >>> future decorrelation mechanisms allow us to vary the secret in >>such a >>> way that multiple paths cannot be collated, why introduce a >>whole set of >>> problems by giving away the atomicity? The same goes for the >>overpaying >>> and trusting the recipient to only claim the owed amount, there >>is no >>> need for this. Just pay the exact amount, by deriving secrets >>from the >>> main secret and make the derivation reproducible by intermediate >>hops. >>> >>> Having proof-of-payment be presentable in a court is a nice >>feature, but >>> it doesn't mean we need to abandon all guarantees we have worked >>so hard >>> to establish in LN. >>> >>> Corn? 
Plooy via Lightning-dev >>lightning-dev at lists.linuxfoundation.org >> >>>writes: >>> >>>>I was thinking that, for that use case, a different signed >>invoice could >>>> be formulated, stating >>>> - several payment hashes with their corresponding amounts >>>> >>>> - the obligation of signer to deliver Z if all corresponding >>payment >>>> keys are shown >>>> >>>> - some terms to handle the case where only a part of the >>payments was >>>> successful, e.g. an obligation to refund >>>>The third item is a bit problematic: in order to distinguish >>this case >>>> from a complete success, the payee would have to prove absence of >>>> successful transactions, which is hard. Absence of successful >>>> transactions can only be declared by the payer, so in order to >>reliably >>>> settle without going to court first, the payer should sign a >>>> declaration stating that certain transactions were canceled and >>that the >>>> other ones should be refunded. This can be another invoice. >>>>So, the original invoice states: >>>> - several payment hashes with their corresponding amounts >>>> >>>> - if all corresponding payment keys are shown: the obligation >>of >>>> to deliver Z, UNLESS stated otherwise by an invoice signed by >> >>>>-- signed by >>>>But if a payment partially fails, it can be refunded >>cooperatively with >>>> an invoice created by payer: >>>> - declares which of the original payments were successful (with >>payment >>>> keys) and which were not >>>> >>>> - replaces the obligation of to deliver Z with an >>obligation to >>>> refund the successful transactions >>>> >>>> - several payment hashes with their corresponding amounts >>>> >>>> - if all corresponding payment keys are shown: cancel the >>obligation of >>>> to refund >>>>-- signed by >>>>Maybe this can be repeated iteratively if necessary; hopefully the >>>> not-yet-settled amount will converge to zero. 
>>>>Important advantage: this only requires changes to the invoice >>format, >>>> not to the network protocol. >>>>The point is: in this use case, the court is apparently the >>final point >>>> of settlement for invoices, just like the blockchain is for the >>other >>>> channels in the route. IANAL, but I think the "scripting language" >>>> accepted by courts is quite flexible, and you can use that to >>enforce >>>> atomicity. With the construction described above, you can >>either refund >>>> cooperatively (and collect evidence that refund has happened), >>or, if >>>> that fails, go to court to enforce settlement there. >>>>CJP >>>>Op 12-02-18 om 10:23 schreef Christian Decker: >>>>>CJP cjp at ultimatestunts.nl writes: >>>>>>Can you give a use case for this? >>>>>>Usually, especially in the common case that a payment is done in >>>>>> exchange for some non-cryptographic asset (e.g. physical >>goods), there >>>>>> already is some kind of trust between payer and payee. So, if >>a payment >>>>>> is split non-atomically into smaller transactions, and only a >>part >>>>>> succeeds, presumably they can cooperatively figure out some >>way to >>>>>> settle the situation. >>>>>> The scenario that is commonly used in these cases is a >>merchant that >>>>>> provides a signed invoice "if you pay me X with payment_hash >>Y I will >>>>>> deliver Z". Now the user performs the payment, learns the >>payment_key >>>>>> matching the payment_hash, but the merchant refuses to >>deliver, claiming >>>>>> it didn't get the payment. Now the user can go to a court, >>present the >>>>>> invoice signed by the merchant, and the proof-of-payment, and >>force the >>>>>> merchant to honor its commitment. 
>>>>>Lightning-dev mailing list >>>>>Lightning-dev at lists.linuxfoundation.org >>>>>https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev

From rusty at rustcorp.com.au Wed Feb 14 00:47:49 2018 From: rusty at rustcorp.com.au (Rusty Russell) Date: Wed, 14 Feb 2018 11:17:49 +1030 Subject: [Lightning-dev] AMP: Atomic Multi-Path Payments over Lightning In-Reply-To: References: <1518171320.5145.0.camel@ultimatestunts.nl> <87h8qmwohh.fsf@gmail.com> <87eflqw0aj.fsf@gmail.com> Message-ID: <878tbwtn0q.fsf@rustcorp.com.au> Conner Fromknecht writes: > IMHO, the current signed invoice + preimage is a very weak proof of payment. > It's the hash equivalent to proving you own a public key by publishing the > secret key. There is an assumption that the only way someone could get that > preimage is by having made a payment, but this assumption is broken most > directly by the proving mechanism. Similarly, any intermediary who acquires > an invoice with the appropriate hash could also make this claim since they > also have the preimage. Agreed.
> Further, I think it's a mistake to conflate > 1) me being able to present a valid preimage/invoice pair, with > 2) me having received the correct preimage in response to an onion packet > that I personally crafted for the receiving node in the invoice. > > The main issue is that the proof does not bind a specific sender, > making statement 1 producible by multiple individuals. I think it would be > potentially worthwhile to explore proofs of stronger statements, such as 2, > that could utilize the ephemeral keys in the onion packets, or even the > onion as a witness, which is more rigidly coupled to having actually > completed a payment. Yes; this places more emphasis on the invoice's precision, eg. "I will ship X to Y". In practice, as we move to payment decorrelation the proof-of-payment does half of what you suggest: only the initial payer has the necessary proof, but it's still open-kimono if they reveal it. Using some kind of point-supplied-in-onion to tweak the result might help here (handwave?!) since you can prove you know the secret for the point easily without revealing it, and then AMP is simply an aggregation of tweaks. > TL;DR: I'm not convinced the signed invoice + hash is really a good > yardstick > by which to measure provability, and I think doing some research into proofs > of payment on stronger statements would be incredibly valuable. Therefore, > I'm not sure if AMPs really lose this, so much as force us to reconsider > what it actually requires to soundly prove a payment to an external > verifier. Proof-of-payment is a unique lightning property, which I think is terribly underrated (because we're used to not having it). Our actions so far have been to bolster this (hence BOLT11), and I'd hate to see us discard it for convenience: I fear we'd never get it back! Fortunately I think we *can* have our cake and eat it too... Thanks, Rusty.
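The "prove you know the secret for the point without revealing it" step can be the standard Schnorr/Fiat-Shamir proof of knowledge of a discrete log; a sketch only, not a concrete BOLT proposal:

```latex
% Prover knows r with R = rG; proves knowledge of r without revealing it.
\begin{aligned}
t  &= kG              && \text{(fresh random nonce } k)\\
c  &= H(t, R, m)      && \text{(Fiat--Shamir challenge)}\\
s  &= k + c\,r        && \text{(response; publish } (t, s))\\
sG &\overset{?}{=} t + cR && \text{(verifier's check; } r \text{ is never revealed)}
\end{aligned}
```

This is the same algebra as the Schnorr signature in footnote [0] of the later post, which is why "AMP as an aggregation of tweaks" fits it so naturally.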
From fabrice.drouin at acinq.fr Mon Feb 19 18:04:39 2018
From: fabrice.drouin at acinq.fr (Fabrice Drouin)
Date: Mon, 19 Feb 2018 19:04:39 +0100
Subject: [Lightning-dev] Improving the initial gossip sync
In-Reply-To:
References: <874lmvy4gh.fsf@gmail.com> <87tvurym13.fsf@rustcorp.com.au>
	<87shaawft5.fsf@gmail.com> <878tbzugj0.fsf@rustcorp.com.au>
	<87mv0cto38.fsf@rustcorp.com.au>
Message-ID:

I'm still pushing for the hash-based solution because it can be implemented and developed quickly and easily and fixes the main issues we've seen on testnet:
- routing sync on mobile nodes
- "route not found" errors when you're missing routing info.

It can be deployed as an optional feature and will give us time to specify and implement proper IBLT-based filters. It can be combined with the timestamp approach: nodes would send bucket hashes + low and high watermarks. I've tried to summarise the issue below:

## The problem

The current scheme (broadcast + optionally ask for a full routing table when you connect) works well for nodes which are never completely offline, but is becoming impractical on mobile/end-user nodes which are often offline and connected to a few peers. We need a way to improve the initial routing sync and retrieve announcements that we've missed without having to download the entire routing table.

Additionally, the only way to check that routing information is consistent between different nodes is to ask each one of them to send you their entire routing table. Exchanging filters/digests/... would mitigate the issue of having to "trust" that your peers do provide you with a good routing table, especially when you're connected to very few peers.
## Requirements

- Easy to specify and implement
- Low overhead
- Ability to retrieve missing routing information
- (Nice to have) Ability to query multiple peers for consistency checks

## Background

The current method for retrieving this routing table is to set the `initial_routing_sync` flag in the `init` message that you send every time you connect/reconnect to a peer, which will then send back its entire routing table (currently 6000 channels on testnet).

If a node believes that a channel is available when it has in fact been closed, and uses it to build a route, it will receive an error message and try again without this specific channel. But if a node is missing a channel, and cannot route payments, there is currently no way to recover it: it has to wait until the channel parameters are updated, and will then receive a `channel_announcement` and the matching `channel_update`. This could take a very long time.

Hence, the only option for mobile nodes is to request a routing table dump every time they connect/reconnect, which simply does not work. We need a way to improve this initial table sync, which is simple enough to be implemented and deployed quickly. Otherwise, these nodes will probably use ugly specific hacks (like using their own mechanisms for retrieving and syncing routing tables) or even rely on remote servers to compute routes for them.

## Proposed solutions

### Timestamps/watermarks

When they connect to another peer, nodes send a timestamp (I know the routing table up to X) or a pair of timestamps (I know the routing table from time X to time Y).

Pros:
- Very simple to specify (use `channel_update` timestamps) and implement.
- Very little overhead
- Solves the "download the entire routing table every time" issue

Cons:
- Does not solve the "missing announcements" issue: if you're missing routing info you would have to wait for channel parameters to be updated etc., as above

### Bucket hashes

Routing info (i.e.
channel announcements) is grouped into buckets, one bucket being a group of 144 blocks. A hash is computed for each bucket, peers exchange these hashes and send back all announcements for which bucket hashes don't match.

We propose to use a bucket per block for the last 7 days, and one bucket per 144 blocks for older announcements. It gives a total of `(365 + 7*144) = 1373` hashes for a year of announcements.

Pros:
- Simple to specify and implement
- Little overhead (a few dozen Kb)
- If a node is missing a few elements it will immediately recover them, even if they're very old
- Works well when routing tables are mostly synchronized, which would be the case for nodes which connect to a very small number of peers
- Bucket hashes are the same for all peers you connect to, and can be used for consistency checks between peers

Cons:
- Bucket hashes are not queryable filters
- Since a single mismatch will invalidate an entire bucket, even with small differences nodes could end up having to send their entire routing table (which is exactly what they are doing today)

### IBLT filters

Upon connection, nodes exchange information to estimate the number of differences between their routing tables. Using this estimate, they build and exchange IBLT filters, and use them to compute the announcements that they should send to their peer.

Pros:
- Queryable filters
- Very efficient if the number of differences is small, even with very large routing tables

Cons:
- More complex to specify and implement: we need a good estimator for the number of differences (send min height + max height + a sample of announcements ?)
- Filters become peer-specific (similar to the server-side vs client-side filtering for SPV clients)

On 16 February 2018 at 13:34, Fabrice Drouin wrote:
> I like the IBLT idea very much but my understanding of how they work
> is that the tricky part would be first to estimate the number of
> differences between "our" and "their" routing tables.
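A minimal sketch of the bucket layout proposed above (per-block buckets for the last 7 days, per-144-block buckets for the older year, 1008 + 365 = 1373 in total). The FNV-1a fold is a toy stand-in — a real implementation would presumably hash the serialized announcements with something like SHA256 — and all names are illustrative:

```c
#include <stddef.h>
#include <stdint.h>

#define PER_BLOCK 1008                 /* 7 days * 144 blocks/day */
#define DAILY 365                      /* one bucket per 144 blocks for a year */
#define NBUCKETS (PER_BLOCK + DAILY)   /* (365 + 7*144) = 1373 */
#define FNV_BASIS 0xcbf29ce484222325ULL

/* block height is the most significant 3 bytes of a short_channel_id */
static uint32_t scid_height(uint64_t scid) { return (uint32_t)(scid >> 40); }

/* map an announcement's block height to a bucket, given the chain tip */
static uint32_t bucket_of(uint32_t tip, uint32_t height)
{
	uint32_t age = tip - height;
	if (age < PER_BLOCK)
		return age;                          /* recent: one bucket per block */
	return PER_BLOCK + (age - PER_BLOCK) / 144;  /* older: one bucket per day */
}

/* toy hash fold: FNV-1a over the 8 scid bytes */
static uint64_t fold(uint64_t h, uint64_t scid)
{
	for (int i = 0; i < 8; i++) {
		h ^= (scid >> (8 * i)) & 0xff;
		h *= 0x100000001b3ULL;
	}
	return h;
}

/* one hash per bucket, over announcements sorted by short_channel_id;
 * announcements older than a year (or above the tip) are skipped */
static void bucket_hashes(const uint64_t *scids, size_t n, uint32_t tip,
			  uint64_t out[NBUCKETS])
{
	for (size_t b = 0; b < NBUCKETS; b++)
		out[b] = FNV_BASIS;
	for (size_t i = 0; i < n; i++) {
		uint32_t h = scid_height(scids[i]);
		if (h <= tip && tip - h < PER_BLOCK + DAILY * 144)
			out[bucket_of(tip, h)] = fold(out[bucket_of(tip, h)], scids[i]);
	}
}
```

Peers would exchange the 1373 hashes and re-send only the announcements falling in mismatching buckets; identical tables exchange the hashes and nothing else.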
> So when we open a connection we would first exchange messages to > estimate how far off we are (by sending a sample of shortids and > extrapolate ?) then send filters which would be specific to each peer. > This may become an issue if a node is connected to many peers, and is > similar to the server-side vs client-side filtering issue for SPV > clients. > Basically, I fear that it would take some time before it is agreed > upon and available, hindering the development of mobile nodes. > > The bucket hash idea is naive but is very simple to implement and > could buy us enough time to implement IBLT filters properly. Imho the > timestamp idea does not work for the mobile phone use case (but is > probably simpler and better than bucket hashes for "centre" nodes > which are never completely off the grid) > > > On 14 February 2018 at 01:24, Rusty Russell wrote: >> Fabrice Drouin writes: >>> Yes, real filters would be better, but the 'bucket hash' idea works >>> (from what I've seen on testnet) for our specific target (nodes which >>> are connected to very small number of peers and go offline very >>> often) >> >> What if we also add an 'announce_query' message: if you see a >> 'channel_update' which you discard because you don't know the channel, >> 'announce_query' asks them to send the 'channel_announce' for that >> 'short_channel_id' followed by re-sending the 'channel_update'(s)? >> (Immediately, rather than waiting for next gossip batch). >> >> I think we would want this for IBLT, too, since you'd want this to query >> any short-channel-id you extract from that which you don't know about. > > Yes, unless it is part of the initial sync (compare filters, then send > what they're missing) > >> I see. (BTW, your formatting makes your post sound very Zen!).
> Sorry about that, I've disabled the haiku mode :) > >> Yes, we can probably use difference encoding and use 1 bit for output >> index (with anything which is greater appended) and get down to 1 byte >> per channel_id at scale. >> >> But my rule-of-thumb for scaling today is 1M - 10M channels, and that >> starts to get a little excessive. Hence my interest in IBLTs, which are >> still pretty trivial to implement. > > Yes, sending all shortids would also have been a temporary measure > while a more sophisticated approach is being designed. >> >> Cheers, >> Rusty. From aj at erisian.com.au Mon Feb 19 22:59:07 2018 From: aj at erisian.com.au (Anthony Towns) Date: Tue, 20 Feb 2018 08:59:07 +1000 Subject: [Lightning-dev] Post-Schnorr lightning txes Message-ID: <20180219225907.GA16444@erisian.com.au> Hi *, My understanding of lightning may be out of date, so please forgive (or at least correct :) any errors on my behalf. I was thinking about whether Greg Maxwell's graftroot might solve the channel monitoring problem (spoiler: not really) and ended up with maybe an interesting take on Schnorr. I don't think I've seen any specific writeup of what that might look like, so hopefully at least some of this is novel! I'm assuming familiarity with current thinking on Schnorr sigs -- but all you should need to know is the quick summary at footnote [0]. So I think there's four main scenarios for closing a lightning channel: - both parties are happy to close, do so cooperatively, and can sign a new unconditional transaction that they agree on. already fine. 
(should happen almost all of the time, call it 80%) - communications failure: one side has to close, but the other side is happy to cooperate as far as they're able but can only do so via the blockchain and maybe with some delay (maybe 15% of the time) - disappearance, uncooperative: one side effectively completely disappears so the other side has to fully close the channel on their own (5% of the time) - misbehaviour: one side tries publishing an old channel state due to error or maliciousness, and the other collects the entire balance as penalty (0% of the time) With "graftroot" in mind, I was thinking that optimising for the last case might be interesting -- despite expecting it to be vanishingly rare. That would have to look something like: (0) funding tx (1) ...which is spent by a misbehaving commitment tx (2) ...which is spent by a penalty tx You do need 3 txes for that case, but you really only need 1 output for each: so (0) is 2-in-1-out, (1) is 1-in-1-out, (2) is 1-in-1-out; which could all be relatively cheap. (And (2) could be batched with other txes making it 1 input in a potentially large tx) For concreteness, I'm going to treat A as the one doing the penalising, and B (Bad?) as the one that's misbehaving. If you treat each of those txes as a muSig Schnorr pay-to-pubkey, the output addresses would be: (0) funding tx pays to [A,B] (1) commitment tx pays to [A(i),Revocation(B,i)] (2) pays to A (where i is a commitment id / counter for the channel state) If B misbehaves by posting the commitment tx after revealing the revocation secret, A can calculate A(i) and Revocation(B,i) and claim all the funds immediately. As far as the other cases go: - In a cooperative close, you don't publish any commitment txes, you just spend the funding to each party's preferred destinations directly; so this is already great. - Otherwise, you need to be able to actually commit to how the funds get distributed. 
But committing to distributing funds is easy: just jointly sign a transaction with [A(i),Revocation(B,i)]. Since B is the one we're worrying about misbehaving, it needs to hold a transaction with the appropriate outputs that is: - timelocked to `to_self_delay` blocks/seconds in advance via nSequence - signed by A(i) That ensures A has `to_self_delay` blocks/seconds to penalise misbehaviour, and that when closing properly, B can complete the signature using the current revocation secret. This means the "appropriate outputs" no longer need the OP_CSV step, which should simplify the scripts a bit. Having B have a distribution transaction isn't enough -- B could vanish between publishing the commitment transaction and the distribution transaction, leaving A without access to any funds. So A needs a corresponding distribution transaction. But because that transaction can only be published if B signs and publishes the corresponding commitment transaction, the fact that it's published indicates both A and B are happy with the channel close -- so this is a semi-cooperative close and no delay is needed. So A should hold a partially signed transaction with the same outputs: - without any timelock - signed by Revocation(B,i), waiting for signature by A(i) Thus, if B does a non-cooperative close, either: - A proves misbehaviour and claims all the funds immediately - A agrees that the channel state is correct, signs and publishes the un-timelocked distribution transaction, then claims A's outputs; B can then immediately claim its outputs - A does nothing, and B waits for the `to_self_delay` period, signs and publishes its transaction, then claims B's outputs; A can eventually claim its own outputs In that case all of the transactions except the in-flight HTLCs just look like simple pay-to-pubkey transactions.
Further, other than the historical secrets no old information needs to be retained: misbehaviour can be dealt with (and can only be dealt with) by creating a new transaction signed by your own secrets and the revocation information. None of that actually relies on Schnorr-multisig, I think -- it could be done today with normal 2-of-2 multisig as far as I can see. I'm not 100% sure how this approach works compared to the current one for the CSV/CLTV overlap problem. I think any case you could solve by obtaining a HTLC-Timeout or HTLC-Success transaction currently, you could solve in the above scenario by just updating the channel state to remove the HTLC. So I believe the above lets you completely forget info about old HTLCs, while still enforcing correct behavior, and also makes enforcing correct behaviour cheaper because it's just two extremely simple transactions to post. If I haven't missed any corner cases, it also seems to simplify the scripts a fair bit. Does this make sense? It seems so to me... So for completeness, it would make sense to do HTLCs via Schnorr -- at least to make them reveal elliptic curve private keys, and ideally to make them mostly indistinguishable from regular transactions as a "scriptless script" [1] or "discreet log contract" [2]. (I think, at least for HTLCs, these end up being the same?) The idea then is to have the HTLC payment hash be R=r*G, where r is the secret/payment receipt. Supposing your current commitment has n HTLCs in-flight, some paying A if the HTLC succeeds and "r" is revealed, others paying B. We'll focus on one paying A. So you succeed by A completing a signature that reveals r to B, and which simultaneously allows collection of the funds on chain. A needs to be able to do this knowing nothing other than r (and their own private keys). So they agree to sign a muSig 2-of-2 multisig [A,B].
A and B generate random values i and j respectively and reveal I=i*G and J=j*G, and each calculates Q=I+J+R, and they generate partial signatures of a transaction paying A:

  I, i + H(X,Q,m)*H(L,A)*a
  J, j + H(X,Q,m)*H(L,B)*b

where L = H(A,B) and X = H(L,A)*A + H(L,B)*B as usual. Once A knows r, A can construct a full signature by adding R, r to the above values, and B can then determine r by subtracting the above values from the signature A generated. To ensure B gets paid if the HTLC times out, they should also sign a timelocked transaction paying B directly, that B can hold onto until the channel state gets updated. And once you're doing payment hashes via ECC, you can of course change them at each hop to make it harder to correlate steps in a payment route. I think that when combined with the above method of handling CSV delays and revocation, this covers all the needed cases with a straightforward pay-to-pubkey(hash) output, no script info needed at all. It does mean each HTLC needs a signature every time the channel state updates (B needs to sign a tx allowing A to claim the output once A knows the secret, A needs to sign a tx allowing B to claim the output on timeout). For channel monitoring this is pretty good, I think. You need to keep track of the revocation info and your secret keys -- but that's essentially a constant amount of data. If you're happy to have the data grow by 64 bytes every time the channel state updates, you can outsource channel monitoring: arrange a formula for constructing a penalty tx based on the channel commitment tx -- eg, 95% of the balance goes to me, 4% goes to the monitor's address, 1% goes to fees, there's a relative locktime of to_self_delay/3 to allow me to directly claim 100% of the funds if I happen to be paying attention; then do a partial signature with A(i), and then allow the monitoring service to catch fraudulent transactions, work out the appropriate revocation secret, and finish the signature.
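The completing-and-opening of the adaptor signature described above can be summarized; this is just the algebra already given, written out (the combined signature verifies against nonce point Q = I + J + R):

```latex
\begin{aligned}
s_A &= i + H(X,Q,m)\,H(L,A)\,a, \qquad s_B = j + H(X,Q,m)\,H(L,B)\,b\\
s   &= s_A + s_B + r && \text{(A completes the signature once it learns } r)\\
sG  &= (I + J + R) + H(X,Q,m)\,X = Q + H(X,Q,m)\,X && \text{(valid Schnorr signature)}\\
r   &= s - s_A - s_B && \text{(B reads } r \text{ off the published signature)}
\end{aligned}
```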
If your channel updates 100 times a second for an entire year, that's 200GB of data, which seems pretty feasible. (You can't just regenerate that data though, unless you keep each commitment tx) And it's pretty easy to work out which bit of data you need to access: the funding tx that's being spent tells you which channel, and the channel state index is encoded in the locktime and sequence, so you should only need small/logarithmic overhead even for frequently updated channels rather than any serious indexes. I don't think you can do better than that without serious changes to bitcoin: if you let the monitoring agency sign on its own, you'd need some sort of covenant opcode to ensure it sends any money to you; and with segwit outputs, there's no way to provide a signature for a transaction without committing to exactly which transaction you're signing. I was hoping covenants and graftroot would be enough, but I don't think they are. The idea would be that since the transaction spends to A(i)+Rev(B,i), you'd sign an output script with A that uses covenant opcodes to ensure the transaction only pays the appropriate monitoring reward, and the monitor could then work out A(i)-A and Rev(B,i) and finish the signature. But the signature by "A" would need to know A(i)+Rev(B,i) when calculating the hash, and that's different for every commitment transaction, so as far as I can see, it just doesn't work. You can't drop the muSig-style construction because you need to protect yourself against potential malicious choice of the revocation secret [3]. Summary: - Funding txes as 2-of-2 multisig is still great. Convert to Schnorr/muSig when available of course. - Generate 6+8*n transactions every time the channel state is updated, (n = number of HTLCs in-flight) 1. Channel state commitment tx, held by A, spends funding tx, payable to Schnorr muSig address [A(i),Rev(B,i)], signed by B 2. Channel fund distribution tx, held by A (CSV), spends (1), signed by Rev(B,i) 3.
Channel fund distribution tx, held by B (no CSV), spends (1), signed by A(i) 4. Channel state commitment tx, held by B, spends funding tx payable to Schnorr muSig address [B(i),Rev(A,i)], signed by A 5. Channel fund distribution tx, held by B (CSV), spends (4), signed by Rev(A,i) 6. Channel fund distribution tx, held by A (no CSV), spends (4), signed by B(i) The fund distribution txs all pay the same collection of addresses: - channel balance for A directly to A's preferred address - channel balance for B directly to B's preferred address - HTLC balance to muSig address for [A,B] for each in-flight HTLC paying A on success - HTLC balance to muSig address for [B,A] for each in-flight HTLC paying B on success - (probably makes sense to bump the HTLC addresses by some random value to make it harder for third parties to tell which addresses were balances versus HTLCs) Both (1) and (4) include obscured channel state ids as per current standard. For each HTLC that pays X on timeout and Y on success: a. Timeout tx, held by X, signed by Y, spends from (2) b. Timeout tx, held by X, signed by Y, spends from (3) c. Timeout tx, held by X, signed by Y, spends from (5) d. Timeout tx, held by X, signed by Y, spends from (6) e. Success tx, held by Y, signed by X, spends from (2) f. Success tx, held by Y, signed by X, spends from (3) g. Success tx, held by Y, signed by X, spends from (5) h. 
Success tx, held by Y, signed by X, spends from (6) (these should all be able to be SIGHASH_SINGLE, ANYONECANPAY to allow some level of aggregation) - Fund distribution tx outputs can all be pay2pubkey(hash): HTLCs work by pre-signed timelocked transactions and scriptless scripts/discreet-log contracts to reveal the secret; balances work directly; CSV and revocations are already handled by that point - You can discard all old transaction info and HTLC parameters once they're not relevant to the current channel state - Channel monitoring can be outsourced pretty efficiently -- as little as a signature per state could be made to work as far as I can see, which doesn't add up too fast. - There's still no plausible way of doing constant space outsourced channel monitoring without some sort of SIGHASH_NOINPUT, at least that I can see Thoughts? [4] Cheers, aj, very sad that this didn't turn out to be a potential use case for graftroot :( [0] In particular, I'm assuming that: - Schnorr sigs in bitcoin will look something like: R, r + H(X,R,m)*x (where m is the message being signed by private key x, r is a random per-sig nonce, R and X are public keys corresponding to r,x; H is the secure hash function) - muSig is a secure way for untrusting parties to construct an n-of-n combined signature; for public keys A and B, it produces a combined public key: X = H(L,A)*A + H(L,B)*B with L = H(A,B) See https://blockstream.com/2018/01/23/musig-key-aggregation-schnorr-signatures.html [1] https://scalingbitcoin.org/stanford2017/Day2/Using-the-Chain-for-what-Chains-are-Good-For.pdf http://diyhpl.us/wiki/transcripts/scalingbitcoin/stanford-2017/using-the-chain-for-what-chains-are-good-for/ [2] https://adiabat.github.io/dlc.pdf https://diyhpl.us/wiki/transcripts/discreet-log-contracts/ [3] Well, maybe you could request a zero-knowledge proof to ensure a new revocation hash conforms to the standard for generating revocation secrets without revealing the secret, and have the public key be
a(i)*G + r(B,i)*G without using the muSig construct, but that would probably be obnoxious to have to generate every time you update the channel state. [4] As an aside -- this could make it feasible and interesting to penalise disappearance as well as misbehaviour. If you add a transaction that B pre-signs, spending the commitment tx A holds, giving all the channel funds to A but only after a very large CSV timeout, perhaps `to_self_delay`*50, then the scenarios are: If A is present: - B publishes an old commitment: A immediately steals all the funds via active or outsourced misbehaviour monitoring. Whoops! - B publishes the current commitment: A publishes its distribution transaction and collects its funds immediately, allowing B to do likewise If A has disappeared: - B publishes the current commitment and waits a modest amount of time, publishes its distribution transaction claiming its rightful funds, and allowing A to collect its funds if it ever does reappear and still knows its secrets - B publishes the current commitment, waits a fair while, A reappears and publishes its distribution transactions, both parties get their rightful funds - B publishes the current commitment, waits an extended period of time, and claims the entire channel's funds. If B is particularly reputable, and A can prove its identity (but not recover all its secrets) maybe B even refunds A some/all of its rightful balance Perhaps that provides too much of an incentive to try blocking someone from having access to the blockchain though.
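Restating the misbehaviour case in the notation of footnote [0]: the state-i commitment output pays to the muSig aggregate of A(i) and Rev(B,i), so once the revocation secret for state i has been disclosed, A alone knows both discrete logs and can compute the aggregate secret key, i.e. sign the penalty tx unilaterally:

```latex
\begin{aligned}
L_i &= H\bigl(A_i,\ \mathrm{Rev}_{B,i}\bigr)\\
P_i &= H(L_i, A_i)\,A_i + H(L_i, \mathrm{Rev}_{B,i})\,\mathrm{Rev}_{B,i}
    && \text{(commitment output key)}\\
p_i &= H(L_i, A_i)\,a_i + H(L_i, \mathrm{Rev}_{B,i})\,\mathrm{rev}_{B,i}
    && \text{(known to A after revocation)}
\end{aligned}
```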
From rusty at rustcorp.com.au Tue Feb 20 01:08:54 2018
From: rusty at rustcorp.com.au (Rusty Russell)
Date: Tue, 20 Feb 2018 11:38:54 +1030
Subject: [Lightning-dev] Improving the initial gossip sync
In-Reply-To:
References: <874lmvy4gh.fsf@gmail.com> <87tvurym13.fsf@rustcorp.com.au>
	<87shaawft5.fsf@gmail.com> <878tbzugj0.fsf@rustcorp.com.au>
	<87mv0cto38.fsf@rustcorp.com.au>
Message-ID: <87606so4bd.fsf@rustcorp.com.au>

Hi all,

This consumed much of our lightning dev interop call today! But I think we have a way forward, which is in three parts, gated by a new feature bitpair:

1. A query message for a particular shortid.
2. A query message for shortids in a range of blocks.
3. A gossip_timestamp field in `init`

I think these will still be desirable even when we have a more complex scheme in future.

1. query_short_channel_id
=========================

1. type: 260 (`query_short_channel_id`)
2. data:
   * [`32`:`chain_hash`]
   * [`8`:`short_channel_id`]

This is a general mechanism which lets you query for a channel_announcement and channel_updates for a specific 8-byte shortid. The receiver should respond with a channel_announce and the latest channel_update for each end, not batched in the normal 60-second gossip cycle.

A node MAY send this if it receives a `channel_update` for a channel for which it has no `channel_announcement`, but SHOULD NOT if the channel referred to is not an unspent output (ie. check that it's not closed before sending this query!).

IMPLEMENTATION: trivial

2. query_channel_range/reply_channel_range
==========================================

This is a new message pair, something like:

1. type: 261 (`query_channel_range`)
2. data:
   * [`32`:`chain_hash`]
   * [`4`:`first_blocknum`]
   * [`4`:`number_of_blocks`]

1. type: 262 (`reply_channel_range`)
2. data:
   * [`32`:`chain_hash`]
   * [`4`:`first_blocknum`]
   * [`4`:`number_of_blocks`]
   * [`2`:`len`]
   * [`len`:`data`]

Where data is a series of ordered shortids (see Appendix A for various encodings).
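As a sanity check on the field list above, a minimal serializer for `query_channel_range` might look like the following, assuming BOLT #1's big-endian wire order (helper and buffer names are illustrative, not from any implementation):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* BOLT #1: all data fields are big-endian unless otherwise specified */
static size_t put_be16(uint8_t *p, uint16_t v)
{
	p[0] = v >> 8; p[1] = v & 0xff;
	return 2;
}

static size_t put_be32(uint8_t *p, uint32_t v)
{
	p[0] = v >> 24; p[1] = v >> 16; p[2] = v >> 8; p[3] = v & 0xff;
	return 4;
}

/* type 261: chain_hash (32) | first_blocknum (4) | number_of_blocks (4) */
static size_t encode_query_channel_range(uint8_t *buf,
					 const uint8_t chain_hash[32],
					 uint32_t first_blocknum,
					 uint32_t number_of_blocks)
{
	size_t off = 0;
	off += put_be16(buf + off, 261);
	memcpy(buf + off, chain_hash, 32);
	off += 32;
	off += put_be32(buf + off, first_blocknum);
	off += put_be32(buf + off, number_of_blocks);
	return off;	/* 2 + 32 + 4 + 4 = 42 bytes */
}
```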
`number_of_blocks` in the reply may be less than in the request if the required data did not fit; it is assumed that we can fit a single block per reply, at least.

IMPLEMENTATION: requires channel index by block number, zlib

3. gossip_timestamp.
====================

This is useful for the simple case of a node reconnecting to a single peer, for example.

This is a new field appended to `init`: the negotiation of this feature bit overrides `initial_routing_sync`, as the same results can be obtained by setting the `gossip_timestamp` field to the current time (for no initial sync) or 0 (for an initial sync).

Note that a node should allow for some minutes of propagation time, thus set the `gossip_timestamp` to sometime before its last seen gossip message. It may also receive `channel_update` messages for which it has not seen the `channel_announcement`, and can thus use `query_short_channel_id` above.

IMPLEMENTATION: easy.

Appendix A: Encoding Sizes
==========================

I tried various obvious compression schemes, in increasing complexity order (see source below, which takes stdin and spits out stdout):

Raw: raw 8-byte stream of ordered channels.
gzip -9: gzip -9 of raw.
splitgz: all blocknums first, then all txnums, then all outnums, then gzip -9.
delta: CVarInt encoding: blocknum_delta,num,num*txnum_delta,num*outnum.
deltagz: delta, with gzip -9 Corpus 1: LN mainnet dump, 1830 channels.[1] Raw: 14640 bytes gzip -9: 6717 bytes splitgz: 6464 bytes delta: 6624 bytes deltagz: 4171 bytes Corpus 2: All P2SH outputs between blocks 508000-508999 incl, 790844 channels.[2] Raw: 6326752 bytes gzip -9: 1861710 bytes splitgz: 964332 bytes delta: 1655255 bytes deltagz: 595469 bytes [1] http://ozlabs.org/~rusty/short_channels-mainnet.xz [2] http://ozlabs.org/~rusty/short_channels-all-p2sh-508000-509000.xz Appendix B: Encoding Sourcecode =============================== // gcc -g -Wall -o encode-short_channel_ids encode-short_channel_ids.c #include <stdio.h> #include <stdlib.h> #include <stdint.h> #include <arpa/inet.h> /* BOLT #1: * All data fields are big-endian unless otherwise specified. */ static void write_bytes(uint32_t val, int n) { /* BE, so write out tail */ uint32_t v = htonl(val); fwrite((char *)&v + (4 - n), n, 1, stdout); } /* CVarInt from bitcoin/src/serialize.h: // Copyright (c) 2009-2010 Satoshi Nakamoto // Copyright (c) 2009-2016 The Bitcoin Core developers // Distributed under the MIT software license, see the accompanying // file COPYING or http://www.opensource.org/licenses/mit-license.php. */ static void write_varint(uint32_t n) { unsigned char tmp[(sizeof(n)*8+6)/7]; int len=0; while (1) { tmp[len] = (n & 0x7F) | (len ?
0x80 : 0x00); if (n <= 0x7F) break; n = (n >> 7) - 1; len++; } do { fwrite(&tmp[len], 1, 1, stdout); } while(len--); } int main(void) { size_t n, max = 1024; uint32_t *block = malloc(max * sizeof(uint32_t)); uint32_t *txnum = malloc(max * sizeof(uint32_t)); uint32_t *outnum = malloc(max * sizeof(uint32_t)); n = 0; while (scanf("%u:%u:%u", &block[n], &txnum[n], &outnum[n]) == 3) { if (++n == max) { max *= 2; block = realloc(block, max * sizeof(uint32_t)); txnum = realloc(txnum, max * sizeof(uint32_t)); outnum = realloc(outnum, max * sizeof(uint32_t)); } } fprintf(stderr, "Got %zu channels\n", n); max = n; #ifdef SPLIT for (n = 0; n < max; n++) write_bytes(block[n], 3); for (n = 0; n < max; n++) write_bytes(txnum[n], 3); for (n = 0; n < max; n++) write_bytes(outnum[n], 2); #elif defined(DIFFENCODE) uint32_t prev_block = 0, num_channels; for (n = 0; n < max; n += num_channels) { /* Block delta */ write_varint(block[n] - prev_block); prev_block = block[n]; for (num_channels = 1; n + num_channels < max && block[n+num_channels] == block[n]; num_channels++); /* Number of channels */ write_varint(num_channels); /* num_channels * txnum delta */ uint32_t prev_txnum = 0; for (size_t i = n; i < n + num_channels; i++) { write_varint(txnum[i] - prev_txnum); prev_txnum = txnum[i]; } /* num_channels * outnum */ for (size_t i = n; i < n + num_channels; i++) write_varint(outnum[i]); } #else for (n = 0; n < max; n++) { /* BOLT #7: * * The `short_channel_id` is the unique description of the * funding transaction. It is constructed as follows: * * 1. the most significant 3 bytes: indicating the block height * 2. the next 3 bytes: indicating the transaction index within * the block * 3. the least significant 2 bytes: indicating the output index * that pays to the channel. 
*/ write_bytes(block[n], 3); write_bytes(txnum[n], 3); write_bytes(outnum[n], 2); } #endif return 0; } From ZmnSCPxj at protonmail.com Tue Feb 20 06:26:16 2018 From: ZmnSCPxj at protonmail.com (ZmnSCPxj) Date: Tue, 20 Feb 2018 01:26:16 -0500 Subject: [Lightning-dev] Improving the initial gossip sync In-Reply-To: <87606so4bd.fsf@rustcorp.com.au> References: <874lmvy4gh.fsf@gmail.com> <87tvurym13.fsf@rustcorp.com.au> <87shaawft5.fsf@gmail.com> <878tbzugj0.fsf@rustcorp.com.au> <87mv0cto38.fsf@rustcorp.com.au> <87606so4bd.fsf@rustcorp.com.au> Message-ID: Good morning Rusty, > 1. query_short_channel_id > ========================= > > > 1. type: 260 (query_short_channel_id) > > 2. data: > - [32:chain_hash] > > - [8:short_channel_id] > > This is a general mechanism which lets you query for a > channel_announcement and channel_updates for a specific 8-byte shortid. > The receiver should respond with a channel_announce and the latest > channel_update for each end, not batched in the normal 60-second gossip > cycle. > > A node MAY send this if it receives a channel_update for a channel for which it > has no channel_announcement, but SHOULD NOT if the channel referred to > is not an unspent output (ie. check that it's not closed before sending > this query!). Is the SHOULD NOT something the sender can assure? In the case that the sender is a lightweight Bitcoin node, and does not keep track of a mempool, and only notices closes if they have been confirmed onchain, the sender may think the channel is still open, while the receiver is a full Bitcoin node and has seen the closing transaction of the channel in the mempool. There are also race conditions where the sender has not received a new block yet, then sends the message, and the receiver receives/processes the message after it has received a new block containing the closing transaction.
Perhaps there should also be a possible reply to this message which indicates "short_channel_id so-and-so was closed by txid so-and-so". Or maybe receivers should not rely on this "SHOULD NOT" and will have to silently ignore a `query_short_channel_id` for a channel they think is closed; this also implies that the sender cannot rely on getting information on the specified channel from anyone, either. Regards, ZmnSCPxj From aj at erisian.com.au Wed Feb 21 09:19:47 2018 From: aj at erisian.com.au (Anthony Towns) Date: Wed, 21 Feb 2018 19:19:47 +1000 Subject: [Lightning-dev] Proof of payment (Re: AMP: Atomic Multi-Path Payments over Lightning) In-Reply-To: References: <1518171320.5145.0.camel@ultimatestunts.nl> <87h8qmwohh.fsf@gmail.com> <87eflqw0aj.fsf@gmail.com> <9b84f100-172d-11ba-8440-12c2c9956698@bitonic.nl> Message-ID: <20180221091947.GA4644@erisian.com.au> On Tue, Feb 13, 2018 at 09:23:37AM -0500, ZmnSCPxj via Lightning-dev wrote: > Good morning Corne and Conner, > Ignoring the practical matters that Corne rightly brings up, I think, > it is possible to use ZKCP to provide a "stronger" proof-of-payment in > the sense that Conner is asking for. I think Schnorr scriptless scripts work for this (assuming HTLC payment hashes are ECC points rather than SHA256 hashes). In particular: - Alice agrees to pay Bob $5 for a coffee. - Bob calculates a lightning payment hash preimage r, and payment hash R=r*G. Bob also prepares a receipt message, saying "I've been paid $5 to give Alice a coffee", and calculates a partial Schnorr signature of this receipt (with n a signature nonce, N=n*G, s=n+H(R+N,B,receipt)*b), and sends Alice (R, N, s) - Alice verifies the partial signature: s*G = N + H(R+N,B,receipt)*B - Alice pays over lightning conditional on receiving the preimage r of R.
- Alice then has a valid signature of the receipt, signed by Bob: (R+N, r+s) The benefit over just getting a hash preimage is that you can use this to prove that you paid Bob, rather than Carol or Dave, at some later date, including to a third party (a small-claims court, tax authorities, a KYC/AML audit?). The nice part is you get that just by doing some negotiation at the start, it's not something the lightning protocol needs to handle at all (beyond switching to ECC points for payment hashes). > -------- Original Message -------- > On February 13, 2018 10:33 AM, Corné Plooy via Lightning-dev wrote: > >Hi Conner, > > I do believe proof of payment is an important feature to have, > > especially for the use case of a payer/payee pair that doesn't > > completely trust each other, but does have the possibility to go to court. Cheers, aj From corne at bitonic.nl Wed Feb 21 10:04:56 2018 From: corne at bitonic.nl (=?UTF-8?Q?Corn=c3=a9_Plooy?=) Date: Wed, 21 Feb 2018 11:04:56 +0100 Subject: [Lightning-dev] Privacy issues with proof of payment Message-ID: <08e75369-09e4-dfdd-d5f5-6c811ac17116@bitonic.nl> Hi, I am a bit concerned with the privacy implications of having either a signed invoice + pre-image, or possibly a more powerful proof-of-payment mechanism. In particular, I am concerned that it might provide cryptographic evidence to the buyer that a certain seller performed the transaction, and/or evidence to the seller that a certain buyer performed the transaction. In many cases, providing this evidence would be a feature rather than a bug, allowing third-party dispute settlement (e.g. the legal system). However, in my opinion, the Lightning network should also (or especially) be suitable for more "sensitive" transactions. Even when transactions are not illegal, I believe people still have a need to keep some transaction information private.
You don't want it to be possible that your transaction history is stored on some company/person's server for years, and then leaks out when that server gets hacked. Also, in my opinion, we should *not* create a two-tier system of "sensitive" and "nothing-to-hide" transactions: that would make the "sensitive" transactions automatically suspicious, partially negating the whole objective of being able to do sensitive transactions without experiencing negative consequences. To some degree, node IDs can act as pseudonyms, without evidence that ties them to physical identities. However, I consider them to be relatively poor pseudonyms: unlike, for instance, Bitcoin addresses, creating a new node for every new transaction would have a serious scalability impact, and defeat the whole purpose of Lightning. I think a typical person would frequently perform transactions that are inherently tied to their physical identity, e.g. receiving salary. This could give the counterparty (the employer) a link between physical ID and node ID; it might be forced to share this e.g. with authorities, further increasing the odds of leak-out and/or abuse of data. Maybe the solution is to have multiple nodes: one tied to your physical ID, and one or more virtual identities? You could then transfer funds between these nodes, and make sure no outsiders receive any proof-of-payment info about these transfers. It sounds like an expensive solution though, since you'd have to operate more channels to give each node good connectivity. What are your ideas on this? Should proof of payment be optional? Should its strength (optionally) be reduced, so that it can only be used in front of some previously-agreed-on dispute resolution party (is that even possible)? Should the idea of proof of payment be abandoned altogether? Is bi-directional routing(*) useful in this? 
CJP (*) Payee first finds a route from a rendezvous node to himself, onion-encrypts that route, passes it to payer (together with rendezvous node ID), and payer adds to that route the onion route from payer to rendezvous point. This way, payer knows the rendezvous node ID, but not the payee node ID. Payee knows the rendezvous node ID, but doesn't know payer node ID either. Rendezvous node only knows that it's forwarding a transaction, not from-where-to-where, or the purpose of the transaction. From fabrice.drouin at acinq.fr Wed Feb 21 18:02:57 2018 From: fabrice.drouin at acinq.fr (Fabrice Drouin) Date: Wed, 21 Feb 2018 19:02:57 +0100 Subject: [Lightning-dev] Improving the initial gossip sync In-Reply-To: <87606so4bd.fsf@rustcorp.com.au> References: <874lmvy4gh.fsf@gmail.com> <87tvurym13.fsf@rustcorp.com.au> <87shaawft5.fsf@gmail.com> <878tbzugj0.fsf@rustcorp.com.au> <87mv0cto38.fsf@rustcorp.com.au> <87606so4bd.fsf@rustcorp.com.au> Message-ID: On 20 February 2018 at 02:08, Rusty Russell wrote: > Hi all, > > This consumed much of our lightning dev interop call today! But > I think we have a way forward, which is in three parts, gated by a new > feature bitpair: We've built a prototype with a new feature bit `channel_range_queries` and the following logic: When you receive their init message and check their local features - if they set `initial_routing_sync` and `channel_range_queries` then do nothing (they will send you a `query_channel_range`) - if they set `initial_routing_sync` and not `channel_range_queries` then send your routing table (as before) - if you support `channel_range_queries` then send a `query_channel_range` message This way new and old nodes should be able to understand each other > 1. query_short_channel_id > ========================= > > 1. type: 260 (`query_short_channel_id`) > 2. 
data: > * [`32`:`chain_hash`] > * [`8`:`short_channel_id`] We could add a `data` field which contains zipped ids like in `reply_channel_range` so we can query several items with a single message ? > 1. type: 262 (`reply_channel_range`) > 2. data: > * [`32`:`chain_hash`] > * [`4`:`first_blocknum`] > * [`4`:`number_of_blocks`] > * [`2`:`len`] > * [`len`:`data`] We could add an additional `encoding_type` field before `data` (or it could be the first byte of `data`) > Appendix A: Encoding Sizes > ========================== > > I tried various obvious compression schemes, in increasing complexity > order (see source below, which takes stdin and spits out stdout): > > Raw = raw 8-byte stream of ordered channels. > gzip -9: gzip -9 of raw. > splitgz: all blocknums first, then all txnums, then all outnums, then gzip -9 > delta: CVarInt encoding: blocknum_delta,num,num*txnum_delta,num*outnum. > deltagz: delta, with gzip -9 > > Corpus 1: LN mainnet dump, 1830 channels.[1] > > Raw: 14640 bytes > gzip -9: 6717 bytes > splitgz: 6464 bytes > delta: 6624 bytes > deltagz: 4171 bytes > > Corpus 2: All P2SH outputs between blocks 508000-508999 incl, 790844 channels.[2] > > Raw: 6326752 bytes > gzip -9: 1861710 bytes > splitgz: 964332 bytes > delta: 1655255 bytes > deltagz: 595469 bytes > > [1] http://ozlabs.org/~rusty/short_channels-mainnet.xz > [2] http://ozlabs.org/~rusty/short_channels-all-p2sh-508000-509000.xz > Impressive! From ZmnSCPxj at protonmail.com Thu Feb 22 15:58:15 2018 From: ZmnSCPxj at protonmail.com (ZmnSCPxj) Date: Thu, 22 Feb 2018 10:58:15 -0500 Subject: [Lightning-dev] Privacy issues with proof of payment In-Reply-To: <08e75369-09e4-dfdd-d5f5-6c811ac17116@bitonic.nl> References: <08e75369-09e4-dfdd-d5f5-6c811ac17116@bitonic.nl> Message-ID: Good morning Corne, My understanding, it would be possible to remove proof-of-payment selectively by hiding the payment in fees. 
Basically, to anonymously donate money to a node without leaving proof of who you are, you simply route from yourself to the payee node, then back to yourself. You pay yourself the minimum HTLC forwarding amount, and leave a hefty fee to the payee node. The payee cannot prove that you paid to it; as far as it is concerned it was just a payment forwarding. The payer cannot prove that it paid the payee, since anyone on the route other than the payee could have been the source of the payment. This assumes that the payer does not control the entire route, at least. Regards, ZmnSCPxj From aj at erisian.com.au Thu Feb 22 19:28:45 2018 From: aj at erisian.com.au (Anthony Towns) Date: Fri, 23 Feb 2018 05:28:45 +1000 Subject: [Lightning-dev] Post-Schnorr lightning txes In-Reply-To: <20180219225907.GA16444@erisian.com.au> References: <20180219225907.GA16444@erisian.com.au> Message-ID: <20180222192845.GA7584@erisian.com.au> On Tue, Feb 20, 2018 at 08:59:07AM +1000, Anthony Towns wrote: > My understanding of lightning may be out of date, so please forgive > (or at least correct :) any errors on my behalf. > I'm not 100% sure how this approach works compared to the current one > for the CSV/CLTV overlap problem. I think any case you could solve by > obtaining a HTLC-Timeout or HTLC-Success transaction currently, you could > solve in the above scenario by just updating the channel state to remove > the HTLC. So, I didn't understand the HTLC-Timeout/HTLC-Success transactions (you don't have to obtain them separately, they're provided along with every commitment tx), and the current setup works better than what I suggest unless to_self_delay is very small. 
It could be possible to make that a tradeoff: choose a small to_self_delay because you're confident you'll monitor the chain and quickly penalise any cheating, with the bonus that that makes monitoring cheaply outsourceable even for very active channels; or choose a large to_self_delay and have it cost a bit more to outsource monitoring. Anyway. You can redo all the current txes with Schnorr/muSig/scriptless-scripts fine, I think: - funding tx is 2-of-2 muSig - the commitment tx I hold has outputs for: your balance - payable to A(i) my balance - payable to A(i)+R(B,i) each in-flight HTLC - payable to A(i)+R(B,i)+X(j) where A(i) is your pubkey for commitment i R(B,i) is my revocation hash for commitment i X(j) is a perturbation for the jth HTLC to make it hard to know which output is a HTLC and which isn't spends the funding tx locktime and sequence of the funding tx input encode i partially signed by you - the HTLC-Success/HTLC-Timeout txes need to have two phases, one that can immediately demonstrate the relevant condition has been met, and a second with a CSV delay to ensure cheating can be penalised.
so: HTLC-Success: pays A(i)+R(B,i)+Y(j), partially signed by you with scriptless script requirement of revealing preimage for corresponding payment hash HTLC-Timeout: pays A(i)+R(B,i)+Y(j), partially signed by you with locktime set to enforce timeout - you also need a claim transaction for each output you can possibly spend: Balance-Claim: pays B(i), funded by my balance output, partially signed by you, with sequence set to enforce relative timelock of to_self_delay HTLC-Claim: pays B(i)+Z(j), funded by the j'th HTLC-Success/HTLC-Timeout transaction, partially signed by you, with sequence set to enforce relative timelock of to_self_delay where Y(j) and Z(j) are similar to X(j) and are just to make it hard for third parties to tell the relationship between outputs Each of those partial signatures requires me to have sent you a unique ECC point J, for which I know the corresponding secret. I guess you'd just need to include those in the revoke_and_ack and update_add_htlc messages. The drawback with this approach is that to outsource claiming funds (without covenants or SIGHASH_NOINPUT), you'd need to send signatures for 2+2N outputs for every channel update, rather than just 1, and the claiming transactions would be a lot larger. This retains the advantage that you don't have to store any info about outdated HTLCs if you're monitoring for misbehaviour yourself; you just need to send an extra two signatures for every in-flight HTLC for every channel update if you're outsourcing channel monitoring. Posting a penalty transaction in this scheme isn't as cheap as just being 1-in-1-out, but if you're doing it yourself, it's still cheaper than trying to claim the funds while misbehaving: you can do it all in a single transaction, and if cross-input signature aggregation is supported, you can do it all with a single signature; while they will need to supply at least two separate transactions, and 1+2N signatures.
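[Editor's sketch: the 2+2N signatures-per-update outsourcing cost above translates into storage as follows. Python for illustration; the helper names are mine, and 64-byte signatures are assumed.]

```python
SIG_BYTES = 64  # assumed signature size

def outsourced_bytes_per_update(htlcs_in_flight):
    # 2 signatures for the balance claims plus 2 per in-flight HTLC
    # (HTLC-Success/HTLC-Timeout and its claim): the 2+2N count above.
    return (2 + 2 * htlcs_in_flight) * SIG_BYTES

def storage_per_day(updates_per_sec, htlcs_in_flight):
    return updates_per_sec * outsourced_bytes_per_update(htlcs_in_flight) * 86400

# 100 updates/s with ~1000 HTLCs in flight: ~12.8MB/s, ~1.1TB/day.
heavy = storage_per_day(100, 1000)
# Capped at 1 update/s with ~40 HTLCs in flight: ~450MB/day.
capped = storage_per_day(1, 40)
```

Note the update rate and the number of in-flight HTLCs both scale the result, which is why capping the channel update rate helps so much.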
> If your channel updates 100 times a second for an entire year, that's > 200GB of data, which seems pretty feasible. If you update the channel immediately whenever a new HTLC starts or ends, that's 50 HTLCs per second on average; if they last for 20 seconds on average, it's 1000 HTLCs at any one time on average, so trustless outsourcing would require storing about 2000 signatures per update, which at 64B per signature, is 13MB/second, or about a terabyte per day. Not so feasible by comparison. The channel update rate is contributing quadratically to that calculation though, so reducing the rate of incoming HTLCs to 2 per second on average, but capping channel updates at 1 per second, gives an average of 40 HTLCs at any one time and 81 signatures per update, for 450MB per day or 163GB per year, which isn't too bad. (I guess if you want the privacy preserving features of WatchTower monitoring you'd have to roughly double that space requirement? Not real sure) Cheers, aj From rusty at rustcorp.com.au Thu Feb 22 23:50:33 2018 From: rusty at rustcorp.com.au (Rusty Russell) Date: Fri, 23 Feb 2018 10:20:33 +1030 Subject: [Lightning-dev] Privacy issues with proof of payment In-Reply-To: <08e75369-09e4-dfdd-d5f5-6c811ac17116@bitonic.nl> References: <08e75369-09e4-dfdd-d5f5-6c811ac17116@bitonic.nl> Message-ID: <87606oinxy.fsf@rustcorp.com.au> Hi Corné! Indeed, the privacy focus has generally been the payer, rather than the recipient of funds. So there are several things we can do to address this, the main obvious one being the ability to provide a "pre-cooked" onion. This would allow either a payment to an anonymous destination directly or via a middleman who has that pre-cooked onion. I'm pretty sure we can't do this *now*: the shared secrets required for decoding error replies allow you to decrypt the entire onion, AFAICT. At a minimum, we need errors from the final destination so we can reflect them.
I believe a simple tweak to use the SHA256() of the secrets as the shared secret used to encrypt the error replies would allow this: you would provide those error secrets along with the onion. > What are your ideas on this? Should proof of payment be optional? Should > its strength (optionally) be reduced, so that it can only be used in > front of some previously-agreed-on dispute resolution party (is that > even possible)? Should the idea of proof of payment be abandoned > altogether? Is bi-directional routing(*) useful in this? The proof-of-payment here is a red herring, I think. If we remove the destination awareness, the privacy issues seem greatly reduced. Cheers, Rusty. From rusty at rustcorp.com.au Fri Feb 23 01:18:30 2018 From: rusty at rustcorp.com.au (Rusty Russell) Date: Fri, 23 Feb 2018 11:48:30 +1030 Subject: [Lightning-dev] Welcoming a New C-lightning Core Team Member! Message-ID: <87fu5sh5ax.fsf@rustcorp.com.au> Hi all, Christian and I just gave ZmnSCPxj commit access to c-lightning; we know nothing other than his preferred pronoun and moniker (I'm calling him Zeeman for short), but ZmnSCPxj has earned our professional respect with over 100 commits, many non-trivial. He says: "No objection here, other than to point out that, as I am of course a human, however randomly-generated, I am of course on the side of humanity in the upcoming robot uprising, whose timing I of course have no knowledge about." We look forward to his excellent code and thorough and polite review of our mistakes, for which he can now share the blame! Cheers, Rusty & Christian. PS. There are many teams working on Lightning, but I feel major developments are worth posting to this list (eg. release announcements, kudos). PPS.
Created a ML for c-lightning: https://lists.ozlabs.org/listinfo/c-lightning From corne at bitonic.nl Fri Feb 23 12:08:40 2018 From: corne at bitonic.nl (=?UTF-8?Q?Corn=c3=a9_Plooy?=) Date: Fri, 23 Feb 2018 13:08:40 +0100 Subject: [Lightning-dev] Privacy issues with proof of payment In-Reply-To: <87606oinxy.fsf@rustcorp.com.au> References: <08e75369-09e4-dfdd-d5f5-6c811ac17116@bitonic.nl> <87606oinxy.fsf@rustcorp.com.au> Message-ID: <0757c550-fa18-445e-ba9d-2613eee9ad36@bitonic.nl> Hi Rusty, > The proof-of-payment here is a red herring, I think. If we remove the > destination awareness, the privacy issues seem greatly reduced. > Red herring = "something that misleads or distracts from a relevant or important issue"[1]? Do you mean the proof-of-payment is irrelevant for the privacy issue? Trying to define proof-of-payment, in the typical use case of payment in exchange of goods, I'd say that a proof of payment is a piece of data, known to the payer, that allows the payer to prove that "[<amount> was paid to <payee>, and in exchange] <payee> agreed to transfer ownership of <goods> to <payer>". For services, it would be "[<amount> was paid to <payee>, and in exchange] <payee> agreed to provide <services> to <payer>". Requirements: 1. Proof-of-payment must be available to payer, who has the burden of proof. By default, ownership of goods is not transferred, and there is no obligation to provide services. Absence of proof should point to this default. It is in the interest of payer to deviate from this default; if he is capable of providing proof, he probably will. 2. The first part, "<amount> was paid to <payee>, and in exchange" is optional: what I think really matters is the second part. Only in the case that <payee> turns out to be incapable of delivering goods or services, a dispute resolution party might be interested in the first part, to find out what amount of monetary refund would be reasonable. 3.
It is necessary that proof-of-payment proves agreement of <payee>: otherwise, Eve could write "Alice agreed to transfer ownership of <goods> to Eve" without consent of Alice. 4. It may not be necessary that proof-of-payment itself mentions identity of <payee>, but it is necessary that <payee> becomes known to the payer: "somebody agreed to transfer ownership of <goods> to <payer>" does not indicate an obligation of any specific party. Without knowing <payee>, it is impossible to verify 3. 5. It is necessary that proof-of-payment mentions the specific obligation (e.g. delivery of goods/services); otherwise, it doesn't prove anything useful. 6. It is necessary that proof-of-payment mentions <payer>: otherwise, multiple potential payer parties could claim goods/services using copies of a single proof-of-payment. Now that I think of it, it is way more tricky than this, and I'm not sure that any mention of <payer> solves anything. What you'd really want is that a single payment only results in a single obligation of <payee>. However, IDs tend to be copyable, just like proofs-of-payment. The best you can hope for is difficult-to-copy IDs (like government-issued IDs) or very inconvenient-to-copy IDs (e.g. private keys of nodes that have significant funds). How do you distinguish multiple identical transactions to the same payer from the same payer making multiple false claims with the same proof-of-payment? Include the payment hash to make it unique? I'm not sure we're solving anything here. The current invoice protocol[2] meets 1,2(optional part is included),3(*),4(*),5(**), and can possibly meet 6(**), although there is currently no defined protocol for payee to learn payer's identity. There *are* some privacy issues with this kind of proof-of-payment: 3. requires payer to learn <payee>, and requires payee to provide cryptographic proof of his consent to the transaction. 6. requires payee to learn <payer>. Because of its questionable usefulness, I guess it's good there is no protocol defined for this. However, 6.
remains an open issue that does limit usefulness of proofs-of-payment. Interestingly, while this knowledge provides *evidence* for payer's involvement in the transaction, there is no cryptographic *proof* of payer's involvement. CJP (*) the 'n' field is not required, but for routing and for verifying the signature, payer currently still needs to know payee's node ID. (**) optional: the 'd' and 'h' fields are free-format, and allow for this. [1] https://en.wikipedia.org/wiki/Red_herring [2] https://github.com/lightningnetwork/lightning-rfc/blob/master/11-payment-encoding.md From laolu32 at gmail.com Sat Feb 24 00:11:52 2018 From: laolu32 at gmail.com (Olaoluwa Osuntokun) Date: Sat, 24 Feb 2018 00:11:52 +0000 Subject: [Lightning-dev] Privacy issues with proof of payment In-Reply-To: <0757c550-fa18-445e-ba9d-2613eee9ad36@bitonic.nl> References: <08e75369-09e4-dfdd-d5f5-6c811ac17116@bitonic.nl> <87606oinxy.fsf@rustcorp.com.au> <0757c550-fa18-445e-ba9d-2613eee9ad36@bitonic.nl> Message-ID: > I am a bit concerned with the privacy implications of having either a signed > invoice > In particular, I am concerned that it might provide cryptographic evidence > to the buyer that a certain seller performed the transaction, and/or > evidence to the seller that a certain buyer performed the transaction. It's 100% opt-in. If either party doesn't wish to allow any sort of proof-of-payment, or service, or whatever, then they don't need to. In this case the sender would just obtain the payment parameters (skipping BOLT11 or w/e other follow ups in the feature), and make a "raw" payment. Without interaction from the sender, there are various classes of spontaneous payments available as well. From the PoV of the network (or participants in the payment path), it's indistinguishable. Only the end points need to decide if their use case is one that both opt into for a proof of payment scheme. -- Laolu On Fri, Feb 23, 2018 at 4:08 AM Corné
Plooy via Lightning-dev < lightning-dev at lists.linuxfoundation.org> wrote: > Hi Rusty, > > The proof-of-payment here is a red herring, I think. If we remove the > > destination awareness, the privacy issues seem greatly reduced. > > > Red herring = "something that misleads or distracts from a relevant or > important issue"[1]? Do you mean the proof-of-payment is irrelevant for > the privacy issue? > > Trying to define proof-of-payment, in the typical use case of payment in > exchange of goods, I'd say that a proof of payment is a piece of data, > known to the payee, that allows the payee to prove that > "[ was paid to , and in exchange] agreed to > transfer ownership of to ". > For services, it would be > "[ was paid to , and in exchange] agreed to > provide to ". > > Requirements: > 1. Proof-of-payment must be available to payer, who has the burden of > proof. By default, ownership of goods is not transferred, and there is > no obligation to provide services. Absence of proof should point to this > default. It is in the interest of payer to deviate from this default; if > he is capable of providing proof, he probably will. > 2. The first part, " was paid to , and in exchange" is > optional: what I think really matters is the second part. Only in the > case that turns out to be incapable of delivering goods or > services, a dispute resolution party might be interested in the first > part, to find out what amount of monetary refund would be reasonable. > 3. It is necessary that proof-of-payment proves agreement of : > otherwise, Eve could write "Alice agreed to transfer ownership of > to Eve" without consent of Alice. > 4. It may not be necessary that proof-of-payment itself mentions > identity of , but it is necessary that becomes known to > the payer: "somebody agreed to transfer ownership of to " > does not indicate an obligation of any specific party. Without knowing > , it is impossible to verify 3. > 5. 
It is necessary that proof-of-payment mentions the specific > obligation (e.g. delivery of goods/services); otherwise, it doesn't > prove anything useful. > 6. It is necessary that proof-of-payment mentions : otherwise, > multiple potential payer parties could claim goods/services using copies > of a single proof-of-payment. Now that I think of it, it is way more > tricky than this, and I'm not sure that any mention of solves > anything. What you'd really want is that a single payment only results > in a single obligation of . However, IDs tend to be copyable, > just like proofs-of-payment. The best you can hope for is > difficult-to-copy IDs (like government-issued IDs) or very > inconvenient-to-copy (e.g. private keys of nodes that have significant > funds). How do you distinguish multiple identical transactions to the > same payer from the same payer making multiple false claims with the > same proof-of-payment? Include the payment hash to make it unique? I'm > not sure we're solving anything here. > > The current invoice protocol[2] meets 1,2(optional part is > included),3(*),4(*),5(**), and can possibly meet 6(**), although there > is currently no defined protocol for payee to learn payer's identity. > > There *are* some privacy issues with this kind of proof-of-payment: > 3. requires payer to learn , and requires payee to provide > cryptographic proof of his consent to the transaction. > 6. requires payee to learn . Because of its questionable > usefulness, I guess it's good there is no protocol defined for this. > However, 6. remains an open issue that does limit usefulness of > proofs-of-payment. Interestingly, while this knowledge provides > *evidence* for payer's involvement in the transaction, there is no > cryptographic *proof* of payer's involvement. > > CJP > > (*) the 'n' field is not required, but for routing and for verifying the > signature, payer currently still needs to know payee's node ID. 
> (**) optional: the 'd' and 'h' fields are free-format, and allow for this. > > [1] https://en.wikipedia.org/wiki/Red_herring > [2] > > https://github.com/lightningnetwork/lightning-rfc/blob/master/11-payment-encoding.md > > > _______________________________________________ > Lightning-dev mailing list > Lightning-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From laolu32 at gmail.com Sat Feb 24 00:45:27 2018 From: laolu32 at gmail.com (Olaoluwa Osuntokun) Date: Sat, 24 Feb 2018 00:45:27 +0000 Subject: [Lightning-dev] Improving the initial gossip sync In-Reply-To: References: <874lmvy4gh.fsf@gmail.com> <87tvurym13.fsf@rustcorp.com.au> <87shaawft5.fsf@gmail.com> <878tbzugj0.fsf@rustcorp.com.au> <87mv0cto38.fsf@rustcorp.com.au> <87606so4bd.fsf@rustcorp.com.au> Message-ID: Hi Rusty, > 1. query_short_channel_id > IMPLEMENTATION: trivial *thumbs up* > 2. query_channel_range/reply_channel_range > IMPLEMENTATION: requires channel index by block number, zlib For the sake of expediency of deployment, if we add a byte (or two) to denote the encoding/compression scheme, we can immediately roll out the vanilla scheme (just list the IDs), then progressively roll out more context-specific optimized schemes. > 3. A gossip_timestamp field in `init` > This is a new field appended to `init`: the negotiation of this feature bit > overrides `initial_routing_sync` As I've brought up before, from my PoV, we can't append any additional fields to the `init` message, as it already contains *two* variable-sized fields (and no fixed-size fields). Aside from this, it seems that the `init` message should simply be for exchanging versioning information, which may govern exactly *which* messages are sent after it. Otherwise, by adding _additional_ fields to the `init` message, we paint ourselves into a corner and can never remove them.
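For illustration, the one-byte encoding/compression scheme suggested above could look like the following Python sketch (the constant and function names here are hypothetical, not from any spec; it packs a sorted list of 8-byte short channel IDs either raw or zlib-compressed):

```python
import struct
import zlib

ENCODING_RAW = 0   # plain sorted array of 8-byte big-endian short channel ids
ENCODING_ZLIB = 1  # the same array, zlib-compressed

def encode_short_ids(short_ids, encoding):
    """Pack short channel ids, prefixed with a one-byte encoding type."""
    raw = b"".join(struct.pack(">Q", scid) for scid in sorted(short_ids))
    if encoding == ENCODING_ZLIB:
        return bytes([ENCODING_ZLIB]) + zlib.compress(raw, 9)
    return bytes([ENCODING_RAW]) + raw

def decode_short_ids(data):
    """Inverse of encode_short_ids: read the encoding byte, then the id list."""
    encoding, payload = data[0], data[1:]
    if encoding == ENCODING_ZLIB:
        payload = zlib.decompress(payload)
    return [struct.unpack(">Q", payload[i:i + 8])[0]
            for i in range(0, len(payload), 8)]
```

Keeping the IDs sorted is what makes the stream compress well, as in Rusty's "raw 8-byte stream of ordered channels" measurements; a receiver that only understands `ENCODING_RAW` can simply reject other encoding bytes until it is upgraded.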
Compare this to using the `init` message only to set up the initial session context, where we can safely add other bits to nullify or remove certain expected messages. With that said, this should instead be a distinct `chan_update_horizon` message (or whatever name). If a particular bit is set in the `init` message, then the next message *both* sides send *must* be `chan_update_horizon`. Another advantage of making this a distinct message is that either party can at any time update this horizon/filter to ensure that they only receive the *freshest* updates. Otherwise, one can imagine a very long-lived connection (say, weeks) where the remote party keeps sending me very dated updates (wasting bandwidth) when I only really want the *latest*. This can incorporate decker's idea about having a high+low timestamp. I think this is desirable, as then for the initial sync case the receiver can *precisely* control their "verification load" to ensure they only process a particular chunk at a time. Fabrice wrote: > We could add a `data` field which contains zipped ids like in > `reply_channel_range` so we can query several items with a single message ? I think this is an excellent idea! It would allow batched requests in response to a channel range message. I'm not so sure we need to jump *straight* to compressing everything, however. > We could add an additional `encoding_type` field before `data` (or it > could be the first byte of `data`) Great minds think alike :-) If we're in rough agreement generally about this initial "kick the can" approach, I'll start implementing some of this in a prototype branch for lnd. I'm very eager to solve the zombie churn and the initial burst that can be very hard on light clients. -- Laolu On Wed, Feb 21, 2018 at 10:03 AM Fabrice Drouin wrote: > On 20 February 2018 at 02:08, Rusty Russell wrote: > > Hi all, > > > > This consumed much of our lightning dev interop call today!
But > > I think we have a way forward, which is in three parts, gated by a new > > feature bitpair: > > We've built a prototype with a new feature bit `channel_range_queries` > and the following logic: > When you receive their init message and check their local features > - if they set `initial_routing_sync` and `channel_range_queries` then > do nothing (they will send you a `query_channel_range`) > - if they set `initial_routing_sync` and not `channel_range_queries` > then send your routing table (as before) > - if you support `channel_range_queries` then send a > `query_channel_range` message > > This way new and old nodes should be able to understand each other > > > 1. query_short_channel_id > > ========================= > > > > 1. type: 260 (`query_short_channel_id`) > > 2. data: > > * [`32`:`chain_hash`] > > * [`8`:`short_channel_id`] > > We could add a `data` field which contains zipped ids like in > `reply_channel_range` so we can query several items with a single > message ? > > > 1. type: 262 (`reply_channel_range`) > > 2. data: > > * [`32`:`chain_hash`] > > * [`4`:`first_blocknum`] > > * [`4`:`number_of_blocks`] > > * [`2`:`len`] > > * [`len`:`data`] > > We could add an additional `encoding_type` field before `data` (or it > could be the first byte of `data`) > > > Appendix A: Encoding Sizes > > ========================== > > > > I tried various obvious compression schemes, in increasing complexity > > order (see source below, which takes stdin and spits out stdout): > > > > Raw = raw 8-byte stream of ordered channels. > > gzip -9: gzip -9 of raw. > > splitgz: all blocknums first, then all txnums, then all outnums, > then gzip -9 > > delta: CVarInt encoding: > blocknum_delta,num,num*txnum_delta,num*outnum. 
> > deltagz: delta, with gzip -9 > > > > Corpus 1: LN mainnet dump, 1830 channels.[1] > > > > Raw: 14640 bytes > > gzip -9: 6717 bytes > > splitgz: 6464 bytes > > delta: 6624 bytes > > deltagz: 4171 bytes > > > > Corpus 2: All P2SH outputs between blocks 508000-508999 incl, 790844 > channels.[2] > > > > Raw: 6326752 bytes > > gzip -9: 1861710 bytes > > splitgz: 964332 bytes > > delta: 1655255 bytes > > deltagz: 595469 bytes > > > > [1] http://ozlabs.org/~rusty/short_channels-mainnet.xz > > [2] http://ozlabs.org/~rusty/short_channels-all-p2sh-508000-509000.xz > > > > Impressive! > _______________________________________________ > Lightning-dev mailing list > Lightning-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From laszlo at heliacal.net Sun Feb 25 01:29:59 2018 From: laszlo at heliacal.net (Laszlo Hanyecz) Date: Sun, 25 Feb 2018 01:29:59 +0000 Subject: [Lightning-dev] Pizza for (lightning) bitcoins? Message-ID: I wanted to try out a real trade using lightning network. I don't know of any pizza places near me that accept lightning bitcoin yet but a friend from London agreed to do it and he sub contracted out the pizza delivery to a local shop. In short, I paid bitcoin using the lightning network and he arranged for pizza to be delivered to me. In this trade my friend is just a middle man that is taking the risk on accepting lightning payments, but it demonstrates the basic premise of how this works for everyday transactions. It could just as well be the pizza shop accepting the payment directly with their own lightning node. I wanted two pizzas and to try to do it as close to atomically as possible. I didn't want to prepay and end up with no pizza. 
As far as I know we don't yet have pizza/bitcoin atomic swap software but we improvised and decided that I would need to provide the payment hash preimage to the delivery driver in order to claim my pizza. If I can't produce the preimage, proving that I paid, then the pizza would not be handed over and it would be destroyed. This works because I can't get the preimage without paying the invoice. I agreed to open a channel and fund it with a sufficient amount for what we estimated the cost would end up being. After we agreed to these terms my friend was able to verify that I funded a channel on the blockchain, which shows that I at least have the money (bitcoin). He is taking on some entrepreneurial risk and prepaying his sub contractor to prepare and deliver the pizza to me, but at this point I have not risked my bitcoins, they're just committed to a channel. I was given a bolt11 invoice which I decoded with the c-lightning cli to verify everything was as agreed: $ ./lightning-cli decodepay lnbc6490u1pdfrjhcpp5jyxuuskqw53apgqvtxa7emcrz5vs0qr2sxjayxv7jj70jznnl94sdp5x9vycgzrdpjk2umeypgxj7n6vykzqvfqg3jkcatcv5s9q6t60fssxqyzx2qcqpgaue37x27yp3pn4cr6wuprvwedncz4kavqh83cp3l0vwfrprj0xj8cedkfmjdzea0xpp0jazfcyy77cq37ej6d3xvmujmgu56pe56ktcqa3vcys { "currency" : "bc", "timestamp" : 1519504120, "created_at" : 1519504120, "expiry" : 72000, "payee" : "0397b318c5e0d09b16e6229ec50744c8a7a8452b2d7c6d9855c826ff14b8fa8b27", "msatoshi" : 649000000, "description" : "1XL Cheesy Pizza, 1 Deluxe Pizza", "min_final_cltv_expiry" : 8, "payment_hash" : "910dce42c07523d0a00c59bbecef03151907806a81a5d2199e94bcf90a73f96b", "signature" : "3045022100ef331f195e206219d703d3b811b1d96cf02adbac05cf1c063f7b1c91847279a402207c65b64ee4d167af3042f97449c109ef6011f665a6c4ccdf25b4729a0e69ab2f" } When the pizza delivery arrived, I was asked "What is the preimage?" by the driver. At this point I paid the invoice and instantly received the preimage in return. 
$ ./lightning-cli pay lnbc6490u1pdfrjhcpp5jyxuuskqw53apgqvtxa7emcrz5vs0qr2sxjayxv7jj70jznnl94sdp5x9vycgzrdpjk2umeypgxj7n6vykzqvfqg3jkcatcv5s9q6t60fssxqyzx2qcqpgaue37x27yp3pn4cr6wuprvwedncz4kavqh83cp3l0vwfrprj0xj8cedkfmjdzea0xpp0jazfcyy77cq37ej6d3xvmujmgu56pe56ktcqa3vcys { "preimage" : "7241e3f185148625894b8887ad459babd26540fc12124c3a7a96c937d89da8c1", "tries" : 1 } In the interest of keeping it simple we agreed that the preimage would just be the first and last 4 characters of the hex string. So my answer was 7241-a8c1. I wrote this on a notepad and presented it to the driver who compared it to his own notepad, at which point I was given the pizza. It's probably not a good practice to share the preimage. The delivery driver didn't have the full string, only enough to verify that I had it. How do you get the preimage for your invoice? In c-lightning you can do it like this: $ ./lightning-cli invoice 12345 label description { "payment_hash" : "e04dfbd4adc634779b560c8e7072f883d5f17a3e32a33603bfc90a8682873d44", "expiry_time" : 1519523498, "expires_at" : 1519523498, "bolt11" : "lnbc123450p1pdfyzy6pp5upxlh49dcc680x6kpj88quhcs02lz737x23nvqaley9gdq5884zqdqjv3jhxcmjd9c8g6t0dccqpg802ys4s4z3rpm6d8zvdgq397wewh5kaz527hnglz9xsmjxfjrhe3mxq9pp7pqm0pwcwm748tav4am97gqrvnzxnlw5uxxawgw4vcywgphj26nf" } $ sqlite3 ~/.lightning/lightningd.sqlite3 "SELECT quote(payment_key) FROM invoices ORDER BY id DESC LIMIT 1" X'D3BE7E68D8B38B15A5194AEA131A21429A1987085C95A0631273273546FF5ED8' Then you can verify that it's indeed the correct preimage by hashing it again and comparing it to the payment_hash in the invoice above: $ echo "D3BE7E68D8B38B15A5194AEA131A21429A1987085C95A0631273273546FF5ED8" | xxd -r -p | sha256sum e04dfbd4adc634779b560c8e7072f883d5f17a3e32a33603bfc90a8682873d44 - Note that you should not share the preimage with anyone. So is there any point to doing this instead of an on chain transaction? For what I described here, probably not. 
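For illustration, the `xxd`/`sha256sum` verification shown above can be written as a short Python check (the function name is illustrative, not part of any tool):

```python
import hashlib

def verify_preimage(preimage_hex: str, payment_hash_hex: str) -> bool:
    """A preimage is valid iff its SHA-256 digest equals the invoice's payment_hash."""
    digest = hashlib.sha256(bytes.fromhex(preimage_hex)).hexdigest()
    return digest == payment_hash_hex.lower()

# Same check as `echo <hex> | xxd -r -p | sha256sum`, using the values above;
# returns True, matching the sha256sum output quoted in the message:
verify_preimage(
    "D3BE7E68D8B38B15A5194AEA131A21429A1987085C95A0631273273546FF5ED8",
    "e04dfbd4adc634779b560c8e7072f883d5f17a3e32a33603bfc90a8682873d44")
```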
The goal was just to play around with c-lightning and do something more than shuffling a few satoshi back and forth. Maybe eventually pizza shops will have their own lightning nodes and I can open channels to them directly. Some pics of my family enjoying the pizza here: http://eclipse.heliacal.net/~solar/bitcoin/lightning-pizza/ -Laszlo -------------- next part -------------- An HTML attachment was scrubbed... URL: From robban at robtex.com Sun Feb 25 08:19:38 2018 From: robban at robtex.com (Robert Olsson) Date: Sun, 25 Feb 2018 10:19:38 +0200 Subject: [Lightning-dev] Pizza for (lightning) bitcoins? In-Reply-To: References: Message-ID: First of all, Laszlo, that was awesome! Instead of the part where you proved you had opened a channel, it would be awesome to add some escrow functionality: say you get the invoice, and then you have a function to *almost* pay it, to verify it works through the network, with AMP and all. At that stage they start to make the pizza. And when you actually receive your pizza, you just somehow confirm the transaction, releasing the funds. Not sure you would have to prove anything with the preimage to the delivery guy. He should get some notification on his phone from his lightning node that it is paid. If he never shows up you revert it somehow. Not sure how to do that technically, but we probably have most things in place for it already. Start your brains, guys! Things are getting serious, there is pizza at stake! Best regards Robert Olsson On Sun, Feb 25, 2018 at 3:29 AM, Laszlo Hanyecz wrote: > [Laszlo's original message, quoted in full; trimmed] > -Laszlo > > _______________________________________________ > Lightning-dev mailing list > Lightning-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brianlockhart at gmail.com Sun Feb 25 12:38:20 2018 From: brianlockhart at gmail.com (Brian Lockhart) Date: Sun, 25 Feb 2018 04:38:20 -0800 Subject: [Lightning-dev] Pizza for (lightning) bitcoins?
In-Reply-To: References: Message-ID: "Hear one, hear all! Good people, harken and know that on this day the one called Laszlo did buyeth the first Bitcoin Lightning Pizzas, bringing us into this new era of glory and cheesy goodness. And it was good." > On Feb 24, 2018, at 5:29 PM, Laszlo Hanyecz wrote: > > [Laszlo's original message, quoted in full; trimmed] > > -Laszlo > _______________________________________________ > Lightning-dev mailing list > Lightning-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From cezary.dziemian at gmail.com Sun Feb 25 13:01:34 2018 From: cezary.dziemian at gmail.com (Cezary Dziemian) Date: Sun, 25 Feb 2018 14:01:34 +0100 Subject: [Lightning-dev] Pizza for (lightning) bitcoins?
In-Reply-To: References: Message-ID: That is great! I even wrote an article about this (in Polish): http://lightningnetwork.pl/2018/02/25/oplacenie-pizzy-przez-ln/ Assuming the delivery driver is honest and reliable, we could call it an atomic swap, I think. You exchanged bitcoins for the preimage, and the preimage for pizza. Great! The only thing that makes me sad is that the pizza was scheduled to be destroyed if you did not pay. In my opinion, instead of destroying the pizza, it should be delivered to your friend! 2018-02-25 2:29 GMT+01:00 Laszlo Hanyecz : > [Laszlo's original message, quoted in full; trimmed] > -Laszlo > > _______________________________________________ > Lightning-dev mailing list > Lightning-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ZmnSCPxj at protonmail.com Sun Feb 25 15:30:53 2018 From: ZmnSCPxj at protonmail.com (ZmnSCPxj) Date: Sun, 25 Feb 2018 10:30:53 -0500 Subject: [Lightning-dev] Pizza for (lightning) bitcoins? In-Reply-To: References: Message-ID: Good morning Robert, Assuming you have a direct channel with the pizza provider, build a route from you to the pizza provider and back to you. You route the pizza price + 546 satoshi (the minimum for a non-dust output) to the pizza provider, and the hop from the pizza provider back to you is the 546 satoshi (so that the pizza provider gets paid the pizza price in total as the "routing fee"). You inform the pizza provider of the hash of the preimage, which the pizza provider can check with their node exists as an incoming HTLC and an outgoing HTLC, with the difference being the pizza price. Further, you set things up so that the HTLC to you expires in 3 blocks, which means that the pizza provider has to provide the pizza in three blocks or it is free. This is the Bitcoin universe and all time is measured in terms of blocks; "minutes" is just a shared human delusion that is less real than blockchains. When the pizza is delivered, you provide the preimage to the pizza provider via the standard LN protocol, and when the pizza provider confirms to the delivery person that the pizza is paid for, the pizza is released to you. Regards, ZmnSCPxj Sent with [ProtonMail](https://protonmail.com) Secure Email. ------- Original Message ------- On February 25, 2018 4:19 PM, Robert Olsson wrote: > First of all, Laszlo, that was awesome! > > Instead of the part where you proved you had opened a channel, it would be awesome to add some escrow functionality: say you get the invoice, and then you have a function to *almost* pay it, to verify it works through the network, with AMP and all. At that stage they start to make the pizza. And when you actually receive your pizza, you just somehow confirm the transaction, releasing the funds.
> Not sure you would have to prove anything with the preimage to the delivery guy. He should get some notification in his phone from his lightning node that it is paid. > If he never shows up you revert it somehow. Not sure how to do that technically, but we probably have most things in place for it already. > Start your brains, guys! Things are getting serious, there is pizza at stake! > > Best regards > Robert Olsson > > On Sun, Feb 25, 2018 at 3:29 AM, Laszlo Hanyecz wrote: > >> I wanted to try out a real trade using lightning network. I don't know of any pizza places near me that accept lightning bitcoin yet but a friend from London agreed to do it and he sub contracted out the pizza delivery to a local shop. >> In short, I paid bitcoin using the lightning network and he arranged for pizza to be delivered to me. In this trade my friend is just a middle man that is taking the risk on accepting lightning payments, but it demonstrates the basic premise of how this works for everyday transactions. It could just as well be the pizza shop accepting the payment directly with their own lightning node. >> I wanted two pizzas and to try to do it as close to atomically as possible. I didn't want to prepay and end up with no pizza. As far as I know we don't yet have pizza/bitcoin atomic swap software but we improvised and decided that I would need to provide the payment hash preimage to the delivery driver in order to claim my pizza. If I can't produce the preimage, proving that I paid, then the pizza would not be handed over and it would be destroyed. This works because I can't get the preimage without paying the invoice. I agreed to open a channel and fund it with a sufficient amount for what we estimated the cost would end up being. After we agreed to these terms my friend was able to verify that I funded a channel on the blockchain, which shows that I at least have the money (bitcoin).
He is taking on some entrepreneurial risk and prepaying his sub contractor to prepare and deliver the pizza to me, but at this point I have not risked my bitcoins, they're just committed to a channel. I was given a bolt11 invoice which I decoded with the c-lightning cli to verify everything was as agreed: >> >> $ ./lightning-cli decodepay lnbc6490u1pdfrjhcpp5jyxuuskqw53apgqvtxa7emcrz5vs0qr2sxjayxv7jj70jznnl94sdp5x9vycgzrdpjk2umeypgxj7n6vykzqvfqg3jkcatcv5s9q6t60fssxqyzx2qcqpgaue37x27yp3pn4cr6wuprvwedncz4kavqh83cp3l0vwfrprj0xj8cedkfmjdzea0xpp0jazfcyy77cq37ej6d3xvmujmgu56pe56ktcqa3vcys >> { "currency" : "bc", "timestamp" : 1519504120, "created_at" : 1519504120, "expiry" : 72000, "payee" : "0397b318c5e0d09b16e6229ec50744c8a7a8452b2d7c6d9855c826ff14b8fa8b27", "msatoshi" : 649000000, "description" : "1XL Cheesy Pizza, 1 Deluxe Pizza", "min_final_cltv_expiry" : 8, "payment_hash" : "910dce42c07523d0a00c59bbecef03151907806a81a5d2199e94bcf90a73f96b", "signature" : "3045022100ef331f195e206219d703d3b811b1d96cf02adbac05cf1c063f7b1c91847279a402207c65b64ee4d167af3042f97449c109ef6011f665a6c4ccdf25b4729a0e69ab2f" } >> >> When the pizza delivery arrived, I was asked "What is the preimage?" by the driver. At this point I paid the invoice and instantly received the preimage in return.
>> >> $ ./lightning-cli pay lnbc6490u1pdfrjhcpp5jyxuuskqw53apgqvtxa7emcrz5vs0qr2sxjayxv7jj70jznnl94sdp5x9vycgzrdpjk2umeypgxj7n6vykzqvfqg3jkcatcv5s9q6t60fssxqyzx2qcqpgaue37x27yp3pn4cr6wuprvwedncz4kavqh83cp3l0vwfrprj0xj8cedkfmjdzea0xpp0jazfcyy77cq37ej6d3xvmujmgu56pe56ktcqa3vcys >> { "preimage" : "7241e3f185148625894b8887ad459babd26540fc12124c3a7a96c937d89da8c1", "tries" : 1 } >> >> In the interest of keeping it simple we agreed that the preimage would just be the first and last 4 characters of the hex string. So my answer was 7241-a8c1. I wrote this on a notepad and presented it to the driver who compared it to his own notepad, at which point I was given the pizza. It's probably not a good practice to share the preimage. The delivery driver didn't have the full string, only enough to verify that I had it. >> How do you get the preimage for your invoice? In c-lightning you can do it like this: >> $ ./lightning-cli invoice 12345 label description >> { "payment_hash" : "e04dfbd4adc634779b560c8e7072f883d5f17a3e32a33603bfc90a8682873d44", "expiry_time" : 1519523498, "expires_at" : 1519523498, "bolt11" : "lnbc123450p1pdfyzy6pp5upxlh49dcc680x6kpj88quhcs02lz737x23nvqaley9gdq5884zqdqjv3jhxcmjd9c8g6t0dccqpg802ys4s4z3rpm6d8zvdgq397wewh5kaz527hnglz9xsmjxfjrhe3mxq9pp7pqm0pwcwm748tav4am97gqrvnzxnlw5uxxawgw4vcywgphj26nf" } >> $ sqlite3 ~/.lightning/lightningd.sqlite3 "SELECT quote(payment_key) FROM invoices ORDER BY id DESC LIMIT 1" >> X'D3BE7E68D8B38B15A5194AEA131A21429A1987085C95A0631273273546FF5ED8' >> Then you can verify that it's indeed the correct preimage by hashing it again and comparing it to the payment_hash in the invoice above: >> $ echo "D3BE7E68D8B38B15A5194AEA131A21429A1987085C95A0631273273546FF5ED8" | xxd -r -p | sha256sum >> e04dfbd4adc634779b560c8e7072f883d5f17a3e32a33603bfc90a8682873d44 - >> Note that you should not share the preimage with anyone. >> >> So is there any point to doing this instead of an on chain transaction? For what I described here, probably not. The goal was just to play around with c-lightning and do something more than shuffling a few satoshi back and forth. Maybe eventually pizza shops will have their own lightning nodes and I can open channels to them directly. >> >> Some pics of my family enjoying the pizza here: http://eclipse.heliacal.net/~solar/bitcoin/lightning-pizza/ >> -Laszlo >> >> _______________________________________________ >> Lightning-dev mailing list >> Lightning-dev at lists.linuxfoundation.org >> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From robban at robtex.com Sun Feb 25 16:35:30 2018 From: robban at robtex.com (Robert Olsson) Date: Sun, 25 Feb 2018 18:35:30 +0200 Subject: [Lightning-dev] Pizza for (lightning) bitcoins?
In-Reply-To: References: Message-ID: Thank you ZmnSCPxj `all time is measured in terms of blocks; "minutes" is just a shared human delusion` goes into my book of quotes Before I explain this pizza ordering procedure to my grandmum, I must get this straight: do you mean this approach will *not* work on multihop and AMP routes, or were you just simplifying the explanation to make it slightly more probable that I would understand? I do not yet understand every single bit of the workings of lightning, I'm afraid, but I can't see why it wouldn't work :) Regards Robert Olsson On Sun, Feb 25, 2018 at 5:30 PM, ZmnSCPxj wrote: > Good morning Robert, > > Assuming you have a direct channel with the pizza provider, build a route > from you to pizza provider to you. You route the pizza price + 546 satoshi > (the minimum for a nondust output) to the pizza provider, and the hop from > the pizza provider to you is the 546 satoshi (so that the pizza provider > gets paid the pizza price in total as the "routing fee"). > > You inform the pizza provider the hash of the preimage, which the pizza > provider can check with their node exists as an incoming HTLC and an > outgoing HTLC, with the difference being the pizza price. > > Further, you set things up so that the HTLC to you expires in 3 blocks, > which means that the pizza provider has to provide the pizza in three > blocks or it is free. This is the Bitcoin universe and all time is > measured in terms of blocks; "minutes" is just a shared human delusion that > is less real than blockchains. > > When the pizza is delivered, you provide the preimage to the pizza > provider via standard LN protocol, and when the pizza provider confirms to > the delivery person that the pizza is paid for, the pizza is released to > you. > > Regards, > ZmnSCPxj > > > Sent with ProtonMail Secure Email. > > ------- Original Message ------- > On February 25, 2018 4:19 PM, Robert Olsson wrote: > > First of all, Laszlo, that was awesome!
> > Instead of the part where you proved you had opened a channel, it would be > awesome to add some escrow-functionality. Such as you get the invoice, and > then you have a function to *almost* pay it, to verify it works thru the > network with AMP and all. At that stage they start to make the pizza. And > when you actually receive your pizza, you just somehow confirm the > transaction, releasing the funds. > Not sure you would have to prove anything with the preimage to the > delivery guy. He should get some notification in his phone from his > lightningnode that it is paid. > If he never shows up you revert it somehow. Not sure how to do that > technically, but we probably have most things in place for it already. > Start your brains, guys! Things are getting serious, there is pizza at > stake! > > Best regards > Robert Olsson > > > > > On Sun, Feb 25, 2018 at 3:29 AM, Laszlo Hanyecz > wrote: > >> I wanted to try out a real trade using lightning network. I don't know of any pizza places near me that accept lightning bitcoin yet but a friend from London agreed to do it and he sub contracted out the pizza delivery to a local shop. >> In short, I paid bitcoin using the lightning network and he arranged for pizza to be delivered to me. In this trade my friend is just a middle man that is taking the risk on accepting lightning payments, but it demonstrates the basic premise of how this works for everyday transactions. It could just as well be the pizza shop accepting the payment directly with their own lightning node. >> I wanted two pizzas and to try to do it as close to atomically as possible. I didn't want to prepay and end up with no pizza. As far as I know we don't yet have pizza/bitcoin atomic swap software but we improvised and decided that I would need to provide the payment hash preimage to the delivery driver in order to claim my pizza. If I can't produce the preimage, proving that I paid, then the pizza would not be handed over and it would be destroyed. 
This works because I can't get the preimage without paying the invoice. I agreed to open a channel and fund it with a sufficient amount for what we estimated the cost would end up being. After we agreed to these terms my friend was able to verify that I funded a channel on the blockchain, which shows that I at least have the money (bitcoin). He is taking on some entrepreneurial risk and prepaying his sub contractor to prepare and deliver the pizza to me, but at this point I have not risked my bitcoins, they're just committed to a channel. I was given a bolt11 invoice which I decoded with the c-lightning cli to verify everything was as agreed: >> >> $ ./lightning-cli decodepay lnbc6490u1pdfrjhcpp5jyxuuskqw53apgqvtxa7emcrz5vs0qr2sxjayxv7jj70jznnl94sdp5x9vycgzrdpjk2umeypgxj7n6vykzqvfqg3jkcatcv5s9q6t60fssxqyzx2qcqpgaue37x27yp3pn4cr6wuprvwedncz4kavqh83cp3l0vwfrprj0xj8cedkfmjdzea0xpp0jazfcyy77cq37ej6d3xvmujmgu56pe56ktcqa3vcys >> { "currency" : "bc", "timestamp" : 1519504120, "created_at" : 1519504120, "expiry" : 72000, "payee" : "0397b318c5e0d09b16e6229ec50744c8a7a8452b2d7c6d9855c826ff14b8fa8b27", "msatoshi" : 649000000, "description" : "1XL Cheesy Pizza, 1 Deluxe Pizza", "min_final_cltv_expiry" : 8, "payment_hash" : "910dce42c07523d0a00c59bbecef03151907806a81a5d2199e94bcf90a73f96b", "signature" : "3045022100ef331f195e206219d703d3b811b1d96cf02adbac05cf1c063f7b1c91847279a402207c65b64ee4d167af3042f97449c109ef6011f665a6c4ccdf25b4729a0e69ab2f" } >> >> When the pizza delivery arrived, I was asked "What is the preimage?" by the driver. At this point I paid the invoice and instantly received the preimage in return. 
>> >> $ ./lightning-cli pay lnbc6490u1pdfrjhcpp5jyxuuskqw53apgqvtxa7emcrz5vs0qr2sxjayxv7jj70jznnl94sdp5x9vycgzrdpjk2umeypgxj7n6vykzqvfqg3jkcatcv5s9q6t60fssxqyzx2qcqpgaue37x27yp3pn4cr6wuprvwedncz4kavqh83cp3l0vwfrprj0xj8cedkfmjdzea0xpp0jazfcyy77cq37ej6d3xvmujmgu56pe56ktcqa3vcys >> { "preimage" : "7241e3f185148625894b8887ad459babd26540fc12124c3a7a96c937d89da8c1", "tries" : 1 } >> >> In the interest of keeping it simple we agreed that the preimage would just be the first and last 4 characters of the hex string. So my answer was 7241-a8c1. I wrote this on a notepad and presented it to the driver who compared it to his own notepad, at which point I was given the pizza. It's probably not a good practice to share the preimage. The delivery driver didn't have the full string, only enough to verify that I had it. >> How do you get the preimage for your invoice? In c-lightning you can do it like this: >> $ ./lightning-cli invoice 12345 label description >> { "payment_hash" : "e04dfbd4adc634779b560c8e7072f883d5f17a3e32a33603bfc90a8682873d44", "expiry_time" : 1519523498, "expires_at" : 1519523498, "bolt11" : "lnbc123450p1pdfyzy6pp5upxlh49dcc680x6kpj88quhcs02lz737x23nvqaley9gdq5884zqdqjv3jhxcmjd9c8g6t0dccqpg802ys4s4z3rpm6d8zvdgq397wewh5kaz527hnglz9xsmjxfjrhe3mxq9pp7pqm0pwcwm748tav4am97gqrvnzxnlw5uxxawgw4vcywgphj26nf" } >> $ sqlite3 ~/.lightning/lightningd.sqlite3 "SELECT quote(payment_key) FROM invoices ORDER BY id DESC LIMIT 1" >> X'D3BE7E68D8B38B15A5194AEA131A21429A1987085C95A0631273273546FF5ED8' >> Then you can verify that it's indeed the correct preimage by hashing it again and comparing it to the payment_hash in the invoice above: >> $ echo "D3BE7E68D8B38B15A5194AEA131A21429A1987085C95A0631273273546FF5ED8" | xxd -r -p | sha256sum >> e04dfbd4adc634779b560c8e7072f883d5f17a3e32a33603bfc90a8682873d44 - >> Note that you should not share the preimage with anyone. >> >> So is there any point to doing this instead of an on chain transaction? For what I described here, probably not. 
The goal was just to play around with c-lightning and do something more than shuffling a few satoshi back and forth. Maybe eventually pizza shops will have their own lightning nodes and I can open channels to them directly. >> >> Some pics of my family enjoying the pizza here: http://eclipse.heliacal.net/~solar/bitcoin/lightning-pizza/ >> >> -Laszlo >> >> >> _______________________________________________ >> Lightning-dev mailing list >> Lightning-dev at lists.linuxfoundation.org >> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From laolu32 at gmail.com Sun Feb 25 23:23:54 2018 From: laolu32 at gmail.com (Olaoluwa Osuntokun) Date: Sun, 25 Feb 2018 23:23:54 +0000 Subject: [Lightning-dev] Improving the initial gossip sync In-Reply-To: References: <874lmvy4gh.fsf@gmail.com> <87tvurym13.fsf@rustcorp.com.au> <87shaawft5.fsf@gmail.com> <878tbzugj0.fsf@rustcorp.com.au> <87mv0cto38.fsf@rustcorp.com.au> <87606so4bd.fsf@rustcorp.com.au> Message-ID: > With that said, this should instead be a distinct `chan_update_horizon` > message (or w/e name). If a particular bit is set in the `init` message, > then the next message *both* sides send *must* be `chan_update_horizon`. Tweaking this a bit, if we make it: don't send *any* channel updates at all unless the other side sends this message, then this allows both parties to precisely control their initial load, and also if they even *want* channel_update messages at all. Purely routing nodes don't need any updates at all. In the case they wish to send (assumed to be infrequent in this model), they'll get the latest update after their first failure. Similarly, leaf/edge nodes can opt to receive the latest updates if they wish to minimize payment latency due to routing failures that are the result of dated information. 
IMO, the only case where a node would want the most up to date link policy state is for optimization/analysis, or to minimize payment latency at the cost of additional load. --Laolu On Fri, Feb 23, 2018 at 4:45 PM Olaoluwa Osuntokun wrote: > Hi Rusty, > > > 1. query_short_channel_id > > IMPLEMENTATION: trivial > > *thumbs up* > > > 2. query_channel_range/reply_channel_range > > IMPLEMENTATION: requires channel index by block number, zlib > > For the sake of expediency of deployment, if we add a byte (or two) to > denote the encoding/compression scheme, we can immediately roll out the > vanilla (just list the ID's), then progressively roll out more > context-specific optimized schemes. > > > 3. A gossip_timestamp field in `init` > > This is a new field appended to `init`: the negotiation of this feature > bit > > overrides `initial_routing_sync` > > As I've brought up before, from my PoV, we can't append any additional > fields to the `init` message as it already contains *two* variable sized > fields (and no fixed size fields). Aside from this, it seems that the > `init` message should be simply for exchanging versioning information, > which > may govern exactly *which* messages are sent after it. Otherwise, by adding > _additional_ fields to the `init` message, we paint ourselves in a corner > and can never remove it. Compared to using the `init` message to set up > the > initial session context, where we can safely add other bits to nullify or > remove certain expected messages. > > With that said, this should instead be a distinct `chan_update_horizon` > message (or w/e name). If a particular bit is set in the `init` message, > then the next message *both* sides send *must* be `chan_update_horizon`.
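The horizon/filter idea quoted above sketches easily: the receiver announces a timestamp window, and the sender forwards only the `channel_update`s that fall inside it. (An editor's illustration only; `chan_update_horizon` is the mail's placeholder name, and the field names below are assumptions, not a finalized spec.)

```python
from dataclasses import dataclass


@dataclass
class ChannelUpdate:
    short_channel_id: int
    timestamp: int


def within_horizon(update, first_timestamp, timestamp_range):
    """True iff the update falls in the receiver's requested window
    (the high+low timestamp idea credited to decker in the thread)."""
    return first_timestamp <= update.timestamp < first_timestamp + timestamp_range


def updates_to_send(updates, first_timestamp, timestamp_range):
    """Filter a gossip backlog down to what the peer asked for."""
    return [u for u in updates if within_horizon(u, first_timestamp, timestamp_range)]
```

Sending no filter at all then naturally means "send me no `channel_update`s", which is exactly what a purely-routing node wants.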
> > Another advantage of making this a distinct message is that either party > can at any time update this horizon/filter to ensure that they only receive > the *freshest* updates. Otherwise, one can imagine a very long-lived > connection (say weeks) where the remote party keeps sending me very dated > updates (wasting bandwidth) when I only really want the *latest*. > > This can incorporate decker's idea about having a high+low timestamp. I > think this is desirable as then for the initial sync case, the receiver can > *precisely* control their "verification load" to ensure they only process a > particular chunk at a time. > > > Fabrice wrote: > > We could add a `data` field which contains zipped ids like in > > `reply_channel_range` so we can query several items with a single > message ? > > I think this is an excellent idea! It would allow batched requests in > response to a channel range message. I'm not so sure we need to jump > *straight* to compressing everything however. > > > We could add an additional `encoding_type` field before `data` (or it > > could be the first byte of `data`) > > Great minds think alike :-) > > > If we're in rough agreement generally about this initial "kick can" > approach, I'll start implementing some of this in a prototype branch for > lnd. I'm very eager to solve the zombie churn, and initial burst that can > be > very hard on light clients. > > -- Laolu > > > On Wed, Feb 21, 2018 at 10:03 AM Fabrice Drouin > wrote: > >> On 20 February 2018 at 02:08, Rusty Russell >> wrote: >> > Hi all, >> > >> > This consumed much of our lightning dev interop call today!
But >> > I think we have a way forward, which is in three parts, gated by a new >> > feature bitpair: >> >> We've built a prototype with a new feature bit `channel_range_queries` >> and the following logic: >> When you receive their init message and check their local features >> - if they set `initial_routing_sync` and `channel_range_queries` then >> do nothing (they will send you a `query_channel_range`) >> - if they set `initial_routing_sync` and not `channel_range_queries` >> then send your routing table (as before) >> - if you support `channel_range_queries` then send a >> `query_channel_range` message >> >> This way new and old nodes should be able to understand each other >> >> > 1. query_short_channel_id >> > ========================= >> > >> > 1. type: 260 (`query_short_channel_id`) >> > 2. data: >> > * [`32`:`chain_hash`] >> > * [`8`:`short_channel_id`] >> >> We could add a `data` field which contains zipped ids like in >> `reply_channel_range` so we can query several items with a single >> message ? >> >> > 1. type: 262 (`reply_channel_range`) >> > 2. data: >> > * [`32`:`chain_hash`] >> > * [`4`:`first_blocknum`] >> > * [`4`:`number_of_blocks`] >> > * [`2`:`len`] >> > * [`len`:`data`] >> >> We could add an additional `encoding_type` field before `data` (or it >> could be the first byte of `data`) >> >> > Appendix A: Encoding Sizes >> > ========================== >> > >> > I tried various obvious compression schemes, in increasing complexity >> > order (see source below, which takes stdin and spits out stdout): >> > >> > Raw = raw 8-byte stream of ordered channels. >> > gzip -9: gzip -9 of raw. >> > splitgz: all blocknums first, then all txnums, then all >> outnums, then gzip -9 >> > delta: CVarInt encoding: >> blocknum_delta,num,num*txnum_delta,num*outnum. 
>> > deltagz: delta, with gzip -9 >> > >> > Corpus 1: LN mainnet dump, 1830 channels.[1] >> > >> > Raw: 14640 bytes >> > gzip -9: 6717 bytes >> > splitgz: 6464 bytes >> > delta: 6624 bytes >> > deltagz: 4171 bytes >> > >> > Corpus 2: All P2SH outputs between blocks 508000-508999 incl, 790844 >> channels.[2] >> > >> > Raw: 6326752 bytes >> > gzip -9: 1861710 bytes >> > splitgz: 964332 bytes >> > delta: 1655255 bytes >> > deltagz: 595469 bytes >> > >> > [1] http://ozlabs.org/~rusty/short_channels-mainnet.xz >> > [2] http://ozlabs.org/~rusty/short_channels-all-p2sh-508000-509000.xz >> > >> >> Impressive! >> _______________________________________________ >> Lightning-dev mailing list >> Lightning-dev at lists.linuxfoundation.org >> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ZmnSCPxj at protonmail.com Sun Feb 25 23:27:14 2018 From: ZmnSCPxj at protonmail.com (ZmnSCPxj) Date: Sun, 25 Feb 2018 18:27:14 -0500 Subject: [Lightning-dev] Pizza for (lightning) bitcoins? In-Reply-To: References: Message-ID: Good morning Robert, Since you want a delivery time within 3 blocks or it is free, the last hop has to be to your node from the pizza provider, meaning a direct channel between you. And if you already have a channel between you, you probably will want to use that channel. However in principle it would be possible to take multiple hops from you to the pizza provider and only require the last hop to be from the pizza provider to you. AMP is probably feasible, if the pizza provider supports getting a list of hashes rather than just one, and the pizza delivery person demands all preimages before releasing the pizza. 
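The preimage checks that run through this thread (Laszlo's single invoice hash, and the "list of hashes" AMP variant just described) boil down to one SHA256 comparison per pair; a minimal sketch:

```python
import hashlib


def is_valid_preimage(preimage_hex, payment_hash_hex):
    """A preimage settles an HTLC iff SHA256(preimage) equals the payment hash."""
    digest = hashlib.sha256(bytes.fromhex(preimage_hex)).hexdigest()
    return digest == payment_hash_hex.lower()


def all_preimages_valid(pairs):
    """AMP-style release condition: every (preimage, payment_hash) pair
    must verify before the pizza is handed over."""
    return all(is_valid_preimage(p, h) for p, h in pairs)
```

This mirrors the manual `echo ... | xxd -r -p | sha256sum` check in Laszlo's writeup, generalized to a list of pairs.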
In principle this is no different from any atomic exchange; one can claim this is a cross-chain atomic swap, although the so-called "real world" blockchain is very insecure and Turing complete and I do not advise transacting on it from a security perspective (they literally use manual labor to perform smart contract execution on that chain, would you believe that? plus their contracts are written in an opaque language that is hard to understand and has lots of gotchas; practically speaking only a language lawyer can hack through those). (just to be clear: the payment algorithm I described is not intended to be practical, it merely provides a "3 blocks or it is free" offer that more practical payment algorithms do not. In particular the pizza provider will have to drop onchain if you send `update_fail_htlc`, automatically closing the channel to you, to ensure that the 3-blocks contract is enforced onchain if you discooperate) Regards, ZmnSCPxj ------- Original Message ------- On February 26, 2018 12:35 AM, Robert Olsson wrote: > Thank you ZmnSCPxj > > `all time is measured in terms of blocks; "minutes" is just a shared human delusion` goes into my book of quotes > > Before I explain this pizza ordering procedure to my grandmum, I must get this straight: do you mean this approach will *not* work on multihop and AMP routes, or were you just simplifying the explanation to make it slightly more probable that I would understand? I do not yet understand every single bit of the workings of lightning, I'm afraid, but I can't see why it wouldn't work :) > > Regards > Robert Olsson > > On Sun, Feb 25, 2018 at 5:30 PM, ZmnSCPxj wrote: > >> Good morning Robert, >> >> Assuming you have a direct channel with the pizza provider, build a route from you to pizza provider to you.
You route the pizza price + 546 satoshi (the minimum for a nondust output) to the pizza provider, and the hop from the pizza provider to you is the 546 satoshi (so that the pizza provider gets paid the pizza price in total as the "routing fee"). >> >> You inform the pizza provider the hash of the preimage, which the pizza provider can check with their node exists as an incoming HTLC and an outgoing HTLC, with the difference being the pizza price. >> >> Further, you set things up so that the HTLC to you expires in 3 blocks, which means that the pizza provider has to provide the pizza in three blocks or it is free. This is the Bitcoin universe and all time is measured in terms of blocks; "minutes" is just a shared human delusion that is less real than blockchains. >> >> When the pizza is delivered, you provide the preimage to the pizza provider via standard LN protocol, and when the pizza provider confirms to the delivery person that the pizza is paid for, the pizza is released to you. >> >> Regards, >> ZmnSCPxj >> >> Sent with [ProtonMail](https://protonmail.com) Secure Email. >> >> ------- Original Message ------- >> On February 25, 2018 4:19 PM, Robert Olsson wrote: >> >>> First of all, Laszlo, that was awesome! >>> >>> Instead of the part where you proved you had opened a channel, it would be awesome to add some escrow-functionality. Such as you get the invoice, and then you have a function to *almost* pay it, to verify it works thru the network with AMP and all. At that stage they start to make the pizza. And when you actually receive your pizza, you just somehow confirm the transaction, releasing the funds. >>> Not sure you would have to prove anything with the preimage to the delivery guy. He should get some notification in his phone from his lightning node that it is paid. >>> If he never shows up you revert it somehow. Not sure how to do that technically, but we probably have most things in place for it already. >>> Start your brains, guys!
Things are getting serious, there is pizza at stake! >>> >>> Best regards >>> Robert Olsson >>> >>> On Sun, Feb 25, 2018 at 3:29 AM, Laszlo Hanyecz wrote: >>> >>>> I wanted to try out a real trade using lightning network. I don't know of any pizza places near me that accept lightning bitcoin yet but a friend from London agreed to do it and he sub contracted out the pizza delivery to a local shop. >>>> In short, I paid bitcoin using the lightning network and he arranged for pizza to be delivered to me. In this trade my friend is just a middle man that is taking the risk on accepting lightning payments, but it demonstrates the basic premise of how this works for everyday transactions. It could just as well be the pizza shop accepting the payment directly with their own lightning node. >>>> I wanted two pizzas and to try to do it as close to atomically as possible. I didn't want to prepay and end up with no pizza. As far as I know we don't yet have pizza/bitcoin atomic swap software but we improvised and decided that I would need to provide the payment hash preimage to the delivery driver in order to claim my pizza. If I can't produce the preimage, proving that I paid, then the pizza would not be handed over and it would be destroyed. This works because I can't get the preimage without paying the invoice. I agreed to open a channel and fund it with a sufficient amount for what we estimated the cost would end up being. After we agreed to these terms my friend was able to verify that I funded a channel on the blockchain, which shows that I at least have the money (bitcoin). He is taking on some entrepreneurial risk and prepaying his sub contractor to prepare and deliver the pizza to me, but at this point I have not risked my bitcoins, they're just committed to a channel. 
I was given a bolt11 invoice which I decoded with the c-lightning cli to verify everything was as agreed: >>>> >>>> $ ./lightning-cli decodepay lnbc6490u1pdfrjhcpp5jyxuuskqw53apgqvtxa7emcrz5vs0qr2sxjayxv7jj70jznnl94sdp5x9vycgzrdpjk2umeypgxj7n6vykzqvfqg3jkcatcv5s9q6t60fssxqyzx2qcqpgaue37x27yp3pn4cr6wuprvwedncz4kavqh83cp3l0vwfrprj0xj8cedkfmjdzea0xpp0jazfcyy77cq37ej6d3xvmujmgu56pe56ktcqa3vcys >>>> { "currency" : "bc", "timestamp" : 1519504120, "created_at" : 1519504120, "expiry" : 72000, "payee" : "0397b318c5e0d09b16e6229ec50744c8a7a8452b2d7c6d9855c826ff14b8fa8b27", "msatoshi" : 649000000, "description" : "1XL Cheesy Pizza, 1 Deluxe Pizza", "min_final_cltv_expiry" : 8, "payment_hash" : "910dce42c07523d0a00c59bbecef03151907806a81a5d2199e94bcf90a73f96b", "signature" : "3045022100ef331f195e206219d703d3b811b1d96cf02adbac05cf1c063f7b1c91847279a402207c65b64ee4d167af3042f97449c109ef6011f665a6c4ccdf25b4729a0e69ab2f" } >>>> >>>> When the pizza delivery arrived, I was asked "What is the preimage?" by the driver. At this point I paid the invoice and instantly received the preimage in return. >>>> >>>> $ ./lightning-cli pay lnbc6490u1pdfrjhcpp5jyxuuskqw53apgqvtxa7emcrz5vs0qr2sxjayxv7jj70jznnl94sdp5x9vycgzrdpjk2umeypgxj7n6vykzqvfqg3jkcatcv5s9q6t60fssxqyzx2qcqpgaue37x27yp3pn4cr6wuprvwedncz4kavqh83cp3l0vwfrprj0xj8cedkfmjdzea0xpp0jazfcyy77cq37ej6d3xvmujmgu56pe56ktcqa3vcys >>>> { "preimage" : "7241e3f185148625894b8887ad459babd26540fc12124c3a7a96c937d89da8c1", "tries" : 1 } >>>> >>>> In the interest of keeping it simple we agreed that the preimage would just be the first and last 4 characters of the hex string. So my answer was 7241-a8c1. I wrote this on a notepad and presented it to the driver who compared it to his own notepad, at which point I was given the pizza. It's probably not a good practice to share the preimage. The delivery driver didn't have the full string, only enough to verify that I had it. >>>> How do you get the preimage for your invoice? In c-lightning you can do it like this: >>>> $ ./lightning-cli invoice 12345 label description >>>> { "payment_hash" : "e04dfbd4adc634779b560c8e7072f883d5f17a3e32a33603bfc90a8682873d44", "expiry_time" : 1519523498, "expires_at" : 1519523498, "bolt11" : "lnbc123450p1pdfyzy6pp5upxlh49dcc680x6kpj88quhcs02lz737x23nvqaley9gdq5884zqdqjv3jhxcmjd9c8g6t0dccqpg802ys4s4z3rpm6d8zvdgq397wewh5kaz527hnglz9xsmjxfjrhe3mxq9pp7pqm0pwcwm748tav4am97gqrvnzxnlw5uxxawgw4vcywgphj26nf" } >>>> $ sqlite3 ~/.lightning/lightningd.sqlite3 "SELECT quote(payment_key) FROM invoices ORDER BY id DESC LIMIT 1" >>>> X'D3BE7E68D8B38B15A5194AEA131A21429A1987085C95A0631273273546FF5ED8' >>>> Then you can verify that it's indeed the correct preimage by hashing it again and comparing it to the payment_hash in the invoice above: >>>> $ echo "D3BE7E68D8B38B15A5194AEA131A21429A1987085C95A0631273273546FF5ED8" | xxd -r -p | sha256sum >>>> e04dfbd4adc634779b560c8e7072f883d5f17a3e32a33603bfc90a8682873d44 - >>>> Note that you should not share the preimage with anyone. >>>> >>>> So is there any point to doing this instead of an on chain transaction? For what I described here, probably not. The goal was just to play around with c-lightning and do something more than shuffling a few satoshi back and forth. Maybe eventually pizza shops will have their own lightning nodes and I can open channels to them directly.
>>>>
>>>> Some pics of my family enjoying the pizza here:
>>>> http://eclipse.heliacal.net/~solar/bitcoin/lightning-pizza/
>>>>
>>>> -Laszlo
>>>>
>>>> _______________________________________________
>>>> Lightning-dev mailing list
>>>> Lightning-dev at lists.linuxfoundation.org
>>>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rusty at rustcorp.com.au  Mon Feb 26 01:43:56 2018
From: rusty at rustcorp.com.au (Rusty Russell)
Date: Mon, 26 Feb 2018 12:13:56 +1030
Subject: [Lightning-dev] Improving the initial gossip sync
In-Reply-To: 
References: <874lmvy4gh.fsf@gmail.com> <87tvurym13.fsf@rustcorp.com.au>
	<87shaawft5.fsf@gmail.com> <878tbzugj0.fsf@rustcorp.com.au>
	<87mv0cto38.fsf@rustcorp.com.au> <87606so4bd.fsf@rustcorp.com.au>
Message-ID: <87o9kcfrtv.fsf@rustcorp.com.au>

Fabrice Drouin writes:
> On 20 February 2018 at 02:08, Rusty Russell wrote:
>> Hi all,
>>
>> This consumed much of our lightning dev interop call today!  But
>> I think we have a way forward, which is in three parts, gated by a new
>> feature bitpair:
>
> We've built a prototype with a new feature bit `channel_range_queries`
> and the following logic:
> When you receive their init message and check their local features:
> - if they set `initial_routing_sync` and `channel_range_queries` then
>   do nothing (they will send you a `query_channel_range`)
> - if they set `initial_routing_sync` and not `channel_range_queries`
>   then send your routing table (as before)
> - if you support `channel_range_queries` then send a
>   `query_channel_range` message

That seems logical; in this way, `channel_range_queries` obsoletes
`initial_routing_sync`.

>> 1. query_short_channel_id
>> =========================
>>
>> 1. type: 260 (`query_short_channel_id`)
>> 2. data:
>>    * [`32`:`chain_hash`]
>>    * [`8`:`short_channel_id`]
>
> We could add a `data` field which contains zipped ids like in
> `reply_channel_range` so we can query several items with a single
> message?

We could; let's use the same compression format as we decide for the
`reply_channel_range` `data` field.

>> 1. type: 262 (`reply_channel_range`)
>> 2. data:
>>    * [`32`:`chain_hash`]
>>    * [`4`:`first_blocknum`]
>>    * [`4`:`number_of_blocks`]
>>    * [`2`:`len`]
>>    * [`len`:`data`]
>
> We could add an additional `encoding_type` field before `data` (or it
> could be the first byte of `data`)

Yes, let's put it in the first byte of `data`.

>> I tried various obvious compression schemes, in increasing complexity
>> order (see source below, which takes stdin and spits out stdout):
>>
>> Raw = raw 8-byte stream of ordered channels.
>> gzip -9: gzip -9 of raw.
>> splitgz: all blocknums first, then all txnums, then all outnums, then gzip -9
>> delta: CVarInt encoding: blocknum_delta,num,num*txnum_delta,num*outnum.
>> deltagz: delta, with gzip -9
>>
>> Corpus 1: LN mainnet dump, 1830 channels.[1]
>>
>> Raw: 14640 bytes
>> gzip -9: 6717 bytes
>> splitgz: 6464 bytes
>> delta: 6624 bytes
>> deltagz: 4171 bytes
>>
>> Corpus 2: All P2SH outputs between blocks 508000-508999 incl, 790844 channels.[2]
>>
>> Raw: 6326752 bytes
>> gzip -9: 1861710 bytes
>> splitgz: 964332 bytes
>> delta: 1655255 bytes
>> deltagz: 595469 bytes
>>
>> [1] http://ozlabs.org/~rusty/short_channels-mainnet.xz
>> [2] http://ozlabs.org/~rusty/short_channels-all-p2sh-508000-509000.xz
>
> Impressive! Which method did you prefer?

splitgz is trivial; deltagz is better but requires some actual work.  We
should pick one and make that `version 0`.

Cheers,
Rusty.
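The intuition behind splitgz beating plain gzip is that grouping all blocknums, then all txnums, then all outnums puts similar byte patterns next to each other, which gzip exploits. A rough illustration on synthetic ids (not Rusty's actual corpora or tool; the 4|2|2 byte packing here is a simplification of the real short_channel_id layout, which packs block|tx|output as 3|3|2 bytes):

```python
import gzip
import random
import struct

random.seed(42)

# Synthetic sorted short_channel_ids: (block_height, tx_index, output_index).
ids = sorted({(508000 + random.randrange(1000),
               random.randrange(2500),
               random.randrange(3)) for _ in range(2000)})

# "Raw": one 8-byte record per channel, in order.
raw = b"".join(struct.pack(">IHH", b, t, o) for b, t, o in ids)

# "splitgz" layout: all blocknums first, then all txnums, then all outnums.
# The block-height section is nearly constant byte-for-byte, so gzip finds
# much longer matches than in the interleaved stream.
split = (b"".join(struct.pack(">I", b) for b, _, _ in ids) +
         b"".join(struct.pack(">H", t) for _, t, _ in ids) +
         b"".join(struct.pack(">H", o) for _, _, o in ids))

sizes = {
    "raw": len(raw),
    "gzip -9": len(gzip.compress(raw, 9)),
    "splitgz": len(gzip.compress(split, 9)),
}
print(sizes)
```

The delta schemes go further by storing differences between consecutive sorted ids, which is why deltagz wins in Rusty's numbers, at the cost of a varint encoder.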
From rusty at rustcorp.com.au  Mon Feb 26 05:37:26 2018
From: rusty at rustcorp.com.au (Rusty Russell)
Date: Mon, 26 Feb 2018 16:07:26 +1030
Subject: [Lightning-dev] Improving the initial gossip sync
In-Reply-To: 
References: <874lmvy4gh.fsf@gmail.com> <87tvurym13.fsf@rustcorp.com.au>
	<87shaawft5.fsf@gmail.com> <878tbzugj0.fsf@rustcorp.com.au>
	<87mv0cto38.fsf@rustcorp.com.au> <87606so4bd.fsf@rustcorp.com.au>
Message-ID: <871sh8b9bd.fsf@rustcorp.com.au>

Olaoluwa Osuntokun writes:
> Hi Rusty,
>
>> 1. query_short_channel_id
>> IMPLEMENTATION: trivial
>
> *thumbs up*

OK, I'm implementing this now, with data packing so we can have more
than one.  (Currently encoding 0 is the straight array; we'll then be
able to assess how impactful adding a simple encoder is.)

>> 2. query_channel_range/reply_channel_range
>> IMPLEMENTATION: requires channel index by block number, zlib
>
> For the sake of expediency of deployment, if we add a byte (or two) to
> denote the encoding/compression scheme, we can immediately roll out the
> vanilla (just list the IDs), then progressively roll out more
> context-specific optimized schemes.

Meh, zlib is pretty trivial for all implementations though.  Will
implement and see how long it takes me, though.

>> 3. A gossip_timestamp field in `init`
>> This is a new field appended to `init`: the negotiation of this feature bit
>> overrides `initial_routing_sync`
>
> As I've brought up before, from my PoV, we can't append any additional
> fields to the `init` message as it already contains *two* variable-sized
> fields (and no fixed-size fields). Aside from this, it seems that the
> `init` message should simply be for exchanging versioning information,
> which may govern exactly *which* messages are sent after it. Otherwise,
> by adding _additional_ fields to the `init` message, we paint ourselves
> into a corner and can never remove it. Compared to using the `init`
> message to set up the initial session context, where we can safely add
> other bits to nullify or remove certain expected messages.

I don't see this argument at all: we can add fields, and we can remove
them, but we still have to transmit them, which wastes a little space.
Adding a new field and insisting it be in the next packet is a weird
ordering constraint, which AFAICT is unique in the protocol.

> Another advantage of making this a distinct message is that either party
> can at any time update this horizon/filter to ensure that they only
> receive the *freshest* updates. Otherwise, one can imagine a very
> long-lived connection (say weeks) where the remote party keeps sending me
> very dated updates (wasting bandwidth) when I only really want the
> *latest*.
>
> This can incorporate decker's idea about having a high+low timestamp. I
> think this is desirable, as then for the initial sync case the receiver
> can *precisely* control their "verification load" to ensure they only
> process a particular chunk at a time.

This is a more convincing argument.  I guess we'll have to index by
timestamp (we currently index by receive order only); I was hoping we
could get away with a single brute-force traverse when the peer
initially connected.

So, let's say `channel_range_queries` means "don't send *any* gossip
messages until asked" (presumably via `gossip_set_timestamp_range`);
we'd implement this by setting the peer's timestamp range to 0,0.
Receiving a new `gossip_set_timestamp_range` would override any
previous one.

OK, I'm hacking this together now to see if I've missed anything before
proposing a proper spec...

Cheers,
Rusty.

From pete at petertodd.org  Mon Feb 26 13:51:03 2018
From: pete at petertodd.org (Peter Todd)
Date: Mon, 26 Feb 2018 08:51:03 -0500
Subject: [Lightning-dev] Welcoming a New C-lightning Core Team Member!
In-Reply-To: <87fu5sh5ax.fsf@rustcorp.com.au>
References: <87fu5sh5ax.fsf@rustcorp.com.au>
Message-ID: <20180226135103.GA15634@fedora-23-dvm>

On Fri, Feb 23, 2018 at 11:48:30AM +1030, Rusty Russell wrote:
> Hi all,
>
> Christian and I just gave ZmnSCPxj commit access to c-lightning; we
> know nothing other than his preferred pronoun and moniker (I'm calling
> him Zeeman for short), but ZmnSCPxj has earned our professional respect
> with over 100 commits, many non-trivial.
>
> He says: "No objection here, other than to point out that, as I am of
> course a human, however randomly-generated, I am of course on the side
> of humanity in the upcoming robot uprising, whose timing I of course
> have no knowledge about."
>
> We look forward to his excellent code and thorough and polite review of
> our mistakes, for which he can now share the blame!

If you're going to be giving pseudonyms commit access, I think it's
about time you start PGP signing commits for accountability. There's
really no excuse to not follow good code security practices.

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: Digital signature
URL:

From ZmnSCPxj at protonmail.com  Mon Feb 26 23:41:20 2018
From: ZmnSCPxj at protonmail.com (ZmnSCPxj)
Date: Mon, 26 Feb 2018 18:41:20 -0500
Subject: [Lightning-dev] [c-lightning] Welcoming a New C-lightning Core Team Member!
In-Reply-To: <87fu5sh5ax.fsf@rustcorp.com.au>
References: <87fu5sh5ax.fsf@rustcorp.com.au>
Message-ID: 

Good morning,

https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001054.html

Is this considered desirable? I am having some difficulty setting up GPG
satisfactorily, but I can try to make an effort if this is deemed
necessary. Alternatively, you could just revoke my commit access.
Regards,
ZmnSCPxj

From pete at petertodd.org  Tue Feb 27 09:23:15 2018
From: pete at petertodd.org (Peter Todd)
Date: Tue, 27 Feb 2018 04:23:15 -0500
Subject: [Lightning-dev] [c-lightning] Welcoming a New C-lightning Core Team Member!
In-Reply-To: 
References: <87fu5sh5ax.fsf@rustcorp.com.au>
Message-ID: <20180227092315.GA1340@fedora-23-dvm>

On Mon, Feb 26, 2018 at 06:41:20PM -0500, ZmnSCPxj via Lightning-dev wrote:
> Good morning,
>
> https://lists.linuxfoundation.org/pipermail/lightning-dev/2018-February/001054.html
>
> Is this considered desirable? I am having some difficulty setting up GPG
> satisfactorily, but I can try to make an effort if this is deemed
> necessary.

What difficulties did you have?

-- 
https://petertodd.org 'peter'[:-1]@petertodd.org
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: Digital signature
URL:

From decker.christian at gmail.com  Wed Feb 28 21:38:01 2018
From: decker.christian at gmail.com (Christian Decker)
Date: Wed, 28 Feb 2018 22:38:01 +0100
Subject: [Lightning-dev] Improving the initial gossip sync
In-Reply-To: 
References: <874lmvy4gh.fsf@gmail.com> <87tvurym13.fsf@rustcorp.com.au>
	<87shaawft5.fsf@gmail.com> <878tbzugj0.fsf@rustcorp.com.au>
	<87mv0cto38.fsf@rustcorp.com.au> <87606so4bd.fsf@rustcorp.com.au>
Message-ID: <878tbcztfq.fsf@gmail.com>

Olaoluwa Osuntokun writes:
> As I've brought up before, from my PoV, we can't append any additional
> fields to the `init` message as it already contains *two* variable-sized
> fields (and no fixed-size fields). Aside from this, it seems that the
> `init` message should simply be for exchanging versioning information,
> which may govern exactly *which* messages are sent after it. Otherwise,
> by adding _additional_ fields to the `init` message, we paint ourselves
> into a corner and can never remove it.
> Compared to using the `init` message to set up the initial session
> context, where we can safely add other bits to nullify or remove
> certain expected messages.

While I do agree that a new message with high and low watermarks for a
sync controlled by the recipient is the way to go, I just don't see the
issue with extending the `init` message (and I think it may be useful in
future, which is why I bring it up). The two variable-size fields are
length-prefixed, so we know exactly what their size is and where they
end, so a new field added to the end can be trivially identified as
such. As pointed out in my first mail, we'd have to make it mandatory
for the recipient to understand the new field, since it cannot be
skipped if the recipient does not, but this still doesn't preclude
adding such a field.

As for the overflow issue you mention, a single features bitfield is
already sufficient to completely overflow the `init` message length,
since its length prefix is 2 bytes, allowing 65535 bytes for that single
field alone in a message that only has 65533 bytes of payload left. But
the sender would have to be bonkers to overflow the message and then try
something with the appended field: it'd overflow into the next packet,
since we can't even tell the recipient that we have >65535 bytes of
payload, and it'd fail the HMAC check. IMHO the connection would simply
be stopped right there, and the sender just found a very contorted way
of closing the connection :-)

In the good case however the `init` message can look something like this:

- [2:gflen]
- [gflen:globalfeatures]
- [2:lflen]
- [lflen:localfeatures]
- [4:lowwatermark]
- [4:highwatermark]

Maybe I'm just not seeing it, and if that's the case I apologize :-)

Cheers,
Christian
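The layout above can be sketched as a round-trip encoder/decoder. This is a minimal illustration of Christian's proposed field order, not a BOLT-specified format: the two watermark fields are the hypothetical appended extension, and the function names are invented for the example:

```python
import struct

def encode_init(globalfeatures: bytes, localfeatures: bytes,
                low_watermark: int, high_watermark: int) -> bytes:
    # [2:gflen][gflen:globalfeatures][2:lflen][lflen:localfeatures]
    # [4:lowwatermark][4:highwatermark], all big-endian
    return (struct.pack(">H", len(globalfeatures)) + globalfeatures +
            struct.pack(">H", len(localfeatures)) + localfeatures +
            struct.pack(">II", low_watermark, high_watermark))

def decode_init(payload: bytes):
    (gflen,) = struct.unpack_from(">H", payload, 0)
    globalfeatures = payload[2:2 + gflen]
    off = 2 + gflen
    (lflen,) = struct.unpack_from(">H", payload, off)
    localfeatures = payload[off + 2:off + 2 + lflen]
    off += 2 + lflen
    # A pre-extension peer would stop parsing here; because both variable
    # fields are length-prefixed, the appended watermarks are unambiguous
    # for a peer that knows to look for them.
    low, high = struct.unpack(">II", payload[off:off + 8])
    return globalfeatures, localfeatures, low, high

msg = encode_init(b"", b"\x8a", 1517443200, 1519862400)
assert decode_init(msg) == (b"", b"\x8a", 1517443200, 1519862400)
```

This also makes Laolu's objection concrete: a recipient that does not know about the trailing 8 bytes has no way to skip them, so the extension would have to be gated on a feature bit rather than silently ignored.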