[Lightning-dev] Ionization Protocol: Flood Routing

Rusty Russell rusty at rustcorp.com.au
Wed Sep 23 04:59:00 UTC 2015


Mats Jerratsch <matsjj at gmail.com> writes:
>> On Mon, Sep 21, 2015 at 11:46:13AM +0930, Rusty Russell wrote:
>>> We regularly choose a dozen "beacon" nodes at random (using proximity to
>>> the SHA of latest block hash or something).  Everyone propagates the
>>> cheapest route to & from those nodes (which is pretty efficient, similar
>>> to your scheme).
>
> I don't know, using the beacon kind of technique does seem a little
> bit cumbersome, and you really have to think about a lot of possible
> attacks that might render the whole network unusable if all the
> beacons maliciously sabotage it.

Indeed.  Random selection helps, here, but analysis will be interesting.
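For concreteness, here is a minimal sketch of one way such deterministic beacon selection could work; the XOR-distance metric and the function name are illustrative assumptions, not part of the proposal:

```python
import hashlib

def select_beacons(node_ids, block_hash, n=12):
    """Pick n beacon nodes deterministically: rank every known node by
    the XOR distance between SHA256(node_id) and the latest block hash.
    Every node that sees the same block derives the same beacon set."""
    def distance(node_id):
        h = hashlib.sha256(node_id).digest()
        return int.from_bytes(h, "big") ^ int.from_bytes(block_hash, "big")
    return sorted(node_ids, key=distance)[:n]
```

Since the block hash is unpredictable ahead of time, an attacker cannot cheaply grind node IDs to guarantee capturing all beacon slots for a future block.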

>> If you do know the entire graph, you don't need to give away any
>> information about who you want to pay prior to sending the transaction.
>> Knowing the graph is potentially interesting for commercial and academic
>> reasons beyond wanting privacy. (Knowing the fees others charge helps you
>> work out what fees you should charge; but just querying your neighbours'
>> routes is probably sufficient to work that out too)
>
> I initially thought about going with a Node Directory. That is, a
> central server collecting the routing data that could be sent there by
> choice. With signatures, and if both nodes submit the route towards
> the other node, we can work out whether it is indeed correct. But as
> this isn't too much data in the first place, I am very much in favour
> of just using some gossip-protocol.

Short term I'm thinking we'll have an IRC channel (a-la early bitcoin)
and everyone will advertise their channels there.  This is a design for
the next, more ambitious phase.

> Furthermore, we can also include
> a byte for a reputation / web-of-trust system. It is completely
> optional: if you are able to figure out how to route the payment
> across your nodes, you are not forced to use this system, and I don't
> think it will have severe privacy problems.

I think reputation systems will become an overlay, if the basic system
proves vulnerable.  It's nicer if we don't have to though, as reputation
is both hard to encode in normal behaviour, and deeply centralizing.

> Payer and payee can further work out a rendezvous node. The payee will
> send the encrypted routing data from that node on to himself and the
> payer can build the rest of the routing from his viewpoint.

Yes: like the R hash and destination node, you'd send some
routes-from-nodes for one or more rendezvous points.

The advantage of this system is that it's less revealing, even if you
have to ask how to get to those rendezvous nodes.
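One way to picture the rendezvous handoff (purely illustrative; the opaque blob stands in for whatever encrypted route encoding is actually used):

```python
def build_payment_route(payer_half, payee_blob, rendezvous):
    """Splice the payer's route to the rendezvous node together with the
    payee's opaque route from the rendezvous onward.  The payer never
    learns what is inside payee_blob, only which node it starts from."""
    assert payer_half[-1] == rendezvous  # payer routes *to* the rendezvous
    return payer_half + [payee_blob]     # blob is forwarded as-is from there

# Payee picks rendezvous "R" and sends (R, encrypted_blob) alongside the
# R hash; payer finds its own path A -> B -> R and appends the blob.
route = build_payment_route(["A", "B", "R"], b"<opaque onion blob>", "R")
```

Neither half of the route is visible to the other party, which is what makes the scheme less revealing than publishing full routes.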

> Having this data distributed really allows for making payments without
> leaving too many visible traces on the network.
>
> I would also spread data about how much can be spent and received with
> this route and fees for receiving and spending. Again, this data can
> be very vague and does not need to reflect the real world. You can say
> you support payments up to 10k satoshis, even though the channel has
> 5BTC in it.

My basic model for fees is base + percentage.  AJ has been thinking
harder about exactly how a node would set these...
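As a concrete reading of "base + percentage" (the constants and the parts-per-million convention here are invented for illustration, not settled values):

```python
def routing_fee(amount_msat, base_msat=1000, ppm=100):
    """Fee = flat base charge + proportional part, with the proportional
    rate expressed in parts-per-million of the forwarded amount."""
    return base_msat + (amount_msat * ppm) // 1_000_000

# Forwarding 1,000,000 msat at 100 ppm over a 1000 msat base:
# 1000 + 100 = 1100 msat
```

The base component covers the fixed cost of holding a commitment slot; the proportional component prices the capital actually locked up.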

> Is there any plan how we handle connections currently? I am thinking
> about sending the current IP address with this update, such that nodes
> can connect to each other after one broke down again, and it would
> further support dynamic IPs as well. Could be possible to extend this
> to hidden service notation, although this would need some more bytes
> to store.

So, I'm assuming the basic network comms is organized along channel
lines; you have to communicate to the other end of the channel anyway,
so that makes sense.

This doesn't solve "how do I connect to the network" or "how do I start
a channel with this particular node", though.  For that, I've floated
the idea of reusing the bittorrent DHT, which has extensions to allow
just this.

> The protocol on my side does look like this currently:
>
> General beacon for our node
>  - pubkey [32]
>  - timestamp [4]
>  - IP/connectionDetails [8+...]
>  - signature [70]

Yep, makes sense.  Pubkey is usually 33 bytes, BTW.

You can squeeze some more bytes out if you want:
1) Signature should be 64 bytes (never DER encode).
2) Pubkey can be hashed bitcoin-address style, and recovered from sig.
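Tallying the two suggestions against the beacon layout above (using the corrected 33-byte compressed pubkey, and assuming a 20-byte bitcoin-address-style hash when the key is recovered from the signature):

```python
# Original beacon layout: compressed pubkey, timestamp, IP, DER-style sig.
original = 33 + 4 + 8 + 70

# With a fixed 64-byte compact signature, and the pubkey replaced by a
# 20-byte hash (the full key is recovered from the recoverable sig):
compact = 20 + 4 + 8 + 64

print(original, compact, original - compact)   # 115 96 19
```

That's roughly a 17% saving per beacon message, which matters at gossip scale.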

> Routing information
>  - pubkey_from_us [32]
>  - pubkey_from_you [32]
>  - fee_sending [4]
>  - max_sending [4]
>  - current_reputation [1]
>  - timestamp [4]
>  - signature [70]

If we're using protobufs, they're extensible, so you can add reputation
later if we need it.

You also want two signatures, I think: one from each side.  Though it
makes sense to have a separate teardown message which only needs one
sig.
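A sketch of a dual-signed routing record along these lines (field names are invented; in practice this would be a protobuf message so fields like reputation can be added later):

```python
from dataclasses import dataclass

@dataclass
class ChannelRoute:
    """Channel advertisement signed by *both* endpoints; a separate
    teardown message would only need one signature."""
    node_a: bytes        # 33-byte pubkey of one endpoint
    node_b: bytes        # 33-byte pubkey of the other endpoint
    fee_sending: int
    max_sending: int
    timestamp: int
    sig_a: bytes         # node_a's signature over the fields above
    sig_b: bytes         # node_b's signature over the same fields

    def signed_payload(self) -> bytes:
        # Both signatures commit to the same serialized field data.
        return (self.node_a + self.node_b
                + self.fee_sending.to_bytes(4, "big")
                + self.max_sending.to_bytes(4, "big")
                + self.timestamp.to_bytes(4, "big"))
```

Requiring both signatures stops one end of a channel from unilaterally advertising capacity or fees the other end never agreed to.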

> These two would be spread as one message.
> This allows us to keep two separate databases and to bootstrap
> new nodes with this information, while not having to store old
> beacon data.

> As reputation will be stored for quite some time, we should really
> save it with separate signatures. (This will make it more difficult
> / costly to share the complete database with new nodes, and we need to
> store old data (reputation should be stored for some time) to validate
> the signatures.)

Unclear what reputation means, here?

> I was calculating with two updates a day, but only 100k nodes. Each
> node would make a connection to around 10 other nodes, resulting in
> 114+10*147 = 1.5kB per update. Each node would only use around 3kb/s
> up/down and would do around 3 signature verifications per second in
> idle mode, while allowing everyone a full copy of the routing. Making
> sure messages spread efficiently is another problem, as we have so
> many tiny messages. Storing the hashes you received in a bloom table
> and asking other peers if they want to receive it as well does work,
> but just the hashes of all messages do add up already..
>
> In the end, the nodes themselves don't really need the routing
> information, we just need a way to transport the information to the
> clients that make payments. Nodes will just stupidly obey the orders
> given in the blob of data they'll receive.

BTW I've not been separating nodes and clients in my head-design.  Of
course, some nodes might not ever generate payments, and some nodes
might never route others' payments, but it increases privacy vastly if
they *can*, so I'm trying to think about them that way.
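Reproducing the back-of-envelope arithmetic from the message above (field widths taken from the two proposed layouts; 100k nodes, two updates a day, ~10 channels per node):

```python
beacon = 32 + 4 + 8 + 70                # pubkey, timestamp, IP, signature
route  = 32 + 32 + 4 + 4 + 1 + 4 + 70   # fields of the routing message

per_node_update = beacon + 10 * route   # one beacon + ten channel routes
assert per_node_update == 1584          # the quoted "1.5kB per update"

# Whole-network gossip volume, 100k nodes updating twice a day:
bytes_per_sec = 100_000 * 2 * per_node_update / 86_400
print(round(bytes_per_sec))             # ~3.7 kB/s, near the quoted 3 kB/s
```

So the raw routing gossip is modest; as the message notes, the real overhead is the inventory chatter needed to spread many tiny messages efficiently.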

> This design opens up for some DDoS attacks, as we don't want other
> nodes to just spam information all day long that would be relayed
> through the complete network. Furthermore, one attacker can just
> emulate a complete network that is vouching for him, and he can use
> those 'nodes' to push spam through the network as well. This does
> screw a little bit with the reputation system, but I think it would be
> possible to detect a cluster of nodes that is only connected to the
> rest of the network through that one attacker (some graph theory I
> guess). Furthermore, we can check nodes by pinging them randomly, and
> dropping any reputation if we cannot reach them.

Yes, reputation is hard.

What do you think about the idea of using the anchor transactions in the
blockchain as a map of the network?  You could use an OP_RETURN to stash
two pubkeys, that is, the two node IDs.
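A sketch of what such an anchor marker could look like as a raw output script (two compressed 33-byte pubkeys make 66 bytes of data, which fits in a single small push):

```python
def anchor_op_return(node_id_a: bytes, node_id_b: bytes) -> bytes:
    """Build an OP_RETURN output script embedding the two channel
    endpoints' node IDs, marking the anchor as a network-map entry."""
    assert len(node_id_a) == len(node_id_b) == 33
    data = node_id_a + node_id_b
    # 0x6a = OP_RETURN; 66 <= 75, so the length byte is a direct push.
    return bytes([0x6a, len(data)]) + data
```

Anyone scanning the chain could then reconstruct the channel graph from unspent anchors, at the cost of making channels publicly linkable to node IDs.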

Cheers,
Rusty.

