[Lightning-dev] [BOLT Draft] Onion Routing Spec

Christian Decker decker.christian at gmail.com
Thu Aug 4 17:05:04 UTC 2016


Sending to the mailing list since Laolu pointed out I only sent my reply to
him. Sorry for that.

On Sat, Jul 30, 2016, 15:25 Christian Decker <decker.christian at gmail.com>
wrote:

> On Wed, Jul 27, 2016 at 8:14 PM Olaoluwa Osuntokun <laolu32 at gmail.com>
> wrote:
>
>> Hi Christian, welcome to the mailing list!
>>
> Hi Laolu,
>
> glad to be here :-)
>
>>
>> Excellent work! I've been meaning to re-visit my implementation since I
>> finished it last October but have been busy tending to other lnd related
>> items.
>> Thanks for the cleanups and optimizations you've added to my initial
>> Sphinx
>> implementation. Once we've finished fleshing out the initial
>> specification, I'd
>> be happy to get those changes merged!
>>
>> I've taken a look at the diff against the existing implementation, plus
>> the
>> current spec draft. I'd say the current spec draft is the *clearest*
>> description of Sphinx I've encountered, especially the padding bits. IMO,
>> the
>> notation within the paper describing how to construct+process the
>> mix-header is
>> rather opaque.
>>
> Actually, your implementation helped a lot in getting a clear picture of
> how Sphinx is supposed to work, so thanks for taking the time to implement
> the paper in the first place. Glad you like my writeup; hopefully it is
> just as clear to new implementors :-)
>
>>
>> I see you've modified the encryption scheme of the end-to-end payload,
>> switching from the arbitrary block-size block cipher (LIONESS) to purely a
>> stream cipher (ChaCha20). Initially I was under the impression that this
>> might
>> compromise one of the security arguments of the scheme, but on closer
>> inspection this is perfectly fine if one extends the header MAC to the
>> entire
>> payload as you've done. If one was staying true to the original
>> construction,
>> then this would eliminate the possibility of Single-Use Reply Blocks
>> (SURBs)
>> since the sender would be unable to construct the reply mix-header as she
>> would
>> be unaware of the responder's message. It's clear to me now that a MAC
>> covering
>> the end-to-end payload was omitted in the original version since the
>> proof of
>> computational indistinguishability of replies vs responses depends on the
>> structure+processing being identical for both message types. However I
>> don't
>> see us having any use for SURBs, so this is an excellent change.
>> Additionally,
>> modifications to the end-to-end payload will instantly cause packet
>> corruption,
>> stopping invalid packets from propagating through the network.
>>
>
> I'm going back and forth about including the payloads in the header HMAC.
> I think we have three options here:
>
>  1) Include the payload in the header HMAC computation
>  2) Include an additional set of HMACs for the payload that can be checked
> at each hop
>  3) Just use an encryption scheme that also verifies the integrity, e.g.,
> ChaCha20-Poly1305, on the last hop and normal ChaCha20 on all preceding
> hops.
>
> The first option is small, and it discards tampered packets as soon as the
> next hop checks its HMAC. The downside is indeed that we lose a lot of
> flexibility. Besides losing the ability to provide a SURB to the final
> recipient, we also lose the ability to do anonymous rendezvous meetings,
> where the final recipient provides half of the route in the form of a
> precompiled header (something that Hornet is using).
>
> The second option is quite costly just to have the drop-early feature. It
> would add 400 bytes to each packet, assuming 20 byte HMACs and 20 hops. On
> the other hand forwarding a packet to the final recipient when we could
> discard it early is quite costly since each hop locks up some funds in the
> form of an associated HTLC.
>
> The third option is just the case in which we'd forward the packet to the
> final recipient, which can then decide whether the payload was tampered
> with or not. Costly in terms of HTLCs being created and funds being locked
> up, but hopefully they'd be released again immediately.
>
> Both per-hop checkable schemes, combined with nodes signing the packets
> they forward, would give us the ability to identify misbehaving nodes and
> denounce them: if we receive a packet whose integrity check fails, the
> previous node's signature proves to others that it either forwarded a
> corrupted packet or failed to check the packet itself.
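>
> For concreteness, a rough sketch in Go of what option 1 could look
> like; the key and field names are illustrative, not taken from the
> spec:
>
>     import (
>         "crypto/hmac"
>         "crypto/sha256"
>     )
>
>     // computeHeaderHMAC binds the routing info and the end-to-end
>     // payload together under the per-hop MAC key mu, so tampering
>     // with the payload invalidates the header at the next hop.
>     func computeHeaderHMAC(mu, routingInfo, e2ePayload []byte) []byte {
>         h := hmac.New(sha256.New, mu)
>         h.Write(routingInfo)
>         h.Write(e2ePayload)
>         return h.Sum(nil)[:20] // truncated to the 20-byte HMAC size
>     }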
>
>
>> I really like the addition of the per-hop payload! It's a change to the
>> original construction that I've seriously considered proposing. Such a
>> payload
>> should prove to be very useful in the future for information such as:
>> limits on
>> the per-hop absolute timeout, fee information, etc.
>>
>> The addition of a version byte is also very welcome. This'll streamline
>> future modifications we may make to the mix-header format, such as
>> increasing the size of the per-hop payload, or switching to an
>> alternative format for encoding the "next hop" address.
>>
>
> Good idea. The next-hop address basically just has to make sense to the
> node processing the packet, so maybe we can use some form of short tag or
> index to specify which existing channel to use, instead of the globally
> unique address.
>
>>
>> The current draft doesn't specify the processor's action in the scenario
>> that
>> they're unable to locate the next hop node within their local routing
>> table.
>> Just to be explicit, I think a final paragraph should be inserted under
>> the
>> "Packet Forwarding" section detailing the abort procedure.
>>
>
> Good catch, I'll amend the spec to specify that unroutable packets are
> dropped and the sender signaled.
>
>>
>>
>> > However, they could easily be made variable should we decide that
>> sending
>> > mostly empty messages is wasteful.
>>
>> I strongly think we should maintain the size uniformity of the packet
>> throughout processing, changes in payload size between hops can give away
>> information w.r.t. a node's position within the route.
>>
>> We might want to consider dropping the end-to-end payload altogether
>> though. I
>> can't really think of a clear use case for the e2e payload within our
>> specific
>> application.  That would save us 1k bytes, reducing the size of the full
>> mix-header to 1234 bytes. Accounting for the additional fields within an
>> HTLC
>> "add" message, plus some additional overhead, this should keep us below
>> typical
>> MTU sizes, avoiding fragmentation of HTLC "add" messages.
>>
>
> There is a tradeoff between small packets and keeping the size uniform. I
> think we could bucketize sizes, e.g., have multiples of 32 or 64 bytes
> for the fields, so that packets with similarly sized payloads end up with
> the same packet size. That would also allow us to drop the e2e payload by
> setting a size of 0, while still being able to forward it should we ever
> find a use for it.
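>
> A minimal sketch of the bucketing, with a 64-byte bucket as an
> example:
>
>     // bucketSize rounds a payload length up to the next 64-byte
>     // bucket so that similarly sized payloads yield identically
>     // sized packets; a length of 0 drops the field entirely.
>     func bucketSize(payloadLen int) int {
>         const bucket = 64
>         if payloadLen == 0 {
>             return 0
>         }
>         return ((payloadLen + bucket - 1) / bucket) * bucket
>     }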
>
>>
>> > This specification is limited to version 0 packets and the structure of
>> > future version may change. The receiving node then splits the packet
>> into its
>> > fields.
>>
>> Packets with a non-zero version byte should be immediately rejected, as
>> well as
>> packets which aren't *exactly* 2258 bytes (or 1234 bytes if we drop the
>> e2e
>> payload).
>>
>
> I don't think we need a size check: the fixed size actually allows us to
> serialize directly onto the wire without a size prefix. If we read fewer
> than 2258 bytes we simply continue reading; if we read more, we crossed a
> packet boundary and ought to split. But maybe that is mixing the
> transport layer and the packet specification?
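>
> In Go terms, reading a packet off the wire would then be as simple as
> the following sketch, assuming the 2258-byte version-0 format:
>
>     import "io"
>
>     const packetSize = 2258
>
>     // readPacket reads exactly one fixed-size packet; io.ReadFull
>     // keeps reading until the packet is complete, so no size prefix
>     // is needed and packet boundaries fall out of the fixed size.
>     func readPacket(r io.Reader) ([]byte, error) {
>         buf := make([]byte, packetSize)
>         if _, err := io.ReadFull(r, buf); err != nil {
>             return nil, err
>         }
>         return buf, nil
>     }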
>
>>
>> > The resulting HMAC is compared with the HMAC from the packet. Should the
>> > computed HMAC and the HMAC from the packet differ then the node MUST
>> abort
>> > processing and report a route failure.
>>
>> Perhaps we should explicitly specify that the HMAC equality check MUST be
>> performed without leaking timing information (constant time comparison)? I
>> can't think of a precise potential vulnerability otherwise since the
>> scheme
>> uses an encrypt-then-MAC construction with a semantically secure
>> encryption
>> scheme. I don't see any clear downsides in specifying that the comparison
>> be made in constant time.
>>
>
> I don't see a chance of this being exploited either, since each secret
> key used in the HMAC computation is used just once. But better safe than
> sorry; I'll add it to the spec.
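>
> In Go this is a one-liner, since hmac.Equal from the standard library
> already performs a constant-time comparison:
>
>     import "crypto/hmac"
>
>     // hmacMatches compares the computed HMAC against the one from
>     // the packet without leaking timing information.
>     func hmacMatches(computed, fromPacket []byte) bool {
>         return hmac.Equal(computed, fromPacket)
>     }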
>
>>
>> > The sender computes a route {n_1, ..., n_{r-1}, n_r}, where n_1 is a
>> peer of
>> > the sender and n_r is the recipient.
>>
>> In order to eliminate ambiguity, I think this should be more explicit,
>> specifying that "n_1" is a *direct neighbor* of the sender
>>
> Amended and clarified that a peer is a direct neighbor in the overlay
> network.
>
>
>> > A special HMAC value of 20 0x00 bytes indicates that the currently
>> > processing hop is the intended recipient and that the packet should not
>> be
>> > forwarded. At this point the end-to-end payload is fully decrypted and
>> the
>> > route has terminated.
>>
>> It seems that with the current construction, the "next hop" address
>> will
>> also be zero bytes if a packet processor is the last hop in the route.
>> Alternatively, if the sender is aware that the receiver is actually a
>> "virtual
>> channel", then an additional address could be used instead of the
>> zero-address
>> to facilitate de-multiplexing at the last hop to the destination virtual
>> channel.
>>
>
> Sounds good. I chose the HMAC to signal termination so that the first 20
> bytes of the routing info remain free to use; adding a de-multiplexing
> address might be one use for them.
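>
> The termination check itself is trivial; a plain comparison is fine
> here since the all-zero sentinel is public, not a secret:
>
>     import "bytes"
>
>     // terminalHMAC is the all-zero value marking the final hop.
>     var terminalHMAC = make([]byte, 20)
>
>     func isFinalHop(packetHMAC []byte) bool {
>         return bytes.Equal(packetHMAC, terminalHMAC)
>     }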
>
>>
>> > In the pocessing phase the secret is the node's private key...
>>
>> Typo here, it should read "In the processing phase..."
>>
>> I think two key onion-routing related aspects are underspecified within
>> the current draft: replay protection and key rotation. Although we might
>> want to
>> place details concerning key rotation in a separate document covering the
>> initial routing protocol as the two are closely related.
>>
>> First, let's talk replay protection. The current draft specifies that:
>>
>> > The node MUST keep a log of previously used shared secrets. Should the
>> shared
>> > secret already be in the log it MUST abort processing the packet and
>> report a
>> > route failure, since this is likely a replay attack, otherwise the
>> shared
>> > secret is added to the log
>>
>> This is definitely necessary, however as dictated this would require
>> nodes to
>> allocate a potentially *unbounded* amount of storage to the shared secret
>> "seen" log. I think we can allow nodes to periodically truncate this log
>> by
>> adding an additional session timestamp to the mix-header, either placed
>> directly after the version byte, or within the per-hop payload.
>>
>> With this absolute timestamp, each entry within the "seen" log becomes a
>> two-tuple: the shared secret itself, and the corresponding timestamp
>> specified
>> within the mix-header. Before the absolute timestamp has passed, the entry
>> within the log remains, and mix-headers received with duplicated shared
>> secret
>> are rejected. If we enforce an upper bound on the "session lifetime", then
>> nodes can periodically prune this log, discarding obsolete shared secrets.
>> Once an entry has been pruned, although a node may not know if a shared
>> secret
>> is being duplicated, they can reject expired sessions according to the
>> timestamp, achieving a similar effect.
>>
>> Reasonable session times may be something around 30 minutes to an hour
>> or two.
>>
>> With this scheme, I think that we can achieve near perfect replay
>> protection
>> without unbounded storage.
>>
>
> We have to be careful when using timestamps in the packet as it makes
> individual hops collatable. One solution is again to bucketize timestamps
> included in the packet such that we have enough packets with the same
> timestamps to avoid having an attacker associate individual hops in a
> route. The alternative is to have a blinded timestamp per hop, i.e., in the
> routing info, but that automatically blows the size up to 20x. So my
> proposal would be to include a timestamp rounded up to the closest hour and
> have a sliding window of accepted timestamps of +/- 1 hour, remembering the
> secrets for that period and rejecting anything that is too far in the
> future or too far in the past. The coarser the timestamp, the less likely
> an attacker is to guess which packets belong to the same route, but the
> more storage is required on the node's side.
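>
> A rough sketch of that sliding-window log in Go, with illustrative
> names and the window parameters from above:
>
>     import (
>         "errors"
>         "time"
>     )
>
>     type replayLog struct {
>         seen map[[32]byte]time.Time // shared secret -> packet hour
>     }
>
>     // check rejects timestamps outside the +/- 1 hour window,
>     // rejects replayed secrets, and prunes expired entries.
>     func (l *replayLog) check(secret [32]byte, pktHour time.Time) error {
>         now := time.Now()
>         if pktHour.Before(now.Add(-time.Hour)) ||
>             pktHour.After(now.Add(time.Hour)) {
>             return errors.New("timestamp outside accepted window")
>         }
>         if _, ok := l.seen[secret]; ok {
>             return errors.New("shared secret replayed")
>         }
>         l.seen[secret] = pktHour
>         for s, t := range l.seen {
>             if t.Before(now.Add(-2 * time.Hour)) {
>                 delete(l.seen, s)
>             }
>         }
>         return nil
>     }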
>
>>
>> On to the second aspect: key rotation. Ignoring the possible time-stamped
>> log
>> solution, the (possibly) only other way to allow nodes to prune their
>> shared
>> secret log is to periodically rotate keys. Once a node rotates a key, it
>> can
>> safely delete its *entire* previous shared secret log, as replay attacks
>> will
>> fail on the HMAC check. Independent of replay attack prevention, key
>> rotation
>> is useful in order to provide a degree of forward secrecy. Without key
>> rotation, when a node is compromised by the adversary (assuming the node
>> keeps
>> *all* prior mix-headers), the adversary learns of the next-node within the
>> route, and also the per-hop payload for the compromised node. With key
>> rotation, assuming the prior keys are deleted, the adversary is only
>> able to partially decrypt ciphertexts from the current epoch.
>>
>> So then a question arises: how do we perform key rotation within the
>> network
>> globally with loose synchronization? I say loose synchronization since,
>> if rotations aren't synchronized to a degree, the payments of source
>> nodes may fail because an intermediate hop is unable to process a packet
>> that was constructed with an obsolete onion key. Therefore published
>> onion keys should come in pairs (with overlapping lifetimes), and also
>> be authenticated by a node's identity key.
>>
>> A key rotation scheme might look something like the following:
>>     * A node publishes two keys, along with a block hash of a block
>>       beyond a "safe" re-org distance, and a signature (under the
>>       identity pubkey) covering the advertisement.
>>     * The first key is intended for use until N blocks after the
>>       specified block hash, with nodes switching to the second key
>>       afterwards.
>>     * At the N/2 point, the original node publishes a new advertisement
>>       with the second key from the original advertisement listed as the
>>       "first" key, and a fresh quasi-ephemeral onion key. The original
>>       node performs this rotation each time the oldest key reaches the
>>       mid-point of its lifetime.
>>     * Nodes who receive this new advertisement (aware of the previous)
>>       record it as the node's next rotation key. Nodes who receive this
>>       advertisement unaware of the previous treat it as the node's
>>       initial pair of quasi-ephemeral onion keys.
>>
>> With this scheme, rotations are synchronized very loosely, perhaps in the
>> timespan of half-days, days, etc. There is a new cost, however: when
>> processing packets, a node must attempt to derive the shared secret
>> with both active onion keys. Alternatively, instead of block hashes, we
>> can use some other time-based beacon as a switchover point in order to
>> accommodate peers on multiple blockchains.
>>
>
> I like the loosely synched approach very much, but do we need explicit
> announcements at all? We could just use the announced key, i.e., the one
> that participated in the channel setup, as a root key for HD key
> derivation. The derivation path could then be based on floor(blockheight /
> 144) so that we rotate keys every day, and don't need any additional
> communication to announce new public keys. The switchover can be a bit
> messy, but we could just accept both old and new keys for a period around
> switchover and try both, or add the date in the packet, which would anyway
> be common to all packets on that day.
>
> A downside of using HD key derivation is that, should the root key be
> compromised, we cannot switch to new keys without a teardown, but in that
> case we'd be in a world of pain anyway since these keys allow spending
> the node's coins :-) Not adding the date allows us to switch keys more
> quickly, and each node could announce its own rotation period to keep
> local storage low, but it adds some more computation as we determine the
> intended public key by trial and error.
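>
> As a toy illustration of the derivation schedule (deriveEpochKey is a
> stand-in that just hashes root key material with the epoch number,
> not a proper hardened BIP32 derivation):
>
>     import (
>         "crypto/sha256"
>         "encoding/binary"
>     )
>
>     // onionEpoch selects the key epoch from the block height, so
>     // keys rotate once per day (144 blocks) without any
>     // announcement.
>     func onionEpoch(blockHeight uint32) uint32 {
>         return blockHeight / 144
>     }
>
>     func deriveEpochKey(rootKey []byte, epoch uint32) [32]byte {
>         var buf [4]byte
>         binary.BigEndian.PutUint32(buf[:], epoch)
>         return sha256.Sum256(append(rootKey, buf[:]...))
>     }
>
> On receipt a node would try the current epoch's key and, around the
> switchover, its neighbours.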
>
>>
>> I'll take a few more passes through the current draft spec, as well as
>> your commits in your fork of my original implementation, following up
>> with any other questions/comments.
>>
> I'm also trying to figure out how to enable intermediate nodes to reply
> to a packet, e.g., if capacities are insufficient or the next node is
> unreachable, by recycling the routing info. Currently we'd probably
> forward the reply along the HTLC path associated with the forward path,
> over the encrypted channel.
>
> So far I've had no luck trying to build a return path into the header while
> forwarding. Maybe we could continue blinding the ephemeral key on the
> return path, and have a mechanism to tell the node the total blinding
> factor along the path so that it can encrypt something in the routing info
> for the return path? That would neatly combine Hornet and Sphinx,
> eliminating the initial roundtrip to setup forwarding segments.
>
> Thanks again for the detailed feedback, looking forward to reading your
> thoughts on this :-)
>
> Cheers,
> Christian
>
> -- Laolu
>>
>>
>> On Mon, Jul 25, 2016 at 9:23 AM Christian Decker <
>> decker.christian at gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> I took the liberty of taking apart Olaoluwa's Sphinx implementation and
>>> I came up with a spec draft that I'd like to propose [1]. It should roughly
>>> be Sphinx, pinning down the various key-generation and stream generation
>>> algorithms, and adding a per-hop payload.
>>>
>>> The per-hop payload is used to give instructions to individual hops,
>>> i.e., how many coins to forward to the next hop. This means that the
>>> end-to-end payload, i.e., the message in the Sphinx protocol, is currently
>>> unused and could be omitted.
>>>
>>> The payloads are currently fixed size (20 bytes per-hop and 1024 bytes
>>> for end-to-end payload) to avoid making messages collatable by their size.
>>> However, they could easily be made variable should we decide that
>>> sending mostly empty messages is wasteful.
>>>
>>> The spec is implemented in Go [2] and in C [3]. The Go version is an
>>> adaptation of Olaoluwa's implementation, with some minor speedups, removing
>>> some duplicate information, stripping the header, and switching to ChaCha20
>>> for stream generation and encryption. I also added a small commandline tool
>>> that allows you to write packets to stdout so that we can feed an onion
>>> generated by the C version to the Go implementation and vice-versa :-)
>>>
>>> Feedback is very welcome. If people like the draft I'll create
>>> pull-requests for the spec and the implementations, but I'd like to keep
>>> the discussion on the mailing list :-)
>>>
>>> Cheers,
>>> Christian
>>>
>>> [1]
>>> https://github.com/cdecker/lightning-rfc/blob/master/bolts/onion-protocol.md
>>> [2] https://github.com/cdecker/lightning-onion/tree/chacha20
>>> [3] https://github.com/cdecker/lightning/tree/chacha20
>>>
>>