# [Lightning-dev] Towards a gridlike Lightning Network

Benjamin Mord ben at mord.family
Fri Apr 20 20:43:20 UTC 2018

Good afternoon ZmnSCPxj,

"I do not see a bloom filter?"

Well, if you look at it kinda sideways, you are using a bloom filter in
your March 23rd proposal. As originally defined, I think the "false
positives" in bloom filtering were the unfortunate cost of performance. In
BIP 37, the false positives become desirable, although they are still 'false' in
that their only function is to serve as red herrings. But (omitting i for
clarity), your proposal takes BIP 37's spin on bloom filters one step
further to actually take the 'false positives' as the very definition of
our desired set, since what you are "searching for" is just your own public
key, which ends up being the least interesting result within that set.
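
A minimal sketch of that "sideways" reading, with a hypothetical filter size
and hash count: insert only our own key, then treat every candidate that
matches the filter (false positives included) as the neighborhood.

```python
import hashlib

M, K = 64, 2  # hypothetical filter size (bits) and number of hash functions

def bloom_positions(key: bytes):
    # derive K bit positions for a key from its double-SHA256 digest
    digest = hashlib.sha256(hashlib.sha256(key).digest()).digest()
    return {int.from_bytes(digest[2 * i:2 * i + 2], "big") % M for i in range(K)}

def neighborhood(own_key: bytes, candidates):
    # "Search" for our own key: every candidate whose positions are all set
    # in the filter is a match.  The false positives among the matches are
    # not noise here -- they *are* the neighborhood we wanted to find.
    filt = bloom_positions(own_key)
    return [c for c in candidates if bloom_positions(c) <= filt]
```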

"Regarding 24 vs 23, the condition for 23 allows 3 members of a 5-member
neighborhood to think they form a single 3-member neighborhood, while the
remaining 2 members think they are in a 5-member neighborhood that includes
the other 3 members who have formed a 3-member neighborhood."

Oh, I see. But that occurs because different nodes are considering different
numbers of high-order bits. If everyone used the same number of high-order
bits, then it would become an equivalence relation, with which we can
partition the network: a=b would imply b=a, and a=b together with b=c would
imply a=c.
https://en.wikipedia.org/wiki/Equivalence_relation#Partition
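
For illustration, assuming everyone fixes the same number of high-order bits
(PREFIX_BITS below is a made-up value), the resulting "same cell" test is an
equivalence relation by construction:

```python
import hashlib

PREFIX_BITS = 8  # hypothetical: every node agrees on this number of bits

def cell(pubkey: bytes) -> int:
    # nodes sharing the top PREFIX_BITS of their pubkey hash are in one cell
    h = hashlib.sha256(pubkey).digest()
    return int.from_bytes(h, "big") >> (len(h) * 8 - PREFIX_BITS)

def same_cell(a: bytes, b: bytes) -> bool:
    # equality of cell ids is reflexive, symmetric, and transitive, so the
    # cells partition the set of all nodes
    return cell(a) == cell(b)
```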

On second look, I actually think the algorithm from March 24 is broken, but I
understand now what you are trying to achieve. You want to allow local
judgement over best local cell size, and yet somehow end up with precise
uniform agreement on who is in which cells, because cycles require such
precision. But if you throw in that the network is dynamic, knowledge is
imperfect, and malicious behavior may occur, then I think strict
equivalence relations and cycles become brittle. Perhaps we should
generalize the equivalence relation into a distance function, so that
we can start thinking of this as a metric space which we want to fill with
some sort of structure. Perhaps then we can design efficient yet robustly
"fuzzy" structures. Perhaps we want a fuzzy fractal of some sort. Hmm...
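
One concrete candidate for such a distance function -- my assumption, not
something from either proposal -- is Kademlia-style XOR distance, which
grades "how many high-order bits we share" instead of demanding exact
agreement on a cut-off:

```python
def xor_distance(a: bytes, b: bytes) -> int:
    # d(a, a) == 0 and d(a, b) == d(b, a); fewer shared high-order bits
    # means a larger distance.  Unlike a fixed-prefix rule, there is no
    # single threshold that all nodes must agree on beforehand.
    return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")
```

A node could then simply prefer channels to its k nearest peers under this
metric, which degrades gracefully as the network changes.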

"Intermediate nodes already know two hops?  The incoming and outgoing hop?"

Yes, nodes would need to know one hop more, since the idea would be to
attract competition to the high-usage yet high-fee links.

Thanks,
Ben

On Thu, Apr 19, 2018 at 11:24 PM, ZmnSCPxj <ZmnSCPxj at protonmail.com> wrote:

> Good morning Benjamin,
>
>
> I think there are two distinct concepts here. The first is the
> identification of a 'neighborhood', and the second is the establishment of
> an order within that neighborhood for purpose of cycle formation.
>
> Your use of bloom filters to define a neighborhood, is I think the most
> valuable contribution. Formation of neighborhoods with high connectivity,
> with sparse but redundant connections among these neighborhoods, does seem
> like an economically efficient approach to maintaining useful competition
> and redundancy. If there are any graph theorists or category theorists on
> the list, perhaps they could offer some formal validation or optimization.
> For this, I prefer your March 23 proposal over March 24, I'm curious what
> improvement is intended in March 24 vs 23?
>
>
> I do not see a bloom filter? But then I am not a mathematician so it is
> possible I fail to see how the Bloom filter arises from the algorithm I
> described.
>
> Regarding 24 vs 23, the condition for 23 allows 3 members of a 5-member
> neighborhood to think they form a single 3-member neighborhood, while the
> remaining 2 members think they are in a 5-member neighborhood that includes
> the other 3 members who have formed a 3-member neighborhood.
>
>
> The emergent definition and maintenance of a unique ordering for cycle
> establishment within a neighborhood is, I think, a much more ambitious
> undertaking. I'm not sure how we efficiently make that robust in a dynamic
> context, except perhaps with interactive coordination among the members
> operating off something other than just static global data. Otherwise
> different members would have different ideas about cycle order, depending
> on when they first joined. I also don't see how cycles recover when someone
> leaves.
>
> As people come and go, cycles will break. As the lightning network grows
> overall, neighborhoods identified by one setting of the bloom filter will
> become undesirably large. Perhaps a less ambitious but more robust
> heuristic would be one where probability of establishing a channel is
> proportional to the number of bits in common in the pubkey hash, normalized
> by the number of nodes currently observed?
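
Purely as an illustration of that heuristic (the normalization by log2 of the
observed node count is my guess at a concrete form, not part of the proposal):

```python
import math

def common_prefix_bits(a: bytes, b: bytes) -> int:
    # number of high-order bits the two pubkey hashes share
    total = 8 * len(a)
    x = int.from_bytes(a, "big") ^ int.from_bytes(b, "big")
    return total - x.bit_length()

def channel_probability(a: bytes, b: bytes, observed_nodes: int) -> float:
    # sharing more than about log2(N) bits is increasingly "surprising" in a
    # network of N nodes, so weight the shared bits by that; this exact
    # normalization is hypothetical
    return min(1.0, common_prefix_bits(a, b) / math.log2(max(observed_nodes, 2)))
```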
>
>
> I believe that is what the algorithm already does?  It dynamically sizes
> neighborhoods to be small, with a high probability that neighborhoods have
> 3 to 5 members.
>
> This heuristic would automatically adjust granularity over time as
> lightning membership grows and shrinks. Nodes could periodically reevaluate
> their channel allocations as the overall network grows or shrinks.
>
>
> The algorithm does not consider what happens when we have a cycle already
> existing, and a new member joins or an existing one wishes to leave.  There
> is no way to signal this to the algorithm.  My expectation is that people
> will just close channels that they no longer find useful; this makes funds available
> onchain.  Then a process notices there are onchain funds, and calls this
> algorithm to get a proposed channel; this adapts to whatever the topology
> is right now.
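
That feedback loop might be glued together roughly like this (the names and
the dust threshold are mine, not from any implementation):

```python
DUST_LIMIT = 546  # satoshis; below this, opening a channel is pointless

def autopilot_tick(onchain_sats, network_view, propose_peer):
    """One pass of a hypothetical autopilot: if closed channels have left
    funds on-chain, run the neighborhood algorithm against the *current*
    network view and return the channel we would open."""
    if onchain_sats <= DUST_LIMIT:
        return None
    peer = propose_peer(network_view)  # recomputed now, so nodes that have
    return (peer, onchain_sats)        # left are already absent from the view
```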
>
> It is not clear when we should close channels.  For one, gossip requires
> that a node first open a channel to somebody, before its existence is
> acknowledged and gossiped across the network: this is intended to prevent
> spammers from spinning up nodes without actually intending to join the
> network.  Similarly, to leave the network, we assume that nodes will at
> least get all their channels closed: channel closure is an onchain event
> visible to everyone monitoring the blockchain (which is what all LN
> nodes SHOULD do), and once all channels of a node have closed, we MAY drop
> them from the network view (c-lightning implements this, but I do not know
> if other implementations do).
>
> So at least for *leaving* LN permanently, the leaving node SHOULD close
> all its channels (or at least its peers SHOULD close them for it in
> unilateral closes if the peer just does not respond at all anymore).  This
> matches the recommendation that nodes MAY be dropped from the network view
> once all their channels are closed.  The closing will also put the channel funds
> onchain, and presumably the autopilots of its neighbors will notice the
> onchain funds and call the algorithm to get a peer to channel to, which
> (hopefully) computes using an updated network view that has the leaving
> node removed already (this may not be true: the leaving node might not be
> able to close all channels simultaneously, and may misbehave and expect its
> neighbors to close the channels for it), and thus adapts correctly to the
> node leaving the network.
>
> However for a new node entering the network, there is a problem.  This
> requires existing nodes to close existing channels and open new ones to the
> new node: as this costs onchain fees, there is no real incentive for them
> to do so.  I can only fall back on the informal argument: that people will
> at first experiment with the Lightning Network and commit a tiny amount of
> funds, then later they will put in more funds and thus open new channels,
> hopefully using this algorithm so that other people who come in later will
> also get new channels to them: the first channels people make will
> (eventually) not be in the neighborhood later on, but since they will open
> new channels later those will adapt to new neighborhoods of the larger
> network graph.
>
> I believe the main benefit of the algorithm I describe is that it flattens
> the number of channels a node will have and reduces centralization
> (although I believe roasbeef argues that centralization at the LN layer is
> relatively unimportant?).  If each node opens two channels to two different
> nodes, pure random selection will by chance award a few lucky nodes with 3
> or 4 or 5 or more incoming channels while about 1/4 of nodes will have no
> channels incoming, but this algorithm will force all nodes to have exactly
> two incoming and two outgoing channels, while having similar reachability
> as random selection.  Of course, that assumes everyone already knows the
> entire network beforehand, and the issue of new nodes coming in is absent.
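
The degree-flattening property can be sketched with a toy ring construction
(a stand-in for illustration, not the actual algorithm from the proposals):
order the neighborhood deterministically and have each node open channels to
its next two successors.

```python
def ring_channels(pubkeys):
    # each node opens channels to its two successors in a deterministic ring,
    # so every node ends up with exactly two outgoing and two incoming
    # channels, instead of the skewed degrees random selection produces
    ring = sorted(pubkeys)
    n = len(ring)
    return [(ring[i], ring[(i + d) % n]) for i in range(n) for d in (1, 2)]
```

For five nodes this yields ten channels with in-degree and out-degree exactly
two everywhere, while every node remains reachable from every other.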
>
>
> Were it not for the privacy goals, dynamic optimization based on actual
> usage would be possible. Nodes could track the routes of payments that flow
> through their channels and could spot fees that seem both large and
> popular, and could use this information to identify under-served nodes to
> which a direct channel might be in order. If we allowed nodes to see two
> hops of the route instead of just the one, then such optimization would
> become possible, although this compromise would require longer minimum
> routes for a given level of privacy.
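
The bookkeeping for that relaxed-privacy optimization could be as simple as
the following sketch (all names hypothetical):

```python
from collections import Counter

class FeeSpotter:
    """With two hops of a route visible, a forwarding node can total the
    fees flowing onward to each next-next hop and single out links that look
    both large and popular -- candidates for opening a direct channel."""

    def __init__(self):
        self._fees_msat = Counter()

    def observe(self, next_next_hop: str, fee_msat: int) -> None:
        # record the fee paid on the link beyond our outgoing hop
        self._fees_msat[next_next_hop] += fee_msat

    def candidates(self, k: int = 3):
        # under-served nodes we might profitably channel to directly
        return [node for node, _ in self._fees_msat.most_common(k)]
```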
>
>
> Intermediate nodes already know two hops?  The incoming and outgoing hop?